Just wanted to thank Lacey for the assistance. I set up my first BSD server
recently and installing things via ports had been going great until this.
The docs don't mention how to enable the kernel modules permanently either,
so thanks also for that additional note.
https://www.freebsd.org/doc/en_U
Hi,
I want to know what kind of hash function PostgreSQL uses while joining.
Debugging through gdb, I found out that it is not using the Bob Jenkins
hash function but different hash functions, hash_uint32() and hash_any(),
if the joining attribute is an integer, and a different kind
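For reference, both functions are reachable from plain SQL through the
catalog hash functions, so their output can be inspected directly. A quick
sketch (hashint4 and hashtext are the SQL-callable wrappers in 9.x):
select hashint4(42);          -- integer keys go through hash_uint32()
select hashtext('some text'); -- variable-length keys go through hash_any()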
Petr Novak wrote
> Three of them failed to start after pg_basebackup completed with:
>
> FATAL: could not access status of transaction 923709700
> DETAIL: Could not read from file "pg_clog/0370" at offset 237568:
> Success.
>
> (the clog file differed in each case of course..)
>
> As for PG ve
On 01/22/2015 11:04 PM, Tim Uckun wrote:
Take a look at this explain
http://explain.depesz.com/s/TTRN
I may be missing it, but I do not see the actual query.
The final number of records is very small but PG is starting out with a
massive number of records and then filtering most of them out.
Thank you for the confirmation!
On Fri, Jan 23, 2015 at 1:24 PM, Maciek Sakrejda wrote:
> On Fri, Jan 23, 2015 at 12:08 PM, Al Eridani wrote:
>
>> Is this still true on the 9.x versions? Thanks!
>>
>
> It is, unfortunately. If your client kicks off a 12h query and immediately
> closes the TCP connection, Postgres won't notice until the query has
> completed.
On Fri, Jan 23, 2015 at 12:08 PM, Al Eridani wrote:
> Is this still true on the 9.x versions? Thanks!
>
It is, unfortunately. If your client kicks off a 12h query and immediately
closes the TCP connection, Postgres won't notice until the query has
completed.
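If dropped connections are a recurring problem, one partial mitigation is to
tighten the server-side TCP keepalive settings so the OS notices dead peers
sooner; note the backend still only sees the error the next time it touches
the socket. A sketch with illustrative values (postgresql.conf, or SET
per-session):
tcp_keepalives_idle = 60      -- start probing after 60s of inactivity
tcp_keepalives_interval = 10  -- re-probe every 10s
tcp_keepalives_count = 3      -- declare the peer dead after 3 failed probes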
On 01/23/2015 08:55 AM, Petr Novak wrote:
I've checked wal_keep_segments before I started the backup, and the
xlog dir on the master contained logs from several hours ago, which was
sufficient.
Also, if this were the case, then copying the affected clog file from the
master wouldn't solve the problem, as the required WALs would still be
missing.
create table json_data(row_id int, json_text jsonb);
insert into json_data values (1,
'[{"ID":"1","location_name":"Test"},{"ID":"2","location_name":"Examples"}]');
To search for an ID:
select row_id, parsed.* from json_data, lateral
jsonb_to_recordset(json_data.json_text) as parsed("ID" text, location_name text);
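To filter on a particular ID, a WHERE clause over the expanded columns then
works as usual (a short sketch building on the table above):
select row_id, parsed.*
from json_data, lateral
jsonb_to_recordset(json_data.json_text) as parsed("ID" text, location_name text)
where parsed."ID" = '2';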
On Jan 23, 2015, at 12:20 PM, Tim Smith wrote:
> So basically we're saying JSON in 9.4 is still a little way from where
> it needs to be in terms of real-world functionality ? Or am I being
> too harsh ? ;-)
"Doesn't meet my particular use-case exactly" is not quite the same thing.
--
-- Chri
So basically we're saying JSON in 9.4 is still a little way from where
it needs to be in terms of real-world functionality ? Or am I being
too harsh ? ;-)
On 23 January 2015 at 18:49, Adrian Klaver wrote:
> On 01/23/2015 10:15 AM, Tim Smith wrote:
>>>
>>> How does it not work?
>>> In other words
More than five years ago, Tom Lane wrote:
"It's not easy to tell whether a client has disconnected (particularly if
the network stack is unhelpful, which is depressingly often true).
Postgres will cancel a query if it gets told that the connection's been
dropped, but it will only discover this when
On Fri, Jan 23, 2015 at 01:56:53PM -0600, Suresh Raja wrote:
> Hi All:
>
> We are running a function with a loop in it. We tried
> commit;
> We are getting an error with the above command. How can I easily commit
> within a function.
>
>
> Thanks,
> -SR
PostgreSQL functions run within a transaction
On 01/23/2015 11:56 AM, Suresh Raja wrote:
Hi All:
We are running a function with a loop in it. We tried
commit;
We are getting an error with the above command. How can I easily commit
within a function.
Functions do not have transactions inside them. If you are using plpgsql
you can use EXCEPTION blocks.
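A minimal sketch of that pattern (the table name t is hypothetical): each
BEGIN ... EXCEPTION ... END block runs in its own subtransaction, so an
error in one iteration is rolled back without aborting the whole function,
even though an explicit COMMIT inside the function is still impossible:
create or replace function load_rows() returns void as $$
begin
  for i in 1..100 loop
    begin
      insert into t values (i);  -- t is a hypothetical target table
    exception when others then
      raise notice 'row % failed: %', i, sqlerrm;
    end;
  end loop;
end;
$$ language plpgsql;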
Hi
2015-01-23 20:56 GMT+01:00 Suresh Raja :
> Hi All:
>
> We are running a function with a loop in it. We tried
> commit;
> We are getting an error with the above command. How can I easily commit
> within a function.
>
>
It is not possible in PostgreSQL
Regards
Pavel Stehule
>
> Thanks,
> -SR
>
Hi All:
We are running a function with a loop in it. We tried
commit;
We are getting an error with the above command. How can I easily commit
within a function.
Thanks,
-SR
On 01/23/2015 10:15 AM, Tim Smith wrote:
How does it not work?
In other words what was the query you tried and what was the output?
As in, it doesn't work. Full stop
\d+ json_test
Table "public.json_test"
Column | Type | Modifiers | Storage | Stats target | Description
> How does it not work?
> In other words what was the query you tried and what was the output?
As in, it doesn't work. Full stop
\d+ json_test
Table "public.json_test"
Column | Type | Modifiers | Storage | Stats target | Description
--------+------+-----------+---------+--------------+-------------
On Jan 23, 2015, at 7:40 AM, Tim Smith wrote:
> re: (a)
>
>> see the documentation pertaining to 'jsonb indexing', to wit:
>>
>> -- Find documents in which the key "company" has value "Magnafone"
>> SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"company":
>> "Magnafone"}';
>
> No
I've checked wal_keep_segments before I started the backup, and the xlog
dir on the master contained logs from several hours ago, which was
sufficient.
Also, if this were the case, then copying the affected clog file from the
master wouldn't solve the problem, as the required WALs would still be
missing.
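A quick way to double-check both points on the master (a sketch; 9.x
function names):
show wal_keep_segments;               -- how many old segments are retained
select pg_current_xlog_location();    -- current WAL write position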
On 01/23/2015 08:36 AM, Petr Novak wrote:
Just the config files for the replica, all other dirs were removed.
Alright, let's look at the other end. You are using -x which is:
Using this option is equivalent of using -X with method fetch.
and
-X fetch is:
f
fetch
The transaction log files ar
Just the config files for the replica, all other dirs were removed.
P
On Jan 23, 2015 5:30 PM, "Adrian Klaver" wrote:
> On 01/23/2015 08:21 AM, Petr Novak wrote:
>
>> Hi Adrian,
>>
>> sure the command is as follows:
>>
>> pg_basebackup -h -p -D /data2/pgsql/baseb -P -v -U
>> replicator -x -c f
On 01/23/2015 08:21 AM, Petr Novak wrote:
Hi Adrian,
sure the command is as follows:
pg_basebackup -h -p -D /data2/pgsql/baseb -P -v -U
replicator -x -c fast
After that I moved the content of /data2/pgsql/baseb to actual datadir
and tried to start up the cluster.
Were there left over file
Hi Adrian,
sure the command is as follows:
pg_basebackup -h -p -D /data2/pgsql/baseb -P -v -U
replicator -x -c fast
After that I moved the content of /data2/pgsql/baseb to actual datadir and
tried to start up the cluster. On some servers it worked, on three it
didn't. The servers are of the sa
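As an aside, one way to avoid depending on retained WAL during the copy is
to stream the WAL alongside the base backup instead of fetching it at the
end. A sketch (host and port are placeholders, as above; -X stream is
available from 9.2 on):
pg_basebackup -h <host> -p <port> -D /data2/pgsql/baseb -P -v \
    -U replicator -X stream -c fast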
On 01/23/2015 06:31 AM, Petr Novak wrote:
Hi all,
I'd like to ask for help clarifying an issue I'm having.
I've recently prepared new servers in another datacenter for some of our
databases, which I want to set up as streaming replicas. There are
several instances (clusters) with size ranging f
On 01/23/2015 07:40 AM, Tim Smith wrote:
re: (a)
see the documentation pertaining to 'jsonb indexing', to wit:
-- Find documents in which the key "company" has value "Magnafone"
SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"company":
"Magnafone"}';
Nope, sorry, tried that. Doesn't work for me.
re: (a)
>see the documentation pertaining to 'jsonb indexing', to wit:
>
>-- Find documents in which the key "company" has value "Magnafone"
>SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"company":
> "Magnafone"}';
Nope, sorry, tried that. Doesn't work for me. Hence the question.
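For what it's worth, the likely gotcha: the documentation example assumes
the column holds a JSON object at the top level, while the document here is
a top-level array. With jsonb containment the pattern then has to be an
array as well. A sketch (table and column names taken from the create table
example above):
select row_id from json_data
where json_text @> '[{"ID":"2"}]';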
On Fri, Jan 23, 2015 at 8:00 AM, Tim Smith
wrote:
> Hi,
>
> I've tried RTFMing on the wonderful new 9.4 jsonb features, but
> there's a little bit of a lack of examples as to how to do stuff.
>
> I've got a document loaded in to a jsonb column that looks something like :
>
> [{"ID":"1","location_n
Hi all,
I'd like to ask for help clarifying an issue I'm having.
I've recently prepared new servers in another datacenter for some of our
databases, which I want to set up as streaming replicas. There are several
instances (clusters) with sizes ranging from 50-150GB. Some of them were set
up with
Thank you for your input.
When doing an "ORDER BY measurement_time DESC" I had hoped for the query
planner to first query the most recent "child" table, i.e. the table
holding the current month's data, and then move on to the oldest table,
since it knows the partitioning is based on measurement_time. I
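For the planner to skip or reorder children at all, each child needs a
CHECK constraint on measurement_time and constraint_exclusion enabled;
with an index on measurement_time in each child, 9.1+ can also produce a
MergeAppend plan that satisfies ORDER BY ... LIMIT without scanning every
partition in full. A minimal sketch (child table and constraint names are
hypothetical):
alter table measurements_2015_01
  add constraint measurements_2015_01_ck check (
    measurement_time >= '2015-01-01' and
    measurement_time <  '2015-02-01');
set constraint_exclusion = partition;  -- the default since 8.4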
Hi,
I've tried RTFMing on the wonderful new 9.4 jsonb features, but
there's a bit of a lack of examples showing how to do things.
I've got a document loaded into a jsonb column that looks something like:
[{"ID":"1","location_name":"Test"},{"ID":"2","location_name":"Examples"}]
Anyway, the
> On 22 Jan 2015, at 15:54, Pierre Hsieh wrote:
> rule:
> 1. just one column, of type integer, in the table
> 2. the column contains only 1 and 2, alternating, 50 times each, as follows
>
> 1
> 2
> 1
> 2
> 1
> 2
> 1
> 2
> .
create table test as select b from generate_series(1,50) s1(a),
generate_series(1,2) s2(b) order by a, b;
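The cross join repeats the 1,2 pair fifty times, and the ORDER BY fixes the
physical insertion order; note that a later SELECT without its own ORDER BY
is still not guaranteed to return rows in that order. A quick check:
select count(*) from test;   -- expect 100
select b from test limit 4;  -- typically 1, 2, 1, 2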
On Fri, Jan 23, 2015 at 4:20 PM, Christophe Pettus wrote:
> I'm working on a 9.4 logical replication plugin, mostly for my own
> edification, and have run into a limit of my knowledge: How can I retrieve
> the role (either oid or textual name) associated with the operations my
> callbacks are r
On 22/01/2015 15:37, tsunghan hsieh wrote:
> Hi
>
> I have a table which just has one column, shown below as Original Table.
> I want to duplicate all of the data a few times, keeping the same order,
> into a New Table. Is there anyone who can help me? Thanks
How about something like this? -
OK, so you're on 0.7.1.
Before we pursue much more, can you please update to bdr-plugin/next? The
0.8.0 release is pending shortly, and contains a lot of changes since
0.7.1, some of which pertain to init behaviour.
After 0.7.1 the structure was changed a bit, so you will need to first
check out the new branch.
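A sketch of that branch switch, assuming the clone from the original report:
cd 2ndquadrant_bdr
git fetch origin
git checkout bdr-plugin/next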
Hi Chris
When running: git rev-parse --short HEAD
Output: a63674e
The source package I got from:
git clone git://git.postgresql.org/git/2ndquadrant_bdr.git
git clean -d -fx ""
git checkout bdr/0.7.1
These are the logs from node 1:
d= p=28495 a=LOCATION: ExecAlterExtensionStmt, extension.c:27