Re: [GENERAL] Slow query plan used

2017-06-01 Thread Wetzel, Juergen (Juergen)
Andreas Kretschmer writes: > please consider my plan B) and increase the stats. See my other mail. I tried that also. Combined with the partial index. But still same result. Bill Moran writes: > LIKE queries are probably challenging to plan,
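The statistics increase Andreas refers to can be sketched like this; the table and column names are placeholders, since the preview does not show the actual schema:

```sql
-- Raise the per-column statistics target so ANALYZE samples more values,
-- giving the planner better selectivity estimates for the predicate.
ALTER TABLE documents ALTER COLUMN title SET STATISTICS 1000;
ANALYZE documents;
```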

Re: [GENERAL] jsonb case insensitive search

2017-06-01 Thread Guyren Howe
On Jun 1, 2017, at 13:44, Karl Czajkowski wrote: > If I remember correctly, isn't a compound index always just using > btree? In general, I have found better luck using several smaller > btree indices than one large compound one. Unless your entire query > can be answered from

Re: [GENERAL] repmgr cascade replication node delay

2017-06-01 Thread David G. Johnston
On Thu, Jun 1, 2017 at 3:30 PM, Juliano wrote: > The objective of an extra node 24 hours delayed is to recover some error > in the server. > Is it possible to make this configuration? > A quick search turns up:​ ​​recovery_min_apply_delay
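A minimal sketch of the delayed standby Juliano asks about, using recovery_min_apply_delay (available from PostgreSQL 9.4); connection details are placeholders:

```
# recovery.conf on the delayed cascading standby
standby_mode = 'on'
primary_conninfo = 'host=upstream-standby port=5432 user=replicator'
recovery_min_apply_delay = '24h'
```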

[GENERAL] repmgr cascade replication node delay

2017-06-01 Thread Juliano
Hi guys, I have a replication scenario master/slave and I would like to create one extra more node (cascade from slave) with a time delay of 24-hour replication. The objective of an extra node 24 hours delayed is to recover some error in the server. Is it possible to make this configuration?

Re: [GENERAL] dump to pg

2017-06-01 Thread Nicolas Paris
> If they aren't too big, you might get away by installing the express edition > of the respective DBMS, then import them using the native tools, then export > the data as CSV files. Thanks Thomas. Both are binaries. The Oracle one is a 30TB database...

Re: [GENERAL] dump to pg

2017-06-01 Thread Thomas Kellerer
Nicolas Paris wrote on 31.05.2017 at 16:43: Hi, I have dumps from oracle and microsoft sql server (no more details). Is it possible to load them "directly" into postgres (without oracle/mssql license)? dump -> csv -> postgres or something ? If those are binary dumps (e.g. a DataPump dump
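Once the data is out of the source DBMS as CSV, the Postgres side of the dump -> csv -> postgres route is a plain COPY; file and table names here are hypothetical:

```sql
-- Load a CSV export into an existing table with a matching column list.
COPY customers FROM '/tmp/customers.csv' WITH (FORMAT csv, HEADER true);
-- Or, from psql without server-side file access:
-- \copy customers FROM 'customers.csv' WITH (FORMAT csv, HEADER true)
```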

Re: [GENERAL] dump to pg

2017-06-01 Thread Nicolas Paris
Thanks all. The point is I only have access to the dump files, no Oracle/MSSQL server instance access. I have noticed the warning around the legality of that question. The best solution for me is to ask each vendor. Once I get an answer, I will come back here to share it. 2017-06-01 4:14 GMT-04:00 vinny

Re: [GENERAL] jsonb case insensitive search

2017-06-01 Thread Karl Czajkowski
On Jun 01, armand pirvu modulated: > Overall could it be that the optimizer blatantly ignores a scan index which > is cheaper than a table scan, or jsonb implementation still has a long way to > come up or the way it is used in my case is not the one designed for ? > If I remember correctly,

Re: [GENERAL] jsonb case insensitive search

2017-06-01 Thread armand pirvu
I apologize beforehand for replying again to my own reply. I know it is frowned upon. My inline comments. > On Jun 1, 2017, at 2:05 PM, armand pirvu wrote: > > Thank you Karl and David > > Ideally as far as I can tell the index would need to be show_id, file_id, >

Re: [GENERAL] jsonb case insensitive search

2017-06-01 Thread armand pirvu
Thank you Karl and David. Ideally, as far as I can tell, the index would need to be show_id, file_id, lower(…) The question is whether this is possible. Thanks, Armand > On Jun 1, 2017, at 12:24 PM, Karl Czajkowski wrote: > > On May 31, armand pirvu modulated: > >> The idea is
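The compound expression index Armand describes is possible; a sketch, with the table name, the jsonb column (doc), and the key ('name') invented for illustration:

```sql
CREATE INDEX ON some_table (show_id, file_id, lower(doc ->> 'name'));
-- The query must repeat the exact expression for the index to apply:
-- ... WHERE show_id = 1 AND file_id = 2 AND lower(doc ->> 'name') = 'foo'
```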

Re: [GENERAL] Rounding Double Precision or Numeric

2017-06-01 Thread Louis Battuello
> On Jun 1, 2017, at 12:58 PM, Scott Marlowe wrote: > > On Thu, Jun 1, 2017 at 10:42 AM, Steve Atkins > wrote: >> >>> On Jun 1, 2017, at 9:26 AM, Louis Battuello >>> wrote: >>> >>> Is the

Re: [GENERAL] jsonb case insensitive search

2017-06-01 Thread Karl Czajkowski
On May 31, armand pirvu modulated: > The idea is that I would like to avoid having an index for each key > possibly wanted to search, but rather say have the whole record then > search by the key and get the key value, thus having one index serving > multiple purposes so to speak > First,
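The "one index serving multiple purposes" idea maps naturally onto a single GIN index over the whole jsonb column; note that containment (@>) is case-sensitive, so this alone does not give case-insensitive search. Table and column names are placeholders:

```sql
CREATE INDEX ON some_table USING gin (doc);
-- Containment queries on any key can use the one GIN index:
-- SELECT * FROM some_table WHERE doc @> '{"name": "Foo"}';
```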

Re: [GENERAL] Rounding Double Precision or Numeric

2017-06-01 Thread Scott Marlowe
On Thu, Jun 1, 2017 at 10:42 AM, Steve Atkins wrote: > >> On Jun 1, 2017, at 9:26 AM, Louis Battuello >> wrote: >> >> Is the round() function implemented differently for double precision than >> for numeric? Forgive me if this exists somewhere in

Re: [GENERAL] Rounding Double Precision or Numeric

2017-06-01 Thread Steve Atkins
> On Jun 1, 2017, at 9:26 AM, Louis Battuello > wrote: > > Is the round() function implemented differently for double precision than for > numeric? Forgive me if this exists somewhere in the documentation, but I > can't seem to find it.

Re: [GENERAL] Rounding Double Precision or Numeric

2017-06-01 Thread Torsten Förtsch
This is documented in section 8.1.2 in the manual. ( https://www.postgresql.org/docs/9.6/static/datatype-numeric.html) NUMERIC rounds away from zero. IEEE 754 based data types (FLOAT, DOUBLE PRECISION) round to the closest even number. On Thu, Jun 1, 2017 at 6:26 PM, Louis Battuello
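The behaviour Torsten describes is easy to reproduce:

```sql
SELECT round(2.5::numeric);          -- 3  (away from zero)
SELECT round(2.5::double precision); -- 2  (half to even)
SELECT round(3.5::double precision); -- 4  (half to even)
```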

Re: [GENERAL] Slow query plan used

2017-06-01 Thread Tom Lane
Bill Moran writes: > LIKE queries are probably challenging to plan, especially when they're not > left-anchored: how can the planner be reasonably expected to estimate how > many rows will be matched by a given LIKE expression. Yeah, especially without any statistics.
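Not raised in this preview, but the usual tool for non-left-anchored LIKE is a trigram index, which both speeds matching and gives the planner something to estimate with; table and column names are hypothetical:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX ON documents USING gin (title gin_trgm_ops);
-- An index scan becomes possible even for:
-- SELECT * FROM documents WHERE title LIKE '%report%';
```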

[GENERAL] Rounding Double Precision or Numeric

2017-06-01 Thread Louis Battuello
Is the round() function implemented differently for double precision than for numeric? Forgive me if this exists somewhere in the documentation, but I can't seem to find it. I've noticed with 9.6 on OSX, the .5 rounding is handled differently between the types. (I haven't tested other

Re: [GENERAL] Slow query plan used

2017-06-01 Thread Bill Moran
On Thu, 1 Jun 2017 16:45:17 +0200 Andreas Kretschmer wrote: > > On 01.06.2017 at 14:07, Wetzel, Juergen (Juergen) wrote: > > > > On 31.05.2017 at 13:27, Wetzel, Juergen (Juergen) wrote: > Only 130 rows out of the 3 have ARCHIVED = 0 > >>> in this case i would

Re: [GENERAL] Slow query plan used

2017-06-01 Thread Andreas Kretschmer
On 01.06.2017 at 14:07, Wetzel, Juergen (Juergen) wrote: On 31.05.2017 at 13:27, Wetzel, Juergen (Juergen) wrote: Only 130 rows out of the 3 have ARCHIVED = 0 in this case i would suggest a partial index: create index on (archived) where archived = 0; Thanks, Andreas. Sorry for the
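The partial index Andreas proposes, with a placeholder table name since the preview omits it:

```sql
CREATE INDEX ON some_table (archived) WHERE archived = 0;
-- Only the ~130 matching rows are indexed, so queries filtering on
-- archived = 0 can use a very small index.
```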

Re: [GENERAL] Redo the filenode link in tablespace

2017-06-01 Thread Adrian Klaver
On 06/01/2017 03:47 AM, tel medola wrote: Did you get any help with this? /I formatted correctly before sending the email. Maybe you should ask yourself if the mail server did not remove the formatting./ I was talking about help with your relfilenode issue, I learned to deal with the

Re: [GENERAL] Slow query plan used

2017-06-01 Thread Wetzel, Juergen (Juergen)
On 31.05.2017 at 13:27, Wetzel, Juergen (Juergen) wrote: >>> Only 130 rows out of the 3 have ARCHIVED = 0 >> in this case i would suggest a partial index: >> create index on (archived) where archived = 0; > Thanks, Andreas. > > Sorry for the confusion about the table names. > The hint

Re: [GENERAL] Redo the filenode link in tablespace

2017-06-01 Thread tel medola
Did you get any help with this? *I formatted correctly before sending the email. Maybe you should ask yourself if the mail server did not remove the formatting.* Well the relpages, reltuples are estimated values that can be updated with an ANALYZE. *I can not make analyze on a table whose

Re: [GENERAL] Build PostgreSQL With XML Support on Linux

2017-06-01 Thread Osahon Oduware
Hi Tom, Thanks a lot for your response. You made my day! Your thought in the 2nd paragraph of your response (the "make distclean" stuff) was the solution. It was an oversight on my part. Thanks once again. On Wed, May 31, 2017 at 4:49 PM, Tom Lane wrote: > Osahon Oduware

Re: [GENERAL] dump to pg

2017-06-01 Thread vinny
On 2017-05-31 16:43, Nicolas Paris wrote: Hi, I have dumps from oracle and microsoft sql server (no more details). Is it possible to load them "directly" into postgres (without oracle/mssql license)? dump -> csv -> postgres or something ? Thanks a lot A very, *very* short trip to google