Re: [GENERAL] pg_dumping extensions having sequences with 9.6beta3

2016-07-22 Thread Michael Paquier
On Fri, Jul 22, 2016 at 6:27 PM, Philippe BEAUDOIN  wrote:
> I am currently playing with extensions. And I found a strange behaviour
> change with 9.6beta2 and 3 when pg_dumping a database with an extension
> having sequences. This looks like a bug, ... unless I did something wrong.
> [...]
> => as expected, with the latest minor versions of postgres 9.1 to 9.5, the
> sequences associated with the t1.c1 and t3.c1 columns are not dumped,
> while the sequence associated with t2.c1 is dumped.
> => with 9.6beta3 (as with beta2), the 3 sequences are dumped.

Thanks for the report! I haven't looked at the problem in detail yet,
but my guess is that this is owned by Stephen Frost. test_pg_dump does
not cover sequences yet; it would clearly be good to get coverage for
that. I am adding an open item as well.
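
For anyone who wants to double-check which relations are currently
registered as extension members in an affected database, a quick sketch
(standard catalogs only, any recent version):

SELECT e.extname, c.relname, c.relkind
  FROM pg_depend d
  JOIN pg_extension e ON e.oid = d.refobjid
                     AND d.refclassid = 'pg_extension'::regclass
  JOIN pg_class c     ON c.oid = d.objid
                     AND d.classid = 'pg_class'::regclass
 WHERE d.deptype = 'e';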
-- 
Michael




Re: [GENERAL] Array value from table as parameter

2016-07-22 Thread Jim Nasby

On 7/22/16 7:31 AM, Charles Clavadetscher wrote:

>> I need to pass an array as a parameter in a function, but it is part of
>> a trigger that gets a "new" value.
>>
>> I've tested it like
>>
>> select function_x(1,55,array[['x'],['y']])
>> or
>> select function_x(1,55,array[['x','y']])
>>
>> and it worked.
>>
>> But if I use
>>
>> select function_x(1,55,new.situations)
>>
>> it doesn't work, as the value comes like this: {"x","y"}
>> I couldn't find a function that converts {} values back to [] without
>> treating it as a string and using replace, which I think is not the ideal
>> solution because it may not handle more complex arrays.
>
> Would it help to simply cast the argument to TEXT[]?


That would simply be masking the problem.

'{"x","y"}' is a completely valid representation of an array,
but not one you should be getting out of code (because there's no reason
for array_out to toss the extra double quotes in there). It also doesn't
match either of your other examples.
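
A quick psql check of that claim (array_out only quotes elements when it
has to):

SELECT (array['x','y'])::text;                -- returns {x,y}, no extra quotes
SELECT '{"x","y"}'::text[] = array['x','y'];  -- returns true: same value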


Please create a stand-alone scenario that demonstrates the problem 
you're seeing. I suspect that in the process of doing that you're going 
to uncover a bug in your code, but if not then we'll have something 
concrete we can look at.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] yum repo, pgloader

2016-07-22 Thread John R Pierce

On 7/22/2016 5:50 PM, Adrian Klaver wrote:


> If I were to hazard a guess, it would have to do with pgloader
> migrating from being written in Python to being written in Common
> Lisp, which occurred at roughly the same time.


ah, yer right, the one in the pg 9.2 repo is 2.3.x, and indeed, pgloader 
v2 was python.


https://github.com/dimitri/pgloader/tree/pgloader-v2

I've spent all day trying to build the lisp code on centos 6 with a lot
of frustration.  last time I even sniffed around the edges of lisp was
in the 1970s.  I've successfully built sbcl 1.3, and it passes its self
test.  building pgloader, I'm currently stuck at...


[package pgloader]
..
..
.;
; caught ERROR:
;   READ error during COMPILE-FILE:
;
; Symbol "*IXF-STREAM*" not found in the IXF package.
;
;   Line: 70, Column: 25, File-Position: 2414
;
;   Stream: #

debugger invoked on a UIOP/LISP-BUILD:COMPILE-FILE-ERROR in thread
#:
  COMPILE-FILE-ERROR while
  compiling #

Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.

restarts (invokable by number or by possibly-abbreviated name):
  0: [RETRY] Retry
 compiling #.
  1: [ACCEPT   ] Continue, treating
 compiling #
 as having been successful.
  2: Retry ASDF operation.
  3: [CLEAR-CONFIGURATION-AND-RETRY] Retry ASDF operation after resetting the
 configuration.
  4: [ABORT] Give up on "pgloader"
  5: [CONTINUE ] Ignore runtime option --eval "(ql:quickload 
\"pgloader\")".
  6: Skip rest of --eval and --load options.
  7: Skip to toplevel READ/EVAL/PRINT loop.
  8: [EXIT ] Exit SBCL (calling #'EXIT, killing the 
process).

(UIOP/LISP-BUILD:CHECK-LISP-COMPILE-RESULTS NIL T T "~/asdf-action::format-action/" ((# . 
#)))
0]


and have no idea how to proceed. btw, `make` processes that go and
download huge chunks of code from 3rd party sites scare the bejeezus
out of me.


maybe I'll dig out the v2 python code and try it.   it should be a lot 
easier to deploy in our overseas manufacturing production data centers, 
anyways.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] yum repo, pgloader

2016-07-22 Thread Adrian Klaver

On 07/22/2016 05:24 PM, John R Pierce wrote:

> this question is mostly for devrim, who maintains the RHEL/CentOS/Fedora
> PGDG yum repositories...
>
> I note that pgloader is in the yum repo for PostgreSQL 9.2 on CentOS 6,
> but not 9.3, 9.4, or 9.5; wondering if there's a reason it was dropped?


If I were to hazard a guess, it would have to do with pgloader migrating
from being written in Python to being written in Common Lisp, which
occurred at roughly the same time.

--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] yum repo, pgloader

2016-07-22 Thread John R Pierce
this question is mostly for devrim, who maintains the RHEL/CentOS/Fedora 
PGDG yum repositories...


I note that pgloader is in the yum repo for PostgreSQL 9.2 on CentOS 6,
but not 9.3, 9.4, or 9.5; wondering if there's a reason it was dropped?




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] MongoDB 3.2 beating Postgres 9.5.1?

2016-07-22 Thread Attacker One
Hi,
I recently tested the YCSB benchmark too.
But contrary to my expectation, PG (9.5) is slower than MongoDB 3.2.
Paul said that creating the table with the no-logging option improved the
performance, and that this might be equivalent to MongoDB's behavior.
But according to the MongoDB documentation, MongoDB writes a journal log too.
So I think turning off logging is not a fair comparison.
Am I wrong about MongoDB's behavior?

(C)Bitnine, Kisung Kim, Ph.D
https://sites.google.com/site/kisungresearch/
E-mail : ks...@bitnine.net
Office phone : 070-4800-5890, 408-606-8602
US Mobile phone : 408-805-2192

2016-03-19 5:05 GMT+09:00 <p...@cmicdo.com>:

On Tuesday, March 15, 2016 7:39 PM, "p...@cmicdo.com"
<p...@cmicdo.com> wrote:

 > Your results are close enough to mine, I think, to prove the point.
 > And, I agree that the EDB benchmark is not necessary reflective of a
 > real-world scenario.
 >
 > However, the cache I'm referring to is PG's shared_buffer cache.
 > You can see the first run of the select causing a lot of disk reads.
 > The second identical run, reads purely from shared_buffers.
 >
 > What I don't understand is, why does a slightly different select from
 > the *same* table during the same session cause shared_buffers to be
 > blown out and re-read??
 >
 > I will see if I can try YCSB next week (I'm in workshops all week...)
 >
 > Thanks!

I was able to try YCSB today on both PG 9.5.1 and Mongo 3.2.  At first, PG
was running 4 times slower than Mongo.  Then I remembered about unlogged
tables (which I think is effectively how Mongo runs all the time), and remade
the PG table as UNLOGGED.  In a 50/50 read/update test over 1M records,
PG ran in 0.62 of the time of Mongo.
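
For reference, a minimal sketch of that change (YCSB's default table name
is an assumption here; SET UNLOGGED needs 9.5 or later):

-- skip WAL writes for the benchmark table
ALTER TABLE usertable SET UNLOGGED;
-- or declare it that way up front:
-- CREATE UNLOGGED TABLE usertable (ycsb_key varchar(255) PRIMARY KEY, ...);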

PG Load:

[OVERALL], RunTime(ms), 104507.0
[OVERALL], Throughput(ops/sec), 9568.737022400413
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 293.0
[CLEANUP], MinLatency(us), 293.0
[CLEANUP], MaxLatency(us), 293.0
[CLEANUP], 95thPercentileLatency(us), 293.0
[CLEANUP], 99thPercentileLatency(us), 293.0
[INSERT], Operations, 100.0
[INSERT], AverageLatency(us), 101.329235
[INSERT], MinLatency(us), 88.0
[INSERT], MaxLatency(us), 252543.0
[INSERT], 95thPercentileLatency(us), 121.0
[INSERT], 99thPercentileLatency(us), 141.0
[INSERT], Return=OK, 100

PG Run:
---
[OVERALL], RunTime(ms), 92763.0
[OVERALL], Throughput(ops/sec), 10780.16019318047
[READ], Operations, 499922.0
[READ], AverageLatency(us), 79.1722428698877
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 19935.0
[READ], 95thPercentileLatency(us), 94.0
[READ], 99thPercentileLatency(us), 112.0
[READ], Return=OK, 499922
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 222.0
[CLEANUP], MinLatency(us), 222.0
[CLEANUP], MaxLatency(us), 222.0
[CLEANUP], 95thPercentileLatency(us), 222.0
[CLEANUP], 99thPercentileLatency(us), 222.0
[UPDATE], Operations, 500078.0
[UPDATE], AverageLatency(us), 98.96430156895525
[UPDATE], MinLatency(us), 83.0
[UPDATE], MaxLatency(us), 26655.0
[UPDATE], 95thPercentileLatency(us), 127.0
[UPDATE], 99thPercentileLatency(us), 158.0
[UPDATE], Return=OK, 500078

Mongo Load:
---
[OVERALL], RunTime(ms), 133308.0
[OVERALL], Throughput(ops/sec), 7501.425270801452
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 1822.0
[CLEANUP], MinLatency(us), 1822.0
[CLEANUP], MaxLatency(us), 1822.0
[CLEANUP], 95thPercentileLatency(us), 1822.0
[CLEANUP], 99thPercentileLatency(us), 1822.0
[INSERT], Operations, 100.0
[INSERT], AverageLatency(us), 130.830678
[INSERT], MinLatency(us), 90.0
[INSERT], MaxLatency(us), 7147519.0
[INSERT], 95thPercentileLatency(us), 159.0
[INSERT], 99thPercentileLatency(us), 226.0
[INSERT], Return=OK, 100

Mongo Run:
-
[OVERALL], RunTime(ms), 149150.0
[OVERALL], Throughput(ops/sec), 6704.65973851827
[READ], Operations, 500837.0
[READ], AverageLatency(us), 98.13153980237084
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 28271.0
[READ], 95thPercentileLatency(us), 166.0
[READ], 99thPercentileLatency(us), 186.0
[READ], Return=OK, 500837
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 2387.0
[CLEANUP], MinLatency(us), 2386.0
[CLEANUP], MaxLatency(us), 2387.0
[CLEANUP], 95thPercentileLatency(us), 2387.0
[CLEANUP], 99thPercentileLatency(us), 2387.0
[UPDATE], Operations, 499163.0
[UPDATE], AverageLatency(us), 195.21505600375028
[UPDATE], MinLatency(us), 118.0
[UPDATE], MaxLatency(us), 4513791.0
[UPDATE], 95thPercentileLatency(us), 211.0
[UPDATE], 99thPercentileLatency(us), 252.0
[UPDATE], Return=OK, 499163


 >
 >
 > On Monday, March 14, 2016 3:34 AM, Dmitry Dolgov 
 > <9erthali...@gmail.com> wrote:
 >
 >
 > Hi, Paul
 >
 > I agree with Oleg, EDB benchmarks are strange sometimes. I did the same 
 > benchmarks several months ago. I never noticed the cache influence back 
 > then, so I tried to 

Re: [GENERAL] unique constraint with several null values

2016-07-22 Thread Jim Nasby

On 7/20/16 1:14 PM, Mark Lybarger wrote:

> This leads me to think I need to create 2^5 or 32 unique constraints to
> handle the various combinations of data that I can store.


Another option would be to create a unique index on a bit varying field
that sets a bit to true for each field that is NULL, WHERE <field> != 0.


Let me know if you want to go that route; I could probably add that to
http://pgxn.org/dist/count_nulls/ without much difficulty. Though probably
a better way to accomplish that would be to add a function to count_nulls
that spits out an array of the fields that are NULL; you could then do a
unique index on that WHERE array != '{}'.


Maybe a less obtuse option would be to use a boolean array. Storage 
would be ~8x larger, but since there should be very few rows I doubt 
that matters.
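
A minimal sketch of the array idea as a plain expression index, assuming a
three-column table (all names hypothetical). Rows conflict only when their
non-NULL values match and the same columns are NULL, so NULLs effectively
compare as equal without 32 separate constraints:

CREATE TABLE t (a int, b int, c int);
CREATE UNIQUE INDEX t_null_safe_uniq ON t (
    COALESCE(a, 0), COALESCE(b, 0), COALESCE(c, 0),
    (ARRAY[a IS NULL, b IS NULL, c IS NULL])
);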

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




[GENERAL] pg_dumping extensions having sequences with 9.6beta3

2016-07-22 Thread Philippe BEAUDOIN

Hi all,

I am currently playing with extensions. And I found a strange behaviour 
change with 9.6beta2 and 3 when pg_dumping a database with an extension 
having sequences. This looks like a bug, ... unless I did something wrong.


Here is a test case (a simple Linux shell script that can be easily
customized to reproduce it).



# pg_dump issue in postgres 9.6beta2 when dumping sequences linked to extensions

#
export PGBIN="/usr/local/pg96beta3/bin"
#export PGBIN="/usr/local/pg952/bin"
export EXTDIR="/tmp"
export PGDIR="/usr/local/pg96beta3/share/postgresql/extension"
#export PGDIR="/usr/local/pg952/share/postgresql/extension"
export PGPORT=5496
#export PGPORT=5495
export PGDATABASE='postgres'

echo "##"

echo " "
echo "psql: prepare the initial environment: 1 schema + 2 tables with 1 serial column in each"
echo "---"

$PGBIN/psql -a <<*END*
select version();
-- cleanup
DROP EXTENSION IF EXISTS myextension;
DROP SCHEMA IF EXISTS myextension CASCADE;
-- create
CREATE SCHEMA myextension;
CREATE TABLE myextension.t1 (c1 SERIAL);
CREATE TABLE myextension.t2 (c1 SERIAL);
*END*

echo "create first files for extension management"
echo "---"
cat >$EXTDIR/myextension.control <<*END*
default_version= '1'
comment= 'test'
directory= '$EXTDIR'
superuser= true
schema= 'myextension'
relocatable= false
*END*
sudo ln -s $EXTDIR/myextension.control $PGDIR/myextension.control

cat >$EXTDIR/myextension--unpackaged--1.sql <<*END*
-- for t1, the table and the sequence are added to the extension
ALTER EXTENSION myextension ADD TABLE myextension.t1;
ALTER EXTENSION myextension ADD SEQUENCE myextension.t1_c1_seq;
-- for t2, the associated sequence is not added to the extension for now
ALTER EXTENSION myextension ADD TABLE myextension.t2;
-- create a new t3 table
CREATE TABLE t3 (c1 SERIAL);
*END*

echo "psql: create the extension from unpackaged"
echo "--"
$PGBIN/psql -a <<*END*
-- create
CREATE EXTENSION myextension FROM unpackaged;
-- check
\dx
SELECT classid, c1.relname, objid, c2.relname, c2.relkind, refclassid,
       r.relname, refobjid
  FROM pg_depend, pg_class c1, pg_class r, pg_class c2
  WHERE deptype = 'e'
    AND classid = c1.oid AND refclassid = r.oid AND objid = c2.oid
    AND c1.relname = 'pg_class';
*END*

echo " "
echo "So we now have 3 tables having a serial column:"
echo " - t1 explicitly added to the extension, with its sequence"
echo " - t2 explicitly added to the extension, but without its sequence"
echo " - t3 directly created inside the extension"
echo " "

echo "sequences dumped by pg_dump (pg_dump |grep 'CREATE SEQUENCE')"
echo "---"
$PGBIN/pg_dump |grep 'CREATE SEQUENCE'

echo " "
echo "=> as expected, with the latest minor versions of postgres 9.1 to 9.5, the sequences associated with the t1.c1 and t3.c1 columns are not dumped,"
echo "   while the sequence associated with t2.c1 is dumped."
echo "=> with 9.6beta3 (as with beta2), the 3 sequences are dumped."
echo " "

echo "cleanup"
echo "---"
$PGBIN/psql <<*END*
DROP EXTENSION IF EXISTS myextension;
DROP SCHEMA IF EXISTS myextension CASCADE;
*END*

sudo rm $PGDIR/myextension.control
rm $EXTDIR/myextension*


And its output result:


##

psql: prepare the initial environment: 1 schema + 2 tables with 1 serial column in each
---
select version();
                                         version
----------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6beta3 on i686-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4, 32-bit
(1 row)

-- cleanup
DROP EXTENSION IF EXISTS myextension;
NOTICE:  extension "myextension" does not exist, skipping
DROP EXTENSION
DROP SCHEMA IF EXISTS myextension CASCADE;
NOTICE:  schema "myextension" does not exist, skipping
DROP SCHEMA
-- create
CREATE SCHEMA myextension;
CREATE SCHEMA
CREATE TABLE myextension.t1 (c1 SERIAL);
CREATE TABLE
CREATE TABLE myextension.t2 (c1 SERIAL);
CREATE TABLE
create first files for extension management
---
psql: create the extension from unpackaged
--
-- create
CREATE EXTENSION myextension FROM unpackaged;
CREATE EXTENSION
-- check
\dx
                  List of installed extensions
    Name     | Version |   Schema    |         Description
-------------+---------+-------------+------------------------------
 myextension | 1       | myextension | test
 plpgsql     | 1.0     | pg_catalog  | PL/pgSQL procedural language
(2 rows)

Re: [GENERAL] Array value from table as parameter

2016-07-22 Thread Charles Clavadetscher
Hello

> From: pgsql-general-ow...@postgresql.org 
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Aislan Luiz Wendling
> Sent: Mittwoch, 20. Juli 2016 19:17
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Array value from table as parameter
>
> Hi,
>
> I need to pass an array as a parameter in a function, but it is part of a
> trigger that gets a "new" value.
>
> I've tested it like 
>
> select function_x(1,55,array[['x'],['y']]) 
> or
> select function_x(1,55,array[['x','y']]) 
>
> and it worked.
>
> But if I use 
>
> select function_x(1,55,new.situations)
>
> it doesn't work, as the value comes like this: {"x","y"}
> I couldn't find a function that converts {} values back to [] without
> treating it as a string and using replace, which I think is not the ideal
> solution because it may not handle more complex arrays.

Would it help to simply cast the argument to TEXT[]?

select function_x(1,55,(new.situations)::TEXT[]);

I am not sure if the extra parentheses are necessary.
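
For what it's worth, a quick sketch suggests they are not; the cast binds
to the whole qualified name, so this should behave the same:

select function_x(1,55,new.situations::TEXT[]);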

Regards
Charles






Re: [GENERAL] pg_dump without any SET command in header of output plain text sql file

2016-07-22 Thread Alex Ignatov

Ok, thanks !

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

On 22.07.2016 06:49, Sameer Kumar wrote:

On Thu, Jul 21, 2016 at 7:14 PM Alex Ignatov wrote:


And what are the options you mentioned?


I stand corrected.
I think the only ones you can avoid are OWNERSHIP and TABLESPACE commands.


I think sed would be the best tool here.
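
A minimal sketch combining both suggestions (database name is hypothetical):

# keep ownership and tablespace commands out of the dump
pg_dump --no-owner --no-tablespaces mydb > mydb.sql

# or strip the remaining SET lines from a plain-format dump
pg_dump mydb | sed '/^SET /d' > mydb.sql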

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

On 20.07.2016 21:02, Sameer Kumar wrote:

On Thu, 21 Jul 2016, 1:17 a.m. Alex Ignatov
<a.igna...@postgrespro.ru> wrote:

Hello everyone!

Is there any way to make pg_dump (9.5) dump a table (with
data) without any SET commands in the header of the output
plain SQL file?


Yeah, you need to use a specific switch/option to disable each SET
command.


P.S. Yeah I know about sedding =)


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




--

Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com








Re: [GENERAL] For storing XML version in our table.

2016-07-22 Thread Charles Clavadetscher
> From: pgsql-general-ow...@postgresql.org 
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of zubair alam
> Sent: Mittwoch, 20. Juli 2016 09:09
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] For storing XML version in our table.

> Hi,
> How can I store my XML data with its version in a Postgres database table?

You can start reading here for the current version (9.5 at the time of this 
writing):

https://www.postgresql.org/docs/current/static/datatype-xml.html
https://www.postgresql.org/docs/current/static/functions-xml.html
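
And a minimal sketch of one way to keep the version alongside the document
(table and column names are only an illustration):

CREATE TABLE xml_docs (
    id      serial PRIMARY KEY,
    version text NOT NULL,  -- e.g. '1.0' from the XML declaration
    doc     xml  NOT NULL
);
INSERT INTO xml_docs (version, doc)
VALUES ('1.0', XMLPARSE(DOCUMENT '<?xml version="1.0"?><doc><a>x</a></doc>'));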

Regards
Charles



