Re: CHECK Constraint Deferrable

2023-10-02 Thread Andreas Joseph Krogh


On Friday 7 July 2023 at 13:50:44, Dilip Kumar <dilipbal...@gmail.com> wrote:
On Wed, Jul 5, 2023 at 3:08 PM Himanshu Upadhyaya wrote:
>
> Hi,
>
> Currently, there is no support for CHECK constraint DEFERRABLE in a create 
table statement.
> SQL standard specifies that CHECK constraint can be defined as DEFERRABLE.

I think it is a valid argument that this is part of the SQL standard, so
it would be a good addition to PostgreSQL. So +1 for the feature.

But I am wondering whether there are some real-world use cases for
deferred CHECK/NOT NULL constraints? For foreign key constraints, if
there is a cyclic dependency between two tables, deferring the
constraint is the simplest way to insert without error.


The real-world use case, at least for me, is when using an ORM. For large
object-graphs, ORMs have a tendency to INSERT first with NULLs and then UPDATE
the "NOT NULL" columns later.

“Rewrite the ORM” is not an option for most of us…
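
For illustration, a sketch of what this could look like with the proposed
feature (syntax per the SQL standard; the table is made up, and this does not
run on current PostgreSQL):

CREATE TABLE person (
    id bigint PRIMARY KEY,
    name text,
    CONSTRAINT name_set CHECK (name IS NOT NULL) DEFERRABLE INITIALLY DEFERRED
);

BEGIN;
INSERT INTO person (id) VALUES (1);              -- CHECK not evaluated yet
UPDATE person SET name = 'Andreas' WHERE id = 1; -- ORM fills in the value
COMMIT;                                          -- CHECK evaluated here, passes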



--

Andreas Joseph Krogh


Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails

2021-09-16 Thread Andreas Joseph Krogh

On Thursday 16 September 2021 at 18:57:39, Tom Lane <t...@sss.pgh.pa.us> wrote:
[...]
 > FWIW; I saw this Open Item was set to fixed, but I'm still getting this error
 > in 388726753b638fb9938883bdd057b2ffe6f950f5

 The open item was not about that parser shortcoming, nor did this patch
 claim to fix it.

 regards, tom lane 

Ok, sorry for the noise. 



--
 Andreas Joseph Krogh 

Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails

2021-09-16 Thread Andreas Joseph Krogh

On Thursday 16 September 2021 at 01:40:31, Tom Lane <t...@sss.pgh.pa.us> wrote:
[...]
 regression=# with recursive cte (x,r) as (
 select 42 as x, row(i, 2.3) as r from generate_series(1,3) i
 union all 
 select x, row((c.r).f1, 4.5) from cte c
 ) 
 select * from cte;
 ERROR: record type has not been registered 




FWIW; I saw this Open Item was set to fixed, but I'm still getting this error 
in 388726753b638fb9938883bdd057b2ffe6f950f5 

--
 Andreas Joseph Krogh 


ERROR: could not determine which collation to use for string comparison

2020-04-14 Thread Andreas Joseph Krogh

Guys; This errors out with: 

ERROR: could not determine which collation to use for string comparison 
 HINT: Use the COLLATE clause to set the collation explicitly.


The database is init'ed with: 
initdb -D $PGDATA -E utf8 --locale=nb_NO.UTF-8

13-dev HEAD as of 8128b0c152a67917535f50738ac26da4f984ddd9 

Works fine in <= 12 

=== 

create table person(
  id serial primary key,
  firstname varchar,
  lastname varchar
);

insert into person(firstname, lastname) values ('Andreas', 'Krogh');

CREATE OR REPLACE FUNCTION concat_lower(varchar, varchar) RETURNS varchar AS $$
  SELECT nullif(lower(coalesce($1, '')) || lower(coalesce($2, '')), '')
$$ LANGUAGE SQL IMMUTABLE;

select * from person pers
ORDER BY concat_lower(pers.firstname, pers.lastname) ASC;

===
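
A guess at the explicit-collation workaround the HINT points to (untested
against this build; the collation name depends on what the OS provides):

select * from person pers
ORDER BY concat_lower(pers.firstname, pers.lastname) COLLATE "nb_NO" ASC;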


--
 Andreas Joseph Krogh 

Sv: Just for fun: Postgres 20?

2020-02-11 Thread Andreas Joseph Krogh

This project already tried that:
https://www.postgresql.org/docs/12/history.html#HISTORY-POSTGRES95
Didn't last long...


--
 Andreas Joseph Krogh 

Re: Minimal logical decoding on standbys

2020-01-20 Thread Andreas Joseph Krogh

On Tuesday 21 January 2020 at 03:57:42, Amit Khandekar <amitdkhan...@gmail.com> wrote:
[...]
 Sorry for the late reply.
 This patch only supports logical decoding from standby. So it's just
 an infrastructure for supporting logical replication from standby. We
 don't support creating a publication from standby, but the publication
 on master is replicated on standby, so we might be able to create
 subscription nodes that connect to existing publications on standby,
 but basically we haven't tested whether the publication/subscription
 model works with a publication on a physical standby. This patch is
 focussed on providing a way to continue logical replication *after*
 the standby is promoted as master. 


Thanks for clarifying. 


--
 Andreas Joseph Krogh

Re: Minimal logical decoding on standbys

2020-01-16 Thread Andreas Joseph Krogh

On Thursday 16 January 2020 at 05:42:24, Amit Khandekar <amitdkhan...@gmail.com> wrote:
On Fri, 10 Jan 2020 at 17:50, Rahila Syed  wrote:
 >
 > Hi Amit,
 >
 > Can you please rebase the patches as they don't apply on latest master?

 Thanks for notifying. Attached is the rebased version. 

Will this patch enable logical replication from a standby server?


--
 Andreas Joseph Krogh 


Re: color by default

2019-12-31 Thread Andreas Joseph Krogh

On Tuesday 31 December 2019 at 14:35:39, Tom Lane <t...@sss.pgh.pa.us> wrote:
Peter Eisentraut  writes:
 > With the attached patch, I propose to enable the colored output by
 > default in PG13.

 FWIW, I shall be setting NO_COLOR permanently if this gets committed.
 I wonder how many people there are who actually *like* colored output?
 I find it to be invariably less readable than plain B&W text.

 I may well be in the minority, but I think some kind of straw poll
 might be advisable, rather than doing this just because. 


It's easier to spot errors/warnings when they are colored/emphasized, imo. Much
like colored output from grep/diff; we humans have color vision for a reason.



--
 Andreas Joseph Krogh 


ERROR: attribute number 6 exceeds number of columns 5

2019-11-26 Thread Andreas Joseph Krogh

Run the attached script and you'll get: 

psql -f error.sql -d test 
psql:error.sql:37: ERROR: attribute number 6 exceeds number of columns 5

Splitting up the alter-table like this makes it work: 

alter table access
  add column start_timestamp timestamp not null DEFAULT CURRENT_TIMESTAMP,
  add column end_timestamp timestamp;

alter table access
  add column tsrange TSRANGE NOT NULL GENERATED ALWAYS AS
    (tsrange(start_timestamp, end_timestamp, '[)')) STORED;
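
For reference, the failing form in error.sql is presumably the combined
statement (reconstructed here, not copied from the attachment):

alter table access
  add column start_timestamp timestamp not null DEFAULT CURRENT_TIMESTAMP,
  add column end_timestamp timestamp,
  add column tsrange TSRANGE NOT NULL GENERATED ALWAYS AS
    (tsrange(start_timestamp, end_timestamp, '[)')) STORED;

The generated column's expression references columns added in the same
statement, which is apparently what trips the attribute-number check.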

-- 

Andreas Joseph Krogh 


error.sql
Description: application/sql


Re: ERROR: failed to add item to the index page

2019-05-02 Thread Andreas Joseph Krogh
On Thursday 2 May 2019 at 21:38:02, Peter Geoghegan wrote:
 > Pushed, though final version does the test a little differently. It
 > adds the required heap TID space to itupsz, rather than subtracting it
 > from pgspc. This is actually representative of the underlying logic,
 > and avoids unsigned underflow.

Thanks!

--
 Andreas Joseph Krogh

Sv: Re: ERROR: failed to add item to the index page

2019-04-30 Thread Andreas Joseph Krogh
On Tuesday 30 April 2019 at 21:26:52, Andres Freund <and...@anarazel.de> wrote:
 > [...]
 > The standard on pg lists is to write in a manner that's usable for both
 > text mail readers and the archive. Doesn't terribly matter to the
 > occasional one-off poster on -general, but you're not that... So please
 > try to write readable mails for the PG lists.
 >
 > Greetings,
 >
 > Andres Freund

ACK.

--
 Andreas Joseph Krogh

Re: ERROR: failed to add item to the index page

2019-04-30 Thread Andreas Joseph Krogh
On Tuesday 30 April 2019 at 20:59:43, Andres Freund <and...@anarazel.de> wrote:
 [...]
 Andreas, unfortunately your emails are pretty unreadable. Check the
 quoted email, and the web archive:

 https://www.postgresql.org/message-id/VisenaEmail.41.51d7719d814a1f54.16a6f98a5e9%40tc7-visena

 Greetings,

 Andres Freund

I know that the text version is quite unreadable, especially when quoting. My
MUA is web-based and uses CKEditor for composing, and it doesn't care much to
format the text/plain version properly (I know, because I've written it, and
have yet to fix the Re: Sv: Re: Sv: subject issue...). But it has tons of
benefits CRM- and usage-wise, so I prefer to use it. But - who uses text/plain
these days? :-)

--
 Andreas Joseph Krogh

Sv: Re: ERROR: failed to add item to the index page

2019-04-30 Thread Andreas Joseph Krogh
On Tuesday 30 April 2019 at 19:58:31, Peter Geoghegan <p...@bowt.ie> wrote:
 On Tue, Apr 30, 2019 at 9:59 AM Peter Geoghegan wrote:
 > I'll start investigating the problem right away.

 I have found what the problem is. I simply neglected to make a
 conservative assumption about suffix truncation needing to add a heap
 TID to a leaf page's new high key in nbtsort.c (following commit
 dd299df8189), even though I didn't make the same mistake in
 nbtsplitloc.c. Not sure how I managed to make such a basic error.

 Andreas' test case works fine with the attached patch. I won't push a
 fix for this today.

 --
 Peter Geoghegan

Nice, thanks!

--
 Andreas Joseph Krogh

Re: ERROR: failed to add item to the index page

2019-04-30 Thread Andreas Joseph Krogh
On Tuesday 30 April 2019 at 18:48:45, Peter Geoghegan <p...@bowt.ie> wrote:
 On Tue, Apr 30, 2019 at 9:47 AM Tom Lane wrote:
 > Andreas Joseph Krogh writes:
 > > I have a 1.4GB dump (only one table) which reliably reproduces this error.
 > > Shall I share it off-list?
 >
 > That's awfully large :-(. How do you have in mind to transmit it?

 I've sent dumps that were larger than that by providing a Google Drive
 link. Something like that should work reasonably well.

I've sent you guys a link (Google Drive) off-list.

--
 Andreas Joseph Krogh

Re: ERROR: failed to add item to the index page

2019-04-30 Thread Andreas Joseph Krogh
On Tuesday 30 April 2019 at 16:27:05, Andreas Joseph Krogh <andr...@visena.com> wrote:
 [snip]
 Yep, happens without --with-llvm also. I'll try to load only the necessary
 table(s) to reproduce.

I have a 1.4GB dump (only one table) which reliably reproduces this error.
Shall I share it off-list?

--
 Andreas Joseph Krogh

Sv: Sv: Re: Sv: Re: ERROR: failed to add item to the index page

2019-04-30 Thread Andreas Joseph Krogh
On Tuesday 30 April 2019 at 16:03:04, Andreas Joseph Krogh <andr...@visena.com> wrote:
 On Tuesday 30 April 2019 at 15:53:50, Tom Lane <t...@sss.pgh.pa.us> wrote:
 > Andreas Joseph Krogh writes:
 > > I built with this: make distclean && ./configure
 > > --prefix=$HOME/programs/postgresql-master --with-openssl --with-llvm && make -j 8
 > > install-world-contrib-recurse install-world-doc-recurse
 >
 > --with-llvm, eh? Does it reproduce without that? What platform is
 > this on, what LLVM version?
 >
 > > I'll see if I can create a self contained example.
 >
 > Please.
 >
 > regards, tom lane

 Ubuntu 19.04

 $ llvm-config --version
 8.0.0

 "--with-llvm" was something I had from when pg-11 was master. It might not be
 needed anymore? I'm trying a fresh build without --with-llvm and a reload of
 the data now.

Yep, happens without --with-llvm also. I'll try to load only the necessary
table(s) to reproduce.

--
 Andreas Joseph Krogh

Sv: Re: Sv: Re: ERROR: failed to add item to the index page

2019-04-30 Thread Andreas Joseph Krogh
On Tuesday 30 April 2019 at 15:53:50, Tom Lane <t...@sss.pgh.pa.us> wrote:
 Andreas Joseph Krogh writes:
 > I built with this: make distclean && ./configure
 > --prefix=$HOME/programs/postgresql-master --with-openssl --with-llvm && make -j 8
 > install-world-contrib-recurse install-world-doc-recurse

 --with-llvm, eh? Does it reproduce without that? What platform is
 this on, what LLVM version?

 > I'll see if I can create a self contained example.

 Please.

 regards, tom lane

Ubuntu 19.04

$ llvm-config --version
8.0.0

"--with-llvm" was something I had from when pg-11 was master. It might not be
needed anymore? I'm trying a fresh build without --with-llvm and a reload of
the data now.

--
 Andreas Joseph Krogh

Sv: Re: ERROR: failed to add item to the index page

2019-04-30 Thread Andreas Joseph Krogh
On Tuesday 30 April 2019 at 15:43:16, Tom Lane <t...@sss.pgh.pa.us> wrote:
 Andreas Joseph Krogh writes:
 > With master (3dbb317d32f4f084ef7badaed8ef36f5c9b85fe6) I'm getting this:
 > visena=# CREATE INDEX origo_email_part_hdrvl_value_idx ON
 > public.origo_email_part_headervalue USING btree
 > (lower(substr((header_value)::text, 0, 1000)) varchar_pattern_ops);
 > psql: ERROR: failed to add item to the index page

 Hm, your example works for me on HEAD.

 Usually, the first thing to suspect when you're tracking HEAD and get
 bizarre failures is that you have a messed-up build. Before spending
 any time diagnosing more carefully, do "make distclean", reconfigure,
 rebuild, reinstall, then see if the problem is still there.

 (In theory, you can avoid this sort of failure with appropriate use
 of --enable-depend, but personally I don't trust that too much.
 I find that with ccache + autoconf cache + parallel build, rebuilding
 completely is fast enough that it's something I just do routinely
 after any git pull. I'd rather use up my remaining brain cells on
 other kinds of problems...)

 regards, tom lane

I built with this:

make distclean && ./configure --prefix=$HOME/programs/postgresql-master \
  --with-openssl --with-llvm && make -j 8 install-world-contrib-recurse \
  install-world-doc-recurse

It's probably caused by the data:

visena=# select count(*) from origo_email_part_headervalue;
  count
----------
 14609516
(1 row)

I'll see if I can create a self contained example.

--
 Andreas Joseph Krogh

ERROR: failed to add item to the index page

2019-04-30 Thread Andreas Joseph Krogh
With master (3dbb317d32f4f084ef7badaed8ef36f5c9b85fe6) I'm getting this:

visena=# CREATE INDEX origo_email_part_hdrvl_value_idx ON
public.origo_email_part_headervalue USING btree
(lower(substr((header_value)::text, 0, 1000)) varchar_pattern_ops);
psql: ERROR: failed to add item to the index page

The schema looks like this:

create table origo_email_part_headervalue (
  entity_id BIGSERIAL PRIMARY KEY,
  version int8 not null,
  header_value varchar NOT NULL,
  header_id int8 references origo_email_part_header (entity_id),
  value_index int NOT NULL DEFAULT 0,
  UNIQUE (header_id, value_index)
);

CREATE INDEX origo_email_part_hdrvl_hdr_id_idx
  ON origo_email_part_headervalue (header_id);

CREATE INDEX origo_email_part_hdrvl_value_idx
  ON origo_email_part_headervalue (lower(substr(header_value, 0, 1000)) varchar_pattern_ops);

(haven't tried any other version so I'm not sure when this started to happen)

--
Andreas Joseph Krogh

Sv: Re: Early WIP/PoC for inlining CTEs

2019-02-02 Thread Andreas Joseph Krogh
On Saturday 2 February 2019 at 20:01:01, Tom Lane <t...@sss.pgh.pa.us> wrote:
 [snip]
 There's nothing particularly stopping us from accepting
 "materialized" with a D in this syntax, instead of or in addition
 to "materialize"; though I hesitate to mention it for fear of
 another round of bikeshedding.

+1 for keeping it "imperative-style" – MATERIALIZE – consistent with other SQL-stuff.
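
(For context, a sketch of the syntax being discussed, using the "materialized"
spelling ultimately committed for PG12; the table name is a placeholder:)

WITH w AS MATERIALIZED (
  SELECT * FROM big_table
)
SELECT * FROM w LIMIT 1;

--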
 Andreas Joseph Krogh

Sv: Re: Requesting advanced Group By support

2018-10-10 Thread Andreas Joseph Krogh
On Wednesday 10 October 2018 at 18:46:15, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote:
Hi,

 On 10/09/2018 03:10 PM, Arun Kumar wrote:
 > Hi,
 > From MySQL 5.7, it supports SQL standard 99 and implements features
 > such as functional dependence on the GROUP BY columns, i.e., it detects
 > non-aggregated columns which are functionally dependent on the GROUP BY
 > columns (not included in GROUP BY) and then executes the query without
 > error.
 > For example,
 >
 > *SELECT a.sno,b.sno,a.name,b.location FROM Name AS a JOIN Location AS b
 > ON a.sno=b.sno GROUP BY a.sno,b.location *
 >
 > In this case, a.sno is a primary key so no need to include a.name in
 > GROUP By as it would be identified by the primary key and then for b.sno
 > which is again equated with a.sno (primary key) so no need to add this
 > as well but for b.location, we need to add it in GROUP BY or we should
 > use any aggregate function over this column to avoid error. For more
 > info, please check on the below link
 > https://dev.mysql.com/doc/refman/5.7/en/group-by-handling.html
 > Is there any plans on supporting this in Postgres in future versions ?
 >

 So, which part of this supposedly does not work in PostgreSQL?

 Consider this:

 test2=# create table t (id int primary key, b int, c int, d int);
 CREATE TABLE
 test2=# explain select * from t group by id, b, c;
                          QUERY PLAN
 ------------------------------------------------------------
  HashAggregate  (cost=33.12..51.62 rows=1850 width=16)
    Group Key: id
    ->  Seq Scan on t  (cost=0.00..28.50 rows=1850 width=16)
 (3 rows)

 test2=# explain select id, count(*) from t group by id, b, c;
                          QUERY PLAN
 ------------------------------------------------------------
  HashAggregate  (cost=37.75..56.25 rows=1850 width=20)
    Group Key: id
    ->  Seq Scan on t  (cost=0.00..28.50 rows=1850 width=12)
 (3 rows)

 So clearly we've already eliminated the functionally-dependent columns
 from the aggregation.

 regards
 
Too bad this doesn't:

create table t (id int NOT NULL UNIQUE, b int, c int, d int);

explain select * from t group by id, b, c;
ERROR:  column "t.d" must appear in the GROUP BY clause or be used in an
aggregate function
LINE 1: explain select * from t group by id, b, c;

(Apparently only a primary key, not a plain NOT NULL UNIQUE constraint, is
recognized as functionally determining the remaining columns.)

-- Andreas Joseph Krogh




Sv: Re: Query is over 2x slower with jit=on

2018-09-08 Thread Andreas Joseph Krogh
On Thursday 23 August 2018 at 09:14:56, Andreas Joseph Krogh <andr...@visena.com> wrote:
 On Thursday 23 August 2018 at 03:00:42, Jonathan S. Katz <jk...@postgresql.org> wrote:

 > On Aug 22, 2018, at 7:13 PM, Andres Freund  wrote:
 [snip]
 > For the archives sake: This likely largely is the consequence of
 > building with LLVM's expensive assertions enabled, as confirmed by
 > Jonathan over IM.

 I recompiled with the release version of LLVM. jit=on was still slower,
 but the discrepancy was not as bad as the previously reported result:

 jit = off
 Planning Time: 0.938 ms
 Execution Time: 935.599 ms

 jit = on
 Planning Time: 0.951 ms
 JIT:
   Functions: 184
   Generation Time: 17.605 ms
   Inlining: true
   Inlining Time: 20.522 ms
   Optimization: true
   Optimization Time: 1001.034 ms
   Emission Time: 665.319 ms
 Execution Time: 2491.560 ms

 However, it was still 2x+ slower, so still +1ing for open items.
 
 
I compiled with whatever switches the LLVM that ships with Ubuntu 18.04 is
built with, and without debugging or assertions.

With 11beta3 as of 825f10fbda7a5d8a48d187b8193160e5e44e4011 I'm repeatedly
getting these results with jit=on, after 10 runs:
 
 
 
Planning Time: 0.266 ms 
 JIT:
  Functions: 686
  Generation Time: 71.895 ms
  Inlining: false
  Inlining Time: 0.000 ms
  Optimization: false
  Optimization Time: 39.906 ms
  Emission Time: 589.944 ms
 Execution Time: 2198.928 ms
  
 
 
Turning jit=off gives this:
 
Planning Time: 0.180 ms
Execution Time: 938.451 ms
 
I can provide dataset offlist if anyone wants to look into this.
 
--
 Andreas Joseph Krogh



Re: Query is over 2x slower with jit=on

2018-08-23 Thread Andreas Joseph Krogh
On Thursday 23 August 2018 at 03:00:42, Jonathan S. Katz <jk...@postgresql.org> wrote:

 > On Aug 22, 2018, at 7:13 PM, Andres Freund  wrote:
 [snip]
 > For the archives sake: This likely largely is the consequence of
 > building with LLVM's expensive assertions enabled, as confirmed by
 > Jonathan over IM.

 I recompiled with the release version of LLVM. jit=on was still slower,
 but the discrepancy was not as bad as the previously reported result:

 jit = off
 Planning Time: 0.938 ms
 Execution Time: 935.599 ms

 jit = on
 Planning Time: 0.951 ms
 JIT:
   Functions: 184
   Generation Time: 17.605 ms
   Inlining: true
   Inlining Time: 20.522 ms
   Optimization: true
   Optimization Time: 1001.034 ms
   Emission Time: 665.319 ms
 Execution Time: 2491.560 ms

 However, it was still 2x+ slower, so still +1ing for open items.
 
 
I compiled with whatever switches the LLVM that ships with Ubuntu 18.04 is
built with, and without debugging or assertions.
 
--
 Andreas Joseph Krogh



Re: Query is over 2x slower with jit=on

2018-08-22 Thread Andreas Joseph Krogh
On Wednesday 22 August 2018 at 20:52:05, Andres Freund <and...@anarazel.de> wrote:
On 2018-08-22 19:51:12 +0200, Andreas Joseph Krogh wrote:
 > I thought JITing of prepared queries happened once (in "prepare")

 No, it happens when the first JITed function is executed.


 >  so it didn't have to do the JITing every time the query is
 > executed. Isn't the previously generated bytecode usable for
 > subsequent queries?

 No, not currently. There's some reasons preventing that (primarily that
 we currently rely on addresses of certain things not to change during
 execution). There's ongoing work to change that, but that's certainly
 not going to be ready for v11.

 Greetings,

 Andres Freund
 
 
Ok, thanks for clarifying.
 
--
 Andreas Joseph Krogh



Sv: Re: Query is over 2x slower with jit=on

2018-08-22 Thread Andreas Joseph Krogh
On Wednesday 22 August 2018 at 18:51:55, Andres Freund <and...@anarazel.de> wrote:
On 2018-08-22 18:39:18 +0200, Andreas Joseph Krogh wrote:
 > Just to be clear; The query really runs slower (wall-clock time), it's not
 > just the timing.

 I bet it's not actually running slower, it "just" takes longer to start
 up due to the JITing in each worker. I suspect what we should do is to
 multiply the cost limits by the number of workers, to model that. But
 without the fixed instrumentation that's harder to see...
 
Well, yes, that might be. By "runs" I meant from me hitting ENTER in psql to
the time the query finishes...

I thought JITing of prepared queries happened once (in "prepare") so it
didn't have to do the JITing every time the query is executed. Isn't
the previously generated bytecode usable for subsequent queries?
 
--
 Andreas Joseph Krogh



Sv: Re: Query is over 2x slower with jit=on

2018-08-22 Thread Andreas Joseph Krogh
On Wednesday 22 August 2018 at 18:12:41, Andres Freund <and...@anarazel.de> wrote:
Hi,

 On 2018-04-18 18:37:30 -0400, Robert Haas wrote:
 > On Wed, Apr 18, 2018 at 3:29 PM, Andres Freund wrote:
 > > Not convinced that that is true - the issue is more likely that JIT work
 > > in workers is counted as execute time... Gotta add that somehow, not sure
 > > what the best way would be.
 >
 > Oh, that does seem like something that should be fixed.  If that's
 > what is happening here, it's bound to confuse a lot of people.
 > Probably you need to add some code to
 > ExecParallelRetrieveInstrumentation.

 I had lost track of this, and we unfortunately hadn't added an open item
 back then.  I think we should add it now?

 RMT (with me recused), do you think we should accept the new code fixing
 this would entail? And thus that this should be an open item? It's
 arguably a new feature, although I don't find that a terribly convincing
 position.

 Greetings,

 Andres Freund
 
Just to be clear; The query really runs slower (wall-clock time), it's not 
just the timing.
 
--
 Andreas Joseph Krogh



Sv: Re: JIT compiling with LLVM v12

2018-08-22 Thread Andreas Joseph Krogh
On Wednesday 22 August 2018 at 16:36:00, Peter Eisentraut <peter.eisentr...@2ndquadrant.com> wrote:
On 22/08/2018 08:20, Noah Misch wrote:
 > Regardless of the choice of jit={on|off} default, these numbers tell me that
 > some or all of jit_*_cost defaults are too low.

 That was also my earlier analysis.

 I'm suspicious that we haven't had much feedback about this.  We've
 heard of one or two cases where LLVM broke a query outright, and that
 was fixed and that was a good result.  But we haven't heard anything
 about performance regressions.  Surely there must be some.  There hasn't
 been any discussion or further analysis of the default cost settings
 either.  I feel that we don't have enough information.

 Another problem is that LLVM is only enabled in some versions of
 packages.  For example, in the PGDG RPMs, it's enabled for RHEL 7 but
 not RHEL 6.  So you could be in for a surprise if you upgrade your
 operating system at some point.

 I would like, however, that we make a decision one way or the other
 before the next beta.  I've been handwaving a bit to users not to rely
 on the current betas for performance testing because the defaults might
 change later.  That's bad either way.
 
FWIW, our largest report-queries perform worse (than v10) with jit=on:
https://www.postgresql.org/message-id/VisenaEmail.24.e60072a07f006130.162d95c3e17%40tc7-visena

Disabling JIT makes them perform slightly better than v10.
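
For reference, a sketch of the cost knobs under discussion, with their v11
defaults (settable per session to experiment):

SET jit = on;                          -- master switch
SET jit_above_cost = 100000;           -- estimated plan cost before JITing at all
SET jit_inline_above_cost = 500000;    -- ... before inlining functions/operators
SET jit_optimize_above_cost = 500000;  -- ... before running expensive optimization passes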
 
--
 Andreas Joseph Krogh



Sv: Re: Logical decoding currently not usable for all due to lack of LargeObject-support

2018-08-16 Thread Andreas Joseph Krogh
On Thursday 16 August 2018 at 13:38:54, Andres Freund <and...@anarazel.de> wrote:
Hi,

 On 2018-08-16 13:31:14 +0200, Andreas Joseph Krogh wrote:
 > The lack of LargeObject-support makes it unusable for installations using 
it.
 >  
 > Any plans to support LOs in logical decoding so we can replicate databases
 > using them?

 I personally am not planning to work on it, but would probably spend
 some time reviewing design & patches.

 It's not entirely clear how to do so in a nice manner, given the
 essentially arbitrary writes the API allows.
 
I understand it's not an easy task. Unfortunately without this logical 
replication is useless for us.
 
Any PG-company wants to take on this task for funding?
 
--
 Andreas Joseph Krogh



Logical decoding currently not usable for all due to lack of LargeObject-support

2018-08-16 Thread Andreas Joseph Krogh
The lack of LargeObject support makes logical decoding unusable for
installations using large objects.
 
Any plans to support LOs in logical decoding so we can replicate databases 
using them?
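
A minimal illustration of the gap (the call below is just one way to write a
large object; any lo_* write behaves the same):

-- on the publisher: stores 'hello' via the large-object API
SELECT lo_from_bytea(0, 'hello'::bytea);
-- the resulting pg_largeobject changes are not logically decoded,
-- so a subscriber never sees the object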
 
--
Andreas Joseph Krogh


Sv: Sv: Re: Query is over 2x slower with jit=on

2018-04-26 Thread Andreas Joseph Krogh
On Wednesday 18 April 2018 at 18:26:03, Andreas Joseph Krogh <andr...@visena.com> wrote:
 On Wednesday 18 April 2018 at 17:50:55, Andres Freund <and...@anarazel.de> wrote:
 > With jit=on:
 > https://explain.depesz.com/s/vYB
 > Planning Time: 0.336 ms
 >  JIT:
 >   Functions: 716
 >   Generation Time: 78.404 ms
 >   Inlining: false
 >   Inlining Time: 0.000 ms
 >   Optimization: false
 >   Optimization Time: 43.916 ms
 >   Emission Time: 600.031 ms

 Any chance this is a debug LLVM build?


 > What's the deal with jit making it slower?

 JIT has cost, and sometimes it's not beneficial. Here our heuristics
 when to JIT appear to be a bit off. In the parallel world it's worse
 because the JITing is duplicated for parallel workers atm.
 
PostgreSQL is built with "--enable-debug --with-llvm". LLVM is the one which 
comes with Ubuntu-17.10.
 
Some more info;
 
Without --enable-debug (seems this didn't impact performance much):
 
First 5 executions (prepared-statement issued from psql)
 Planning Time: 47.634 ms
  JIT:
    Functions: 725
    Generation Time: 74.748 ms
    Inlining: true
    Inlining Time: 90.763 ms
    Optimization: true
    Optimization Time: 5822.516 ms
    Emission Time: 3089.127 ms
  Execution Time: 16375.996 ms
 
 
After 5 executions (prepared-statement issued from psql)
Planning Time: 0.385 ms
 JIT:
   Functions: 716
   Generation Time: 76.382 ms
   Inlining: false
   Inlining Time: 0.000 ms
   Optimization: false
   Optimization Time: 41.709 ms
   Emission Time: 613.074 ms
 Execution Time: 2031.830 ms
 
jit=off:
 
Planning Time: 0.171 ms
 Execution Time: 832.489 ms
 
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Sv: Re: Query is over 2x slower with jit=on

2018-04-18 Thread Andreas Joseph Krogh
On Wednesday 18 April 2018 at 17:50:55, Andres Freund <and...@anarazel.de> wrote:
On 2018-04-18 17:35:31 +0200, Andreas Joseph Krogh wrote:
 > With jit=on:
 > https://explain.depesz.com/s/vYB
 > Planning Time: 0.336 ms
 >  JIT:
 >   Functions: 716
 >   Generation Time: 78.404 ms
 >   Inlining: false
 >   Inlining Time: 0.000 ms
 >   Optimization: false
 >   Optimization Time: 43.916 ms
 >   Emission Time: 600.031 ms

 Any chance this is a debug LLVM build?


 > What's the deal with jit making it slower?

 JIT has cost, and sometimes it's not beneficial. Here our heuristics
 when to JIT appear to be a bit off. In the parallel world it's worse
 because the JITing is duplicated for parallel workers atm.
 
PostgreSQL is built with "--enable-debug --with-llvm". LLVM is the one which 
comes with Ubuntu-17.10.
 
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Query is over 2x slower with jit=on

2018-04-18 Thread Andreas Joseph Krogh
Hi all.
 
I don't know whether this is expected or not but I'm experiencing over 2x 
slowdown on a large query in PG-11 with JIT=on.
 
(query is a prepared statement executed with "explain analyze execute 
myprepared(arg1, arg2, ..., argn)")
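
A minimal sketch of that measurement pattern (query and parameter are made up
for illustration):

PREPARE myprepared(int) AS SELECT count(*) FROM some_table WHERE some_col > $1;
EXPLAIN ANALYZE EXECUTE myprepared(42);  -- repeated; the plancache may switch
                                         -- to a generic plan after 5 executions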
 
After 10 executions these are the results (the first 5 executed in > 16s, then 
the plan changed)
 
With jit=on:
https://explain.depesz.com/s/vYB
Planning Time: 0.336 ms 
 JIT:
  Functions: 716
  Generation Time: 78.404 ms
  Inlining: false
  Inlining Time: 0.000 ms
  Optimization: false
  Optimization Time: 43.916 ms
  Emission Time: 600.031 ms
 Execution Time: 2035.150 ms
 (385 rows)
  
With jit=off:
https://explain.depesz.com/s/X6mA
Planning Time: 0.371 ms 
 Execution Time: 833.941 ms
 (377 rows)
  
Both are master as of 55d26ff638f063fbccf57843f2c27f9795895a5c
 
The query largely consists of CTEs with aggregation which are FULL OUTER 
JOIN'ed.
 
On v10 the query executes in:
Execution time: 1159.628 ms
  
So v11 (with jit=off) is about 25% faster (due to parallel hash-join I think), 
which is nice!
 
What's the deal with jit making it slower?
 
--
Andreas Joseph Krogh



Sv: Re: WIP: Covering + unique indexes.

2018-04-07 Thread Andreas Joseph Krogh
On Saturday 7 April 2018 at 22:02:08, Teodor Sigaev <teo...@sigaev.ru> wrote:
Thanks to everyone, pushed.
 
Rock!
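
(For the archives: the feature just pushed is covering indexes via an INCLUDE
clause; a minimal illustration on a hypothetical table:)

CREATE TABLE orders (id bigint, total numeric);
-- uniqueness is enforced on id only; total is merely stored in the index
-- leaf pages so index-only scans can return it
CREATE UNIQUE INDEX orders_id_covering_idx ON orders (id) INCLUDE (total);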
 
--
Andreas Joseph Krogh

 


Sv: Re: ERROR: Aggref found in non-Agg plan node (introduced in commit 3bf05e096b9f8375e640c5d7996aa57efd7f240c)

2018-02-27 Thread Andreas Joseph Krogh
On Tuesday 27 February 2018 at 12:16:42, Jeevan Chalke <jeevan.cha...@enterprisedb.com> wrote:
 Hi,

 On Tue, Feb 27, 2018 at 3:46 PM, Andreas Joseph Krogh <andr...@visena.com> wrote:
 > $subject
 >
 > The query I'm using is 14K and I have not produced a test-case, so this is
 > just a heads-up.

 Rajkumar off-list shared the same issue with me, and I have posted a fix for it:
 https://www.postgresql.org/message-id/CAM2+6=x9kxqol2zqz00e6asbt9z+rfywbomhxj0+8fpaymz...@mail.gmail.com

 Can you please check if that fixes your test-case or not?

Applying fix_aggref_in_non-agg_error.patch (only) fixes the issue.
 
 
-- Andreas Joseph Krogh




ERROR: Aggref found in non-Agg plan node (introduced in commit 3bf05e096b9f8375e640c5d7996aa57efd7f240c)

2018-02-27 Thread Andreas Joseph Krogh
$subject
 
The query I'm using is 14K and I have not produced a test-case, so this is 
just a heads-up.
 
-- Andreas Joseph Krogh
 



Sv: Better Upgrades

2018-02-06 Thread Andreas Joseph Krogh
On Tuesday 6 February 2018 at 01:09:18, David Fetter <da...@fetter.org> wrote:
Folks,

 While chatting with Bruce about how to make something better than
 pg_upgrade, we (and by "we," I mean mostly Bruce) came up with the
 following.

 What needs improvement:

 - pg_upgrade forces a down time event, no matter how cleverly it's done.
 - pg_upgrade is very much a blocker for on-disk format changes.

 The proposal:

 - Add a new script--possibly Perl or Bash, which would:
     - Initdb a new cluster with the new version of PostgreSQL and a
       different port.
     - Start logical replication from the old version to the new
       version.
     - Poll until a pre-determined default amount of replication lag was 
observed, then:
       * Issue an ALTER SYSTEM on the new server to change its port to the old 
server's
       * Issue a pg_ctl stop -w to the old server
       * Issue a pg_ctl restart on the new server
       * Happiness!

 Assumptions underlying it:

 - Disk and similar resources are cheap enough for most users that
   doubling up during the upgrade is feasible.
 - The default upgrade path should require exactly one step.
 - Errors do not, by and large, have the capacity to violate an SLA.

 The proposal has blockers:

 - We don't actually have logical decoding for DDL, although I'm given
   to understand that Álvaro Herrera has done some yeoman follow-up
   work on Dimitri Fontaine's PoC patches.
 - We don't have logical decoding for DCL (GRANT/REVOKE)
 
- Need to add support for decoding LargeObjects.
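
A rough sketch of the proposed script's flow (all paths, ports, and thresholds
are placeholders, not part of the original proposal):

#!/usr/bin/env bash
set -e
initdb -D /data/new                               # new-version cluster
echo "port = 5433" >> /data/new/postgresql.conf   # temporary port
pg_ctl -D /data/new start -w
# ... set up logical replication old -> new, poll until lag is below threshold ...
psql -p 5433 -c "ALTER SYSTEM SET port = 5432"    # take over the old port
pg_ctl -D /data/old stop -w
pg_ctl -D /data/new restart -w                    # happiness!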
 
-- Andreas Joseph Krogh



Sv: Re: pspg - psql pager

2017-11-17 Thread Andreas Joseph Krogh
On Friday 17 November 2017 at 10:09:53, Marco Nenciarini <marco.nenciar...@2ndquadrant.it> wrote:
On 15/11/17 10:41, Pavel Stehule wrote:
 > Hi
 >
 > I finished new pager, that can be interesting for postgresql expert users.
 >

 It is now on apt.postgresql.org

   http://postgr.es/m/e1efc37-0007rv...@atalia.postgresql.org

 And will be in Debian Unstable soon as it is in the NEW queue for manual
 review from Debian's FTP Master.

    https://ftp-master.debian.org/new/pspg_0.5-1.html

 Regards,
 Marco
 
 
Coolio!
 
-- Andreas Joseph Krogh
 



Sv: pspg - psql pager

2017-11-15 Thread Andreas Joseph Krogh
On Wednesday 15 November 2017 at 10:41:45, Pavel Stehule <pavel.steh...@gmail.com> wrote:
 Hi

 I finished a new pager that can be interesting for postgresql expert users.

 It demonstrates some possibilities of a TUI, and I hope it shows some possible
 directions for future capabilities of psql.

 It is available as an rpm from our repository, or you can get the source code
 from github:

 https://github.com/okbob/pspg
  
 I invite any discussion about this project.
  
 Pavel
  
 p.s. I hope this pager is useful - I know some users who have used it
 intensively for a few months. But the source code is of proof-of-concept
 quality. It should be cleaned up next year.
 
 
Thanks for making this, "-X --no-mouse" made my day.
 
--
 Andreas Joseph Krogh