AI for query-planning?

2024-06-22 Thread Andreas Joseph Krogh


Hi, are there any plans for using some kind of AI for query-planning?

Can someone with more knowledge about this than I have please explain why it
might, or might not, be a good idea, and what the challenges are?



Thanks.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: prevent users from SELECT-ing from pg_roles/pg_database

2024-05-28 Thread Andreas Joseph Krogh


On Tuesday 28 May 2024 at 01:48:17, Tom Lane <t...@sss.pgh.pa.us> wrote:
Laurenz Albe  writes:
> On Mon, 2024-05-27 at 09:33 +0200, Andreas Joseph Krogh wrote:
>> I tried:
>> REVOKE SELECT ON pg_catalog.pg_database FROM public;
>> But that doesn't prevent a normal user from querying pg_database it seems…

> It works here.

Works for me too, although you'd have to do it over in each
database where you want it to be effective. (Although
pg_database is a shared catalog, the metadata about it
is not shared.)

regards, tom lane


Ah, that's what I was missing. Thanks for pointing that out, it's working as 
expected now.
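
For reference, a minimal sketch of that per-database step:

-- connect to each database in the cluster and repeat:
REVOKE SELECT ON pg_catalog.pg_database FROM public;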






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: prevent users from SELECT-ing from pg_roles/pg_database

2024-05-27 Thread Andreas Joseph Krogh


On Monday 27 May 2024 at 11:10:10, Laurenz Albe <laurenz.a...@cybertec.at> wrote:
On Mon, 2024-05-27 at 09:33 +0200, Andreas Joseph Krogh wrote:
> I tried:
>
> REVOKE SELECT ON pg_catalog.pg_database FROM public;
>
> But that doesn't prevent a normal user from querying pg_database it seems…

It works here.

Perhaps the "normal" user is a member of "pg_read_all_data".

Yours,
Laurenz Albe


Don't think so:

andreak@[local]:5432 16.3 andreak=# REVOKE pg_read_all_data FROM nisse;
WARNING:  role "nisse" has not been granted membership in role "pg_read_all_data" by role "postgres"
REVOKE ROLE


Any hints welcome.
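
For completeness, a sketch of checking such membership directly:

-- Is "nisse" a member of pg_read_all_data (directly or via inheritance)?
SELECT pg_has_role('nisse', 'pg_read_all_data', 'member');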






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: prevent users from SELECT-ing from pg_roles/pg_database

2024-05-27 Thread Andreas Joseph Krogh


On Friday 24 May 2024 at 19:02:13, Tom Lane <t...@sss.pgh.pa.us> wrote:
Andreas Joseph Krogh  writes:
> Hi, is there a way to prevent a user/role from SELECT-ing from certain 
> system-tables?
> I'd like the contents of pg_{user,roles,database} to not be visible to all 
> users.

As noted, you can in principle revoke the public SELECT grant from
those views/catalogs. However, the system is not designed to hide
such information, which means you'll have (at least) two kinds of
issues to worry about:

1. Queries or tools breaking that you didn't expect to break.
It's hardly uncommon for instance for queries on pg_class to
join to pg_roles to get the owner names for tables.

2. Information leaks. For example, mapping role OID to role name
is trivial with either regrole or pg_get_userbyid(), so it
wouldn't take long to scan the plausible range of role OIDs and
get all their names, even without SQL access to the underlying
catalog.
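
A sketch of such a probe (the OID range and the 'unknown%' filter are illustrative assumptions):

SELECT n, pg_get_userbyid(n::oid) AS rolname
FROM generate_series(1, 20000) AS n       -- plausible range of role OIDs
WHERE pg_get_userbyid(n::oid) NOT LIKE 'unknown%';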

regards, tom lane


I tried:
REVOKE SELECT ON pg_catalog.pg_database FROM public;
But that doesn't prevent a normal user from querying pg_database it seems…






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


prevent users from SELECT-ing from pg_roles/pg_database

2024-05-24 Thread Andreas Joseph Krogh


Hi, is there a way to prevent a user/role from SELECT-ing from certain 
system-tables?



I'd like the contents of pg_{user,roles,database} to not be visible to all 
users.



Thanks.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

RE: Performance degradation after upgrading from 9.5 to 14

2024-04-27 Thread Andreas Joseph Krogh


On Saturday 27 April 2024 at 11:46:26, Zahir Lalani <ZahirLalani@oliver.agency> wrote:

Same issue, and it took us ages to work out that it was JIT! The default is on,
and setting it off solves the problem. I have seen several blogs reporting the
same, so I wonder why this default is on?



I can confirm this, even in v16 we've turned JIT off.
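
For reference, a sketch of turning it off, per session or cluster-wide:

SET jit = off;               -- current session only
ALTER SYSTEM SET jit = off;  -- cluster-wide, then:
SELECT pg_reload_conf();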






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Parallel GIN index?

2024-04-06 Thread Andreas Joseph Krogh


Any plans for $subject?






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Sv: Support for dates before 4713 BC

2024-03-04 Thread Andreas Joseph Krogh


On Monday 4 March 2024 at 17:01:46, Koschicek-Krombholz, Bernhard <bernhard.koschicek-krombh...@oeaw.ac.at> wrote:

It would be very useful if PostgreSQL would support dates before 4713 BC.

Out of curiosity, are there any plans to implement it in the near future? 

https://www.postgresql.org/message-id/flat/ca438ff8331c4e109aa1b75a130948ac%40oeaw.ac.at






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: PostgreSQL Read-only mode usage

2024-02-28 Thread Andreas Joseph Krogh


On Wednesday 28 February 2024 at 13:55:34, Andreas Kretschmer <andr...@a-kretschmer.de> wrote:


On 28.02.24 at 13:34, Jason Long wrote:
> Hello,
> What is the use of a database in read-only mode?

a standby-database will also be in read-only mode.

Right, we have PowerBI connected to a standby-DB via streaming-replication.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Sv: PostgreSQL Read-only mode usage

2024-02-28 Thread Andreas Joseph Krogh


On Wednesday 28 February 2024 at 13:34:30, Jason Long <hack3r...@yahoo.com> wrote:
Hello,
What is the use of a database in read-only mode?

Thank you.

To be able to read, not modify, the data.

Reporting-tools like Microsoft PowerBI can connect to PG and generate nice 
reports.
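
Relatedly, a database can also be made read-only for ordinary sessions via a default (a sketch; the database name is hypothetical):

ALTER DATABASE reporting SET default_transaction_read_only = on;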






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: Improving pg_dump performance when handling large numbers of LOBs

2024-02-05 Thread Andreas Joseph Krogh


On Monday 5 February 2024 at 21:10:53, Wyatt Tellis <wyatt.tel...@gmail.com> wrote:

No, we don't have the ability to make schema changes, and the schema in the
PG15 copy needs to match what's in the PG 12 versions.

Well then, I guess it boils down to how badly the ones in charge want this
migration…






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: Improving pg_dump performance when handling large numbers of LOBs

2024-02-05 Thread Andreas Joseph Krogh


On Monday 5 February 2024 at 20:55:43, Wyatt Tellis <wyatt.tel...@gmail.com> wrote:

Yes, the LOBs themselves are tiny, but there are a lot of them (~150 million) 
which seem to be slowing down pg_dump. Note, we did not design/build this 
system and agree that use of LOBs for this purpose was not necessary.

Well, the data is there nonetheless; is it an option to convert it to bytea
before migration?
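
A sketch of such a conversion (table and column names are hypothetical):

ALTER TABLE docs ADD COLUMN content bytea;
UPDATE docs SET content = lo_get(content_lo);  -- inline each large object as bytea
SELECT lo_unlink(content_lo) FROM docs;        -- unlink the now-redundant large objects
ALTER TABLE docs DROP COLUMN content_lo;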






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: Daterange question

2024-01-19 Thread Andreas Joseph Krogh


On Saturday 20 January 2024 at 06:35:07, Tom Lane <t...@sss.pgh.pa.us> wrote:
[…]
Well, we can definitively state that the NOT makes this unindexable.
You need a WHERE clause that looks like
indexed-column indexable-operator pseudo-constant
which this isn't, nor does << have a negator operator that could
allow the NOT to be simplified out.

Wouldn't
drange && daterange(CURRENT_DATE, NULL, '[)')
serve the purpose? That should be indexable.

regards, tom lane
Yes it will, thanks.
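
For reference, the rewritten query against the schema in the follow-up below (a sketch, not re-run here):

explain analyse
select * from order_line
WHERE drange && daterange(CURRENT_DATE, NULL, '[)');  -- && is GiST-indexable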






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: Daterange question

2024-01-19 Thread Andreas Joseph Krogh

create table order_line (
  id serial primary key,
  start_date DATE NOT NULL,
  end_date DATE,
  drange daterange NOT NULL GENERATED ALWAYS AS (daterange(start_date, end_date, '[)')) STORED
);
CREATE INDEX order_line_not_end_idx ON order_line using gist(drange);
INSERT INTO order_line(start_date, end_date) values('2023-01-01', null);
INSERT INTO order_line(start_date, end_date) values('2023-01-01', '2024-01-01');
INSERT INTO order_line(start_date, end_date) values('2024-01-01', null);
INSERT INTO order_line(start_date, end_date) values('2025-01-01', null);
set enable_seqscan to false;

explain analyse select * from order_line WHERE (drange << daterange(CURRENT_DATE, NULL, '[)')); -- Uses index

┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ QUERY PLAN                                                                                                                        │
├─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Index Scan using order_line_not_end_idx on order_line (cost=0.14..8.15 rows=1 width=44) (actual time=0.008..0.008 rows=1 loops=1) │
│   Index Cond: (drange << daterange(CURRENT_DATE, NULL::date, '[)'::text))                                                        │
│ Planning Time: 0.043 ms                                                                                                           │
│ Execution Time: 0.013 ms                                                                                                          │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

explain analyse select * from order_line WHERE NOT (drange << daterange(CURRENT_DATE, NULL, '[)')); -- Does not use index

┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ QUERY PLAN                                                                                                             │
├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Seq Scan on order_line (cost=10000000000.00..10000000001.07 rows=3 width=44) (actual time=0.007..0.008 rows=3 loops=1) │
│   Filter: (NOT (drange << daterange(CURRENT_DATE, NULL::date, '[)'::text)))                                           │
│ Rows Removed by Filter: 1                                                                                              │
│ Planning Time: 0.077 ms                                                                                                │
│ Execution Time: 0.015 ms                                                                                               │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘





--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Daterange question

2024-01-19 Thread Andreas Joseph Krogh


I have order-lines with start-end like this:
start_date DATE NOT NULL,
end_date DATE,
drange daterange NOT NULL GENERATED ALWAYS AS (daterange(start_date, end_date, '[)')) STORED

and have an index on it using gist(drange).



I want to list all order-lines which do not have an end-date set in the past,
but I do want to show lines with start-dates in the future.



This seems to do what I want:
NOT (drange << daterange(CURRENT_DATE, NULL, '[)'))
But this doesn't use the index.



Any idea how to write a query so it uses the index on drange?



Thanks.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: How to eliminate extra "NOT EXISTS"-query here?

2023-11-25 Thread Andreas Joseph Krogh


On Saturday 25 November 2023 at 17:08:28, Tom Lane <t...@sss.pgh.pa.us> wrote:
Andreas Joseph Krogh  writes:
> -- This works, but I'd rather not do the extra EXISTS
> select * from test t
> WHERE (NOT ARRAY ['x', 'y', 'z', 't']::varchar[] <@ (select array_agg(s.v) from
> stuff s WHERE s.test_id = t.id)
> OR NOT EXISTS (
> select * from stuff s where s.test_id = t.id
> )
> )
> ;

> So, I want to return all entries in test not having any of ARRAY ['x', 'y', 
> 'z', 't'] referenced in the table stuff, and I'd like to have test.id="d" 
> returned as well, but in order to do that I need to execute the “or not 
> exists”-query. Is it possible to avoid that?

Probably not directly, but perhaps you could improve the performance of
this query by converting the sub-selects into a left join:

select * from test t
 left join
 (select s.test_id, array_agg(s.v) as arr from stuff s group by s.test_id) ss
 on ss.test_id = t.id
WHERE (NOT ARRAY ['x', 'y', 'z', 't']::varchar[] <@ ss.arr)
 OR ss.test_id IS NULL;

Another possibility is

...
WHERE (ARRAY ['x', 'y', 'z', 't']::varchar[] <@ ss.arr) IS NOT TRUE

but I don't think that's more readable really, and it will save little.

In either case, this would result in computing array_agg once for
each group of test_id values in "stuff", while your original computes
a similar aggregate for each row in "test". So whether this is better
depends on the relative sizes of the tables, although my proposal
avoids random access to "stuffs" so it will have some advantage.

regards, tom lane
Excellent, thanks!






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


How to eliminate extra "NOT EXISTS"-query here?

2023-11-25 Thread Andreas Joseph Krogh


Hi, I'm testing whether some dataset contains an array of elements and want to
return all rows “not containing the specified array”, including entries in the
master table that are not referenced at all.



I have the following schema:

drop table if exists stuff;
drop table if exists test;
CREATE TABLE test(
id varchar primary key
);

create table stuff(
id serial primary key,
test_id varchar NOT NULL REFERENCES test(id),
v varchar not null,
unique (test_id, v)
);

INSERT INTO test(id) values ('a');
INSERT INTO test(id) values ('b');
INSERT INTO test(id) values ('c');
INSERT INTO test(id) values ('d');

INSERT INTO stuff(test_id, v)
values ('a', 'x')
;

INSERT INTO stuff(test_id, v)
values ('b', 'x')
 , ('b', 'y')
;

INSERT INTO stuff(test_id, v)
values ('c', 'x')
 , ('c', 'y')
 , ('c', 'z')
;

select * from test t
WHERE NOT ARRAY['x']::varchar[] <@ (select array_agg(s.v) from stuff s WHERE s.test_id = t.id)
;

select * from test t
WHERE NOT ARRAY['x', 'y']::varchar[] <@ (select array_agg(s.v) from stuff s 
WHERE s.test_id = t.id)
;

select * from test t
WHERE NOT ARRAY['x', 'y', 'z']::varchar[] <@ (select array_agg(s.v) from stuff 
s WHERE s.test_id = t.id)
;

select * from test t
WHERE NOT ARRAY ['x', 'y', 'z', 't']::varchar[] <@ (select array_agg(s.v) from
stuff s WHERE s.test_id = t.id)
;

-- This works, but I'd rather not do the extra EXISTS
select * from test t
WHERE (NOT ARRAY ['x', 'y', 'z', 't']::varchar[] <@ (select array_agg(s.v) from
stuff s WHERE s.test_id = t.id)
  OR NOT EXISTS (
    select * from stuff s where s.test_id = t.id
  )
)
;



So, I want to return all entries in test not having any of ARRAY ['x', 'y', 
'z', 't'] referenced in the table stuff, and I'd like to have test.id="d" 
returned as well, but in order to do that I need to execute the “or not 
exists”-query. Is it possible to avoid that?






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Sv: Base files compatibility between PG11 and PG15

2023-08-17 Thread Andreas Joseph Krogh


On Thursday 17 August 2023 at 11:32:47, Abraham, Danny <danny_abra...@bmc.com> wrote:
Hi,

I have a database on Aurora@RDS.
It used V11.9, and was then upgraded automatically by AWS to V15.2. AWS states that
it relies on compatibility provided by the PG community.
We now experience problems with the data.
Has anyone met similar issues? Compatibility of data files?
The data-files are nowhere near compatible between major-versions, as stated 
clearly in all release notes:

https://www.postgresql.org/docs/release/12.0/

https://www.postgresql.org/docs/release/13.0/

https://www.postgresql.org/docs/release/14.0/

https://www.postgresql.org/docs/release/15.0/



They all state the same:

“A dump/restore using pg_dumpall or use of pg_upgrade or logical replication 
is required for those wishing to migrate data from any previous release.”






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: PostgreSQL and local HDD

2023-08-16 Thread Andreas Joseph Krogh


On Wednesday 16 August 2023 at 05:40:40, Ron <ronljohnso...@gmail.com> wrote:
On 8/15/23 02:23, Jason Long wrote:
[snip]
> Does PostgreSQL have an option to increase speed?

Like a Turbo button?
It actually has that, but you'll have to sacrifice some safety.
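
Presumably meaning settings that trade durability for speed (a sketch, not an endorsement):

SET synchronous_commit = off;  -- may lose the most recent commits on a crash, but no corruption
-- fsync = off (postgresql.conf) is faster still, but risks data corruption on crash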






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: PostgreSQL and GUI management

2023-08-15 Thread Andreas Joseph Krogh


On Tuesday 15 August 2023 at 22:52:48, Rob Sargent <robjsarg...@gmail.com> wrote:
[…]
All the nice buzzwordy things: dashboards, alarm bells, point-and-click drop 
downs for config values (with tool tip descriptions of what they might do), 
coffee dispenser. Things some feel they need in a management tool.

If you need these things, I'm sure there's a budget somewhere for investing in 
available commercial tools, some already mentioned in this thread.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: PostgreSQL and GUI management

2023-08-15 Thread Andreas Joseph Krogh


On Tuesday 15 August 2023 at 20:43:16, Rob Sargent <robjsarg...@gmail.com> wrote:
[…]
Adrian, 
Much as I love it, psql is not what I would call a management tool?

We have a multi-terabyte cluster in production which we manage using psql.

Locally I use IntelliJ IDEA for development (it has syntax highlighting, code
completion, introspection etc.). IDEA has a PostgreSQL plugin which is only
commercially available, and uses the same components as DataGrip, AFAIK.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: Delete values from JSON

2023-03-18 Thread Andreas Joseph Krogh


Excellent!
Thanks!

On Saturday 18 March 2023 at 14:26:57, Boris Zentner <b...@2bz.de> wrote:







On 17.03.2023 at 08:56, Andreas Joseph Krogh wrote:





Hi, in PG-14, how do I delete the keys "dunsNumber": "NaN":

{ "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6", "details": [ { 
"keyInformation": { "dunsNumber": "NaN", "organizationType": "LIMITED_COMPANY" 
} }, { "keyInformation": { "dunsNumber": "123", "organizationType": 
"LIMITED_COMPANY" } } ], "nisse": 123 }


So that the result becomes:

{ "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6", "details": [ { 
"keyInformation": { "organizationType": "LIMITED_COMPANY" } }, { 
"keyInformation": { "dunsNumber": "123", "organizationType": "LIMITED_COMPANY" 
} } ], "nisse": 123 }


Thanks.



Hi Andreas, this works for me.



➤ 2023-03-18 14:23:51 CET bz@[local]:5432/test
=# WITH data(j) AS (
     VALUES (CAST('{ "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6", "details": [ { "keyInformation": { "dunsNumber": "NaN", "organizationType": "LIMITED_COMPANY" } }, { "keyInformation": { "dunsNumber": "123", "organizationType": "LIMITED_COMPANY" } } ], "nisse": 123 }' AS jsonb))
   )
   SELECT jsonb_pretty(
            jsonb_set(j
                    , '{details}'
                    , (SELECT jsonb_agg(CASE
                                          WHEN ((elem -> 'keyInformation') ->> 'dunsNumber') = 'NaN'
                                          THEN jsonb_set(elem
                                                       , '{keyInformation}'
                                                       , (elem -> 'keyInformation') - 'dunsNumber')
                                          ELSE elem
                                        END)
                       FROM jsonb_array_elements(data.j -> 'details') AS elem))) AS nice_output
   FROM data
;
nice_output
-----------
{
    "nisse": 123,
    "details": [
        {
            "keyInformation": {
                "organizationType": "LIMITED_COMPANY"
            }
        },
        {
            "keyInformation": {
                "dunsNumber": "123",
                "organizationType": "LIMITED_COMPANY"
            }
        }
    ],
    "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6"
}
(1 row)
Time: 0,731 ms

--
Boris











--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: Delete values from JSON

2023-03-17 Thread Andreas Joseph Krogh


On Friday 17 March 2023 at 11:56:22, Romain MAZIÈRE <romain.mazi...@sigmaz-consilium.fr> wrote:
Hi,

If it is jsonb type, you can have a look at the documentation:
https://www.postgresql.org/docs/14/functions-json.html

There are some examples :


jsonb - text → jsonb

Deletes a key (and its value) from a JSON object, or matching string value(s) 
from a JSON array.

'{"a": "b", "c": "d"}'::jsonb - 'a' → {"c": "d"}

'["a", "b", "c", "b"]'::jsonb - 'b' → ["a", "c"]


jsonb - text[] → jsonb

Deletes all matching keys or array elements from the left operand.

'{"a": "b", "c": "d"}'::jsonb - '{a,c}'::text[] → {}


jsonb - integer → jsonb

Deletes the array element with specified index (negative integers count from 
the end). Throws an error if JSON value is not an array.

'["a", "b"]'::jsonb - 1 → ["a"]


jsonb #- text[] → jsonb

Deletes the field or array element at the specified path, where path elements 
can be either field keys or array indexes.

'["a", {"b":1}]'::jsonb #- '{1,b}' → ["a", {}]

Regards

I have looked at the docs, but they don't, AFAIU, show how to conditionally
delete a key based on its value, while leaving alone other keys in the JSONB
that don't match the value.

I want to delete all keys in the (pseudo) path 
details.keyInformation[*].dunsNumber if the value is "NaN".






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Delete values from JSON

2023-03-17 Thread Andreas Joseph Krogh


Hi, in PG-14, how do I delete the keys "dunsNumber": "NaN":


{ "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6", "details": [ { 
"keyInformation": { "dunsNumber": "NaN", "organizationType": "LIMITED_COMPANY" 
} }, { "keyInformation": { "dunsNumber": "123", "organizationType": 
"LIMITED_COMPANY" } } ], "nisse": 123 }


So that the result becomes:


{ "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6", "details": [ { 
"keyInformation": { "organizationType": "LIMITED_COMPANY" } }, { 
"keyInformation": { "dunsNumber": "123", "organizationType": "LIMITED_COMPANY" 
} } ], "nisse": 123 }


Thanks.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Sv: PostgreSQL configuration in a VM

2023-02-17 Thread Andreas Joseph Krogh


On Friday 17 February 2023 at 09:13:10, Sebastien Flaesch <sebastien.flae...@4js.com> wrote:
Hi!


I was wondering if there is any specific configuration setting that should be
used with PostgreSQL when running in a VM...


Is there anything obvious that must be set to get the best performance with such
a config?


Sorry for this general question...
In my experience the most important parameter when running in a VM is 
random_page_cost, and for that to be set to a sane value you need to know the 
characteristics of the disk available to your VM. In other words, disk IO is 
what you should be worried about as VMs are pretty good at scaling CPU-wise.
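
A sketch (the value is illustrative; SSD-backed VMs often warrant far less than the default 4.0):

ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();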






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: How to store "blobs" efficiently for small and large sizes, with random access

2022-10-20 Thread Andreas Joseph Krogh


On Thursday 20 October 2022 at 10:32:44, Dominique Devienne <ddevie...@gmail.com> wrote:
On Wed, Oct 19, 2022 at 5:05 PM Laurenz Albe  wrote:
> On Wed, 2022-10-19 at 12:48 +0200, Dominique Devienne wrote:
> > On Wed, Oct 19, 2022 at 12:17 PM Andreas Joseph Krogh  
wrote:
> > > First advice, don't do it. We started off storing blobs in DB for “TX 
safety”
> > Not really an option, I'm afraid.
> You should reconsider. Ruling out that option now might get you into trouble
> later. Large Objects mean trouble.

Andreas, Ericson, Laurenz, thanks for the advice.
I'll be sure to discuss these concerns with the team.
[…]
But before I finish this thread for now, I'd like to add that I
consider unfortunate a state of affairs where
NOT putting the data in the DB is the mostly agreed upon advice. It
IMHO points to a weak point of
PostgreSQL, which does not invest in those use-cases with large data,
perhaps with more file-system
like techniques. Probably because most of the large users of
PostgreSQL are more on the "business"
side (numerous data, but on the smaller sizes) than the "scientific"
side, which (too often) uses files and
files-in-a-file formats like HDF5.
[…]


Note that my views were not PG-specific and apply to all
applications/architectures involving RDBMS.
From my point of view having all data in the RDBMS is (maybe) theoretically sound,
but given that IO is not instant I consider it a design-flaw, for reasons
I've already pointed out.





--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: How to store "blobs" efficiently for small and large sizes, with random access

2022-10-19 Thread Andreas Joseph Krogh


On Wednesday 19 October 2022 at 13:21:38, Dominique Devienne <ddevie...@gmail.com> wrote:
On Wed, Oct 19, 2022 at 1:00 PM Andreas Joseph Krogh  
wrote:
> Ok, just something to think about;

Thank you. I do appreciate the feedback.

> Will your database grow beyond 10TB with blobs?

The largest internal store I've seen (for the subset of data that goes
in the DB) is shy of 3TB.
But we are an ISV, not one of our clients, which have truly massive
scale for data.
And they don't share the exact scale of their proprietary data with me...

> If so try to calculate how long it takes to restore, and comply with SLA,
> and how long it would have taken to restore without the blobs.

Something I don't quite get is why somehow backup is no longer needed
if the large blobs are external?
i.e. are you saying backups are so much more worse in PostgreSQL than
with the FS? I'm curious now.
I'm not saying you don't need backup (or redundancy) of other systems holding
blobs, but moving them out of the RDBMS lets you restore the DB to a consistent
state, and be able to serve clients, faster. In my experience it's quite unlikely
that your (redundant) blob-store needs crash-recovery at the same time your DB
does. The same goes for PITR, needed because of some logical error (like a
client deleted some data they shouldn't have), which is much faster without
blobs in the DB and doesn't affect the blobstore at all (if you have a smart
insert/update/delete-policy there).



Also, managing the PostgreSQL server will be the client's own concern
mostly. We are not into Saas here.
As hinted above, the truly massive data is already not in the DB, used
by different systems, and processed
down to the GB sized inputs all the data put in the DB is generated
from. It's a scientific data heavy environment.
And one where security of the data is paramount, for contractual and
legal reasons. Files make that harder IMHO.

Anyways, this is straying from the main theme of this post I'm afraid.
Hopefully we can come back on the main one too. --DD

There's a reason “everybody” advises moving blobs out of the DB, I've learned.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: How to store "blobs" efficiently for small and large sizes, with random access

2022-10-19 Thread Andreas Joseph Krogh


On Wednesday 19 October 2022 at 12:48:24, Dominique Devienne <ddevie...@gmail.com> wrote:

On Wed, Oct 19, 2022 at 12:17 PM Andreas Joseph Krogh <andr...@visena.com> wrote:



First advice, don't do it. We started off storing blobs in DB for “TX safety”

Not really an option, I'm afraid.


, but backup/restore quickly became too cumbersome so we ended up moving all 
blobs out and only store reference in DB.

This required us to make a “vacuum system” that cleans up the blob-storage 
regularly as ROLLBACK/crash can make it out of sync.

Note the fact the data is spread in many mostly independent schemas.
Might ease the backup/restore? I'm not much of a DBA though...


We chose storing as LO because with it, streaming large blobs (not using much 
memory) actually worked, with JDBC at least.


I'm in C++, with I believe efficient use of binary binds and results, and use 
of COPY as much as possible,
so as good as it gets I think (that I know of, with libpq), in terms of 
performance. Haven't looked at libpq's
new in v14 pipeline mode yet though.

Investigated Cursor vs Statement too, and it's a tradeoff between latency and 
throughput.
Not sure how good JDBC drivers for PostgreSQL are. I'm doing my best to go 
fast with libpq.

In any case, thanks for your input. But it's not really a question of "if". 
But of "how".

Putting thousands of large blobs in the file system is a no go. Assuming the 
clients
can even see the file system the server sees. This is a 2-tier system, there's 
no mid-tier
that would somehow magically handle proper security and lifetime management of 
these blobs.

Thanks, --DD
Ok, just something to think about; will your database grow beyond 10TB with
blobs? If so, try to calculate how long it takes to restore while still complying
with your SLA, and how long it would have taken to restore without the blobs.



PS: Our blobstore is not “the file system”, but SeaweedFS.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Sv: How to store "blobs" efficiently for small and large sizes, with random access

2022-10-19 Thread Andreas Joseph Krogh
.. I.e. with real client-case of 3K projects,
 that puts an average of only 10GB of lo's per-project (i.e. schema),
which could very well be problematic...











--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Sv: How can I set up Postgres to use given amount of RAM?

2022-06-27 Thread Andreas Joseph Krogh


On Sunday 26 June 2022 at 20:40:01, W.P. <laure...@wp.pl> wrote:
Question in topic:

"How can I set up Postgres to use a given amount of RAM?"

I now have a laptop with 8GB of RAM, and I can see Linux uses no more than 2-3GB.

So my question is how to FORCE PostgreSQL to

use, let's say, 2-4GB of RAM for caching tables I run queries on?

As I can see disk activity running queries.


W.P.
The closest thing I can think of is effective_cache_size:
https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-EFFECTIVE-CACHE-SIZE








--
Andreas Joseph Krogh


Re: Logical replication of large objects

2022-06-09 Thread Andreas Joseph Krogh


On Thursday 9 June 2022 at 20:24:56, Joshua Drake <j...@commandprompt.com> wrote:

Large objects are largely considered a deprecated feature.


Though I like the idea, was there any consensus on -hackers?
Nobody seems interested in it…






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Logical replication of large objects

2022-06-05 Thread Andreas Joseph Krogh


I started this thread 5 years ago:
https://www.postgresql.org/message-id/flat/7c70d9bd-76fc-70fa-cfec-14f00a4a49c3%40matrix.gatewaynet.com#15cbf1c82be9341e551e60e287264380



We'd be willing to help funding development needed to support Large Object 
logical replication.

Anyone interested?






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Max sane value for join_collapse_limit?

2022-06-03 Thread Andreas Joseph Krogh


Hi, I have set join_collapse_limit = 12 in production, but I'm thinking about 
raising it to 16.

On modern HW is there a “sane maximum” for this value?

I can easily spare 10ms for extra planning per query on our workload, is 16 
too high?



Thanks.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Will Barman support restore of single database?

2022-01-28 Thread Andreas Joseph Krogh


Hi, pgBackRest supports restoring only a specified database:
https://pgbackrest.org/command.html#command-restore


11.1.3
Include Database Option (--db-include)

Restore only specified databases.

This feature allows only selected databases to be restored. Databases not
specifically included will be restored as sparse, zeroed files to save space
but still allow PostgreSQL to perform recovery. After recovery, the databases
that were not included will not be accessible but can be removed with the
drop database command.



NOTE:
built-in databases (template0, template1, and postgres) are always restored 
unless specifically excluded.
The --db-include option can be passed multiple times to specify more than one 
database to include.

See Restore Selected Databases
(https://pgbackrest.org/user-guide.html#restore/option-db-include) for
additional information and caveats.

Are there any plans to add support for this to Barman?




--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: Getting json-value as varchar

2022-01-06 Thread Andreas Joseph Krogh

On Thursday 6 January 2022 at 14:42:21, Pavel Stehule <pavel.steh...@gmail.com> wrote:

Hi 

On Thu, 6 Jan 2022 at 14:33, Andreas Joseph Krogh <andr...@visena.com> wrote:
On Thursday 6 January 2022 at 14:29:12, David G. Johnston <david.g.johns...@gmail.com> wrote:
[..] 
The fact is that the ‘->>’ operator gives you the needed output.

David J. 

Yeah, I think that's the correct answer for this use-case. 

It is true that some other casting function is missing. I am not sure if this 
is part of ANSI/SQL json support. 

Now, you can use helper function 

CREATE OR REPLACE FUNCTION public.to_text(jsonb)
 RETURNS text
 LANGUAGE sql
 AS $function$
 select jsonb_array_element_text($1, 0)
 $function$ 

postgres=# select to_text((jsonb '{"a":"Ahoj"}')['a']);
┌─────────┐
│ to_text │
╞═════════╡
│ Ahoj    │
└─────────┘
(1 row)

Regards 

Pavel 

Thanks! 



-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com


Re: Getting json-value as varchar

2022-01-06 Thread Andreas Joseph Krogh

On Thursday 6 January 2022 at 14:29:12, David G. Johnston <david.g.johns...@gmail.com> wrote:
[..] 
The fact is that the ‘->>’ operator gives you the needed output.

David J. 

Yeah, I think that's the correct answer for this use-case. 



-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com


Re: Getting json-value as varchar

2022-01-06 Thread Andreas Joseph Krogh

On Thursday 6 January 2022 at 14:13:40, David G. Johnston <david.g.johns...@gmail.com> wrote:
On Thursday, January 6, 2022, Andreas Joseph Krogh <andr...@visena.com> wrote:


I think you misread my message. What I want is for the subscript-version: 
('{"key":"value"}'::jsonb)['key'] 
to return: 

┌──────────┐
│ ?column? │
├──────────┤
│ value    │
└──────────┘

instead of

┌─────────┐
│ jsonb   │
├─────────┤
│ "value" │
└─────────┘


 A given syntax/operator can only return one thing so what you want is 
fundamentally not possible. 

That's not very helpful. Apparently I'm after a solution which either casts
this to varchar or a function that takes JSONB as argument and outputs the
first field-value as varchar.



-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com


Sv: Recommended storage hardware

2022-01-06 Thread Andreas Joseph Krogh

On Thursday 6 January 2022 at 13:53:21, Levente Birta <blevi.li...@gmail.com> wrote:
Hi all

What storage is recommended nowadays?

Currently I'm using two of the SSD DC P3700 Series add-in cards
(SSDPEDMD400G4) in soft RAID 1, but it's pretty old and I have to
replace the whole server.

Before, I used Intel too and I'm very satisfied, but now that Intel sold
the SSD business it looks like I have to choose something else.

Or how about the Optane? Is it worth it?

We use Micron 9300: 
https://www.micron.com/products/ssd/product-lines/9300 

...with software RAID10 on Linux and XFS. 

Works very well! 



-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com


Re: Getting json-value as varchar

2022-01-06 Thread Andreas Joseph Krogh

On Thursday 6 January 2022 at 13:31:19, Thomas Markus <t.mar...@proventis.net> wrote:

On 06.01.22 at 13:28, Andreas Joseph Krogh wrote:
Hi, in PG-14 this query returns "value" (with double-quotes): 
SELECT ('{"key":"value"}'::jsonb)['key']; 
┌─────────┐
│ jsonb   │
├─────────┤
│ "value" │
└─────────┘
(1 row)


and this returns 'value' (without the quotes): 
SELECT ('{"key":"value"}'::jsonb)->> 'key'; 
┌──────────┐
│ ?column? │
├──────────┤
│ value    │
└──────────┘
(1 row)

How do I use the subscript syntax and get the result as varchar instead of
JSONB, assuming I know the JSON-field is a String?

simply cast your value

SELECT (('{"key":"value"}'::jsonb) ->> 'key')::text;

best regards
Thomas

I think you misread my message. What I want is for the subscript-version: 
('{"key":"value"}'::jsonb)['key'] 
to return: 

┌──────────┐
│ ?column? │
├──────────┤
│ value    │
└──────────┘

instead of

┌─────────┐
│ jsonb   │
├─────────┤
│ "value" │
└─────────┘





-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com


Getting json-value as varchar

2022-01-06 Thread Andreas Joseph Krogh

Hi, in PG-14 this query returns "value" (with double-quotes): 
SELECT ('{"key":"value"}'::jsonb)['key']; 
┌─────────┐
│ jsonb   │
├─────────┤
│ "value" │
└─────────┘
(1 row)


and this returns 'value' (without the quotes): 
SELECT ('{"key":"value"}'::jsonb)->> 'key'; 
┌──────────┐
│ ?column? │
├──────────┤
│ value    │
└──────────┘
(1 row)

How do I use the subscript syntax and get the result as varchar instead of
JSONB, assuming I know the JSON-field is a String?



-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com

Re: Best Strategy for Large Number of Images

2021-12-20 Thread Andreas Joseph Krogh

On Monday 20 December 2021 at 11:29:57, Estevan Rech <softr...@gmail.com> wrote:

What does this folder structure look like with, say, 10,000 folders? And the
backup of it, how long does it take?

I recommend using SeaweedFS as blob-store, and store metadata (folder, size, 
filenames etc.) in DB. It has excellent HA and backup mechanisms. 



-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com


Re: Detecting repeated phrase in a string

2021-12-09 Thread Andreas Joseph Krogh

On Thursday 9 December 2021 at 15:46:05, Shaozhong SHI <shishaozh...@gmail.com> wrote:


Hi, Peter, 

How to define word boundary as either by using 
^ , space, or $ 

So that the following can be done 

fox fox is a repeat 

foxfox is not a repeat but just one word. 

Do you want a repeated phrase (list of words) or repeated words?
For repeated words (including unicode-chars) you can do:

(\b\p{L}+\b)(?:\s+\1)+

I'm not quite sure how to translate this to PG, but in Java it works.
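
A possible PostgreSQL translation (a sketch; PG's ARE syntax uses \m/\M for word boundaries and [[:alpha:]] in place of \p{L}):

SELECT regexp_matches('fox fox is a repeat', '(\m[[:alpha:]]+\M)(?:\s+\1)+', 'g');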



-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com


Re: Regex for Word space Word space Word ....

2021-11-23 Thread Andreas Joseph Krogh

On Tuesday 23 November 2021 at 12:25:29, Shaozhong SHI <shishaozh...@gmail.com> wrote:

It only matches First Street from 'My First Street'.


I was trying to make it to match words starting capital letter only.


You'll want to include unicode-characters, which the [A-Z] approach doesn't
handle well.

How about:

select regexp_matches('Åge is a Man', E'[[:upper:]]\\w+', 'g');






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: Incremental backup

2021-10-28 Thread Andreas Joseph Krogh

On Thursday 28 October 2021 at 23:09:19, Ron <ronljohnso...@gmail.com> wrote:
[...]
 It's still a bunch of transaction logs, whereas differential and incremental
 backups only backup the changed pages, no matter how many times they've been
 changed.

 That's a serious reduction in disk space, and time to apply them. 


I think everybody agrees that incremental backup per database, and not
cluster-wide, is nice, and it would be nice if PG supported it. But, given the
way PG is architected, having cluster-wide WALs, that's not an easy task to
implement.
Repeating "other databases have it" doesn't change that. 



--
 Andreas Joseph Krogh 

Re: PostgreSQL - Ordering Table based of Foreign Key

2021-10-03 Thread Andreas Joseph Krogh

On Sunday 3 October 2021 at 10:49:49, Thomas Kellerer <sham...@gmx.net> wrote:
FOUTE K. Jaurès wrote on 03.10.2021 at 09:48:
 > I want to order tables based on the foreign key so that I can delete
 > tables one by one without facing "ERROR: update or delete on table
 > "table" violates foreign key constraint. DETAIL: Key is still
 > referenced from table"

 You can create the foreign key constraints with the "ON DELETE CASCADE" 
option.
 Then Postgres will handle dependencies automatically for you.

 But that means that *all* DELETEs will be cascaded.

 Unfortunately, there is no DELETE FROM ... CASCADE option (similar to DROP) 

There is TRUNCATE ... CASCADE 
https://www.postgresql.org/docs/14/sql-truncate.html 



-- 
Andreas Joseph Krogh 

Re: array_cat in PG-14 changed signature breaks my custom aggregate

2021-05-24 Thread Andreas Joseph Krogh

On Monday 24 May 2021 at 12:01:44, David Rowley <dgrowle...@gmail.com> wrote:
[..]
 > Do I have to change the signature of my aggregate to take 
anycompatiblearray as argument?

 Yeah you'll need to do that or write your own transition function that
 takes an anyarray. The docs mention:

 "the sfunc must take N+1 arguments, the first being of type state_data_type"

 array_cat no longer takes anyarray.

regression=# \dfS array_cat
                                    List of functions
   Schema   |   Name    |  Result data type  |          Argument data types           | Type
------------+-----------+--------------------+----------------------------------------+------
 pg_catalog | array_cat | anycompatiblearray | anycompatiblearray, anycompatiblearray | func
(1 row)

 This was changed in [1].

 David

 [1] 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9e38c2bb5093ceb0c04d6315ccd8975bd17add66


Ok, thanks. 
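
For the record, a sketch of the adjusted aggregate (my reading of the advice above, untested):

CREATE AGGREGATE array_aggarray(anycompatiblearray) (
  SFUNC = array_cat,
  STYPE = anycompatiblearray
);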



-- 
Andreas Joseph Krogh 


array_cat in PG-14 changed signature breaks my custom aggregate

2021-05-24 Thread Andreas Joseph Krogh


Hi, I have this, for historical reasons: 

CREATE AGGREGATE array_aggarray(anyarray) (
  SFUNC = array_cat,
  STYPE = anyarray
);



...which now breaks in pg-14b1: 

ERROR: function array_cat(anyarray, anyarray) does not exist 


I see the argument data-types have changed from anyarray to 
anycompatiblearray, but that doesn't really tell me anything. 

Do I have to change the signature of my aggregate to take anycompatiblearray 
as argument? 

-- 
Andreas Joseph Krogh 




RE: How to keep format of views source code as entered?

2021-01-08 Thread Andreas Joseph Krogh

On Friday 8 January 2021 at 09:38:29, Markhof, Ingolf <ingolf.mark...@de.verizon.com> wrote:




Thanks for your comments and thoughts.



I am really surprised that PostgreSQL is unable to keep the source text of a
view. Honestly, to me this looks like an implementation gap. Consider software
development. You are writing code in C++, maybe on a UNIX host. And whenever you
feed your source code into the compiler, it deletes it, keeping the resulting
executable only. And you could not even store your source code on the UNIX
system. Instead, you'd be forced to do so in a separate system, like GitHub.
Stupid, isn't it? Right. There are good reasons to store the source code on
GitHub or alike anyhow. Especially when working on larger projects and when
collaborating with many people. But in the case of a rather small project with a
few people only, this might be overkill.



It shouldn't be rocket science to enable PostgreSQL to store the original
source code as well. It's weird PostgreSQL is not doing it.

It isn't rocket-science, of course, but I'm pretty sure it is implemented like
this on purpose. PG doesn't store the queries you feed it either, nor any other
command. It stores the resulting structure. SQL-scripts containing DDL/DML
should be versioned using an SCM like Git, not rely on the DB to store them.
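
What you can get back is the definition reconstructed from the parse tree (a sketch; the view name is hypothetical):

SELECT pg_get_viewdef('myview'::regclass, true);  -- pretty-printed, but not your original text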




-- 
Andreas Joseph Krogh 


Re: How to specify that a trigger should fire when column is NOT in SET-clause?

2020-12-25 Thread Andreas Joseph Krogh

On Friday 25 December 2020 at 17:48:39, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
On 12/25/20 5:52 AM, Andreas Joseph Krogh wrote:
 > Hi.
 > I need to set a value in a trigger if a column is explicitly NOT
 > specified in UPDATE's SET-clause.
 > Like for example having a "BEFORE UPDATE OF NOT"
 >
 > create TRIGGER my_trigger
 > BEFORE UPDATE OF NOT modified ON my_table FOR EACH ROW WHEN (OLD.val 
<>NEW.val)
 > EXECUTE PROCEDURE do_stuff();
 >
 > I want the trigger to be fired when the column "modified" is NOT
 > specified, is it possible?

 It will always be specified, it may or may not be changed. As example: 

True, but what I'm after is using the value from the "modified" column, if
specified, else using CURRENT_TIMESTAMP.

My use-case is this; I have this table:

create table person (
  id serial primary key,
  username varchar not null unique,
  password varchar not null,
  credentials_last_updated timestamp NOT NULL default CURRENT_TIMESTAMP,
  created timestamp NOT NULL default CURRENT_TIMESTAMP,
  modified timestamp
);

Then this trigger to update "credentials_last_updated" whenever "password" is modified:

create or replace FUNCTION person_password_updated_tf() returns TRIGGER AS $$
BEGIN
  NEW.credentials_last_updated = NEW.modified; -- OR CURRENT_TIMESTAMP if "modified" isn't specified
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

create TRIGGER person_password_updated_t
  BEFORE UPDATE OF password ON person
  FOR EACH ROW WHEN (OLD.password <> NEW.password)
  EXECUTE PROCEDURE person_password_updated_tf();

So, I want to set "credentials_last_updated" to NEW.modified if "modified" is
specified, else to CURRENT_TIMESTAMP.



-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com


How to specify that a trigger should fire when column is NOT in SET-clause?

2020-12-25 Thread Andreas Joseph Krogh

Hi. 

I need to set a value in a trigger if a column is explicitly NOT specified in 
UPDATE's SET-clause. 


Like for example having a "BEFORE UPDATE OF NOT":

create TRIGGER my_trigger
  BEFORE UPDATE OF NOT modified ON my_table
  FOR EACH ROW WHEN (OLD.val <> NEW.val)
  EXECUTE PROCEDURE do_stuff();


I want the trigger to be fired when the column "modified" is NOT specified, is 
it possible? 
Or - is it possible to check for this in the trigger-function? 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com

Re: Hot backup in PostgreSQL

2020-10-22 Thread Andreas Joseph Krogh

On Thursday 22 October 2020 at 10:18:12, hubert depesz lubaczewski <dep...@depesz.com> wrote:
On Thu, Oct 22, 2020 at 09:45:36AM +0200, W.P. wrote:
 > > There are many ways to do it. To be able to suggest proper solution we'd
 > > need to know:
 > > 1. what is the problem with pg_dump?
 > Time (I guess a bit, but copying files could be done using rsync, so much
 > faster).

 Is it *really* too slow for you? Please note that you can easily make it
 much faster by doing -Fd -j $( nproc ). 

I got curious and tried with this DB: 

andreak@[local]:5433 13.0 visena=# select 
pg_size_pretty(pg_database_size(current_database()));
┌────────────────┐
│ pg_size_pretty │
├────────────────┤
│ 47 GB          │
└────────────────┘
(1 row)


nproc=16


Regular pg_dump: 

$ time pg_dump -O -d visena > ~/data/visena/visena.dmp

 real 2m43,904s
 user 0m10,135s
 sys 0m24,260s 


Parallell pg_dump: 

$ time pg_dump -OFd -j $(nproc) -f ~/data/visena/pg_backup -d visena 

 real 3m43,726s
 user 12m36,620s
 sys 0m9,537s


pg_dump with pbzip2 

$ time pg_dump -O -d visena | pbzip2 -c > ~/data/visena/visena.dmp.bz2 

 real 6m58,741s
 user 92m4,833s
 sys 2m18,565s 

Here are the sizes of all: 

7,4G pg_backup (directory with -Fd)
32G visena.dmp 
 5,8G visena.dmp.bz2 

-- 

Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com


RUM and WAL-generation

2020-10-21 Thread Andreas Joseph Krogh

Hi all. 

The RUM-index is very nice, but indexes get very large and produce insane 
amounts of WAL. 
Due to the way PG works (IIUC), updating an int-column in a row produces lots 
of WAL because the whole row is duplicated, and if that row holds RUM-indexed 
columns it gets really bad... 
We have actually run out of WAL-space in production because of this.

I see this TODO-entry in RUM: 


 * Improve GENERIC WAL to support shift (PostgreSQL core changes). 
What is the status on this? 


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com
www.visena.com

Sv: PostgreSQL transaction aborted on SQL error

2020-08-04 Thread Andreas Joseph Krogh

On Tuesday 4 August 2020 at 10:44:36, Urko Lekuona <u...@arima.eu> wrote:
Hello, 

First time writing here, I hope this is the right place to ask this kind of 
question. I've been working with PostgreSQL for a while now but I've just found
out that PostgreSQL marks my transaction for ROLLBACK and even stops the 
execution of the transaction if an error occurs. 

I'm a Java developer and I'm using JDBC to connect to PostgreSQL. I've made a 
gist to showcase this behavior
(https://gist.github.com/UrkoLekuona/4be14e099aeec2e7d8ef616886b66fdf/). If
you run it, you'd see that when the unique key constraint is violated, my 
transaction is stopped, i.e. the SELECT and DROP statements are not executed. 
The thrown exception is org.postgresql.util.PSQLException: ERROR: current 
transaction is aborted, commands ignored until end of transaction block 

I've purposely set AutoCommit to false, because in my real life use case this 
is not an option. The only workaround I've found for this exception is setting 
the connection property autosave to ALWAYS
(https://jdbc.postgresql.org/documentation/head/connect.html).

My question is: is this the correct way of solving this issue? I'd rather if 
there was a PostgreSQL flag to disable this behavior and make it work like 
other RDBMS do, where if a statement failed, the transaction could continue 
without explicitly marking a savepoint and rolling back. 

Thanks in advance for your help, it is appreciated. 

Urko 


The correct approach is to ROLLBACK the transaction in a "catch-block" instead 
of trying to execute further statements. The java.sql.Connection is "invalid" 
after an SQLException and should be rolled back. 
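
At the SQL level this maps to an explicit savepoint, which is what the autosave=ALWAYS property automates (a sketch; the table is hypothetical):

BEGIN;
SAVEPOINT before_insert;
INSERT INTO t VALUES (1);              -- suppose this violates a unique constraint
ROLLBACK TO SAVEPOINT before_insert;   -- the transaction is usable again
SELECT 1;                              -- now accepted
COMMIT;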


--
 Andreas Joseph Krogh 

Re: Postgresql HA Cluster

2020-06-29 Thread Andreas Joseph Krogh

On Monday 29 June 2020 at 09:40:13, Andreas Kretschmer <andr...@a-kretschmer.de> wrote:


On 29.06.20 at 09:33, Laurenz Albe wrote:
> That would not provide a multi-master solution, though. There are some
 > commercial solutions for that, but be warned that it would require 
non-trivial
 > changes to your application.

 not really with BDR3 ;-) 

Well, BDR, last time I checked, still doesn't support exclusion-constraints, 
so it's not a drop-in replacement. 


--
 Andreas Joseph Krogh 


Sv: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked

2020-06-16 Thread Andreas Joseph Krogh

On Tuesday 16 June 2020 at 17:59:37, Jim Hurne <jhu...@us.ibm.com> wrote:
We have a cloud service that uses PostgreSQL to temporarily store binary
 content. We're using PostgreSQL's Large Objects to store the binary
 content. Each large object lives anywhere from a few hundred milliseconds
 to 5-10 minutes, after which it is deleted.
 [...] 

In my experience vacuumlo (https://www.postgresql.org/docs/12/vacuumlo.html)
is needed to remove orphaned large objects before vacuum can remove them from
pg_largeobject.
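
Roughly, vacuumlo unlinks large objects whose OIDs are not referenced from any user column; a sketch of the idea (the referencing table/column are hypothetical):

SELECT lo_unlink(m.oid)
FROM pg_largeobject_metadata m
WHERE m.oid NOT IN (SELECT content_oid FROM my_table);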


--
 Andreas Joseph Krogh 

Re: Oracle vs. PostgreSQL - a comment

2020-06-03 Thread Andreas Joseph Krogh

On Wednesday 3 June 2020 at 20:07:24, Chris Travers <chris.trav...@gmail.com> wrote:


[...] 

Regardless of what Oracle does, I agree this would be a huge step in the right 
direction for pg-DBAs. 
I have absolutely no clue about how much work is required etc., but I think 
it's kind of strange that no companies have invested in making this happen. 

I manage database clusters where the number of databases is a reason not to do 
logical replication based upgrades, where pg_upgrade is far preferred instead. 

If this were to be the case, I would be very concerned that a bunch of things 
would have to change: 
1. Shared catalogs would have txid problems unless you stay with global txids 
and then how do local wal streams work there? 
2. Possibility that streaming replication suddenly has different databases 
lagging by different amounts 
3. Problems with io management on WAL on high throughput systems (I have 
systems where a db cluster generates 10-20TB of WAL per day) 

So I am not at all sure this would be a step in the right direction or worth 
the work. 

I agree these are all technical issues, but nevertheless - "implementation 
details", which DBAs don't care about. What's important from a DBA's 
perspective is not whether WAL is cluster-wide or database-wide, but whether 
it's possible to manage backups/PITR/restores of individual databases in a more 
convenient manner, which other RDBMS-vendors seem to provide. 

I love PG, have been using it professionally since 6.5, and our company 
depends on it, but there are things other RDBMS-vendors do better... 


--
 Andreas Joseph Krogh 

Re: Oracle vs. PostgreSQL - a comment

2020-06-03 Thread Andreas Joseph Krogh

På onsdag 03. juni 2020 kl. 18:50:12, skrev Jeremy Schneider <
schnj...@amazon.com <mailto:schnj...@amazon.com>>: 
> On 6/2/20 1:30 PM, Stephen Frost wrote:
 >> No, nothing does as PG doesn't support it as we have one WAL stream for
 >> the entire cluster.

 On 6/2/20 11:38, Ron wrote:
 > Right. Making WAL files specific to a database should be high on the
 > list of priorities.

 Did Oracle change this? Last time I looked, I don't think Oracle
 supported local redo in their multitenant architecture either. 


Regardless of what Oracle does, I agree this would be a huge step in the right 
direction for pg-DBAs. 
I have absolutely no clue about how much work is required etc., but I think 
it's kind of strange that no companies have invested in making this happen. 


--
 Andreas Joseph Krogh 

Re: Order by lower(column-alias) doesn't work...

2020-05-28 Thread Andreas Joseph Krogh

På torsdag 28. mai 2020 kl. 15:26:42, skrev Tom Lane <t...@sss.pgh.pa.us 
<mailto:t...@sss.pgh.pa.us>>: 
Andreas Joseph Krogh  writes:
 > Is there a way to define "sorting-rules" on custom-types so that I can have
 > ORDER BY <column-alias> and PG will pick my custom ordering?

 You'd have to write your own type, which would be a lotta work :-(.

 A possible partial answer is to define the composite type as

 firstname citext, lastname citext, other-fields-here

 and then the regular composite-type comparison rule would give you
 approximately what you said you wanted ... but only approximately.

 regards, tom lane 


Hm, ok. I think the least intrusive way for me is to craft a sub-select 
producing the flattened varchar-string so I can order by that alias. 


--
 Andreas Joseph Krogh

Re: Order by lower(column-alias) doesn't work...

2020-05-28 Thread Andreas Joseph Krogh

På torsdag 28. mai 2020 kl. 14:50:54, skrev Geoff Winkless <pgsqlad...@geoff.dj 
<mailto:pgsqlad...@geoff.dj>>: 
On Thu, 28 May 2020 at 13:14, Andreas Joseph Krogh  wrote:
 > This works:
 > select p.firstname, p.lastname, p.firstname || p.lastname as fullname from 
onp_crm_person p order by fullname;
 >
 > But this doesn't:
 > select p.firstname, p.lastname, p.firstname || p.lastname as fullname from 
onp_crm_person p order by lower(fullname);
 > ERROR: column "fullname" does not exist
 > LINE 1: ... as fullname from onp_crm_person p order by lower(fullname);

 Wrap the original query in either a CTE or a temporary table.
 eg
 [..] 

Yea, I was hoping to avoid that, as the query is generated and rewriting it is 
a pain... 

Is there a way to define "sorting-rules" on custom-types so that I can have 
ORDER BY <column-alias> and PG will pick my custom ordering? 


--
 Andreas Joseph Krogh

Order by lower(column-alias) doesn't work...

2020-05-28 Thread Andreas Joseph Krogh

Hi. 

This works: 
select p.firstname, p.lastname, p.firstname || p.lastname as fullname from 
onp_crm_person p order by fullname;

But this doesn't: 
select p.firstname, p.lastname, p.firstname || p.lastname as fullname from 
onp_crm_person p order by lower(fullname);
ERROR: column "fullname" does not exist 
 LINE 1: ... as fullname from onp_crm_person p order by lower(fullname);

This is just an example-query; in my real query I have a more complex 
sub-query generating an array of a custom-type which is then referred to by a 
column-alias, and then ORDER BY on a function transforming this array doesn't 
work: 

SELECT ...
ARRAY(WITH RECURSIVE t AS (SELECT
      ...
      ) SELECT ROW(t.entity_id, t.name)::BigIntVarChar FROM t ORDER BY level DESC)
AS folder_parent_array

ORDER BY bigintvarchar_to_text_value_flatten(folder_parent_array) ASC;

ERROR: column "folder_parent_array" does not exist 

What bigintvarchar_to_text_value_flatten() does is to take the "varchar"-part 
out of the BigintVarchar-type and "flatten" the array by that value so that it 
sorts nicely. 

 Any way round this? 


--
 Andreas Joseph Krogh 

Sv: Practical usage of large objects.

2020-05-14 Thread Andreas Joseph Krogh

På onsdag 13. mai 2020 kl. 19:53:38, skrev Dmitry Igrishin <dmit...@gmail.com 
<mailto:dmit...@gmail.com>>: 
Hello all,

 As you know, PostgreSQL has a large objects facility [1]. I'm curious
 are there real systems which are use this feature? I'm asking because
 and I'm in doubt should the Pgfe driver [2] provide the convenient API
 for working with large objects or not.

 Thanks! 

Yea, we use LOs, because bytea over JDBC really doesn't stream (at least 
using the pgjdbc-ng driver). When retrieving bytea using JDBC it returns an 
InputStream, but it's backed by an in-memory byte[]. With LOs and java.sql.Blob 
(which the standard pgjdbc-driver doesn't support, but pgjdbc-ng does) it 
actually uses streams and memory is kept down to a minimum. 


--
 Andreas Joseph Krogh

Sv: Triggers and Full Text Search *

2020-04-21 Thread Andreas Joseph Krogh

På tirsdag 21. april 2020 kl. 19:24:10, skrev Malik Rumi <
malik.a.r...@gmail.com <mailto:malik.a.r...@gmail.com>>: 


[...]

I am not (yet) posting the trigger code because this post is long already, and 
if your answers are 1) yes, 2) no and 3) triggers often work / fail like this, 
then there’s no point and we can wrap this up. But if not, I will happily post 
what I have. Thank you.


This is too much prose for the regular programmer, show us the code, and point 
out what doesn't work for you, then we can help:-) 


--
 Andreas Joseph Krogh

Sv: Replacing Apache Solr with Postgre Full Text Search?

2020-03-26 Thread Andreas Joseph Krogh

På onsdag 25. mars 2020 kl. 13:36:38, skrev J2eeInside J2eeInside <
j2eeins...@gmail.com <mailto:j2eeins...@gmail.com>>: 
Hi all,

 I hope someone can help/suggest:
 I'm currently maintaining a project that uses Apache Solr/Lucene. To be 
honest, I would like to replace Solr with Postgres Full Text Search. However, 
there is a huge amount of documents involved - around 200GB. Wondering, can 
Postgres handle this efficiently?
 Does anyone have specific experience, and what should the infrastructure look 
like?

 P.S. Not to be confused, the Solr works just fine, I just wanted to eliminate 
one component from the whole system (if Full Text Search can replace Solr at 
all) 

I see you've gotten some answers but wanted to chime in... 
We search in ~15 mill. emails and ~10 mill. documents (extracted text from 
Word/PDF etc. using Java-tools), and use PG and FTS (gin, not rum) for the 
exact same reasons as Evergreen (it seems). We have to mix FTS with 
domain-specific logic/filtering and that is based on relational data in the 
database. I don't see how we could have done that using an external 
search-engine. Maybe it's easy, I don't have any experience with it. 
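
To illustrate the kind of mixing I mean, a simplified sketch (hypothetical 
table- and column-names, not our actual schema) - the FTS-condition and the 
relational filters live in the same WHERE-clause, so the planner can combine 
a GIN-index with ordinary btree-indexes: 

SELECT e.id, e.subject
FROM email e
WHERE e.fts_all @@ to_tsquery('simple', 'invoice')   -- full text search
  AND e.owner_id = 42                                -- domain-specific filter
  AND e.folder_id IN (SELECT id FROM folder WHERE project_id = 7)
ORDER BY e.received DESC
LIMIT 50;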


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 


Re: How to transfer databases form one server to other

2020-01-26 Thread Andreas Joseph Krogh


På mandag 27. januar 2020 kl. 03:26:59, skrev Ron <ronljohnso...@gmail.com 
<mailto:ronljohnso...@gmail.com>>: [..]
 I ran uncompressed pg_dump on multiple TB+ sized databases from v8.4 servers 
across the LAN using 9.6 binaries on the remote server. It was quite fast. 
Threading was key. 

According to the manual: https://www.postgresql.org/docs/12/app-pgdump.html 
<https://www.postgresql.org/docs/12/app-pgdump.html> 
the "directory format" is the only format which supports parallel dumps, if 
I'm not reading it wrong. 

How did threading solve "between database" dump/restore for you? Did you dump 
to "directory format" first, then restore? If so, then that requires quite a 
bit of temp-space... 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Sv: Why are clobs always "0"

2019-12-01 Thread Andreas Joseph Krogh

På søndag 01. desember 2019 kl. 18:31:35, skrev Arnie Morein <
arnie.mor...@mac.com <mailto:arnie.mor...@mac.com>>: 

I have tested the most recent driver in three different SQL IDEs, and now with 
an application I'm writing that uses JDBC metadata, the comment on a field 
definition also isn't available as a string value. 

The only thing I ever see regarding data type "text" field values are either a 
0 or a 1; neither of which applies. 

So why is this happening, even from the JDBC metadata results as well? 

Have you tried the NG-driver: https://github.com/impossibl/pgjdbc-ng 
<https://github.com/impossibl/pgjdbc-ng> 
We use it with Blobs/Clobs and it's working well. 

It would help us help you if you mention which IDEs you have tried, and 
provide configuration-parameters, error-messages etc. 


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 


Sv: DELETE Query Hang

2019-11-12 Thread Andreas Joseph Krogh

På tirsdag 12. november 2019 kl. 23:47:18, skrev DrakoRod <
drakofla...@hotmail.com <mailto:drakofla...@hotmail.com>>: 
Hi folks!

 I have a question: in a database there is a table with many files (bytea)
 stored (I support this database, I didn't design it), and we need to delete
 many rows (38000 rows approx), but when I execute the query:

 BEGIN;
 ALTER TABLE my_file_table DISABLE TRIGGER ALL;
 DELETE FROM my_file_table WHERE id_table <> 230;

 This query hang... 50 minutes and the query do not finish.

 Any suggestion? 

Check for locks and blocking statements: 
https://wiki.postgresql.org/wiki/Lock_Monitoring 
<https://wiki.postgresql.org/wiki/Lock_Monitoring> 

You can delete in chunks like this: 

do $_$
declare
    num_rows bigint;
begin
    loop
        delete from YourTable where id in
            (select id from YourTable where id < 500 limit 100);
        get diagnostics num_rows = row_count;
        raise notice 'deleted % rows', num_rows;
        exit when num_rows = 0;
    end loop;
end;
$_$;
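
(Adapt the inner WHERE-clause to your case, e.g. "where id_table <> 230". 
Deleting in small batches keeps each transaction short so locks are held only 
briefly, and note that the space occupied by the bytea-data is only reclaimed 
by (auto)vacuum afterwards.) 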


--
 Andreas Joseph Krogh 


Re: Create a logical and physical replication

2019-11-05 Thread Andreas Joseph Krogh

På tirsdag 05. november 2019 kl. 12:15:20, skrev Deepak Pahuja . <
deepakpah...@hotmail.com <mailto:deepakpah...@hotmail.com>>: 
Yes it is possible. 


No it isn't. I think maybe this will address it for v13: 
https://www.postgresql.org/message-id/flat/20181212204154.nsxf3gzqv3gesl32%40alap3.anarazel.de
 
<https://www.postgresql.org/message-id/flat/20181212204154.nsxf3gzqv3gesl32%40alap3.anarazel.de>


-- 
Andreas Joseph Krogh 


Re: Having more than one constraint trigger on a table

2019-10-24 Thread Andreas Joseph Krogh


På torsdag 24. oktober 2019 kl. 16:59:42, skrev Adrian Klaver <
adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>>: 
On 10/22/19 8:26 AM, Andreas Joseph Krogh wrote:
 > På tirsdag 22. oktober 2019 kl. 17:12:59, skrev Adrian Klaver
 > <adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>>:
 >
 > [snip]
 > No.
 > When I sort the triggers I get:
 >
 > test=# create table trg_str(fld_1 varchar);
 > CREATE TABLE
 > test=# insert into trg_str values ('trigger_1_update_fts'),
 > ('trigger_2'), ('trigger_3'), ('trigger_1_check_nocycle');
 > INSERT 0 4
 > test=# select * from trg_test order by fld_1 ;
 >  id | fld_1
 > ----+-------
 > (0 rows)
 >
 > test=# select * from trg_str order by fld_1 ;
 >          fld_1
 > -------------------------
 > trigger_1_check_nocycle
 > trigger_1_update_fts
 > trigger_2
 > trigger_3
 >
 > Is this how you want them to fire as it does not match what you say
 > above?:
 >
 > (I know they were not /declared/ in that order, but..)
 > Yes, all "trigger_1_*" are the "actual triggers triggering the logic",
 > trigger_2 and trigger_3 are only there as part of the "make
 > constraint-triggers fire only once"-mechanism, in which the function in
 > the first trigger is the function performing the actual logic.
 > So, since I want 2 "logical chunks" to happen I have two
 > "trigger_1"-triggers (there is no established terminology for this
 > AFAIK), each calling a function performing the logic which is to happen
 > only once (per row).
 >
 > "The first "main" trigger-function is update_company_fts_tf() ... The
 > second "main" trigger-function is company_parent_no_cycle()"
 >
 > It might be easier to understand if you sketch out a schematic version of
 > what you are trying to achieve.
 >
 > The point is; I want two functions to be called
 > - update_company_fts_tf()
 > - company_parent_no_cycle()
 > , each only once, as constraint-triggers on the same table. So they are
 > called by the "level 1 triggers" which must fire first.

 To be clear, the order they fire relative to each other is not important? 


Correct, these main functions may fire in any order. 



> Is it clearer now what I'm trying to achieve?

 Sort of, though I am still not entirely sure what the whole process is trying
 to achieve. What the mix of deferred and un-deferred triggers and
 'logical' and housekeeping functions are doing is not clear to me. That
 is why I suggested a schematic representation of the trigger flow would
 be helpful. Leave out the fine details and create a flow chart of what
 you want to happen. 


Normally, CONSTRAINT TRIGGERs will fire once for each UPDATE. That means, if 
you do a CREATE, then 2 UPDATEs, the trigger(s) will fire 3 times for each 
row. I'm trying to make these triggers fire only ONCE per row, and at COMMIT 
(being CONSTRAINT TRIGGERs). 

I'm using the trick mentioned here to achieve this: 

https://stackoverflow.com/questions/8937203/execute-deferred-trigger-only-once-per-row-in-postgresql#answer-8954058
 
<https://stackoverflow.com/questions/8937203/execute-deferred-trigger-only-once-per-row-in-postgresql#answer-8954058>


But I'm trying to have more than one CONSTRAINT TRIGGER on the same table, 
each one doing different things and reacting (triggering) on different 
columns, and I'm wondering if I can "re-use" the "cleanup-triggers" 2 and 3 as 
I mentioned, having trigger_2 fire on the sum of all involved COLUMNS (name, 
duns_number, parent_id)? 
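
For illustration, adding a third "logical trigger" under this scheme would 
look roughly like this (a sketch: do_other_stuff_tf() and the column 
some_column are hypothetical, and trigger_2 is re-created, after dropping the 
old one, with some_column added to its "UPDATE OF"-list): 

CREATE CONSTRAINT TRIGGER trigger_1_someotherstuff
    AFTER INSERT OR UPDATE OF some_column ON company
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated IS NULL)
    EXECUTE PROCEDURE do_other_stuff_tf();

CREATE CONSTRAINT TRIGGER trigger_2
    AFTER INSERT OR UPDATE OF name, duns_number, parent_id, some_column ON company
    -- NOT DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated IS NULL)
    EXECUTE PROCEDURE trigger_function_set_updated();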

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Re: Having more than one constraint trigger on a table

2019-10-22 Thread Andreas Joseph Krogh

På tirsdag 22. oktober 2019 kl. 17:12:59, skrev Adrian Klaver <
adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>>: 
[snip]
 No.
 When I sort the triggers I get:

 test=# create table trg_str(fld_1 varchar);
 CREATE TABLE
 test=# insert into trg_str values ('trigger_1_update_fts'),
 ('trigger_2'), ('trigger_3'), ('trigger_1_check_nocycle');
 INSERT 0 4
 test=# select * from trg_test order by fld_1 ;
  id | fld_1
 ----+-------
 (0 rows)

 test=# select * from trg_str order by fld_1 ;
          fld_1
 -------------------------
 trigger_1_check_nocycle
 trigger_1_update_fts
 trigger_2
 trigger_3

 Is this how you want them to fire as it does not match what you say above?: 

(I know they were not declared in that order, but..) 
Yes, all "trigger_1_*" are the "actuall triggers triggering the logic", 
trigger_2 and trigger_3 are only there as part of the "make constraint-triggers 
fire only once"-mechanism, in which the function in the first trigger is the 
function performing the actual logic. 
So, being I want 2 "logical chunks" to happen I have two "trigger_1"-triggers 
(there is no established terminilogy for this AFAIK), each calling a function 
performing the logick which is to happen only once (per row). 


"The first "main" trigger-function is update_company_fts_tf() ... The
 second "main" trigger-function is company_parent_no_cycle()"

It might be easier to understand if you sketch out a schematic version of
 what you are trying to achieve. 

The point is; I want two functions to be called 

- update_company_fts_tf() 
- company_parent_no_cycle() 

, each only once, as constraint-triggers on the same table. So they are called 
by the "level 1 triggers" which must fire first. 

Is it clearer now what I'm trying to achieve? 


--
 Andreas Joseph Krogh

Having more than one constraint trigger on a table

2019-10-22 Thread Andreas Joseph Krogh

Hi. 

I have the following schema (question at bottom): 
== 
CREATE TABLE company(
    id          SERIAL PRIMARY KEY,
    parent_id   INTEGER REFERENCES company(id) DEFERRABLE INITIALLY DEFERRED,
    name        VARCHAR NOT NULL,
    duns_number VARCHAR,
    fts_all     tsvector,
    t_updated   BOOLEAN);

CREATE OR REPLACE FUNCTION update_company_fts(p_company_id integer) RETURNS VOID AS $$
BEGIN
    UPDATE company comp
    SET fts_all = to_tsvector('simple', comp.name || ' ' || coalesce(comp.duns_number, ''))
    WHERE comp.id = p_company_id;
    raise notice 'Running update of %', p_company_id;
END;
$$ LANGUAGE plpgsql;

-- re-index all:
CREATE OR REPLACE FUNCTION index_company() RETURNS VOID AS $$
DECLARE v_company_id INTEGER;
BEGIN
    FOR v_company_id IN (SELECT id FROM company) LOOP
        perform update_company_fts(v_company_id);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION update_company_fts_tf() RETURNS TRIGGER AS $$
DECLARE v_company_id INTEGER;
BEGIN
    v_company_id := NEW.id;
    perform update_company_fts(v_company_id);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- General cleanup functions for constraint triggers
CREATE OR REPLACE FUNCTION trigger_function_set_updated() RETURNS TRIGGER AS $$
BEGIN
    UPDATE company SET t_updated = TRUE WHERE id = NEW.id;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION trigger_function_clear_updated() RETURNS TRIGGER AS $$
BEGIN
    UPDATE company SET t_updated = NULL WHERE id = NEW.id;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER trigger_1_update_fts
    AFTER INSERT OR UPDATE OF name, duns_number ON company
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated IS NULL)
    EXECUTE PROCEDURE update_company_fts_tf();

CREATE CONSTRAINT TRIGGER trigger_2
    AFTER INSERT OR UPDATE OF name, duns_number, parent_id ON company
    -- NOT DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated IS NULL)
    EXECUTE PROCEDURE trigger_function_set_updated();

CREATE CONSTRAINT TRIGGER trigger_3
    AFTER INSERT OR UPDATE OF t_updated ON company
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated)
    EXECUTE PROCEDURE trigger_function_clear_updated();

CREATE OR REPLACE FUNCTION company_parent_no_cycle() RETURNS TRIGGER AS $$
BEGIN
    IF (WITH RECURSIVE tr (id, parent_id, all_ids, cycle) AS (
            SELECT id, parent_id, ARRAY[id], false
            FROM company tr WHERE id = NEW.id
            UNION ALL
            SELECT t.id, t.parent_id, all_ids || t.id, t.id = ANY (all_ids)
            FROM company t JOIN tr ON t.parent_id = tr.id AND NOT cycle)
        SELECT count(*) FROM tr WHERE cycle = true) > 0
    THEN
        RAISE EXCEPTION 'Cannot have cyclic parent relations for company'
            USING SCHEMA = TG_TABLE_SCHEMA, TABLE = TG_TABLE_NAME,
                  CONSTRAINT = TG_NAME, ERRCODE = '23514' /*check_violation*/,
                  COLUMN = 'parent_id';
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER trigger_1_check_nocycle
    AFTER INSERT OR UPDATE OF parent_id ON company
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated IS NULL)
    EXECUTE PROCEDURE company_parent_no_cycle();
== 

What I'm after is to have 2 "logical constraint-triggers" perform logic only 
once (each) on the "company"-table. 
To make constraint-triggers fire only once (in PostgreSQL) a common method is 
to have a schema with 3 triggers, and a "magic" t_updated column, and they must 
be named so they (the triggers, not the trigger-functions) are fired in lexical 
order (alphabetically). And it's important that the 2nd. trigger (here 
"trigger_2") is NOT deferred. 

In my schema above I have 2 "logical chunks" which each perform some stuff 
and shall only do it once per row at commit-time. 
The first "main" trigger-function is update_company_fts_tf() and it updates a 
column (fts_all) of type tsvector. This is done in a trigger so that it may add 
stuff (customer-number etc.) from other tables as needed (which is not possible 
with PG-12's new STORED-columns). 
The second "main" trigger-function is company_parent_no_cycle() and assures 
there are no parent/child-cycles. 

Question: 
1. I have "re-used" trigger_2 and trigger_3, and trigger_2's "...OR UPDATE 
OF"-list is the sum of all columns updated(used) in the 2 main-triggers, that 
is "name", "duns_number" and parent_id. trigger_3 only checks t_updated. 
Is this correct usage, can I assume this will work correctly? 
2. If I need a 3rd "logical trigger", is it enough to add another trigger 
named accordingly, for instance "trigger_1_someotherstuff", and add it's column 
to the "UPDATE OF"-list of "trigger_2" (it it uses a column not already listed 
there)? 
3. Is there some easier way to do this? 

Is it clear what I'm asking about? :-) 

Thanks. 


--
 Andreas Joseph Krogh 

Sv: Postgres Point in time Recovery (PITR),

2019-10-18 Thread Andreas Joseph Krogh

På fredag 18. oktober 2019 kl. 07:59:21, skrev Daulat Ram <
daulat@exponential.com <mailto:daulat@exponential.com>>: 

Hello All,

Can you please share some ideas and scenarios how we can do the PITR in case 
of disaster.

We use barman (https://www.pgbarman.org/ <https://www.pgbarman.org/index.html>
) for continuous streaming backup and I had to restore from it once, and it 
went like this: 

==8<=== 

$ barman recover --target-time "2018-12-06 12:20:00" --remote-ssh-command "ssh 
andreak@192.168.0.184" db01_11 20181130T190002 
"/home/andreak/barman-restore"
 Processing xlog segments from streaming for db01_11
 00010174002E
 00010174002F
 000101740030
 Starting remote restore for server db01_11 using backup 20181130T190002
 Destination directory: /home/andreak/barman-restore
 Doing PITR. Recovery target time: '2018-12-06 12:20:00+01:00'
 17445, dbname1, /storage/fast_ssd/11/tablespaces/dbname1
 29218, dbname2, /storage/fast_ssd/11/tablespaces/dbname2
 ... 
 29235503, dbnameX, /storage/fast_ssd/11/tablespaces/dbnameX
Copying the base backup.
 Copying required WAL segments.
 Generating recovery.conf
 Identify dangerous settings in destination directory.

 WARNING
 The following configuration files have not been saved during backup, hence 
they have not been restored.
 You need to manually restore them in order to start the recovered PostgreSQL 
instance:

 postgresql.conf
 pg_hba.conf
 pg_ident.conf

 Recovery completed (start time: 2018-12-06 13:14:53.220043, elapsed time: 4 
hours, 52 minutes, 47 seconds)

 Your PostgreSQL server has been successfully prepared for recovery! 
======8<=== 


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 


Sv: drop database

2019-10-17 Thread Andreas Joseph Krogh

På torsdag 17. oktober 2019 kl. 22:44:15, skrev Julie Nishimura <
juliez...@hotmail.com <mailto:juliez...@hotmail.com>>: 
Guys, can anybody reply if they drop any postgresql database larger than 1 tb 
and how long did it take? 

Sorry, I misread your question as 1GB (not TB)... 


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 


Sv: drop database

2019-10-17 Thread Andreas Joseph Krogh

På torsdag 17. oktober 2019 kl. 22:44:15, skrev Julie Nishimura <
juliez...@hotmail.com <mailto:juliez...@hotmail.com>>: 
Guys, can anybody reply if they drop any postgresql database larger than 1 tb 
and how long did it take? 

Thanks 

About 280ms: 

andreak@[local]:5432 12.0 andreak=# select 
pg_size_pretty(pg_database_size('rsm'));
 ┌────────────────┐
 │ pg_size_pretty │
 ├────────────────┤
 │ 26 GB          │
 └────────────────┘
 (1 row)

andreak@[local]:5432 12.0 andreak=# \timing 
 Timing is on.
andreak@[local]:5432 12.0 andreak=# drop DATABASE rsm; 
 DROP DATABASE
 Time: 280,355 ms


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 


Sv: Conflict between autovacuum and backup restoration

2019-10-17 Thread Andreas Joseph Krogh

På torsdag 17. oktober 2019 kl. 15:48:09, skrev Ekaterina Amez <
ekaterina.a...@zunibal.com <mailto:ekaterina.a...@zunibal.com>>: 
Good afternoon,

 We've finally made the migration+upgrade from the old server with v7.14 to a
 new server with v8.4 and, before testing and planning an upgrade to v9.6, I'm
 checking logs to find out if there's any problem with this upgrade.
 We've been fixing things and there only remains one problem in the log that
 I don't understand and am not sure how to fix, or if it has to be fixed.
 Original logs are not in english so I'm posting my translation.

 In the morning we're making a backup of -let's call it- the main
 database. To avoid intensive use of this main database, we have a second
 db that's a copy of the main one, used only to display data to some
 users that don't need it up to date and that shouldn't change anything.
 So at night we're restoring the morning backup from main database into
 this second one. The upgrade that I've mentioned has ended with both,
 main and second databases, being in the same server. The "problem"
 (because I'm not sure if it's really a problem) is that while the backup
 is restoring in the second database, it seems like autovacuum is
 launched and conflicts with this db restore. The log is this:

 [...] Several messages about checkpoints running too frequently [...]
 2019-10-16 23:01:30.904 CEST - [162851] - user@[local]:secondDB - LOG: 
 duration: 7446.139 ms sentence: COPY one_table (some_columns) FROM stdin;
 2019-10-16 23:01:37.457 CEST - [13750] LOG: checkpoints are running too
 frequently ( 9 seconds)
 2019-10-16 23:01:37.457 CEST - [13750] HINT: Consider changing
 «checkpoint_segments» configuration parameter.
 2019-10-16 23:01:58.663 CEST - [162851] - user@[local]:secondDB - LOG: 
 duration: 6492.426 ms sentence: CREATE INDEX another_table_index1 ON
 another_table USING btree (another_field1);
 2019-10-16 23:02:04.042 CEST - [162851] - user@[local]:secondDB - LOG: 
 duration: 5378.886 ms sentence: CREATE INDEX another_table_index2 ON
 another_table USING btree (another_field2);
 2019-10-16 23:02:11.742 CEST - [162851] - user@[local]:secondDB - LOG: 
 duration: 7699.246 ms sentence: CREATE INDEX another_table_index3 ON
 another_table USING btree (another_field3);
 2019-10-16 23:02:12.742 CEST - [162851] - user@[local]:secondDB - LOG: 
 sending cancel signal to blocking autovacuum with PID 162869
 2019-10-16 23:02:12.742 CEST - [162851] - user@[local]:secondDB -
 DETAIL: Process 162851 is waiting for ShareLock on relation
 3880125365 for database 3880125112.
 2019-10-16 23:02:12.743 CEST - [162851] - user@[local]:secondDB -
 SENTENCE: CREATE INDEX another_table_index4 ON another_table USING
 btree (another_field4);
 2019-10-16 23:02:12.743 CEST - [162869] ERROR: cancelling autovacuum task
 2019-10-16 23:02:12.743 CEST - [162869] CONTEXT: automatic analyze of
 «secondDB.public.another_table»
 2019-10-16 23:02:20.899 CEST - [162851] - user@[local]:secondDB - LOG: 
 duration: 9157.371 ms sentence: CREATE INDEX another_table_index4 ON
 another_table USING btree (another_field4);

 After upgrade main db to the new server, I've tuned following parameters
 in postgresql.conf (using pgtune recommendations)

 max_connections = 200
 shared_buffers = 8GB
 work_mem = 2621kB
 maintenance_work_mem = 2GB
 effective_io_concurrency = 2
 wal_buffers = 8MB
 checkpoint_segments = 32
 checkpoint_completion_target = 0.7
 effective_cache_size = 16GB
 log_min_duration_statement = 5000
 log_line_prefix = '%m - [%p] %q- %u@%h:%d - %a '
 standard_conforming_strings = on


 I've been looking for the problem with checkpoints and I've decided to
 let it be, because these messages only appear when we make the second db
 restore. The rest of the log is clean from checkpoint messages.

 But I don't understand why I'm getting those messages about autovacuum
 blocking db restore process. I guess that after one table is created
 with COPY sentence, as many rows have been inserted, autoanalyze process
 runs to gather statistics for the Execution Planner. But why is
 happening this block? Is autoanalyze running before the table gets fully
 loaded? Is this really a problem? If so, how can I handle it? This task
 is running at night, when nobody is using second database.


 Thank you for reading,

 Ekaterina 

It is normal to get these "canceling autovacuum"-messages when restoring a 
database, just ignore them. 
If it bothers you, just turn autovacuum off by setting this in postgresql.conf:

autovacuum = off 

and reload the config (SIGHUP) 
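
If you prefer doing it from SQL, the reload-part can be done with (a minimal 
example; run as a superuser): 

SELECT pg_reload_conf();  -- re-reads postgresql.conf, same as sending SIGHUP
SHOW autovacuum;          -- verify the setting took effect

Remember to turn autovacuum back on again once the nightly restore is done. 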



--
 Andreas Joseph Krogh

Re: Segmentation fault with PG-12

2019-10-12 Thread Andreas Joseph Krogh

På torsdag 10. oktober 2019 kl. 22:21:13, skrev Andres Freund <
and...@anarazel.de <mailto:and...@anarazel.de>>: 
On 2019-10-10 15:32:38 -0400, Tom Lane wrote:
 > Andres Freund  writes:
 > > On 2019-10-09 10:16:37 -0400, Tom Lane wrote:
 > >> Well, it shows that the failure is occurring while trying to evaluate
 > >> a variable in a trigger's WHEN clause during
 > >> "UPDATE origo_email_delivery SET folder_id=$1, created=$2\nWHERE 
entity_id IN ($3)\nRETURNING entity_id"
 > >> And I'd bet that the root cause is something to do with Andres' tuple 
slot
 > >> work. But (at least to my eye) it's not apparent exactly what's wrong.
 >
 > > It looks like this could "just" be another report of #16036, which was
 > > already fixed in:
 > > commit d986d4e87f61c68f52c68ebc274960dc664b7b4e
 > > Author: Andres Freund 
 > > Date: 2019-10-04 11:59:34 -0700
 > > Fix crash caused by EPQ happening with a before update trigger present.
 >
 > Bingo. I can reproduce the crash (using concurrent updates of the same
 > table row, in the schema Andreas sent off-list) on the predecessor of
 > that commit, but on that commit it's fine.

 Cool, and thanks for checking. 


No crashes in production after deploying the fix. 


-- 
Andreas Joseph Krogh 


Re: Segmentation fault with PG-12

2019-10-10 Thread Andreas Joseph Krogh

På torsdag 10. oktober 2019 kl. 21:32:38, skrev Tom Lane <t...@sss.pgh.pa.us 
<mailto:t...@sss.pgh.pa.us>>: 
Andres Freund  writes:
 > On 2019-10-09 10:16:37 -0400, Tom Lane wrote:
 >> Well, it shows that the failure is occurring while trying to evaluate
 >> a variable in a trigger's WHEN clause during
 >> "UPDATE origo_email_delivery SET folder_id=$1, created=$2\nWHERE entity_id 
IN ($3)\nRETURNING entity_id"
 >> And I'd bet that the root cause is something to do with Andres' tuple slot
 >> work. But (at least to my eye) it's not apparent exactly what's wrong.

 > It looks like this could "just" be another report of #16036, which was
 > already fixed in:
 > commit d986d4e87f61c68f52c68ebc274960dc664b7b4e
 > Author: Andres Freund 
 > Date: 2019-10-04 11:59:34 -0700
 > Fix crash caused by EPQ happening with a before update trigger present.

 Bingo. I can reproduce the crash (using concurrent updates of the same
 table row, in the schema Andreas sent off-list) on the predecessor of
 that commit, but on that commit it's fine. 


That's great! 


Andreas, that's a pretty simple patch if you're in a position to
 build from source ...

 regards, tom lane 


Yes, we've built a new .deb-package from 
f224c7c11ea7be2751e3342e11317070ffb5622d in REL_12_STABLE which we'll deploy 
tonight. 
Thanks! 


-- 
Andreas Joseph Krogh 


Re: Segmentation fault with PG-12

2019-10-10 Thread Andreas Joseph Krogh
We had another crash today, and it appears to be the same: 

#0 slot_deform_heap_tuple (natts=26, offp=0x5598eba0b968,
 tuple=<optimized out>, slot=0x5598eba0b920)
 at ./build/../src/backend/executor/execTuples.c:895 

-- 
Andreas Joseph Krogh 

Re: Segmentation fault with PG-12

2019-10-10 Thread Andreas Joseph Krogh
På torsdag 10. oktober 2019 kl. 07:25:26, skrev Andres Freund <
and...@anarazel.de <mailto:and...@anarazel.de>>: 
On 2019-10-09 10:16:37 -0400, Tom Lane wrote:
 > Andreas Joseph Krogh  writes:
 > > Attached is output from "bt full". Is this helpful?
 >
 > Well, it shows that the failure is occurring while trying to evaluate
 > a variable in a trigger's WHEN clause during
 > "UPDATE origo_email_delivery SET folder_id=$1, created=$2\nWHERE entity_id 
IN ($3)\nRETURNING entity_id"
 > And I'd bet that the root cause is something to do with Andres' tuple slot
 > work. But (at least to my eye) it's not apparent exactly what's wrong.

 It looks like this could "just" be another report of #16036, which was
 already fixed in:

 commit d986d4e87f61c68f52c68ebc274960dc664b7b4e
 Author: Andres Freund 
 Date: 2019-10-04 11:59:34 -0700

 Fix crash caused by EPQ happening with a before update trigger present.

(Tom: This mail is only viewable as text/html, so if you're reading the 
text/plain version it will seem "hashed") 

Aha, that would be 60e97d63e5d19098e11fa32431a20eea820e2ae9 in REL_12_STABLE. 
We'll build and run HEAD of REL_12_STABLE, and report back. 

 > This doesn't seem to correlate with your original report, btw,
 > as that claimed the crash was during COMMIT.

 That however, would be confusing, unless there's some deferred trigger
 that causes another update, which then fires a before update trigger
 causing the problem.

 Greetings,

 Andres Freund 

We have a deferred trigger which updates origo_email_delivery: 

CREATE OR REPLACE FUNCTION origo_index_email_props_tf() RETURNS TRIGGER AS $$
DECLARE
    v_prop origo_email_message_property;
BEGIN
    v_prop := NEW;
    UPDATE origo_email_delivery
    SET is_seen      = v_prop.is_seen,
        followup_id  = v_prop.followup_id,
        is_replied   = v_prop.is_replied,
        is_forwarded = v_prop.is_forwarded,
        is_draft     = v_prop.is_draft,
        is_done      = v_prop.is_done,
        is_flagged   = v_prop.is_flagged,
        modseq       = greatest(modseq, v_prop.modseq)
    WHERE message_id = v_prop.message_id AND owner_id = v_prop.owner_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER origo_index_email_props_t
    AFTER INSERT OR UPDATE ON origo_email_message_property
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW
    EXECUTE PROCEDURE origo_index_email_props_tf();

.. and then trigger the following UPDATE-trigger: 

CREATE TRIGGER origo_email_delivery_update_t
    BEFORE UPDATE ON origo_email_delivery
    FOR EACH ROW
    WHEN (OLD.folder_id <> NEW.folder_id OR NEW.is_deleted <> OLD.is_deleted)
    EXECUTE PROCEDURE origo_email_delivery_update_tf();

Maybe that will trigger the bug. 

Thanks. 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Re: Segmentation fault with PG-12

2019-10-09 Thread Andreas Joseph Krogh
På onsdag 09. oktober 2019 kl. 16:16:37, skrev Tom Lane <t...@sss.pgh.pa.us 
<mailto:t...@sss.pgh.pa.us>>: 
Andreas Joseph Krogh  writes:
 > Attached is output from "bt full". Is this helpful?

 Well, it shows that the failure is occurring while trying to evaluate
 a variable in a trigger's WHEN clause during
 "UPDATE origo_email_delivery SET folder_id=$1, created=$2\nWHERE entity_id IN 
($3)\nRETURNING entity_id"
 And I'd bet that the root cause is something to do with Andres' tuple slot
 work. But (at least to my eye) it's not apparent exactly what's wrong.

 Can you show us the table definition and associated trigger definitions
 for origo_email_delivery?

 This doesn't seem to correlate with your original report, btw,
 as that claimed the crash was during COMMIT.

 regards, tom lane 

FWIW: It doesn't always happen when that UPDATE-statement is issued, so it's 
not reproducible. We'll see what the next core-dump gives us. 
Is it OK if I send you the table/trigger-definitions off-list? 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Re: Segmentation fault with PG-12

2019-10-09 Thread Andreas Joseph Krogh
På tirsdag 08. oktober 2019 kl. 17:24:21, skrev Tom Lane <t...@sss.pgh.pa.us 
<mailto:t...@sss.pgh.pa.us>>: 
Andreas Joseph Krogh  writes:
 > Will running a debug-enabled build slow things noticeably down?

 gcc promises that the generated code is the same with or without debug.
 I think clang does too. With other compilers you may pay some penalty.

 > Is there a way
 > to make it dump a stack-trace (or back-trace in C-land?) on sig11?

 You should be able to get a core file from which you can extract a
 stack trace (and other info) after the fact.

 https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

 regards, tom lane 

Attached is output from "bt full". Is this helpful? Anything else I can do to 
help narrow down the problem? Thanks. 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>
#0  slot_deform_heap_tuple (natts=24, offp=0x5598ec873d90, tuple=<optimized 
out>, slot=0x5598ec873d48)
    at ./build/../src/backend/executor/execTuples.c:895
        isnull = 0x5598ec8733e8
        tup = <optimized out>
        bp = <optimized out>
        tupleDesc = <optimized out>
        values = 0x5598ec875de8
        off = <optimized out>
        slow = <optimized out>
        hasnulls = <optimized out>
        attnum = <optimized out>
        tp = <optimized out>
        tupleDesc = <optimized out>
        values = <optimized out>
        isnull = <optimized out>
        tup = <optimized out>
        hasnulls = <optimized out>
        attnum = <optimized out>
        tp = <optimized out>
        off = <optimized out>
        bp = <optimized out>
        slow = <optimized out>
        thisatt = <optimized out>
#1  tts_buffer_heap_getsomeattrs (slot=0x5598ec873d48, natts=24) at 
./build/../src/backend/executor/execTuples.c:676
        bslot = 0x5598ec873d48
#2  0x5598e94534ac in slot_getsomeattrs_int (slot=slot@entry=0x5598ec873d48, 
attnum=24)
    at ./build/../src/backend/executor/execTuples.c:1877
        __errno_location = <optimized out>
#3  0x5598e94443f1 in slot_getsomeattrs (attnum=<optimized out>, 
slot=0x5598ec873d48)
    at ./build/../src/include/executor/tuptable.h:345
No locals.
#4  ExecInterpExpr (state=0x5598ec8776b8, econtext=0x5598ec876ea8, 
isnull=<optimized out>)
    at ./build/../src/backend/executor/execExprInterp.c:441
        op = <optimized out>
        resultslot = 0x0
        innerslot = <optimized out>
        outerslot = <optimized out>
        scanslot = 0x0
dispatch_table = {0x5598e9443608 , 0x5598e94443f8 
, 
  0x5598e94443d0 , 0x5598e94443b0 
, 0x5598e9444380 , 
  0x5598e9444350 , 0x5598e9444328 
, 0x5598e9444318 , 
  0x5598e9444148 , 0x5598e94442e0 
, 0x5598e9444300 , 
  0x5598e94442c8 , 0x5598e9444120 
, 0x5598e94440f0 , 
  0x5598e94442a0 , 0x5598e9444270 
, 0x5598e9444250 , 
  0x5598e9444240 , 0x5598e94441d8 
, 0x5598e94441c0 , 
  0x5598e94441a8 , 0x5598e94435a8 
, 0x5598e94435af , 
  0x5598e9444168 , 0x5598e94435d0 
, 0x5598e94435d7 , 
  0x5598e94440b8 , 0x5598e94440b0 
, 0x5598e9444088 , 
  0x5598e9444080 , 0x5598e9444070 
, 0x5598e9444058 , 
  0x5598e9444028 , 0x5598e9444008 
, 0x5598e9443fe0 , 
  0x5598e9443fd0 , 0x5598e9443fb8 
, 0x5598e9443f60 , 
  0x5598e9443f90 , 0x5598e9443f38 
, 0x5598e9443d98 , 
  0x5598e9443f20 , 0x5598e9443f08 
, 0x5598e9443ef0 , 
  0x5598e9443ec0 , 0x5598e9443e28 
, 0x5598e9443dc0 , 
  0x5598e9443d60 , 0x5598e9443e48 
, 0x5598e9443ce8 , 
  0x5598e9443cd0 , 0x5598e9444598 
, 0x5598e9443cb8 , 
  0x5598e9443ca0 , 0x5598e9443c78 
, 0x5598e9443be8 , 
  0x5598e9443c18 , 0x5598e9443b98 
, 0x5598e9443b80 , 
  0x5598e9443b68 , 0x5598e9443b50 
, 0x5598e9443b38 , 
  0x5598e9443b18 , 0x5598e9443b00 
, 0x5598e9443c00 , 
  0x5598e9443ae8 , 0x5598e9443e90 
, 0x5598e9443aa0 , 
  0x5598e9443690 , 0x5598e9443ad0 
, 0x5598e9443ab8 , 
  0x5598e9443a88 , 0x5598e9443a48 
, 0x5598e9443a70 , 
  0x5598e9443a10 , 0x5598e94439f8 
, 0x5598e94439e0 , 
  0x5598e94439c0 , 0x5598e9443628 
, 0x5598e94438f8 , 
  0x5598e9443980 , 0x5598e94438a0 
, 0x5598e9443940 , 
  0x5598e94437f8 , 0x5598e9443710 
, 0x5598e94436f8 , 
  0x5598e94436e0 , 0x5598e9443608 
}
#5  0x5598e942326b in ExecEvalExprSwitchContext (isNull=0x7ffdf2aae7a7, 
econtext=<optimized out>, state=<optimized out>)
    at ./build/../src/include/executor/executor.h:307
        retDatum = <optimized out>
        oldContext = <optimized out>
        retDatum = <optimized out>
        oldContext = <optimized out>
#6  ExecQual (econtext=<optimized out>, state=<optimized out>) at 
./build/../src/include/executor/executor.h:376
        ret = <optimized out>
        isnull = false
        ret = <optimized out>
        isnull = <optimized out>
#7  TriggerEnabled (estate=estate@entry=0x5598eaeabdd0, 
trigger=trigger@entry=0x5598eaeac828, event=<optimized out>, 
modifiedCols=modifiedCols@entry=0x5598eb7d34b8, oldslot=0x5598ec876b88, 
newslot=0x5598ec873d48, relinfo=<optimized out>, 
relinfo=<optimized out>) at ./build/../src/backend/commands/trigger.c:3516
        predicate = <optimized out>
        econtext = <optimized out>
        oldContext = <optimized out>
        i = <optimized out>
        relinfo = <optimized out>
        relinfo = <optimized out>
        oldslot = 0x5598ec876b88
        modifiedCols = 0x5598eb7d34b8
        estate = 0x5598eaeabdd0
   

Re: Segmentation fault with PG-12

2019-10-08 Thread Andreas Joseph Krogh
På tirsdag 08. oktober 2019 kl. 17:24:21, skrev Tom Lane <t...@sss.pgh.pa.us 
<mailto:t...@sss.pgh.pa.us>>: 
Andreas Joseph Krogh  writes:
 > Will running a debug-enabled build slow things noticeably down?

 gcc promises that the generated code is the same with or without debug.
 I think clang does too. With other compilers you may pay some penalty. 

Nice, I'm using the ubuntu-packages, so I'll go ahead and install 
postgresql-12-dbgsym. 

 > Is there a way
 > to make it dump a stack-trace (or back-trace in C-land?) on sig11?

 You should be able to get a core file from which you can extract a
 stack trace (and other info) after the fact.

 https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

I'll look into that, thanks. 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Segmentation fault with PG-12

2019-10-08 Thread Andreas Joseph Krogh
In our production-environment we get sig11 every now and then after upgrading 
to PG-12: 

2019-10-08 15:45:29.654 CEST [8829-76] LOG: server process (PID 20631) was 
terminated by signal 11: Segmentation fault
 2019-10-08 15:45:29.654 CEST [8829-77] DETAIL: Failed process was running: 
COMMIT
 2019-10-08 15:45:29.654 CEST [8829-78] LOG: terminating any other active 
server processes

Will running a debug-enabled build slow things noticeably down? Is there a way 
to make it dump a stack-trace (or back-trace in C-land?) on sig11? 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 

Logical replication from standby

2019-09-26 Thread Andreas Joseph Krogh
Hi. 

Will the feature described here (Minimal logical decoding on standbys): 
https://www.postgresql.org/message-id/flat/20181212204154.nsxf3gzqv3gesl32%40alap3.anarazel.de
 
<https://www.postgresql.org/message-id/flat/20181212204154.nsxf3gzqv3gesl32%40alap3.anarazel.de>
make it possible to do logical replication from standby like I'm looking for 
in this thread: 
https://www.postgresql.org/message-id/VisenaEmail.15.9f118cec79ac2589.1621cfd8405%40tc7-visena
 
<https://www.postgresql.org/message-id/VisenaEmail.15.9f118cec79ac2589.1621cfd8405%40tc7-visena>
 ? 

Thanks. 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com> 

Re: lc_numeric and negative-prefix

2019-09-25 Thread Andreas Joseph Krogh
På torsdag 26. september 2019 kl. 00:53:28, skrev Tom Lane <t...@sss.pgh.pa.us 
<mailto:t...@sss.pgh.pa.us>>: 
Andreas Joseph Krogh  writes:
 > Can anybody shed some light
 > on when negative-prefix is supposed to be respected by PG's
 > formatting-functions? In lc_numeric='nb_NO.UTF-8' negative-prefix is '−'(8722),
 > not '-'(45), at least in the JAVA-world, but it seems the JDBC-driver must use
 > lc_numeric='C' when parsing server output.

 PG does not consider LC_NUMERIC at all when producing output from
 the standard numeric data types (and we aren't going to start).
 AFAIR the only functions that do pay attention to LC_NUMERIC are
 to_char() and friends.

 regards, tom lane 

Thanks for clarifying. 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

lc_numeric and negative-prefix

2019-09-25 Thread Andreas Joseph Krogh
Hi. 

We're having this thread over at 
https://github.com/impossibl/pgjdbc-ng/issues/420 
<https://github.com/impossibl/pgjdbc-ng/issues/420> 

Can anybody shed some light on when negative-prefix is supposed to be 
respected by PG's formatting-functions? In lc_numeric='nb_NO.UTF-8' 
negative-prefix is '−'(8722), not '-'(45), at least in the JAVA-world, but it 
seems the JDBC-driver must use lc_numeric='C' when parsing server output. 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Re: Installed PostgreSQL-11 in Ubuntu 18.04.02 Server Edition: No existing local cluster is suitable as a default target

2019-09-18 Thread Andreas Joseph Krogh
På onsdag 18. september 2019 kl. 12:25:05, skrev Marco Ippolito <
ippolito.ma...@gmail.com <mailto:ippolito.ma...@gmail.com>>: 
Hi Andreas, if I understand correctly, this is what I've done afterwards: 

postgres@pc:/home/marco$ /usr/lib/postgresql/11/bin/pg_ctl -D 
/usr/local/pgsql/data -l logfile start
 waiting for server to start/bin/sh: 1: cannot create logfile: Permission 
denied
 stopped waiting
 pg_ctl: could not start server
 Examine the log output.
 postgres@pc:/home/marco$ 

What am I doing wrong? 

You don't have permissions to create the logfile (named "logfile" in your 
command) in the CWD (/home/marco). Specify an absolute path to somewhere 
writable for user "postgres". 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Re: Installed PostgreSQL-11 in Ubuntu 18.04.02 Server Edition: No existing local cluster is suitable as a default target

2019-09-18 Thread Andreas Joseph Krogh
På onsdag 18. september 2019 kl. 12:13:24, skrev Marco Ippolito <
ippolito.ma...@gmail.com <mailto:ippolito.ma...@gmail.com>>: 
Thanks Matthias. Followed these steps (indicated here:
https://www.postgresql.org/docs/11/creating-cluster.html 
<https://www.postgresql.org/docs/11/creating-cluster.html> )

 root# chown postgres /usr/local/pgsql
 root# su postgres
 postgres$ initdb -D /usr/local/pgsql/data

 postgres@pc:/home/marco$ /usr/lib/postgresql/11/bin/initdb -D /usr/local/pgsql/data
 The files belonging to this database system will be owned by user "postgres".
 This user must also own the server process.

 The database cluster will be initialized with locales
 COLLATE: en_GB.UTF-8
 CTYPE: en_GB.UTF-8
 MESSAGES: en_GB.UTF-8
 MONETARY: C.UTF-8
 NUMERIC: C.UTF-8
 TIME: C.UTF-8
 The default database encoding has accordingly been set to "UTF8".
 The default text search configuration will be set to "english".

 Data page checksums are disabled.

 creating directory /usr/local/pgsql/data ... ok
 creating subdirectories ... ok
 selecting default max_connections ... 100
 selecting default shared_buffers ... 128MB
 selecting default timezone ... Europe/Rome
 selecting dynamic shared memory implementation ... posix
 creating configuration files ... ok
 running bootstrap script ... ok
 performing post-bootstrap initialization ... ok
 syncing data to disk ... ok

 WARNING: enabling "trust" authentication for local connections
 You can change this by editing pg_hba.conf or using the option -A, or
 --auth-local and --auth-host, the next time you run initdb.

 Success. You can now start the database server using:

 /usr/lib/postgresql/11/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start 

But now permission denied: 

postgres@pc:/home/marco$ sudo 
/usr/lib/postgresql/11/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start
 [sudo] password for postgres:
 postgres is not in the sudoers file. This incident will be reported.
 postgres@pc:/home/marco$ /usr/lib/postgresql/11/bin/pg_ctl -D 
/usr/local/pgsql/data -l logfile start
 waiting for server to start/bin/sh: 1: cannot create logfile: Permission 
denied
 stopped waiting
 pg_ctl: could not start server
 Examine the log output.
 postgres@pc:/home/marco$ 

Start pg_ctl as the postgres user, no need for sudo. 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Re: PostgreSQL License

2019-09-17 Thread Andreas Joseph Krogh
På tirsdag 17. september 2019 kl. 22:55:02, skrev Peter J. Holzer <
hjp-pg...@hjp.at <mailto:hjp-pg...@hjp.at>>: 
On 2019-09-17 14:56:30 +0300, Ashkar Dev wrote:
 > but it is saying (without fee)
 > if I create a database with it to work with Web Application if want to sell 
it
 > so the buyer must have the PostgreSQL installed in his device to work 
offline
 > right?
 > "Permission to use, copy, modify, and distribute this software and its
 > documentation for any purpose, without fee, and without a written agreement 
is
 > hereby granted, provided that the above copyright notice and this paragraph 
and
 > the following two paragraphs appear in all copies."

 This means that you don't have to pay a fee or sign a written agreement
 to use, copy, modify, and distribute this software and its documentation
 for any purpose. It doesn't say that you can't charge a fee for
 distributing (although why anybody would pay you for something they can
 download themselves for free I don't know).

 hp 

A rule of thumb is: you can do anything you want with it (the PG software 
incl. its source), except claim you wrote it, as long as you preserve the 
original license-file(s). 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Re: pg full text search very slow for Chinese characters

2019-09-10 Thread Andreas Joseph Krogh
På tirsdag 10. september 2019 kl. 18:21:45, skrev Tom Lane <t...@sss.pgh.pa.us 
<mailto:t...@sss.pgh.pa.us>>: 
Jimmy Huang  writes:
 > I tried pg_trgm and my own customized token parser 
https://github.com/huangjimmy/pg_cjk_parser

 pg_trgm is going to be fairly useless for indexing text that's mostly
 multibyte characters, since its unit of indexable data is just 3 bytes
 (not characters). I don't know of any comparable issue in the core
 tsvector logic, though. The numbers you're quoting do sound quite awful,
 but I share Cory's suspicion that it's something about your setup rather
 than an inherent Postgres issue.

 regards, tom lane 

We experienced quite awful performance when we hosted the DB on virtual 
servers (~5 years ago) and it turned out we hit the write-cache limit (then 
8GB), which resulted in ~1MB/s IO throughput. Running iozone might help trace 
down IO-problems. 

--
 Andreas Joseph Krogh

RE: Primary Key Update issue ?

2019-09-06 Thread Andreas Joseph Krogh
På fredag 06. september 2019 kl. 11:25:36, skrev Patrick FICHE <
patrick.fi...@aqsacom.com <mailto:patrick.fi...@aqsacom.com>>: 
Hi Andreas,



Thanks a lot for your answer, which solves this case.

I was still a bit surprised, as this is linked to transaction management while 
I have here a single statement, until I saw the Compatibility Remark in the 
documentation: "Also, PostgreSQL checks non-deferrable uniqueness constraints 
immediately, not at end of statement as the standard would suggest." 

FWIW - PostgreSQL behaves like Oracle in this regard. 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Sv: Primary Key Update issue ?

2019-09-06 Thread Andreas Joseph Krogh
På fredag 06. september 2019 kl. 11:06:04, skrev Patrick FICHE <
patrick.fi...@aqsacom.com <mailto:patrick.fi...@aqsacom.com>>: 
Hello,



While doing some testing on a Postgresql database, I encountered a strange 
behavior which is very simple to reproduce.

I just wanted to know if this is expected behavior or if it should be 
considered as an issue.



The scenario to reproduce it is the following.



CREATE TABLE Test ( pKey integer, Val integer, CONSTRAINT PK_Test PRIMARY Key( 
pKey ) );



INSERT INTO Test( pKey, Val ) VALUES ( 1, 1 );

INSERT INTO Test( pKey, Val ) VALUES ( 2, 2 );



UPDATE Test SET pKey = pKey + 1;



Here is the error that I get.



SQL Error [23505]: ERROR: duplicate key value violates unique constraint 
"pk_test"

 Detail: Key (pkey)=(2) already exists.



I was expecting pKey to be incremented for each row, which would still respect 
the unique constraint….



I’m currently using PostgreSQL 11.5 but have the same problem on PostgreSQL 
10.3 server.



Best Regards, 
It works if you add "DEFERRABLE INITIALLY DEFERRED" to the PK: 

CREATE TABLE Test ( pKey integer, Val integer, CONSTRAINT PK_Test PRIMARY 
Key( pKey ) DEFERRABLE INITIALLY DEFERRED );

andreak@[local]:5432 11.5 test=# CREATE TABLE Test ( pKey integer, Val 
integer, CONSTRAINT PK_Test PRIMARY Key( pKey ) DEFERRABLE INITIALLY DEFERRED );
 CREATE TABLE
andreak@[local]:5432 11.5 test=# INSERT INTO Test( pKey, Val ) VALUES ( 1, 1 );
 INSERT 0 1
andreak@[local]:5432 11.5 test=# INSERT INTO Test( pKey, Val ) VALUES ( 2, 2 );
 INSERT 0 1
andreak@[local]:5432 11.5 test=# UPDATE Test SET pKey = pKey + 1; 
 UPDATE 2
-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Re: Uninstall postgres

2019-09-05 Thread Andreas Joseph Krogh
På torsdag 05. september 2019 kl. 11:01:25, skrev Sonam Sharma <
sonams1...@gmail.com <mailto:sonams1...@gmail.com>>: 
It's saying: gmake *** No rule to make Target uninstall. 

On Thu, Sep 5, 2019, 2:27 PM Andreas Joseph Krogh <andr...@visena.com 
<mailto:andr...@visena.com>> wrote: 
På torsdag 05. september 2019 kl. 10:53:01, skrev Sonam Sharma 
<sonams1...@gmail.com <mailto:sonams1...@gmail.com>>: 
I have installed postgres with the source code option using configure --prefix 
and then make install. Can someone please help in uninstalling this. How to 
uninstall the postgres now. The installation was done as postgres user. 

make uninstall 

Strange – works for me: 

[andreak@spaceballs-one] ~/dev/postgresql (REL_12_STABLE) 
 $ make uninstall
 make -C doc uninstall
 make[1]: Entering directory '/home/andreak/dev/postgresql/doc'
 make -C src uninstall
 make[2]: Entering directory '/home/andreak/dev/postgresql/doc/src'
 make -C sgml uninstall
 ... ... 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Sv: Uninstall postgres

2019-09-05 Thread Andreas Joseph Krogh
På torsdag 05. september 2019 kl. 10:53:01, skrev Sonam Sharma <
sonams1...@gmail.com <mailto:sonams1...@gmail.com>>: 
I have installed postgres with the source code option using configure --prefix 
and then make install. Can someone please help in uninstalling this. How to 
uninstall the postgres now. The installation was done as postgres user. 

make uninstall 

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com <mailto:andr...@visena.com> 
www.visena.com <https://www.visena.com> 
 <https://www.visena.com>

Sv: Two Postgres master process are showing - one is on and off

2019-08-20 Thread Andreas Joseph Krogh
På tirsdag 20. august 2019 kl. 22:32:25, skrev chiru r <chir...@gmail.com 
<mailto:chir...@gmail.com>>: 
Hi All,



I have observed one of our PostgreSQL DB instances showing two postgres 
processes on the Linux server, as highlighted. The second postgres process is 
on and off.

We did not find any references in logs.

[...]

postgres 33438 1 0 12:41 ? 00:00:03 /u01/postgres/9.5/bin/postgres -D 
/u02/pgdata01/9.5/data



[...]



postgres 110181 33438 0 15:30 ? 00:00:00 /u01/postgres/9.5/bin/postgres -D 
/u02/pgdata01/9.5/data
Strange, the second is a child of the first... 

--
 Andreas Joseph Krogh

Sv: RE: Transactions

2019-04-09 Thread Andreas Joseph Krogh
På tirsdag 09. april 2019 kl. 11:56:28, skrev Karl Martin Skoldebrand <
ks0c77...@techmahindra.com <mailto:ks0c77...@techmahindra.com>>: 
How much impact on performance and disk space would this or 

set log_min_duration_statement=0

have?



I have no idea as to how common this is, or when it happens, so it would need 
to run until this reported again (or some reasonable time if it doesn’t happen).
Well, the answer here is of course "it depends"... If you have lots of activity the logs will fill up quite quickly, but you can easily test this in production and just turn off logging again afterwards (set log_min_duration_statement back to -1, or log_statement back to 'none') and reload the settings (no need to restart). You can also log only modifications by setting log_statement = 'mod'. Also watch out for triggers modifying stuff.
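
A minimal sketch of that cycle (assuming superuser access; ALTER SYSTEM writes to postgresql.auto.conf, editing postgresql.conf by hand works equally well):

ALTER SYSTEM SET log_statement = 'mod';  -- log data-modifying statements (plus DDL)
SELECT pg_reload_conf();                 -- apply without restarting

-- ...and to turn it off again:
ALTER SYSTEM RESET log_statement;
SELECT pg_reload_conf();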
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>

Sv: Transactions

2019-04-09 Thread Andreas Joseph Krogh
På tirsdag 09. april 2019 kl. 11:26:29, skrev Karl Martin Skoldebrand <
ks0c77...@techmahindra.com <mailto:ks0c77...@techmahindra.com>>: 
Hi,



Is there a way to track “transactions” by default (i.e. without anyone having 
set up anything specific). The problem I am facing is that users are claiming 
that settings are disappearing with them doing anything to affect them. It 
would be good to be able to see what postgresql thinks is going on.

* Subscriber adds Severity/BU/Service by ticking the corresponding box in subscriber configuration in WEBAPP. This works for some time.

* Subscriber stops receiving selected [tickets].

* Upon checking settings the selected Severity/BU/Service has been unselected.
Not "without anyone having set up anything specific", but you can change the 
setting in postgresql.conf to: log_statement = 'all' and reload the settings. 
You can now see all SQL executed in the log and can debug what's going on. -- 
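
If statement logging turns out too coarse, a trigger-based audit table is another way to see who changed the settings and when. A hypothetical sketch (the subscriber_settings table and its columns are made up for illustration):

CREATE TABLE settings_audit (
    changed_at timestamptz NOT NULL DEFAULT now(),
    changed_by text        NOT NULL DEFAULT current_user,
    operation  text        NOT NULL,
    old_row    jsonb,
    new_row    jsonb
);

CREATE OR REPLACE FUNCTION audit_settings() RETURNS trigger AS $$
BEGIN
    -- OLD/NEW are only assigned for the matching operations, so branch on TG_OP
    IF TG_OP = 'INSERT' THEN
        INSERT INTO settings_audit(operation, new_row) VALUES (TG_OP, to_jsonb(NEW));
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO settings_audit(operation, old_row, new_row) VALUES (TG_OP, to_jsonb(OLD), to_jsonb(NEW));
    ELSE
        INSERT INTO settings_audit(operation, old_row) VALUES (TG_OP, to_jsonb(OLD));
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER settings_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON subscriber_settings
    FOR EACH ROW EXECUTE PROCEDURE audit_settings();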
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>

Sv: SQL queries not matching on certain fields

2019-04-03 Thread Andreas Joseph Krogh
På onsdag 03. april 2019 kl. 15:06:03, skrev Felix Ableitner <m...@nutomic.com <mailto:m...@nutomic.com>>:
Hello,

I'm having a very strange problem with the Postgres database for my website. 
Some SQL queries are not matching on certain fields. I am running these 
commands via the psql command.

Here is a query that works as expected:
 # SELECT id, "preferredUsername" FROM actor WHERE "preferredUsername"='emma'; 
id | preferredUsername ---+--- 48952 | emma 58672 | emma (2 
rows) 
The following query should work as well, because the username exists. But in 
fact, it consistently returns nothing:
 # SELECT id, "preferredUsername" FROM actor WHERE 
"preferredUsername"='mailab'; id | preferredUsername +--- 
(0 rows) 
There are some workarounds which fix the WHERE statement, all of the following 
work as expected:
SELECT id, "preferredUsername" FROM actor WHERE trim("preferredUsername")=
'mailab'; SELECT id, "preferredUsername" FROM actor WHERE "preferredUsername" 
ILIKE'mailab'; SELECT id, "preferredUsername" FROM actor WHERE md5(
"preferredUsername")=md5('mailab'); 


Now you might think that there is something wrong with the encoding, or that the field contains some extra whitespace. But SHOW SERVER_ENCODING and SHOW CLIENT_ENCODING both show UTF8. Length and md5 sum are also exactly as expected. And I checked the individual bytes with get_byte(); all of them are in the range 97-122.

About the setup, this is Postgres 10 running in Docker, on an Ubuntu VPS (see 
below for all versions etc). I had this problem before on the same setup, so I 
did an export to text file with pg_dump, and imported into a completely new 
database with psql. That fixed the problem for a few days, but it came back 
soon after.

The problem only seems to affect one or two specific columns, and only a few 
specific rows in those columns. Most other rows work normally. Affected columns 
also randomly start working again after a few days, and other columns get 
affected. I haven't noticed any kind of pattern.

You can find the table definition here: https://gitlab.com/snippets/1840320 
<https://gitlab.com/snippets/1840320>

Version info:

Postgres Docker Image: postgres:10.7-alpine
 Docker version: 18.09.2
 OS: Ubuntu 18.04.2

Please tell me if you have any idea how to fix or debug this. I already asked 
multiple people, and no one has a clue what is going on.

Best,
 Felix Ableitner
Does disabling index-scan make a difference? SET enable_indexscan TO off;

How about dumping the relevant data and reloading it into another similar (but smaller) table, can you reproduce it then?
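
A minimal sketch for testing the corrupt-index hypothesis in a psql session (the enable_* settings only affect the current session, so this is safe to try):

SET enable_indexscan TO off;
SET enable_bitmapscan TO off;  -- bitmap scans also read the index
SELECT id, "preferredUsername" FROM actor WHERE "preferredUsername" = 'mailab';
-- If the row shows up now, rebuilding the table's indexes is the usual fix:
REINDEX TABLE actor;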
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>

Sv: Re: Geographical multi-master replication

2019-01-25 Thread Andreas Joseph Krogh
På fredag 25. januar 2019 kl. 06:45:43, skrev Andreas Kretschmer <
andr...@a-kretschmer.de <mailto:andr...@a-kretschmer.de>>: 

 Am 25.01.19 um 06:10 schrieb Jeremy Finzel:
 >
 >     The problem is that the version for BDR 1.0.7, which has an
 >     implementation for postgres 9.4, will be on end of live at the end
 >     of this year. Unfortunately the paid solution is out of our
 >     budget, so we currently have two options: find an alternative or
 >     remove the multi-region implementation. We are currently looking
 >     for alternatives.
 >
 >
 > You are missing all of the alternatives here.  Why don't you consider
 > upgrading from postgres 9.4 and with it to a supported version of
 > BDR?  There is nothing better you can do to keep your infrastructure
 > up to date, performant, secure, and actually meet your multi-master
 > needs than to upgrade to a newer version of postgres which does have
 > BDR support.
 >
 > Even "stock" postgres 9.4 is set for end of life soon. Upgrade!

 ACK!

 Sure, you have to pay for a support contract, and this isn't for free,
 but you will get a first-class support for BDR. If you really needs a
 worldwide distributed multi-master solution you should be able to buy that.


 Regards, Andreas

To my surprise I'm unable to find downloadable BDR3. I thought it was an open-source extension to vanilla-pg-11, isn't that the case anymore?

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>

Sv: lost "left join"

2019-01-16 Thread Andreas Joseph Krogh
På onsdag 16. januar 2019 kl. 11:54:21, skrev Олег Самойлов <spl...@ya.ru <mailto:spl...@ya.ru>>:
Hi, all.

 I got some mystic behaviour of PostgreSQL, perhaps this is a bug.
 
[snip]

 But things begin be strange if I add validation by time.

 => explain select * from node as parent left join link on 
parent.node_id=link.parent left join node as child on link.child=child.node_id 
where parent.node_id=1 and current_date <@ parent.valid and current_date <@ 
link.valid and current_date <@ child.valid;
                                         QUERY PLAN
 
--
  Nested Loop  (cost=4.50..32.35 rows=1 width=112)
    ->  Nested Loop  (cost=4.35..21.88 rows=1 width=76)
          ->  Index Scan using node_pkey on node parent  (cost=0.15..8.18 
rows=1 width=36)
                Index Cond: (node_id = 1)
                Filter: (CURRENT_DATE <@ valid)
          ->  Bitmap Heap Scan on link  (cost=4.20..13.70 rows=1 width=40)
                Recheck Cond: (parent = 1)
                Filter: (CURRENT_DATE <@ valid)
                ->  Bitmap Index Scan on link_pkey  (cost=0.00..4.20 rows=6 
width=0)
                      Index Cond: (parent = 1)
    ->  Index Scan using node_pkey on node child  (cost=0.15..8.18 rows=1 
width=36)
          Index Cond: (node_id = link.child)
          Filter: (CURRENT_DATE <@ valid)
 (13 rows)

 «Left Join»’s are lost. And in the result too:

 => select * from node as parent left join link on parent.node_id=link.parent 
left join node as child on link.child=child.node_id where parent.node_id=1 and 
current_date <@ parent.valid and current_date <@ link.valid and current_date <@ 
child.valid;
 node_id | valid | parent | child | valid | node_id | valid
---------+-------+--------+-------+-------+---------+-------
(0 rows)
 
The moment you involve columns of the "left joined" relations this way in the WHERE-clause, it effectively becomes an inner join.
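
A sketch of the usual fix: keep the validity tests for the nullable side in the ON clauses, so NULL-extended rows survive, and leave only the conditions on parent in WHERE:

select *
from node as parent
left join link
       on parent.node_id = link.parent
      and current_date <@ link.valid
left join node as child
       on link.child = child.node_id
      and current_date <@ child.valid
where parent.node_id = 1
  and current_date <@ parent.valid;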
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


 


Sv: Using psql variables in DO-blocks

2019-01-15 Thread Andreas Joseph Krogh
På tirsdag 15. januar 2019 kl. 16:51:09, skrev Andreas Joseph Krogh <
andr...@visena.com <mailto:andr...@visena.com>>:
Hi all.
 
I'm trying to use a psql variable in a DO-block, but it fails:
 
[snip] 
 
Seems I was a bit lazy, here's what works:
 
\set resource_group 'Ressurser'
\set quoted_resource_group '\'' :resource_group '\''
set myvars.quoted_resource_group to :quoted_resource_group;

DO $$
begin
    if not exists(SELECT * FROM onp_group WHERE groupname = current_setting('myvars.quoted_resource_group')) then
        raise notice 'Group % not found, creating it.', current_setting('myvars.quoted_resource_group');
    end if;
end;
$$;
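
An equivalent sketch using psql's :'var' quoted interpolation together with set_config(), which avoids building the quotes by hand:

\set resource_group 'Ressurser'
SELECT set_config('myvars.resource_group', :'resource_group', false);

DO $$
begin
    if not exists(SELECT 1 FROM onp_group WHERE groupname = current_setting('myvars.resource_group')) then
        raise notice 'Group % not found, creating it.', current_setting('myvars.resource_group');
    end if;
end;
$$;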
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
 <https://www.visena.com>


 

