Re: Query on Postgres SQL transaction

2024-03-15 Thread Greg Sabino Mullane
That's a very vague question, but you can trace exactly what is happening
by issuing

SET log_statement = 'all';

Ideally set it at the session level from your application, but you can also set
it at the database and user level. If all else fails, set it globally (i.e. in
postgresql.conf). Turn it off again as soon as possible, as it will make your
logs extremely verbose, but it will let you track exactly what your application
is doing.
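
For reference, the database- and user-level variants Greg mentions could look like this (the database and role names here are placeholders):

```sql
-- Session level, issued by the application:
SET log_statement = 'all';

-- Database level (hypothetical database name):
ALTER DATABASE mydb SET log_statement = 'all';

-- User level (hypothetical role name):
ALTER ROLE appuser SET log_statement = 'all';

-- Turn it back off again when done:
ALTER DATABASE mydb RESET log_statement;
ALTER ROLE appuser RESET log_statement;
```

Note that ALTER DATABASE / ALTER ROLE settings take effect for new sessions, not for sessions already connected.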

Cheers,
Greg


Re: PostGres ODBC too slow

2024-03-15 Thread vedant patel
Thanks Adrian,


I have been through the articles, but they didn't help. Many of them mention
this type of issue but not a solution.

My main concern is: if the Python library is able to perform the operations
quickly, why is ODBC so much slower? And I have to use ODBC, because my
application is written in PowerBuilder.

Thanks,

On Fri, 15 Mar 2024, 11:09 pm Adrian Klaver wrote:

>
> On 3/15/24 3:40 AM, vedant patel wrote:
>
> Hello There,
>
> Recently, we've encountered some performance issues with the ODBC driver
> provided by Postgres. Upon investigation, we noticed that switching from
> the UNICODE to ANSI driver resulted in performance improvements for some
> queries, albeit at the expense of others.
>
> To delve deeper into this matter, I conducted tests using a Python script
> with the psycopg2 library, and the results were significantly better.
> However, to address this issue comprehensively, I've explored alternative
> ODBC drivers available in the market. While some minor improvements were
> observed in a few queries with a different driver, the overall performance
> remains a concern.
>
> Given your extensive experience in this area, I would greatly appreciate
> your insights and recommendations on which ODBC driver would be most
> suitable for our use case. Any advice or suggestions you could offer would
> be immensely helpful in resolving this performance issue.
>
>
> This will probably get a better answer quicker over at the ODBC list:
>
>
> https://www.postgresql.org/list/pgsql-odbc/
>
>
>
> Let me know in case of any questions or concerns.
>
> Thanks,
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: PostGres ODBC too slow

2024-03-15 Thread Adrian Klaver


On 3/15/24 10:58 AM, vedant patel wrote:

Thanks Adrian,


I have been through the articles, but they didn't help. Many of them 
mention this type of issue but not a solution.



That is why you should ask over on the ODBC list; that is where the 
folks who develop and use the driver hang out.


When you do you will need to provide more information, at a minimum:

1) Postgres version.

2) ODBC version.

3) Example code.

4) The benchmark results.



My main concern is: if the Python library is able to perform the 
operations quickly, why is ODBC so much slower? And I have to use ODBC, 
because my application is written in PowerBuilder.


Thanks,

On Fri, 15 Mar 2024, 11:09 pm Adrian Klaver wrote:



On 3/15/24 3:40 AM, vedant patel wrote:

Hello There,

Recently, we've encountered some performance issues with the ODBC
driver provided by Postgres. Upon investigation, we noticed that
switching from the UNICODE to ANSI driver resulted in performance
improvements for some queries, albeit at the expense of others.

To delve deeper into this matter, I conducted tests using a
Python script with the psycopg2 library, and the results were
significantly better. However, to address this issue
comprehensively, I've explored alternative ODBC drivers available
in the market. While some minor improvements were observed in a
few queries with a different driver, the overall performance
remains a concern.

Given your extensive experience in this area, I would greatly
appreciate your insights and recommendations on which ODBC driver
would be most suitable for our use case. Any advice or
suggestions you could offer would be immensely helpful in
resolving this performance issue.



This will probably get a better answer quicker over at the ODBC list:


https://www.postgresql.org/list/pgsql-odbc/




Let me know in case of any questions or concerns.

Thanks,


-- 
Adrian Klaver

adrian.kla...@aklaver.com


--
Adrian Klaver
adrian.kla...@aklaver.com


Re: Query on Postgres SQL transaction

2024-03-15 Thread Adrian Klaver


On 3/14/24 11:04 PM, Bandi, Venkataramana - Dell Team wrote:


Hi Team,

We are using JPA entities to persist records into a PostgreSQL DB. It is 
working for all of the nodes, but one node's data is not persisting, and 
no DB-related errors/exceptions are raised. We just want to trace this 
scenario at the transaction level to see whether the transaction is 
committing or not.




"... its working for all the nodes but one of the node data is not 
persisting ..." contradict.


Generally, when a record does not throw an error but is not persisted, it 
means the transaction was not committed and the changes did not survive 
the session they were done in. Without the actual code it is hard to say 
anything more.
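
One way to spot sessions that have opened a transaction but never committed it is to watch pg_stat_activity (a sketch; it assumes you can query the server directly):

```sql
-- Long-lived "idle in transaction" sessions often indicate a missing commit.
SELECT pid, usename, state, xact_start, query
FROM pg_stat_activity
WHERE state LIKE 'idle in transaction%'
ORDER BY xact_start;
```

Combined with SET log_statement = 'all' you can then see exactly which statements the suspect session issued.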


Could you please let us know how we can trace this scenario, and how 
transaction logging can be enabled in PostgreSQL?


Regards,

Venkat


Internal Use - Confidential


--
Adrian Klaver
adrian.kla...@aklaver.com


Re: PostGres ODBC too slow

2024-03-15 Thread Adrian Klaver


On 3/15/24 3:40 AM, vedant patel wrote:

Hello There,

Recently, we've encountered some performance issues with the ODBC 
driver provided by Postgres. Upon investigation, we noticed that 
switching from the UNICODE to ANSI driver resulted in performance 
improvements for some queries, albeit at the expense of others.


To delve deeper into this matter, I conducted tests using a Python 
script with the psycopg2 library, and the results were significantly 
better. However, to address this issue comprehensively, I've explored 
alternative ODBC drivers available in the market. While some minor 
improvements were observed in a few queries with a different driver, 
the overall performance remains a concern.


Given your extensive experience in this area, I would greatly 
appreciate your insights and recommendations on which ODBC driver 
would be most suitable for our use case. Any advice or suggestions you 
could offer would be immensely helpful in resolving this performance 
issue.



This will probably get a better answer quicker over at the ODBC list:


https://www.postgresql.org/list/pgsql-odbc/




Let me know in case of any questions or concerns.

Thanks,


--
Adrian Klaver
adrian.kla...@aklaver.com


PostGres ODBC too slow

2024-03-15 Thread vedant patel
Hello There,

Recently, we've encountered some performance issues with the ODBC driver
provided by Postgres. Upon investigation, we noticed that switching from
the UNICODE to ANSI driver resulted in performance improvements for some
queries, albeit at the expense of others.

To delve deeper into this matter, I conducted tests using a Python script
with the psycopg2 library, and the results were significantly better.
However, to address this issue comprehensively, I've explored alternative
ODBC drivers available in the market. While some minor improvements were
observed in a few queries with a different driver, the overall performance
remains a concern.

Given your extensive experience in this area, I would greatly appreciate
your insights and recommendations on which ODBC driver would be most
suitable for our use case. Any advice or suggestions you could offer would
be immensely helpful in resolving this performance issue.

Let me know in case of any questions or concerns.

Thanks,


Query on Postgres SQL transaction

2024-03-15 Thread Bandi, Venkataramana - Dell Team
Hi Team,

We are using JPA entities to persist records into a PostgreSQL DB. It is 
working for all of the nodes, but one node's data is not persisting, and no 
DB-related errors/exceptions are raised. We just want to trace this scenario at 
the transaction level to see whether the transaction is committing or not.

Could you please let us know how we can trace this scenario, and let us know 
how transaction logging can be enabled in PostgreSQL?


Regards,
Venkat





Re: select results on pg_class incomplete

2024-03-15 Thread Adrian Klaver

On 3/15/24 03:42, Thiemo Kellner wrote:
You solve a problem that no one has. Data belonging together may still 
be divided into schemas in a database. Thus, the metadata is also 
reported and archived individually per database.


I am not sure we are talking about the same problem, but I would be 
surprised to be the only one to have experienced filling disks. Maybe I 
am just old enough already that disk space has become so cheap that the 
problem no longer exists.


With respect to metadata and databases: The point is not that I cannot 
see the tables in another schema (I believe; I did not check yet), but in 
other databases. While this actually does not matter much, I still hold 
it true that a disk getting filled up does not care in which database or 
schema an explosively growing table resides. So, if I have a disk getting 
filled up, I would like to easily get information on the problematic 
structures in one go. With PostgreSQL this does not seem to be possible 
out of the box. I can now query each database separately, or I can 
create auxiliary structures like dblink and views to accommodate a 
"single" query solution. My two cents.


That is backwards: schemas are namespaces within a database, and you can 
see their contents from the local (database) system catalogs.





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: select results on pg_class incomplete

2024-03-15 Thread Thiemo Kellner
https://wiki.postgresql.org/wiki/Monitoring 


Thanks for the URL. I am not too keen to re-invent the wheel, although 
doing so would teach me about PostgreSQL.





Re: select results on pg_class incomplete

2024-03-15 Thread Thiemo Kellner




You could also create a PostgreSQL foreign server for each of the other 
databases, which would let you issue a query to UNION together the results of a 
query on all of the catalogs.  This would require creating a foreign table for 
pg_class in the other databases.


Thanks. So many possibilities. I am more familiar with dblink than with 
foreign tables. I will read up on it.





Re: select results on pg_class incomplete

2024-03-15 Thread Greg Sabino Mullane
On Fri, Mar 15, 2024 at 6:43 AM Thiemo Kellner 
wrote:

> I am not sure we are talking about the same problem, but I would be
> surprised to be the only one to have experienced filling disks.

...

> So, if I have a disk getting filled up, I would like to easily get
> information on the problematic
> structures in one go.


This is a common problem, and one that has been solved before. There are
many monitoring solutions out there that can help you with this. For an
incomplete list, see:

https://wiki.postgresql.org/wiki/Monitoring

If you want to roll your own, the other thread is full of good information
on that.

Cheers,
Greg


Re: operator is only a shell - Error

2024-03-15 Thread Greg Sabino Mullane
On Fri, Mar 15, 2024 at 6:26 AM Rajesh S  wrote:

> I wanted to implement a new "=" (equal) operator with LEFTARG as numeric
> and RIGHTARG as varchar.  But after creating the function and operator,
> psql shows the error "operator is only a shell: character varying =
> numeric
>
Your operator has numeric on the left and varchar on the right, but your
query has numeric on the RIGHT. You probably want to create a matching
operator to cover both cases.
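
A sketch of what that matching operator could look like, mirroring the function from the original post (untested; the function name here is my own invention):

```sql
-- Mirror image of num_eq_varchar: varchar on the left, numeric on the right.
CREATE OR REPLACE FUNCTION public.varchar_eq_num(varchar, numeric)
    RETURNS boolean
AS 'select CAST($1 AS numeric) = $2;'
LANGUAGE SQL IMMUTABLE;

CREATE OPERATOR public.= (
    FUNCTION = varchar_eq_num,
    LEFTARG = varchar,
    RIGHTARG = numeric,
    COMMUTATOR = =,
    NEGATOR = <>,
    RESTRICT = eqsel,
    JOIN = eqjoinsel
);
```

Declaring COMMUTATOR on the original definition creates a shell entry for varchar = numeric; creating the operator above fills in that shell, which is what the "operator is only a shell" message complains about.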

Cheers,
Greg


Re: select results on pg_class incomplete

2024-03-15 Thread Christophe Pettus



> On Mar 15, 2024, at 03:30, Thiemo Kellner  wrote:
> Thanks for the ideas. As I would want to keep it in the database, dblink 
> would be the way to go. Maybe I will create a procedure that creates a view 
> in the monitor schema accessing the respective databases, with union all to 
> concatenate the data.

You could also create a PostgreSQL foreign server for each of the other 
databases, which would let you issue a query to UNION together the results of a 
query on all of the catalogs.  This would require creating a foreign table for 
pg_class in the other databases.
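
A minimal sketch of that approach with postgres_fdw (the server, database, and table names here are placeholders):

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Hypothetical foreign server pointing at another database in the same cluster.
CREATE SERVER otherdb_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (dbname 'otherdb');
CREATE USER MAPPING FOR CURRENT_USER SERVER otherdb_srv;

-- A foreign table over the other database's pg_class (a subset of columns).
CREATE FOREIGN TABLE otherdb_pg_class (relname name, relkind "char")
    SERVER otherdb_srv
    OPTIONS (schema_name 'pg_catalog', table_name 'pg_class');

-- UNION the local and remote catalogs together.
SELECT 'local' AS db, relname FROM pg_class
UNION ALL
SELECT 'otherdb', relname FROM otherdb_pg_class;
```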



Re: select results on pg_class incomplete

2024-03-15 Thread Thiemo Kellner

You solve a problem that no one has. Data belonging together may still be 
divided into schemas in a database. Thus, the metadata is also reported and 
archived individually per database.


I am not sure we are talking about the same problem, but I would be 
surprised to be the only one to have experienced filling disks. Maybe I 
am just old enough already that disk space has become so cheap that the 
problem no longer exists.


With respect to metadata and databases: The point is not that I cannot 
see the tables in another schema (I believe; I did not check yet), but in 
other databases. While this actually does not matter much, I still hold 
it true that a disk getting filled up does not care in which database or 
schema an explosively growing table resides. So, if I have a disk getting 
filled up, I would like to easily get information on the problematic 
structures in one go. With PostgreSQL this does not seem to be possible 
out of the box. I can now query each database separately, or I can 
create auxiliary structures like dblink and views to accommodate a 
"single" query solution. My two cents.





Re: select results on pg_class incomplete

2024-03-15 Thread Thiemo Kellner




Am 14.03.2024 um 21:03 schrieb David Rowley:

Yeah, maybe dblink and a LATERAL join might be an easy way. Something like:

create extension dblink;
select d.datname, c.relname
from pg_database d,
     lateral (select *
              from dblink('dbname=' || d.datname,
                          $$select relname from pg_class
                            where relname = 'pg_class';$$)
                   as t(relname text)) c(relname)
where d.datallowconn;

  datname   | relname
------------+----------
 postgres   | pg_class
 template1  | pg_class
 regression | pg_class
(3 rows)


Thanks for the ideas. As I would want to keep it in the database, dblink 
would be the way to go. Maybe I will create a procedure that creates a 
view in the monitor schema accessing the respective databases, with union 
all to concatenate the data.





operator is only a shell - Error

2024-03-15 Thread Rajesh S

Hi,

I wanted to implement a new "=" (equal) operator with LEFTARG as numeric 
and RIGHTARG as varchar.  But after creating the function and operator, 
psql shows the error "operator is only a shell: character varying = 
numeric" at "LINE 1: ...lect LIEN_AC_NO from deposit_lien where 
deposit_no='00021140...". I'm sharing the function and operator scripts 
for your perusal. Please advise how to proceed.


CREATE OR REPLACE FUNCTION public.num_eq_varchar(
    numeric,
    varchar)
    RETURNS boolean
AS 'select $1::NUMERIC=CAST($2 AS numeric);'
LANGUAGE SQL IMMUTABLE;

-- Operator: =;

-- DROP OPERATOR IF EXISTS public.= (numeric , varchar);

CREATE OPERATOR public.= (
    FUNCTION = num_eq_varchar,
    LEFTARG = numeric,
    RIGHTARG = varchar,
    COMMUTATOR = =,
    NEGATOR = <>,
    RESTRICT = eqsel,
    JOIN = eqjoinsel,
    HASHES, MERGES
);

CREATE OR REPLACE FUNCTION public.num_ne_varchar(
    numeric,
    varchar)
    RETURNS boolean
    LANGUAGE SQL IMMUTABLE
AS $BODY$
    select $1<>$2::numeric;
$BODY$;

-- Operator: <>;

-- DROP OPERATOR IF EXISTS public.<> (numeric , varchar);

CREATE OPERATOR public.<> (
    FUNCTION = num_ne_varchar,
    LEFTARG = numeric,
    RIGHTARG = varchar,
    COMMUTATOR = <>,
    NEGATOR = =,
    RESTRICT = neqsel,
    JOIN = neqjoinsel
);


Thanks,

Rajesh S

On 05-07-2022 13:52, Rajesh S wrote:


Hi,

We are migrating our database from Oracle to PostgreSQL.  In Oracle we 
used the syntax "SELECT ('1999-12-30'::DATE) - 
('1999-12-11'::DATE)" to get the difference between two dates as an 
integer output (ex: 19).  But in Postgres the same query returns the 
result as "19 days".  Because of this we get errors while assigning this 
query output to a numeric variable, saying "ERROR: invalid input syntax 
for type numeric: "1825 days"" and "ERROR: operator does not exist: 
interval + integer".  To avoid changing the application code in many 
places to extract the number of days alone, we tried the operator 
overloading concept as below.


CREATE OR REPLACE FUNCTION public.dt_minus_dt(
    dt1 timestamp without time zone,
    dt2 timestamp without time zone)
    RETURNS integer
    LANGUAGE 'edbspl'
    COST 100
    VOLATILE SECURITY DEFINER PARALLEL UNSAFE
AS $BODY$
    days INTEGER;
BEGIN
    SELECT DATE_PART('day', dt1::timestamp - dt2::timestamp)::integer 
INTO days;

    RETURN days;
END
$BODY$;

CREATE OPERATOR public.- (
    FUNCTION = public.dt_minus_dt,
    LEFTARG = timestamp without time zone,
    RIGHTARG = timestamp without time zone
);

When we execute "SELECT ('1999-12-30'::DATE) - ('1999-12-11'::DATE)", 
we still get "19 days" as the result and not "19" as we expect.  
The same function works as expected for the operators + and ===.


CREATE OPERATOR public.+ (
    FUNCTION = public.dt_minus_dt,
    LEFTARG = timestamp without time zone,
    RIGHTARG = timestamp without time zone
);

SELECT ('1999-12-30'::DATE) + ('1999-12-11'::DATE)

CREATE OPERATOR public.=== (
    FUNCTION = public.dt_minus_dt,
    LEFTARG = timestamp without time zone,
    RIGHTARG = timestamp without time zone
);

SELECT ('1999-12-30'::DATE) === ('1999-12-11'::DATE)


I really appreciate anyone's help in resolving this case. Thanks in 
advance.



Rajesh S
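
For what it's worth, in stock PostgreSQL the result type depends on the operand types: date - date already yields an integer, while timestamp - timestamp yields an interval, which would explain the "19 days" if the dates are being treated as timestamps. The custom public.- is likely never chosen because pg_catalog, which already defines timestamp - timestamp, is searched first; + and === have no built-in counterpart for these types, so the custom operators win there. A sketch (behavior under EDB's Oracle-compatible mode may differ):

```sql
-- date - date returns an integer number of days:
SELECT '1999-12-30'::date - '1999-12-11'::date;        -- 19

-- timestamp - timestamp returns an interval, hence "19 days":
SELECT '1999-12-30'::timestamp - '1999-12-11'::timestamp;

-- One way to get an integer without a custom operator:
SELECT EXTRACT(day FROM '1999-12-30'::timestamp
                        - '1999-12-11'::timestamp)::integer;
```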



Re: Seeing high query planning time on Azure Postgres Single Server version 11.

2024-03-15 Thread Laurenz Albe
On Fri, 2024-03-15 at 00:31 +0530, hassan rafi wrote:
> We have migrated to postgres version 16.1, but still due to very high update
> activity on our DB, we are seeing elevated response times, though now the
> planning time is less.
> 
> catalog-v2=> explain (analyze, verbose, settings, buffers) SELECT 
> products_inventory_delta.upc FROM products_inventory_delta WHERE 
> products_inventory_delta.modality = 'pickup' AND 
> products_inventory_delta.store_id = '70600372' ORDER BY upc DESC LIMIT 51 
> OFFSET 3;
>                                   QUERY PLAN
> ------------------------------------------------------------------------------
>  Limit  (cost=1450.68..1450.73 rows=1 width=14) (actual 
> time=5049.115..5049.116 rows=0 loops=1)
>    Output: upc
>    Buffers: shared hit=33359 read=6590 dirtied=9379
>    ->  Index Only Scan Backward using products_inventory_delta_pkey on 
> public.products_inventory_delta  (cost=0.57..1450.68 rows=28606 width=14) 
> (actual time=1.056..5047.472 rows=28299 loops=1)
>          Output: upc
>          Index Cond: ((products_inventory_delta.store_id = '70600372'::text) 
> AND (products_inventory_delta.modality = 'pickup'::modality))
>          Heap Fetches: 16840
>          Buffers: shared hit=33359 read=6590 dirtied=9379
>  Settings: effective_cache_size = '192GB', jit = 'off', random_page_cost = 
> '2', work_mem = '2097151kB'
>  Query Identifier: 220372279818787780
>  Planning Time: 0.062 ms
>  Execution Time: 5049.131 ms

Your problem is probably the "Heap Fetches: 16840".

If you VACUUM the table, the performance should improve.

The best solution is to make sure that autovacuum processes that table more 
often:

  ALTER TABLE public.products_inventory_delta
      SET (autovacuum_vacuum_scale_factor = 0.01);
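
As an immediate one-off fix, before the autovacuum setting takes effect, a manual vacuum (a sketch) would clear the existing heap fetches:

```sql
VACUUM (ANALYZE) public.products_inventory_delta;
```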

Yours,
Laurenz Albe