Hi Shaheed,
As pointed out above by Adrian Klaver, I suspect that you made multiple attempts,
which caused the "Database Already Exists" error. (There must be data in the tables,
which the next attempt is trying to write again.) I can't think of any
scenario where restoration succeeds on one environment and
On Sat, Jun 22, 2024 at 7:28 PM Martin Goodson
wrote:
> Hello.
>
> Recently our security team have wanted to apply password complexity
> checks akin to Oracle's profile mechanism to PostgreSQL, checking that a
> password hasn't been used in x months
There would have to be a pg_catalog table
Martin Goodson writes:
> Recently our security team have wanted to apply password complexity
> checks akin to Oracle's profile mechanism to PostgreSQL, checking that a
> password hasn't been used in x months etc, has minimum length, x special
> characters and x numeric characters, mixed case
Hello.
Recently our security team have wanted to apply password complexity
checks akin to Oracle's profile mechanism to PostgreSQL, checking that a
password hasn't been used in x months etc, has minimum length, x special
characters and x numeric characters, mixed case etc.
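For reference, PostgreSQL ships a contrib module, passwordcheck, that hooks role password changes and rejects weak passwords; it can be extended (e.g. against cracklib), though out of the box it does not track password history or expiry the way Oracle profiles do. A minimal sketch of enabling it:

```
# postgresql.conf -- load the contrib module at server start
# (requires a restart; the module then vets CREATE/ALTER ROLE passwords)
shared_preload_libraries = 'passwordcheck'
```

Password expiry itself can be approximated with `ALTER ROLE some_user VALID UNTIL '2024-12-31'`, but "not reused in x months" has no built-in equivalent.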
As far as I'm
On 6/22/24 10:01, Shaheed Haque wrote:
Hi,
I am using Postgres 14 on AWS RDS and am seeing the output of pg_dump be
restored as expected by pg_restore on some database instances, and fail
with reports of duplicate keys on other database instances:
* My deployments are always a pair, one
On Sat, Jun 22, 2024 at 1:02 PM Shaheed Haque
wrote:
> Hi,
>
> I am using Postgres 14 on AWS RDS and am seeing the output of pg_dump be
> restored as expected by pg_restore on some database instances, and fail
> with reports of duplicate keys on other database instances:
>
>- My deployments
On 6/22/24 13:13, Shenavai, Manuel wrote:
Thanks for the suggestion. This is what I found:
- pg_locks shows only one entry for my DB (I filtered by db oid). The entry is related
to the relation "pg_locks" (AccessShareLock).
Which would be the SELECT you did on pg_locks.
- pg_stat_activity
Thanks for the suggestion. This is what I found:
- pg_locks shows only one entry for my DB (I filtered by db oid). The entry is
related to the relation "pg_locks" (AccessShareLock).
- pg_stat_activity shows ~30 connections (since the DB is in use, this is
expected)
Is there anything specific
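A sketch of the kind of query that correlates the two views (run in the database of interest; filtering out our own backend avoids the self-inflicted AccessShareLock on pg_locks noted above):

```sql
-- Show every lock held in the current database together with the
-- session that holds it.
SELECT l.locktype, l.relation::regclass AS relation, l.mode, l.granted,
       a.pid, a.state, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.database = (SELECT oid FROM pg_database
                    WHERE datname = current_database())
  AND l.pid <> pg_backend_pid();
```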
Shaheed Haque writes:
>- The one difference I can think of between deployment pairs which work
>ok and those which fail is that the logic VM (i.e. where the psql client
>script runs) uses a standard AWS Ubuntu image in the OK case, versus a
>custom AWS image for the
Hi,
I am using Postgres 14 on AWS RDS and am seeing the output of pg_dump be
restored as expected by pg_restore on some database instances, and fail
with reports of duplicate keys on other database instances:
- My deployments are always a pair, one "logic VM" for Django etc and
one "RDS
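If the duplicate-key errors come from restoring into a database that already contains data, one common workaround is to have the restore drop and recreate objects, or to restore into a freshly created database. A sketch (the database name and dump file are placeholders):

```shell
# Restore a custom-format dump into a possibly non-empty database.
# --clean --if-exists drops existing objects before recreating them,
# which avoids duplicate-key errors when the target is not empty.
pg_restore --clean --if-exists \
  --no-owner --dbname=mydb mydump.custom
```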
The current forms of “AI” have no concept of state or long-term memory. On each invocation of the AI you have to tell it: “This is a Postgres database. This is my database schema. These are the indexes I have.” After providing that information, the “AI” “might” generate a valid query for your particular
## Andreas Joseph Krogh (andr...@visena.com):
> Hi, are there any plans for using some kind of AI for query-planning?
Actually, we do have our GEQO - Genetic Query Optimization - already
in the planner: https://www.postgresql.org/docs/current/geqo.html
As per the common taxonomies, genetic
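For anyone curious, GEQO is controlled by a handful of planner settings; a sketch of the relevant knobs (the values shown in comments are the stock defaults):

```sql
-- GEQO kicks in only for queries joining at least geqo_threshold relations.
SHOW geqo;            -- on by default
SHOW geqo_threshold;  -- 12 by default
-- It can be tuned or disabled per session:
SET geqo_effort = 5;  -- range 1..10; trades planning time for plan quality
```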
On Sat, Jun 22, 2024, 5:20 PM Andreas Joseph Krogh
wrote:
> Hi, are there any plans for using some kind of AI for query-planning?
>
> Can someone with more knowledge about this than I have please explain why
> it might, or not, be a good idea, and what the challenges are?
>
On 6/22/24 04:50, Andreas Joseph Krogh wrote:
Hi, are there any plans for using some kind of AI for query-planning?
Can someone with more knowledge about this than I have please explain
why it might, or not, be a good idea, and what the challenges are?
1) Requires a large amount of resources.
Thank you very much for help and pointers to useful information.
Just want to make clear (sorry, I am slow on the uptake): should I first REINDEX and
then ALTER DATABASE xxx REFRESH COLLATION VERSION, or first ALTER and then
REINDEX, or does the order of these actions matter at all?
Thank you,
Dmitry
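For what it's worth, the documented sequence is to rebuild the affected objects first and only then record the new collation version: REFRESH COLLATION VERSION merely updates the stored version string and silences the warning, so running it before the REINDEX would hide the mismatch without fixing the indexes. A sketch (the database name is a placeholder):

```sql
-- 1. Rebuild everything that depends on the collation
--    (run while connected to the database in question).
REINDEX DATABASE mydb;
-- 2. Then record the new collation version so the warning stops.
ALTER DATABASE mydb REFRESH COLLATION VERSION;
```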
Hi, are there any plans for using some kind of AI for query-planning?
Can someone with more knowledge about this than I have please explain why it
might, or not, be a good idea, and what the challenges are?
Thanks.
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56
Hi Tom, thanks for the response!
So the same user is able to connect using a non-replication connection
with the same mTLS certificate and pg_ident.conf map. So it seems like the
cert & map are working for this user.
hostssl all pgrepmgr_nonprod 100.0.0.0/8 cert map=pgrepmgr_nonprod_map
This
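One thing worth double-checking: in pg_hba.conf the database keyword `all` does not match physical replication connections; those need an explicit `replication` entry. A sketch of the extra line and a matching pg_ident.conf map (the names follow the ones above and are assumptions about your setup):

```
# pg_hba.conf -- replication connections need their own line;
# the database keyword "all" does not cover them.
hostssl replication pgrepmgr_nonprod 100.0.0.0/8 cert map=pgrepmgr_nonprod_map

# pg_ident.conf -- map the certificate CN to the database role
# MAPNAME              SYSTEM-USERNAME     PG-USERNAME
pgrepmgr_nonprod_map   pgrepmgr_nonprod    pgrepmgr_nonprod
```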