Weird seqscan node plan
Hi all! I'm confused by a planner choice: it picks a seqscan rather than an index scan even though the index scan is much cheaper.

1st plan: https://explain.depesz.com/s/Cti#l8
2nd plan (with "set enable_seqscan = off"): https://explain.depesz.com/s/qn0I#l8

Look at row 8: the nested loop feeds its second node with a seqscan (1st plan) even though an index is available (2nd plan), and the index scan is much cheaper (0.430 versus 257.760). What am I missing? And thanks for any reply!
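A hypothetical reproduction of how the two plans were compared (stand-in tables and query; the real plans are at the depesz links above):

    explain (analyze, buffers)
    select * from orders o join customers c on c.id = o.customer_id;  -- 1st plan

    set enable_seqscan = off;  -- strongly discourages seqscans for this session

    explain (analyze, buffers)
    select * from orders o join customers c on c.id = o.customer_id;  -- 2nd plan

    reset enable_seqscan;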
Re: I think that my data is saved correctly, but when printing again, other data appears
Good afternoon,

I spent some time away from my thesis work because of mental fatigue; I had to step back for a while. A few days ago I came back and tried to implement the memory context change, but although I read the explanations and source code you pointed me to, and others I found, I still don't quite understand what I should do.

I was told to save the CurrentMemoryContext before starting SPI, that is, before calling SPI_connect(). Is that correct? And then I should use MemoryContextStrdup, which, as you told me, creates a copy of a string in a specific memory context. Where in the source code should I call MemoryContextStrdup — after SPI_connect(), or where? As I understand it, I would use MemoryContextStrdup to copy my data variable, but into which context should I create the copy? Do I need to create a new context with AllocSetContextCreate, and if so, where in the source code should I create it — after SPI_connect(), or where?

I also read the source code of https://github.com/cohenjo/pg_idx_advisor, but I don't see MemoryContextStrdup used there. Sorry for asking about the same thing again, but the explanations I was pointed to (mainly about memory contexts in PostgreSQL) contained no source-code examples, and in the PostgreSQL sources I only found the definition of MemoryContextStrdup, not examples of its use. It may be very easy and clear for you, but for me it really is not, and I searched a lot and found nothing on the Internet. I would greatly appreciate some help, please.

Regards,
Yessica Brinkmann

On Mon, Oct 28, 2019 at 12:39, Yessica Brinkmann (<yessica.brinkm...@gmail.com>) wrote:
> Thank you very much for the answer.
> Best regards,
> Yessica Brinkmann
>
> On Mon, Oct 28, 2019 at 8:03, Joe Conway () wrote:
>> On 10/27/19 8:01 PM, Yessica Brinkmann wrote:
>> > Thank you so much for the answers.
>> > By telling me this: "MemoryContextStrdup enables you to create a copy of
>> > a string in an explicitly specified memory context." I better understood
>> > the function of MemoryContextStrdup.
>> > And thank you very much to Mr. Jony Cohen for giving me the reference of
>> > his work. I really was already researching his work, and I included it
>> > in the state of the art of my thesis, but I didn't look at the source
>> > code. I will be looking at the source code for help, especially, in this
>> > case, at how the context is saved for its subsequent restoration.
>>
>> For a better understanding of how Postgres manages memory, you might want
>> to also read this:
>>
>> https://github.com/postgres/postgres/blob/master/src/backend/utils/mmgr/README
>>
>> and possibly browse through this:
>>
>> https://github.com/postgres/postgres/blob/master/src/backend/utils/mmgr/mcxt.c
>>
>> HTH,
>>
>> Joe
>> --
>> Crunchy Data - http://crunchydata.com
>> PostgreSQL Support for Secure Enterprises
>> Consulting, Training, & Open Source Development
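Putting the advice in this thread together: no AllocSetContextCreate() is usually needed — saving CurrentMemoryContext before SPI_connect() and copying into it is enough. A minimal hypothetical sketch (the function and variable names are illustrative, not taken from pg_idx_advisor):

    #include "postgres.h"
    #include "executor/spi.h"

    /*
     * Remember the caller's context *before* SPI_connect(), because
     * SPI_connect() switches CurrentMemoryContext to an SPI-private
     * context that SPI_finish() destroys.
     */
    static char *
    fetch_first_value(const char *sql)
    {
        MemoryContext oldcontext = CurrentMemoryContext; /* save before SPI_connect() */
        char         *result = NULL;

        SPI_connect();

        if (SPI_execute(sql, true, 1) == SPI_OK_SELECT && SPI_processed > 0)
        {
            char *value = SPI_getvalue(SPI_tuptable->vals[0],
                                       SPI_tuptable->tupdesc, 1);

            /*
             * Copy the string into the saved context.  A plain pstrdup()
             * here would allocate in SPI memory, which disappears at
             * SPI_finish().
             */
            if (value != NULL)
                result = MemoryContextStrdup(oldcontext, value);
        }

        SPI_finish();    /* frees everything palloc'd inside the SPI context */

        return result;   /* still valid: it lives in oldcontext */
    }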
[11.6] WAL recovery ordering with restore_command - timelines
Hi,

PostgreSQL 11.6, CentOS 7.6, pgBackRest 2.19.

Case: the master is on timeline 15 while the standby, which should be synchronized with the master as a hot standby, is on timeline 13. WALs are to be obtained from the archive.

recovery.conf:
> primary_conninfo = 'user=usename host=10.10.10.3 port=5432 sslmode=prefer application_name=pg2'
> recovery_target_timeline = 'latest'
> restore_command = '/usr/bin/pgbackrest --stanza=platform archive-get %f "%p" --log-level-console info'
> standby_mode = 'on'

Standby recovery proceeds like this:
1. try to restore 000F0A2700E5
2. unable to find 000F0A2700E5 in the archive
3. try to restore 000E0A2700E5
4. found 000E0A2700E5 in the archive

-- and then it tries the next WAL segment for BOTH timelines:
1. try to restore 000F0A2700E6
2. unable to find 000F0A2700E6 in the archive
3. try to restore 000E0A2700E6
4. found 000E0A2700E6 in the archive

Why does Postgres not restore WALs in timeline order — first 13, then 14, then 15, up to each timeline switch position? The WALs it tries to restore for the latest timeline deliberately do not exist yet. This leads to terrible recovery performance because of the long "unable to find" lookups.
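To see how expensive each miss actually is, the archive lookup can be timed by hand with the same options as the restore_command above (the segment name and target path here are made up):

    time /usr/bin/pgbackrest --stanza=platform archive-get \
        0000000F0000000000000001 /tmp/wal_probe --log-level-console info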
Call a stored procedure with INOUT parameters in C#
I tried with INOUT parameters but it did not work correctly. Here is my procedure:

CREATE OR REPLACE PROCEDURE public.testing (INOUT x int)
LANGUAGE 'plpgsql'
AS $$
BEGIN
    x := x * 3;
END;
$$;

my C# code:

public void myMethod2()
{
    CRUD.con.Open();
    int a = 5;
    using (var cmd = new NpgsqlCommand("call public.testing();", CRUD.con))
    {
        var objParam = new NpgsqlParameter("x", NpgsqlDbType.Integer)
        {
            Direction = ParameterDirection.Output
        };
        objParam.Value = 5;
        cmd.Parameters.Add(objParam);
        cmd.ExecuteNonQuery();
        Console.WriteLine(objParam.Value);
    }
    CRUD.con.Dispose();
}
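For comparison, a hypothetical corrected version (assuming Npgsql 4 or later, with `using NpgsqlTypes;` and `using System.Data;` in scope). Two things change: the CALL text must actually pass the parameter, and an INOUT argument needs ParameterDirection.InputOutput so the initial value is sent; PostgreSQL returns the INOUT value as a one-row result set, which Npgsql maps back onto the parameter:

public void myMethod2()
{
    CRUD.con.Open();
    using (var cmd = new NpgsqlCommand("CALL public.testing(@x)", CRUD.con))
    {
        var objParam = new NpgsqlParameter("x", NpgsqlDbType.Integer)
        {
            // not Output: the input value must reach the procedure
            Direction = ParameterDirection.InputOutput,
            Value = 5
        };
        cmd.Parameters.Add(objParam);
        cmd.ExecuteNonQuery();               // Npgsql populates objParam from the returned row
        Console.WriteLine(objParam.Value);   // expected: 15
    }
    CRUD.con.Dispose();
}

If the parameter does not come back populated in your Npgsql version, reading the single result row with ExecuteReader() is another option.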
Re: casting Bangla characters to NUMERIC
select translate(string,'০১২৩৪৫৬৭৮৯','0123456789'); seems to do the trick.

On Mon, 25 Nov 2019 at 16:38, Bob Jolliffe wrote:
>
> Thanks Tom. That is what I expected to hear. Was being hopeful ...
>
> On Mon, 25 Nov 2019 at 16:27, Tom Lane wrote:
> >
> > Bob Jolliffe writes:
> > > I have an interesting problem. I have a string field in a table which
> > > (sometimes) is expected to contain numeric values, i.e. a cast of the
> > > field to INTEGER is supposed to succeed.
> >
> > > My issue is that the application is running in Bangladesh, and
> > > sometimes the users have entered values using Bangla characters, e.g.
> > > "৮" rather than "8". (Note that this shouldn't have been possible to
> > > enter in the first place, but we can ignore that for now. They are
> > > there; I have to somehow deal with it.)
> >
> > > Is there a way with LOCALE settings to get this cast to work
> > > correctly?
> >
> > Doubt it :-(. ISTM it's certainly outside the charter of int4in to
> > do that. Ideally to_number() would handle it, but I don't think it
> > does at the moment. Probably your best bet is to translate those
> > characters to regular ASCII digits using replace(), then cast.
> > It'd be a bit tedious, but fortunately there are only 10 cases
> > to consider, and you could wrap that up in a function.
> >
> > regards, tom lane
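Wrapped up in a function, as Tom suggested (the function name is made up; the cast will still error out on strings that are not purely digits):

    create or replace function bangla_digits_to_int(txt text)
    returns integer
    language sql immutable strict
    as $$
        -- map each Bangla digit to its ASCII counterpart, then cast
        select translate(txt, '০১২৩৪৫৬৭৮৯', '0123456789')::integer;
    $$;

    -- usage: select bangla_digits_to_int('৮');  returns 8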
Re: casting Bangla characters to NUMERIC
Thanks Tom. That is what I expected to hear. Was being hopeful ...

On Mon, 25 Nov 2019 at 16:27, Tom Lane wrote:
>
> Bob Jolliffe writes:
> > I have an interesting problem. I have a string field in a table which
> > (sometimes) is expected to contain numeric values, i.e. a cast of the
> > field to INTEGER is supposed to succeed.
>
> > My issue is that the application is running in Bangladesh, and
> > sometimes the users have entered values using Bangla characters, e.g.
> > "৮" rather than "8". (Note that this shouldn't have been possible to
> > enter in the first place, but we can ignore that for now. They are
> > there; I have to somehow deal with it.)
>
> > Is there a way with LOCALE settings to get this cast to work
> > correctly?
>
> Doubt it :-(. ISTM it's certainly outside the charter of int4in to
> do that. Ideally to_number() would handle it, but I don't think it
> does at the moment. Probably your best bet is to translate those
> characters to regular ASCII digits using replace(), then cast.
> It'd be a bit tedious, but fortunately there are only 10 cases
> to consider, and you could wrap that up in a function.
>
> regards, tom lane
Re: casting Bangla characters to NUMERIC
Bob Jolliffe writes:
> I have an interesting problem. I have a string field in a table which
> (sometimes) is expected to contain numeric values, i.e. a cast of the
> field to INTEGER is supposed to succeed.

> My issue is that the application is running in Bangladesh, and
> sometimes the users have entered values using Bangla characters, e.g.
> "৮" rather than "8". (Note that this shouldn't have been possible to
> enter in the first place, but we can ignore that for now. They are
> there; I have to somehow deal with it.)

> Is there a way with LOCALE settings to get this cast to work
> correctly?

Doubt it :-(. ISTM it's certainly outside the charter of int4in to
do that. Ideally to_number() would handle it, but I don't think it
does at the moment. Probably your best bet is to translate those
characters to regular ASCII digits using replace(), then cast.
It'd be a bit tedious, but fortunately there are only 10 cases
to consider, and you could wrap that up in a function.

regards, tom lane
casting Bangla characters to NUMERIC
Hi

I have an interesting problem. I have a string field in a table which (sometimes) is expected to contain numeric values, i.e. a cast of the field to INTEGER is supposed to succeed.

My issue is that the application is running in Bangladesh, and sometimes the users have entered values using Bangla characters, e.g. "৮" rather than "8". (Note that this shouldn't have been possible to enter in the first place, but we can ignore that for now. They are there; I have to somehow deal with it.)

Is there a way with LOCALE settings to get this cast to work correctly? Currently I get:

postgres=# select cast('৮' as INTEGER);
ERROR:  invalid input syntax for integer: "৮"
LINE 1: select cast('৮' as INTEGER);

Regards
Bob
Re: pgAdmin 4 schema visibility
On 11/24/19 11:55 PM, Jill Jade wrote:
> Hello everyone,
> I am using Postgres 11 with pgAdmin 4. The system schemas pg_temp and
> pg_toast are not visible when I do \dn+ in psql, but when I do
> SELECT nspname FROM pg_catalog.pg_namespace; I see the schemas.
> Can anyone explain why this is so? Does anyone have a solution for this?
> Thank you

\dnS+: https://www.postgresql.org/docs/11/app-psql.html

\dn[S+] [ pattern ]
    Lists schemas (namespaces). If pattern is specified, only schemas whose
    names match the pattern are listed. By default, only user-created objects
    are shown; supply a pattern or the S modifier to include system objects.
    If + is appended to the command name, each object is listed with its
    associated permissions and description, if any.

--
Adrian Klaver
adrian.kla...@aklaver.com
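For example, in psql:

    postgres=# \dn
    (lists user-created schemas only)
    postgres=# \dnS
    (also lists pg_catalog, pg_toast, pg_temp_N, and the other system schemas)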
Re: Adding LIMIT changes PostgreSQL plan from good to a bad one
On November 22, 2019 at 20:28:39, Michael Lewis (mle...@entrata.com) wrote:
> I try to avoid DISTINCT and use GROUP BY when feasible, as well as avoiding
> OR conditions. If you combined anon1 and anon2 with UNION ALL, and did an
> (inner) join instead of a left join, or even moved all of that to EXISTS,
> perhaps that gives you better, more consistent performance. Something like this-
>
> SELECT contacts.id
> FROM contacts
> JOIN contacts__aggregated AS contacts__aggregated_1
>   ON contacts__aggregated_1.company_id = contacts.company_id
>  AND contacts__aggregated_1.contact_id = contacts.id
>  AND contacts__aggregated_1.field_name = 'names'
> WHERE contacts.company_id = '\x4c2118ad54397f271b00'
>   AND EXISTS (
>     SELECT
>     FROM contacts_values
>     WHERE contacts_values.field_id = '\x'
>       AND contacts_values.field_name = 'facebook'
>       AND nimble_contact_value_normalize(nimble_skip_long_values(contacts_values.value)) =
>           nimble_contact_value_normalize('http://www.facebook.com/jon.ferrara')
>       AND contacts_values.company_id = contacts.company_id
>       AND contacts_values.id = contacts.id
>     UNION ALL
>     SELECT
>     FROM contacts_values
>     WHERE contacts_values.field_id = '\x'
>       AND contacts_values.field_name = 'last_name'
>       AND nimble_contact_value_normalize(nimble_skip_long_values(contacts_values.value)) =
>           nimble_contact_value_normalize('Ferrara')
>       AND contacts_values.company_id = contacts.company_id
>       AND contacts_values.id = contacts.id
>   )
> ORDER BY contacts__aggregated_1.value ASC
> LIMIT 30 OFFSET 0;

Hi Michael! Thank you — your suggestion about replacing DISTINCT with GROUP BY improved the plan for my query significantly. It is still not perfect, but at least it is something we can live with. However, rewriting OR with UNION does not change things. Here's the plan for it: https://explain.depesz.com/s/c6Ec. Judging from the number of loops it uses some form of nested loop internally. It appears that any way of expressing OR ruins something for the planner. My performance is great for a single criterion and for multiple criteria joined by AND; adding any OR into the mix results in giant JOINs and misestimations.
Re: Constants in the foreign key constraints
Hi team. Thanks for the information. Looks like there are some architectural limitations for such foreign keys. Also thanks for the suggestions on how to get the behavior I want on the current Postgres version.

On Sat, 23 Nov 2019, 19:11 Tom Lane, wrote:
> Adrian Klaver writes:
> > Please reply to list also.
> > Ccing list.
>
> > On 11/22/19 11:36 PM, aleksey ksenzov wrote:
> >> While I understand I can do everything with triggers/functions, for me
> >> it looks like a good idea to have the possibility to use constants in
> >> constraints, so it would be very nice if the postgres community could
> >> add this functionality in the nearest releases.
>
> It seems quite unlikely to me that we'd add such a thing. It'd be
> a weird wart on the foreign-key feature. Notable problems:
>
> * How would it interact with referential actions, notably
> ON UPDATE CASCADE, ON UPDATE/DELETE SET NULL, ON UPDATE/DELETE SET DEFAULT?
> I guess you could disallow those options for such a foreign key,
> but anytime you have a feature that's that non-orthogonal with
> existing ones, you have to ask yourself if you've designed it right.
>
> * Such FKs couldn't be displayed in the information_schema views,
> at least not without violating the letter and spirit of the SQL spec.
> We already have some cases of constraints that can't be shown in
> information_schema, but that's not the sort of wart I want more of.
>
> BTW, it seems to me that you can get the same behavior with existing
> features: make a regular multicolumn foreign key constraint, and then
> add a CHECK constraint restricting what value one of the referencing
> columns can have. Yeah, this requires useless storage of a column
> that will only ever have one value. I think that's an okay limitation
> for a niche use-case. It also generalizes more easily to cases where
> there's more than exactly one allowed value for a referencing column.
>
> regards, tom lane
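A hypothetical illustration of the multicolumn-FK-plus-CHECK workaround Tom describes (table and column names are made up):

    create table statuses (
        domain text,
        code   int,
        primary key (domain, code)
    );

    create table orders (
        id          serial primary key,
        status_kind text not null default 'order',  -- the "constant" column
        status_code int  not null,
        foreign key (status_kind, status_code) references statuses (domain, code),
        check (status_kind = 'order')               -- pins it to a single value
    );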
AW: SQL query for Postgres replication check
> On Fri, Nov 22, 2019 at 01:20:59PM +, Zwettler Markus (OIZ) wrote:
> > I came up with the following query, which should return any apply lag in
> > seconds.
> >
> > select coalesce(replay_delay, 0) replication_delay_in_sec from (
> >     select datname,
> >            (select case
> >                      when received_lsn = latest_end_lsn then 0
> >                      else extract(epoch from now() - latest_end_time)
> >                    end
> >             from pg_stat_wal_receiver) replay_delay
> >     from pg_database
> >     where datname = current_database()
> > ) xview;
> >
> > I would expect delays >0 in case SYNC or ASYNC replication is somehow
> > behind. We will warn at 120 secs and go critical at 300 secs.
>
> pg_stat_wal_receiver is available only on the receiver, aka the standby, so
> it would not really be helpful on a primary. On top of that, streaming
> replication is system-wide, so there is no actual point in looking at
> databases either.
>
> > Would this do the job or am I missing something here?
>
> Here is a suggestion for Nagios: hot_standby_delay, as told in
> https://github.com/bucardo/check_postgres/blob/master/check_postgres.pl
> --
> Michael

I don't want to use check_hot_standby_delay, as I would have to configure every streaming replication setup separately in Nagios. I want a generic routine which I can load on any Postgres server regardless of streaming replication or database role. The query should return >0 if streaming replication falls behind and 0 in all other cases (replication or not). Checking streaming replication per database doesn't make any sense to me.

Markus
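For what it's worth, a hypothetical generic variant that avoids pg_stat_wal_receiver and pg_database entirely (these functions all exist in 11; treat this as a sketch, not a drop-in check):

    select case
               when not pg_is_in_recovery() then 0              -- primary: no apply lag
               when pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn() then 0
               else coalesce(extract(epoch from now() - pg_last_xact_replay_timestamp()), 0)
           end as replication_delay_in_sec;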