Re: [GENERAL] 10 beta 4 foreign table partition check constraint broken?
On Thu, Sep 21, 2017 at 02:59:21PM +0900, Michael Paquier wrote:
> On Fri, Sep 15, 2017 at 10:43 PM, Paul Jones wrote:
> > Is this a bug in Postgres 10b4? Looks like neither partition ranges
> > nor check constraints are honored in 10b4 when inserting into
> > partitions that are foreign tables.
>
> Here is what you are looking for in the documentation:
> https://www.postgresql.org/docs/10/static/sql-createforeigntable.html
>
> Constraints on foreign tables (such as CHECK or NOT NULL clauses) are
> not enforced by the core PostgreSQL system, and most foreign data
> wrappers do not attempt to enforce them either; that is, the
> constraint is simply assumed to hold true. There would be little point
> in such enforcement since it would only apply to rows inserted or
> updated via the foreign table, and not to rows modified by other
> means, such as directly on the remote server. Instead, a constraint
> attached to a foreign table should represent a constraint that is
> being enforced by the remote server.

Thank you for the pointer... it is clear that I just didn't read far enough.

> > Here is a nearly shovel-ready example. Just replace with your
> > servers/passwords.
> >
> > -- ----------
> > -- Server 2
> > -- ----------
> >
> > CREATE DATABASE cluster;
> > \c cluster
> >
> > CREATE TABLE foo_1 (
> >     id   INT NOT NULL,
> >     name TEXT
> > );
>
> So here I think that you should add a CHECK constraint to this table,
> and that the behavior of your example works as expected.

I will try this, thanks!

> --
> Michael
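For the archives: as I read Michael's suggestion, the CHECK constraint belongs on the remote table itself on server 2, something like the following (untested, and reusing the same toy range from my example):

    -- on server 2
    CREATE TABLE foo_1 (
        id   INT NOT NULL,
        name TEXT,
        CHECK (id >= 0 AND id < 1)
    );

With that in place, an INSERT routed through the foreign table that falls outside the range should be rejected by the remote server rather than silently accepted.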
[GENERAL] 10 beta 4 foreign table partition check constraint broken?
Is this a bug in Postgres 10b4? Looks like neither partition ranges nor check constraints are honored in 10b4 when inserting into partitions that are foreign tables.

Here is a nearly shovel-ready example. Just replace with your servers/passwords.

-- ----------
-- Server 1
-- ----------

CREATE DATABASE cluster;
\c cluster
CREATE EXTENSION postgres_fdw;

CREATE SERVER server2 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS(host 'server2', dbname 'cluster');

CREATE USER MAPPING FOR postgres SERVER server2
    OPTIONS(user 'postgres', password 'pgpassword');

CREATE TABLE foo (
    id   INT NOT NULL,
    name TEXT
) PARTITION BY RANGE (id);

CREATE FOREIGN TABLE foo_1 PARTITION OF foo
    FOR VALUES FROM (0) TO (1)
    SERVER server2 OPTIONS (table_name 'foo_1');

-- ----------
-- Server 2
-- ----------

CREATE DATABASE cluster;
\c cluster

CREATE TABLE foo_1 (
    id   INT NOT NULL,
    name TEXT
);

-- ----------
-- Server 1
-- ----------

INSERT INTO foo_1 VALUES (0, 'funky bug'), (100, 'wiggle frank'),
    (15000, 'boegger snot');

SELECT * FROM foo;

DROP FOREIGN TABLE foo_1;

CREATE FOREIGN TABLE foo_1 PARTITION OF foo
    (id CONSTRAINT f1 CHECK ((id >= 0) AND (id < 1)))
    FOR VALUES FROM (0) TO (1)
    SERVER server2 OPTIONS (table_name 'foo_1');

INSERT INTO foo_1 VALUES (0, 'funky bug'), (100, 'wiggle frank'),
    (15000, 'boegger snot');

SELECT * FROM foo;
[GENERAL] Redacting JSONB
Is there some way to redact JSONB fields? This would essentially be implementing "row-level" security on individual JSONB fields. It could possibly be done with a view or a function. MongoDB and MarkLogic both do this.

PJ
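For illustration, this is the sort of view-based redaction I have in mind; it is only a sketch, and the names (people, ssn, reporting_role) are invented:

    CREATE TABLE people (
        id  SERIAL PRIMARY KEY,
        doc JSONB            -- e.g. {"name": "...", "ssn": "...", ...}
    );

    -- Expose everything except the sensitive key; "doc - 'ssn'" removes
    -- the top-level key from the jsonb value (operator available in 9.5+).
    CREATE VIEW people_redacted AS
    SELECT id, doc - 'ssn' AS doc
    FROM people;

    GRANT SELECT ON people_redacted TO reporting_role;

A function (possibly SECURITY DEFINER) could do the same kind of per-role filtering where a view is not flexible enough.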
Re: [GENERAL] Question about jsonb and data structures
On Wed, Jun 21, 2017 at 09:37:20AM -0700, Emilie Laffray wrote:
> Date: Wed, 21 Jun 2017 09:37:20 -0700
> From: Emilie Laffray
> To: Achilleas Mantzios
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Question about jsonb and data structures
>
> Hello Achilleas,
>
> I fail to see how it would solve my problem here. I already have a
> structure that is packed and nested. Your example is on a simple key/value
> pair structure and effectively you can address the ids very simply. In my
> case, I would need to return only a subset of the json data.
> Maybe I missed something from your example?

Would jsquery help?

https://github.com/postgrespro/jsquery

> On Wed, Jun 21, 2017 at 12:27 AM, Achilleas Mantzios <
> ach...@matrix.gatewaynet.com> wrote:
>
> > On 21/06/2017 01:01, Emilie Laffray wrote:
> >
> >> Hello,
> >>
> >> I have been playing with Postgresql recently with a large table and I
> >> have started looking at reducing the number of rows in that table.
> >> One idea to reduce the actual size, I thought I would "compress" the data
> >> structure into a JSON object (more on that later).
> >> The table is pretty straightforward in itself:
> >>     other_id integer
> >>     type_id  integer
> >>     label_id integer
> >>     rank_id  integer
> >>     value    real
> >>
> >> and the goal is to move to a data structure where we have
> >>     other_id integer
> >>     value    jsonb
> >>
> >> There are many things in the table that are not optimal for legacy reasons
> >> and I can't just get rid of them.
> >>
> >> I looked at several JSON object data structures to see if I could make it
> >> work, notably
> >>     [{"type":1,"label":1,"rank":1,"value":.5},{"type":1,"label":2,"rank":1,"value":.25}]
> >>     {"1":{"1":{"rank":1,"value":.5},"2":{"rank":1,"value":.25}}}
> >>
> >> For better or worse, the first one would be the best for me as I can do a
> >> simple query like this using the GIN index built on top of value:
> >>     SELECT *
> >>     FROM mytable
> >>     WHERE value @> '[{"type":1,"rank":1,"label":2}]'
> >>
> >> Effectively, I would want to extract the value corresponding to my
> >> condition using simple SQL, aka not having to write a function extracting
> >> the json.
> >>
> >> The experiment on the second data structure shows that it is not as
> >> convenient, as I may need to perform searches on either type, label, rank,
> >> or various combinations of the fields.
> >>
> > Maybe you could try smth like:
> >
> > test=# select * from lala;
> >  id |    txt
> > ----+------------
> >   1 | one
> >   2 | two
> >   3 | ZZZbabaZZZ
> >   4 | ZZZbabaZZZ
> >   5 | ZZZbabaZZZ
> >   6 | ZZZbabaZZZ
> >   7 | ZZZbabaZZZ
> >   8 | ZZZbabaZZZ
> >   9 | ZZZbabaZZZ
> >  10 | ZZZbabaZZZ
> >  11 | ZZZbabaZZZ
> >  12 | ZZZbabaZZZ
> >  13 | ZZZbabaZZZ
> > (13 rows)
> >
> > select foo.* FROM (select id,to_jsonb(lala) as jzon from lala) as foo
> > where jzon @> '{"id":5}';
> >
> >> Am I missing something?
> >>
> >> Thanks in advance,
> >> Emilie Laffray
> >
> > --
> > Achilleas Mantzios
> > IT DEV Lead
> > IT DEPT
> > Dynacom Tankers Mgmt
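To spell out the jsquery suggestion: roughly what I imagine it would look like against the "mytable"/"value" example above, going from my reading of the project README (untested here, so please double-check the opclass and query syntax against the README before relying on it):

    CREATE EXTENSION jsquery;

    -- jsquery ships its own GIN opclasses for jsonb
    CREATE INDEX mytable_value_jsq_idx ON mytable
        USING gin (value jsonb_path_value_ops);

    -- "#" means "some element of the array", so this should match rows whose
    -- jsonb array contains one object with type=1, rank=1 and label=2
    SELECT *
    FROM mytable
    WHERE value @@ '#(type = 1 and rank = 1 and label = 2)'::jsquery;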
[GENERAL] Upgrade method from 9.2 to 10 ?
Would it be possible to upgrade from 9.2 to 10 by doing a pg_basebackup to the new server, followed by pg_upgrade -k, then streaming replication from the 9.2 server to the 10 server until we're ready to cut over to 10? The idea is to minimize downtime.

PJ
[GENERAL] Stats update difference between VACUUM ANALYZE and ANALYZE in 9.2?
For a freshly pg_restore'd 9.2 database, would VACUUM ANALYZE update statistics any better than just an ANALYZE?

After a restore, we ran a bunch of ANALYZEs on each table individually using GNU 'parallel' (for speed). Many of these tables are child tables in a partition. Following the ANALYZEs, a join with the parent table showed all of the child tables scanned sequentially. After running VACUUM ANALYZE on the whole database, the same join used index-only scans on the child tables.

An examination of the fine manual implies there may be some difference (or a documentation conflict?) between running ANALYZE manually on individual tables and an unqualified ANALYZE on the whole database.

Section 5.9.6:

    "If you are using manual VACUUM or ANALYZE commands, don't forget that
    you need to run them on each partition individually. A command like:

        ANALYZE measurement;

    will only process the master table."

ANALYZE:

    "If the table being analyzed has one or more children, ANALYZE will
    gather statistics twice: once on the rows of the parent table only, and
    a second time on the rows of the parent table with all of its children.
    This second set of statistics is needed when planning queries that
    traverse the entire inheritance tree. The autovacuum daemon, however,
    will only consider inserts or updates on the parent table itself when
    deciding whether to trigger an automatic analyze for that table. If
    that table is rarely inserted into or updated, the inheritance
    statistics will not be up to date unless you run ANALYZE manually."

Can anyone explain what's going on here?

Thanks,
Paul Jones
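For context, this is how I understand the two kinds of statistics can be inspected; the table names follow the manual's partitioning example, and the query is only a sketch:

    -- Run against the parent: collects the parent-only statistics plus the
    -- whole-inheritance-tree statistics the ANALYZE page describes.
    ANALYZE measurement;

    -- Each child still needs its own ANALYZE for its own statistics.
    ANALYZE measurement_y2006m02;

    -- pg_stats marks the inheritance-tree set with inherited = true.
    SELECT tablename, attname, inherited, n_distinct
    FROM pg_stats
    WHERE tablename = 'measurement';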
[GENERAL] Question about shared_buffer cache behavior
In Postgres 9.5.1 with a shared_buffer cache of 7GB, a SELECT from a single table that uses an index appears to read the table into the shared_buffer cache. Then, as many times as the exact same SELECT is repeated in the same session, it runs blazingly fast and doesn't even touch the disk. All good.

Now, in the *same* session, if a different SELECT from the *same* table, using the *same* index, is run, it appears to read the entire table from disk again.

Why is this? Is there something about the query that qualifies the contents of the shared_buffer cache? Would this act differently for different kinds of indexes?

PJ
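For what it's worth, one way to check what is actually sitting in the cache between the two SELECTs is the contrib module pg_buffercache; a rough sketch (assumes the extension is installed and 8kB pages):

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- Count shared_buffers pages per relation in the current database.
    SELECT c.relname,
           count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS cached
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                                WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;

If the table shows roughly the same number of cached buffers before and after the second SELECT, the re-reads would point at the query itself rather than at cache eviction.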
[GENERAL] MongoDB 3.2 beating Postgres 9.5.1?
I have been running the EDB benchmark that compares Postgres and MongoDB. I believe EDB ran it against PG 9.4 and Mongo 2.6. I am running it against PG 9.5.1 and Mongo 3.2 with WiredTiger storage, using 10,000,000 JSON records generated by the benchmark.

It looks like Mongo is winning, and apparently because of its cache management. The first queries on both run in ~30 min. And, once PG fills its cache, it whips Mongo on repeats of the *same* query (vmstat shows no disk reads for PG). However, when a different query on the same table is issued to both, vmstat shows that PG has to read the *entire* table again, and it takes ~30 min. Mongo does a lot of reads initially, but after about 5 minutes it stops reading and completes the query, most likely because it is using its cache very effectively.

Host: virtual machine
    4 CPUs
    16 GB RAM
    200 GB disk
    RHEL 6.6

PG: 9.5.1, compiled from source
    shared_buffers = 7GB
    effective_cache_size = 12GB

Mongo: 3.2, installed with RPM from Mongo

In PG, I created the table by:

    CREATE TABLE json_tables (
        data JSONB
    );

After loading, it creates the index:

    CREATE INDEX json_tables_idx ON json_tables USING GIN (data jsonb_path_ops);

After a lot of experimentation, I discovered that the benchmark was not using PG's index, so I modified the four queries to be:

    SELECT data FROM json_tables WHERE data @> '{"brand": "ACME"}';
    SELECT data FROM json_tables WHERE data @> '{"name": "Phone Service Basic Plan"}';
    SELECT data FROM json_tables WHERE data @> '{"name": "AC3 Case Red"}';
    SELECT data FROM json_tables WHERE data @> '{"type": "service"}';

Here are two consecutive EXPLAIN ANALYZEs for PG, for the same query. There is no functional difference in the plans that I can tell, but the effect of PG's cache on the second is dramatic. If anyone has ideas on how I can get PG to more effectively use the cache for subsequent queries, I would love to hear them.

benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';
                                 QUERY PLAN
------------------------------------------------------------------------------
 Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=1 width=1261) (actual time=2157.118..1259550.327 rows=909091 loops=1)
   Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
   Rows Removed by Index Recheck: 4360296
   Heap Blocks: exact=37031 lossy=872059
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=1 width=0) (actual time=2141.250..2141.250 rows=909091 loops=1)
         Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
 Planning time: 291.932 ms
 Execution time: 1259886.920 ms
(8 rows)

Time: 1261191.844 ms

benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';
                                 QUERY PLAN
------------------------------------------------------------------------------
 Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=1 width=1261) (actual time=779.261..29815.262 rows=909091 loops=1)
   Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
   Rows Removed by Index Recheck: 4360296
   Heap Blocks: exact=37031 lossy=872059
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=1 width=0) (actual time=769.081..769.081 rows=909091 loops=1)
         Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
 Planning time: 33.967 ms
 Execution time: 29869.381 ms
(8 rows)

Time: 29987.122 ms
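For completeness, here are the two experiments I am considering next; both are only sketches I have not yet validated against this data set. If I understand the lossy heap blocks in the plans correctly, the bitmap is overflowing work_mem, and contrib/pg_prewarm (available since 9.4) can pre-load a relation into shared_buffers:

    -- Give the bitmap heap scan room to stay exact rather than lossy.
    SET work_mem = '256MB';

    -- Pre-load the table and index into the cache before querying
    -- (only helps to the extent they fit in shared_buffers).
    CREATE EXTENSION IF NOT EXISTS pg_prewarm;
    SELECT pg_prewarm('json_tables');
    SELECT pg_prewarm('json_tables_idx');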
[GENERAL] Anyone compare PG 9.5 and MongoDB 3.2?
MongoDB has released 3.2 with their WiredTiger storage. Has anyone benchmarked 9.5 against it, and for JSONB elements several MB in size?

PJ
Re: [GENERAL] 9.4 -> 9.5 upgrade problem when both python2 and python3 present
On Mon, Jan 11, 2016 at 06:23:06PM -0500, Tom Lane wrote:
> Date: Mon, 11 Jan 2016 18:23:06 -0500
> From: Tom Lane
> To: Paul Jones
> cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] 9.4 -> 9.5 upgrade problem when both python2 and
>  python3 present
>
> > I guess the thing to do is to manually pg_dump the databases that have
> > python, drop them, upgrade, and then manually restore.
>
> If you'd rather build a patched version of PG, I have posted a
> work-in-progress patch to address this issue:
> http://www.postgresql.org/message-id/31659.1452538...@sss.pgh.pa.us

It worked most excellently, thank you.

> regards, tom lane
Re: [GENERAL] 9.4 -> 9.5 upgrade problem when both python2 and python3 present
On Mon, Jan 11, 2016 at 10:04:16AM -0500, Tom Lane wrote:
> Paul Jones writes:
> > pg_upgrade complains about not being able to find $libdir/plpython3
> > when upgrading a 9.4 cluster that has both python2 and python3 used.
>
> No, that's not what the error message says:
>
> > Could not load library "$libdir/plpython3"
> > FATAL: Python major version mismatch in session
> > DETAIL: This session has previously used Python major version 2, and it is
> > now attempting to use Python major version 3.
> > HINT: Start a new session to use a different Python major version.
>
> This is a restriction we put in place because libpython2 and libpython3
> don't coexist nicely in the same address space. Unfortunately, it makes
> it problematic to restore a dump that contains references to both python2
> and python3 functions.
>
> It looks like pg_upgrade tries to load all libraries from functions in
> any database in the old cluster into a single session in the new cluster,
> which will fail in a scenario like this even if you keep python2 and
> python3 functions rigorously separated into distinct databases. I'm
> not sure if we could weaken that test enough to work.

I guess the thing to do is to manually pg_dump the databases that have
python, drop them, upgrade, and then manually restore.

> > I dropped the python2 database but still got the problem.
>
> You must still have at least one database that contains references
> to python2 (check pg_language to be sure).

I thought of that after I pulled the trigger on the mail... Thanks for
the information...

> regards, tom lane
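To spell out the workaround mentioned above, it would look roughly like this; the database name "pydb" is made up, and the paths are the ones from my pg_upgrade session:

    # dump each database that contains plpython functions, then drop it
    pg_dump -Fc pydb > pydb.dump
    dropdb pydb

    # run the upgrade as before, now without the python2/python3 mix
    pg_upgrade -b /usr/local/pgsql9.4/bin -B /usr/local/pgsql/bin \
               -d /mnt/pgdata9.4 -D /mnt/pgdata

    # recreate and restore on the new cluster
    createdb pydb
    pg_restore -d pydb pydb.dump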
[GENERAL] 9.4 -> 9.5 upgrade problem when both python2 and python3 present
pg_upgrade complains about not being able to find $libdir/plpython3 when upgrading a 9.4 cluster that has both python2 and python3 used. Both the 9.4 and 9.5 PGs have been built from source with python2/3 in the recommended way, and plpython3.so is present in /usr/local/pgsql/lib. I dropped the python2 database but still got the problem.

The session:

postgres@mayon:~$ pg_upgrade -b /usr/local/pgsql9.4/bin -B /usr/local/pgsql/bin -d /mnt/pgdata9.4 -D /mnt/pgdata
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Creating dump of global objects                             ok
Creating dump of database schemas                           ok
Checking for presence of required libraries                 fatal

Your installation references loadable libraries that are missing from the
new installation. You can add these libraries to the new installation,
or remove the functions using them from the old installation. A list of
problem libraries is in the file:
    loadable_libraries.txt

Failure, exiting

postgres@mayon:~$ cat *.txt
Could not load library "$libdir/plpython3"
FATAL: Python major version mismatch in session
DETAIL: This session has previously used Python major version 2, and it is
now attempting to use Python major version 3.
HINT: Start a new session to use a different Python major version.
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
[GENERAL] ERROR: could not read block 3 in file "base/12511/12270"
I have been having disk errors that have corrupted something in my postgres database. Other databases work OK. Running on Ubuntu 10.04:

paul@kitanglad:~$ psql -U postgres
psql (9.4.5)
Type "help" for help.

postgres=# SELECT pg_catalog.pg_is_in_recovery();
ERROR:  could not read block 3 in file "base/12511/12270": read only 4096 of 8192 bytes

postgres=# \c pjtest
You are now connected to database "pjtest" as user "postgres".
pjtest=# SELECT pg_catalog.pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 f
(1 row)

Since this is the "postgres" database, dropping and re-creating it doesn't seem possible. pg_dump also gets the same error when I run it on "postgres", so pg_basebackup will probably get the same error.

The only thing I can think of is to create a completely new data directory with initdb and pg_dump/restore all the databases in the cluster to the new data directory. Is this my only option?

PJ
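For concreteness, the procedure I have in mind would look roughly like this; it is untested, the paths and ports are made up, and it assumes the remaining databases can still be dumped cleanly:

    # initialize a fresh data directory and start it on a second port
    initdb -D /var/lib/postgresql/9.4/main.new
    pg_ctl -D /var/lib/postgresql/9.4/main.new -o '-p 5433' -l new.log start

    # roles and tablespaces, connecting through an undamaged database
    pg_dumpall -p 5432 -l pjtest --globals-only | psql -p 5433 -d postgres

    # then each user database, one at a time
    pg_dump -p 5432 -Fc pjtest | pg_restore -p 5433 -C -d postgres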
Re: [GENERAL] Code for user-defined type
----- Original Message -----
> From: Tom Lane
> To: Paul Jones
> Cc: "pavel.steh...@gmail.com" ; "pgsql-general@postgresql.org"
> Sent: Thursday, May 29, 2014 11:32 AM
> Subject: Re: [GENERAL] Code for user-defined type
>
> Paul Jones writes:
> >> I used the proper XGetDatum and DatumGetX and was able to get it to work
> >> properly. However, I since discovered that I probably should not use
> >> "cstring_to_text" because of the palloc's it does. The problem comes
> >> when doing "\copy table from file". After about 1000 rows, the backend
> >> dies with SEGV, I think because of too many pallocs being created in
> >> the copy transaction.
>
> That probably means you're stomping on memory that doesn't belong to
> you.
>
> pallocs per se should not be a problem for COPY --- it does a context
> reset per row. And even if it didn't, you'd not likely be running out
> of memory after a mere thousand rows. However, a buffer-overrun type
> of coding error would be probabilistic as to when it became obvious
> via a core dump; some of the time you'd be stomping on memory that
> was unused anyway.
>
> regards, tom lane

Ok, I am going to recompile with --enable-cassert and give
cstring_to_text/text_to_cstring another try to see if I can track down
what's going wrong. I'm letting internal routines do all the work, so
it's probably something bad I'm passing to them.

PJ
Re: [GENERAL] Code for user-defined type
On Wed, May 28, 2014 at 10:51:43AM +0200, Pavel Stehule wrote:
>
> Hello
>
> 2014-05-27 20:30 GMT+02:00 Paul Jones :
>
> > I have written a user-defined type that allows direct import and printing
> > of DB2 timestamps. It does correctly import and export DB2 timestamps,
> > but I'm wondering if someone could tell me if I made any mistakes in
> > the C code, particularly w.r.t. memory leaks or non-portable constructs.
> >
> > I'm doing this on 9.3.4.
> >
> > Thanks,
>
> There is one issue: DirectFunctionCall takes parameters converted to Datum
> and returns Datum.
>
> You should use the macros XGetDatum and DatumGetX.
>
> In this case:
>
>     newDate = DatumGetTimestamp(DirectFunctionCall2(to_timestamp,
>                   CStringGetDatum(date_txt),
>                   CStringGetDatum(cstring_to_text(nls_date_format))));
>
>     PG_RETURN_TIMESTAMP(newDate);
>
> There is an inconsistency in types - Timestamp and Timestamptz.

Thanks, Pavel!

I used the proper XGetDatum and DatumGetX and was able to get it to work
properly. However, I since discovered that I probably should not use
"cstring_to_text" because of the palloc's it does. The problem comes when
doing "\copy table from file". After about 1000 rows, the backend dies
with SEGV, I think because of too many pallocs being created in the copy
transaction.

I rewrote it so that the format string is turned into a text at .so load
time, and then converted the input string into a local text.

PJ
[GENERAL] Code for user-defined type
I have written a user-defined type that allows direct import and printing of DB2 timestamps. It does correctly import and export DB2 timestamps, but I'm wondering if someone could tell me if I made any mistakes in the C code, particularly w.r.t. memory leaks or non-portable constructs.

I'm doing this on 9.3.4.

Thanks,
PJ

-- SQL ---

CREATE TYPE db2tstz;

CREATE FUNCTION db2tstzin(cstring) RETURNS db2tstz
    AS '/home/paul/src/pgproj/types/db2tstz.so', 'db2tstzin'
    LANGUAGE C STRICT;

CREATE FUNCTION db2tstzout(db2tstz) RETURNS cstring
    AS '/home/paul/src/pgproj/types/db2tstz.so', 'db2tstzout'
    LANGUAGE C STRICT;

CREATE TYPE db2tstz (
    INPUT = db2tstzin,
    OUTPUT = db2tstzout,
    LIKE = timestamptz
);

CREATE TABLE foo (
    aaa INT,
    bbb DB2TSTZ
);

INSERT INTO foo VALUES
    (1, '2013-10-03-17.22.18.823581'),
    (2, '2010-04-06-13.16.47.174372');

-- C Code --

#include "postgres.h"
#include
#include "fmgr.h"
#include "utils/formatting.h"
#include "utils/datetime.h"

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

/* by value */

PG_FUNCTION_INFO_V1(db2tstzin);

Datum
db2tstzin(PG_FUNCTION_ARGS)
{
    char      *date_txt = PG_GETARG_CSTRING(0);
    Timestamp  tsp;

    tsp = DirectFunctionCall2(to_timestamp,
                              cstring_to_text(date_txt),
                              cstring_to_text("YYYY-MM-DD-HH24.MI.SS.US"));

    PG_RETURN_DATUM(tsp);
}

PG_FUNCTION_INFO_V1(db2tstzout);

Datum
db2tstzout(PG_FUNCTION_ARGS)
{
    TimestampTz  tsp = PG_GETARG_TIMESTAMPTZ(0);
    text        *result;

    result = (text *) DirectFunctionCall2(timestamp_to_char,
                                          (int) &tsp,
                                          cstring_to_text("YYYY-MM-DD-HH24.MI.SS.US"));

    PG_RETURN_CSTRING(text_to_cstring(result));
}
Re: [GENERAL] User defined operator fails to work in EXCLUDE constraint
----- Original Message -----
> From: Tom Lane
> To: Paul Jones
> Cc: "pgsql-general@postgresql.org"
> Sent: Sunday, April 13, 2014 4:25 PM
> Subject: Re: [GENERAL] User defined operator fails to work in EXCLUDE constraint
>
> Paul Jones writes:
> > I tried to define my own circle operator to use in an EXCLUDE constraint
> > but it fails to detect insertion of rows that should not simultaneously
> > be allowed in the table. The operator compares two circles' radii and
> > works for a simple SELECT. What am I doing wrong?
>
> This:
>
> > ALTER OPERATOR FAMILY circle_ops USING gist ADD
> >     OPERATOR 15 === (circle, circle);
>
> You can't just add a new operator to a GIST opclass and have it work with
> no coding, because what makes it work is teaching the opclass'
> consistent() function about it.
>
> What I'd have expected to happen when you did this was bleating about
> an unrecognized operator strategy number. The reason you didn't get that
> was that rtree_internal_consistent doesn't throw an error in the default:
> case in its switch, which seems pretty stupid now that I look at it.
>
> In this particular application, circle_ops couldn't really help you even
> if you were prepared to go and change the C code, because what it stores
> in the index is bounding boxes for the circles. I can't see any way for
> bounding-box comparisons to exclude subtrees of the index when the query
> is about whether the radii match; so you'd not be able to do better than
> a full index scan, which will not be faster than a full table scan.
>
> You could probably solve your problem with a different index
> representation. A brute-force way would be to make an expression index
> on the range [radius(aa), radius(aa) + 0.005] and then look for
> overlaps of those ranges. There might be a better answer.
>
> regards, tom lane

I was afraid it was something like this. I see that I was way in over my
head on this one, and I was misled because it didn't complain about
anything. I do appreciate the lesson.

PJ
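For reference, an untested sketch of the range-overlap idea Tom describes above, reusing the names from my example; built-in range types have GiST support, so no extra opclass work should be needed:

    CREATE TABLE punky (
        acirc CIRCLE,
        EXCLUDE USING gist (
            (numrange(radius(acirc)::numeric,
                      radius(acirc)::numeric + 0.005)) WITH &&
        )
    );

    INSERT INTO punky VALUES ('<(0,0),3>');   -- first circle, radius 3
    INSERT INTO punky VALUES ('<(7,0),3>');   -- should now be rejected: the radius ranges overlap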
[GENERAL] User defined operator fails to work in EXCLUDE constraint
I tried to define my own circle operator to use in an EXCLUDE constraint, but it fails to detect insertion of rows that should not simultaneously be allowed in the table. The operator compares two circles' radii and works for a simple SELECT. What am I doing wrong?

Here is the code to reproduce. The second insert at the end should fail because the two circles have the same radius.

CREATE OR REPLACE FUNCTION circradcmp(aa CIRCLE, bb CIRCLE) RETURNS BOOLEAN
AS $$
DECLARE
    zz DOUBLE PRECISION;
BEGIN
    zz := abs(radius(aa) - radius(bb));
    IF (zz < 0.0005) THEN
        RETURN TRUE;
    ELSE
        RETURN FALSE;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE OPERATOR === (
    LEFTARG = CIRCLE,
    RIGHTARG = CIRCLE,
    PROCEDURE = circradcmp,
    COMMUTATOR = ===
);

ALTER OPERATOR FAMILY circle_ops USING gist ADD
    OPERATOR 15 === (circle, circle);

CREATE TABLE punky (
    acirc CIRCLE,
    EXCLUDE USING GIST (acirc circle_ops WITH ===)
);

INSERT INTO punky VALUES ('(0,0),3)');
INSERT INTO punky VALUES ('(7,0),3)');

Paul Jones
Re: [GENERAL] Debugging extension with gdb?
> From: Tom Lane
> To: Paul Jones
> Cc: "pgsql-general@postgresql.org"
> Sent: Wednesday, November 20, 2013 9:13 AM
> Subject: Re: [GENERAL] Debugging extension with gdb?
>
> Paul Jones writes:
> > Can someone point me to information on how to debug a Postgres extension
> > with gdb?
>
> I've noticed that on some platforms, it works better to attach gdb to the
> backend process before you load the extension, but on others, it works
> better to attach afterwards. You might try the opposite procedure from
> whatever you were doing. Other than that, it's mostly Just Worked for
> me on any remotely modern platform. Be sure both the backend and the
> extension are compiled with -g, of course.
>
> regards, tom lane

After I *carefully* re-read your mail, I realized that I did not have the
entire postgresql compiled with '-g'. So I went back and re-built all of
it and, behold, I set and caught a breakpoint in the extension.

Thanks!

PJ
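For anyone searching the archives later, the cycle that ended up working for me was roughly the following; the PID and function name are only placeholders:

    # 1. rebuild the server *and* the extension with symbols and no optimization
    ./configure CFLAGS='-O0 -g' && make && make install

    # 2. in psql, note the PID of the session's backend:  SELECT pg_backend_pid();

    # 3. attach from another terminal and set the breakpoint
    gdb -p 12345
    (gdb) break my_ext_func     # stays pending until the extension .so is loaded
    (gdb) continue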
[GENERAL] Debugging extension with gdb?
Can someone point me to information on how to debug a Postgres extension with gdb?

I have been able to attach to one, but getting breakpoints to work right and getting the addresses to properly relocate (since the extension could be loaded anywhere) has been the problem for me. I have not found anything very definitive yet in searching the net.

Thanks,
PJ
Re: [GENERAL] Can LC_TIME affect timestamp input?
----- Original Message -----
> From: Jasen Betts
> To: pgsql-general@postgresql.org
> Sent: Saturday, January 26, 2013 9:00 PM
> Subject: Re: [GENERAL] Can LC_TIME affect timestamp input?
>
> On 2013-01-25, Paul Jones wrote:
> > Is it possible for LC_TIME locale to affect the format with which
> > timestamps are input?
>
> > I have DB2 CSV dumps with timestamps like '2003-10-21-22.59.44.00'
>
> All the non-digit symbols between "21" and the "44" look unusual

Yes, they do, but that's what came out of DB2 (it is what it is...)

> > that I want to load into Postgres with \copy. I would like to eliminate
> > the sed scripts that convert the timestamps in order to speed up the load.
>
> > (I know I could stage it through a temp table and use to_timestamp()
> > but that requires a temp table for each real table, which is not viable
> > w.r.t. our project goals).
>
> could you use a text column and then do alter table?

The project sponsors won't accept this (but this is a good addition to my
personal toolbox for another day, thanks)

> > I created a special locale with the DB2 timestamp format defined and did
> >
> >     set lc_time='en_DB.UTF-8';
>
> what's "DB" ?

Just made it up... it stands for DB2.

> > It didn't affect anything, in or out with Postgres. I know the locale
> > works because date(1) displays the DB2 format correctly.
>
> That feels like a bug somewhere
>
> AIUI date uses strftime("%c",...
> and strftime() doesn't support fractional seconds, so postgres can't
> use it.

Since posting this, I tried digging around in the source code. From
looking at "timestamp_in" and related routines, it doesn't appear to take
into account any LC_* environment variable. And I didn't see strftime(3)
used for timestamps (although I could have missed it). Timestamp input
seems hard-wired for the most part.

PJ

> --
> ⚂⚃ 100% natural
[GENERAL] Can LC_TIME affect timestamp input?
Is it possible for the LC_TIME locale to affect the format with which timestamps are input?

I have DB2 CSV dumps with timestamps like '2003-10-21-22.59.44.00' that I want to load into Postgres with \copy. I would like to eliminate the sed scripts that convert the timestamps in order to speed up the load. (I know I could stage it through a temp table and use to_timestamp(), but that requires a temp table for each real table, which is not viable w.r.t. our project goals.)

I created a special locale with the DB2 timestamp format defined and did

    set lc_time='en_DB.UTF-8';

It didn't affect anything, in or out with Postgres. I know the locale works because date(1) displays the DB2 format correctly.

Postgres version: 9.2.2 (built from source)
OS: CentOS 6.3

Paul Jones
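For completeness, the staging approach I referred to above looks roughly like this; the table and column names are invented, and the objection is only that it needs one such temp table per real table:

    CREATE TEMP TABLE stage (ts_raw TEXT, payload TEXT);

    -- load the DB2 dump as plain text first
    \copy stage FROM 'db2_dump.csv' CSV

    -- then convert the DB2-style timestamps while moving rows into place
    INSERT INTO real_table (ts, payload)
    SELECT to_timestamp(ts_raw, 'YYYY-MM-DD-HH24.MI.SS.US'), payload
    FROM stage;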
Re: [GENERAL] Fine-grained replication?
I should have been more specific about why I'm interested in this. The idea is not so much the replication but being able to use the replication information to make business decisions based on what is observed happening to the data, or being able to selectively repair data if there are problems. XReader certainly shows a lot of promise, but it's not quite here yet.

Specific answers to respondents below...

> From: Sergey Konoplev
> Date: Tue, 19 Jun 2012 13:00:49 +0400
>
> On Tue, Jun 19, 2012 at 1:49 AM, Paul Jones wrote:
> > Is anyone aware of other non-trigger-based, fine-grained replication tools
> > for PostgreSQL along the lines of the XReader
> > <http://wiki.postgresql.org/wiki/XReader> or pgreplay
> > <http://pgreplay.projects.postgresql.org/>?
>
> Binary Replication?
> http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial
>
> Sergey Konoplev

Thanks! This is good info.

On Tue, Jun 19, 2012 at 12:34:02PM +0200, Albe Laurenz wrote:
> I'm not sure what you want, because "pgreplay" is certainly not a
> replication tool, unless "replication" means something quite different
> to you than it does to me.
>
> Can you be more specific?

Yes, as explained above. So pgreplay in this context would be more likely
to be used to repair something.

> Date: Tue, 19 Jun 2012 12:09:28 +0200
> From: Guillaume Lelarge
>
> If you want fine grained replication, it means you won't be able to use
> log shipping and streaming replication. If you don't want trigger based
> replication, you won't have many options still available. pgPool comes to
> mind but, AFAIK, it isn't fine grained (and, to be honest, I would say it
> isn't really replication).
>
> So, nope, sorry.
>
> BTW, what's the issue with trigger-based replication? it really helps a
> lot in many cases.

The worry with trigger-based replication is that it could become a
performance bottleneck for heavy loads. We tried out Slony just to see
how it worked, and it did its job just fine for what we did with it.

Paul Jones
[GENERAL] Fine-grained replication?
Is anyone aware of other non-trigger-based, fine-grained replication tools for PostgreSQL along the lines of the XReader <http://wiki.postgresql.org/wiki/XReader> or pgreplay <http://pgreplay.projects.postgresql.org/>?

Thanks,
Paul Jones