Re: [GENERAL] On-disk size of db increased after restore
Hi,

On Thu, 2010-09-02 at 13:22 -0400, Tom Lane wrote:
> Devrim, have you identified yet which tables have the bloat? Are they
> the ones with tweaked autovacuum parameters?

That's it. On the prod server, that table consumes 50 GB of disk space, and on the backup machine it uses 148 GB. I applied custom autovacuum settings only to that table. This is 8.4.4, btw...

So, what should I do now?

Regards,
--
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
PostgreSQL RPM Repository: http://yum.pgrpms.org
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz
Re: [GENERAL] Forcing the right queryplan
Henk van Lingen wrote:
> Now there are two types of query plans:
>
> syslog=# explain SELECT id, devicereportedtime, facility, priority, fromhost,
>            syslogtag, infounitid, message FROM systemevents
>          WHERE to_tsvector('english', message) @@ to_tsquery('131.211.112.9')
>          ORDER BY id DESC LIMIT 100;
>                                  QUERY PLAN
> -----------------------------------------------------------------------------
>  Limit  (cost=0.00..10177.22 rows=100 width=159)
>    ->  Index Scan Backward using systemevents_pkey on systemevents  (cost=0.00..1052934.86 rows=10346 width=159)
>          Filter: (to_tsvector('english'::regconfig, message) @@ to_tsquery('131.211.112.9'::text))
> (3 rows)
>
> This one is useless (takes very long). However this one:

Hello Henk,

I saw your other mail today; I'm replying to this one for better formatting. With a limit of 100, the planner guesses it will find 100 matching rows within some cost. At 500 rows that cost is higher than the cost of the second plan:

> syslog=# explain SELECT id, devicereportedtime, facility, priority, fromhost,
>            syslogtag, infounitid, message FROM systemevents
>          WHERE to_tsvector('english', message) @@ to_tsquery('131.211.112.9')
>          ORDER BY id DESC LIMIT 500;
>                                  QUERY PLAN
> -----------------------------------------------------------------------------
>  Limit  (cost=40928.89..40930.14 rows=500 width=159)
>    ->  Sort  (cost=40928.89..40954.76 rows=10346 width=159)
>          Sort Key: id
>          ->  Bitmap Heap Scan on systemevents  (cost=2898.06..40413.36 rows=10346 width=159)
>                Recheck Cond: (to_tsvector('english'::regconfig, message) @@ to_tsquery('131.211.112.9'::text))
>                ->  Bitmap Index Scan on msgs_idx  (cost=0.00..2895.47 rows=10346 width=0)
>                      Index Cond: (to_tsvector('english'::regconfig, message) @@ to_tsquery('131.211.112.9'::text))
> (7 rows)
>
> works acceptably. How can I get the right plan regardless of the LIMIT size?

The planner evidently thinks it will reach 100 matching rows, scanning systemevents backwards, sooner than it actually does, given the WHERE clause that searches for the string 131.211.112.9. Increasing the statistics target will probably not help in this case, since the statistics cannot contain selectivity estimates for all possible tsqueries.
If the index is useless anyway, you might consider dropping it. Otherwise, increasing random_page_cost might help in choosing the other plan, but on the other hand that plan uses index scanning too, so I'm not too sure there. If that doesn't help, it would be interesting to see some output of vmstat 1 (or better: iostat -xk 1) to see what the bottleneck is during execution of the first plan. If it is I/O bound, you might want to add RAM or spindles for better random I/O performance. If it is CPU bound, it is probably because of executing the to_tsvector function. In that case it might be interesting to see whether changing to_tsvector's cost (see ALTER FUNCTION ... COST: http://developer.postgresql.org/pgdocs/postgres/sql-alterfunction.html) helps the planner favor the second plan over the first.

regards,
Yeb Havinga

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
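[Editor's note: the ALTER FUNCTION idea above could be tried along these lines. This is only a sketch — the cost value 1000 is an arbitrary number to experiment with, not a recommendation.]

```sql
-- Tell the planner that to_tsvector() is expensive to run per row, so the
-- backward index scan (which re-evaluates it in a Filter for every row it
-- visits) looks worse than the plan that uses the fulltext index.
ALTER FUNCTION to_tsvector(regconfig, text) COST 1000;

-- Then re-check which plan gets chosen at LIMIT 100:
EXPLAIN
SELECT id, message
FROM systemevents
WHERE to_tsvector('english', message) @@ to_tsquery('131.211.112.9')
ORDER BY id DESC
LIMIT 100;
```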
Re: [GENERAL] Compiling extension C with MingW in windows, Error...
On Fri, Sep 3, 2010 at 5:31 AM, Edwin Quijada listas_quij...@hotmail.com wrote:
> Hi, I am trying to compile a C extension on Windows using MinGW, but I always
> get the same error:
>
> C:\Program Files\PostgreSQL\8.3\share\exte_c> C:\mingw\bin\gcc -shared -o pg2.dll pg2.o
> pg2.o:pg2.c:(.text+0x86): undefined reference to `_imp__CurrentMemoryContext'
> pg2.o:pg2.c:(.text+0x92): undefined reference to `MemoryContextAlloc'
> collect2: ld returned 1 exit status
>
> This error occurs only at link time.

You need to link against postgres.exe to get access to these symbols. I don't recall if the mingw linker allows you to just specify the EXE file these days, but I think it does. If not, you'll need to create an import library from the EXE and link to that (the binary distribution only ships with import libraries for MSVC, but mingw can't use standard Windows import libraries, so you need to create your own there).

--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Re: [GENERAL] select now() problem?
I wonder if there's an equivalent of gcore on Windows. If there is, it might be useful.
[GENERAL] alter column to inet get error.
Hello List,

I want to change some columns in a database that were originally created as character varying to inet. When I try, I get an error. Is there any way to work around this? See below for the table definition.

              Table "public.kernel_gre"
     Column      |         Type          |   Modifiers
-----------------+-----------------------+---------------
 interface       | character varying(15) | not null
 source_ip       | character varying(16) |
 dest_ip         | character varying(16) |
 physical_ip     | character varying(16) |
 status          | boolean               | default false
 physical_src_ip | character varying(16) |
 tunnel_netmask  | character varying(16) |
 key             | character varying(32) |
 state           | boolean               | default false
 broadcast       | boolean               | default false

alter TABLE kernel_gre ALTER COLUMN source_ip TYPE inet;
ERROR:  column "source_ip" cannot be cast to type inet

Current table contents:

 interface | source_ip | dest_ip   | physical_ip | status | physical_src_ip | tunnel_netmask  | key | state | broadcast
-----------+-----------+-----------+-------------+--------+-----------------+-----------------+-----+-------+-----------
 gre2      | 10.1.1.2  | 10.1.1.1  | 1.1.1.1     | t      | 1.1.1.2         | 255.255.255.255 |     | f     | f
 gre3      | 10.1.1.4  | 10.1.1.3  | 1.1.1.3     | t      | 1.1.1.4         | 255.255.255.255 |     | f     | f
 gre4      | 10.1.1.6  | 10.1.1.5  | 1.1.1.5     | t      | 1.1.1.6         | 255.255.255.255 |     | f     | f
 gre5      | 10.1.1.8  | 10.1.1.7  | 1.1.1.7     | t      | 1.1.1.8         | 255.255.255.255 |     | f     | f
 gre6      | 10.1.1.10 | 10.1.1.9  | 1.1.1.9     | t      | 1.1.1.10        | 255.255.255.255 |     | f     | f
 gre7      | 10.1.1.12 | 10.1.1.11 | 1.1.1.11    | t      | 1.1.1.12        | 255.255.255.255 |     | f     | f
 gre8      | 10.1.1.14 | 10.1.1.13 | 1.1.1.13    | t      | 1.1.1.14        | 255.255.255.255 |     | f     | f
 gre9      | 10.1.1.16 | 10.1.1.15 | 1.1.1.15    | t      | 1.1.1.16        | 255.255.255.255 |     | f     | f
 gre10     | 10.1.1.18 | 10.1.1.17 | 1.1.1.17    | t      | 1.1.1.18        | 255.255.255.255 |     | f     | f
 gre11     | 10.1.1.20 | 10.1.1.19 | 1.1.1.19    | t      | 1.1.1.20        | 255.255.255.255 |     | f     | f
 gre12     | 10.1.1.22 | 10.1.1.21 | 1.1.1.21    | t      | 1.1.1.22        | 255.255.255.255 |     | f     | f
 gre13     | 10.1.1.24 | 10.1.1.23 | 1.1.1.23    | t      | 1.1.1.24        | 255.255.255.255 |     | f     | f
 gre14     | 10.1.1.26 | 10.1.1.25 | 1.1.1.25    | t      | 1.1.1.26        | 255.255.255.255 |     | f     | f
 gre15     | 10.1.1.28 | 10.1.1.27 | 1.1.1.27    | t      | 1.1.1.28        | 255.255.255.255 |     | f     | f
 gre16     | 10.1.1.30 | 10.1.1.29 | 1.1.1.29    | t      | 1.1.1.30        | 255.255.255.255 |     | f     | f
 gre17     | 10.1.1.32 | 10.1.1.31 | 1.1.1.31    | t      | 1.1.1.32        | 255.255.255.255 |     | f     | f
 gre18     | 10.1.1.34 | 10.1.1.33 | 1.1.1.33    | t      | 1.1.1.34        | 255.255.255.255 |     | f     | f
 gre19     | 10.1.1.36 | 10.1.1.35 | 1.1.1.35    | t      | 1.1.1.36        | 255.255.255.255 |     | f     | f
 gre20     | 10.1.1.38 | 10.1.1.37 | 1.1.1.37    | t      | 1.1.1.38        | 255.255.255.255 |     | f     | f
 gre21     | 10.1.1.40 | 10.1.1.39 | 1.1.1.39    | t      | 1.1.1.40        | 255.255.255.255 |     | f     | f
(20 rows)

Thanks in advance,
--
Stephen Clark
NetWolves
Sr. Software Engineer III
Phone: 813-579-3200  Fax: 813-882-0209
Email: steve.cl...@netwolves.com
www.netwolves.com
Re: [GENERAL] alter column to inet get error.
On Fri, Sep 3, 2010 at 7:21 AM, Steve Clark scl...@netwolves.com wrote:
> I want to change some columns in a database that were originally created as
> char varying to inet. When I try I get an error. Is there anyway to work
> around this?
>
> alter TABLE kernel_gre ALTER COLUMN source_ip TYPE inet;
> ERROR:  column "source_ip" cannot be cast to type inet

Try:

alter TABLE kernel_gre ALTER COLUMN source_ip TYPE inet using source_ip::inet;

--
To understand recursion, one must first understand recursion.
Re: [GENERAL] alter column to inet get error.
In response to Steve Clark:
> I want to change some columns in a database that were originally created as
> char varying to inet. When I try I get an error. Is there anyway to work
> around this?
>
> alter TABLE kernel_gre ALTER COLUMN source_ip TYPE inet;
> ERROR:  column "source_ip" cannot be cast to type inet

Try this with an explicit cast:

test=# create table ip (ip text);
CREATE TABLE
Time: 247.763 ms
test=*# copy ip from stdin;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 127.0.0.1
>> \.
Time: 5199.184 ms
test=*# alter table ip alter column ip type inet using ip::inet;
ALTER TABLE
Time: 242.569 ms
test=*# \d ip
   Table "public.ip"
 Column | Type | Modifiers
--------+------+-----------
 ip     | inet |

Regards, Andreas
--
Andreas Kretschmer
Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99
Re: [GENERAL] On-disk size of db increased after restore
Devrim GÜNDÜZ dev...@gunduz.org writes:
> On Thu, 2010-09-02 at 13:22 -0400, Tom Lane wrote:
>> Devrim, have you identified yet which tables have the bloat? Are they
>> the ones with tweaked autovacuum parameters?
>
> That's it. On prod server, that table consumes 50 GB disk space, and on the
> backup machine, it uses 148 GB. I applied custom autovac settings only to
> that table. This is 8.4.4 btw...

OK, so the bug is fixed, but you still have fillfactor = 0 on the affected table.

> So, what should I do now?

Explicitly reset the table's fillfactor to the default (100), then you'll need to CLUSTER or VACUUM FULL or something.

			regards, tom lane
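[Editor's note: spelled out as commands, Tom's suggestion looks like the following sketch. The table name is a placeholder for the affected table, and CLUSTER needs an index of your choosing.]

```sql
-- Reset the storage parameter to its default...
ALTER TABLE affected_table SET (fillfactor = 100);

-- ...then rewrite the table so the bloated pages get repacked. Either:
VACUUM FULL affected_table;

-- or, to rewrite the table in index order at the same time:
CLUSTER affected_table USING affected_table_pkey;
```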
[GENERAL] Problems (bug?) with the Postgres 8.4.4 optimizer, 2nd try
Hi everyone,

I am using Postgres 8.4.4 on a large-ish amount of data and recently noticed that my application got very slow at times. I quickly discovered that a specific query was triggering a sequential scan despite suitable indices being available. The query in question looks like this:

select * from kvstore where deviceid = 7 AND (locid >= 1410929 AND locid <= 1690468) OR (locid >= 1690469 and locid <= 1690468)

Note that the last condition (locid >= 2 AND locid <= 1) can never be satisfied. Now, the Postgres optimizer seems to believe that a sequential scan of 16 million rows is the right way of approaching this query, despite having accurate statistics (I ran VACUUM ANALYZE beforehand to ensure everything is up to date). However, if I remove the last part and query for

select * from kvstore where deviceid = 7 AND (locid >= 1410929 AND locid <= 1690468)

indices are used and everything works nicely. I believe that the optimizer should remove an unsatisfiable condition, or at least handle it gracefully (e.g. use it as a parameter for a range query). Since it doesn't do that, I am a little stumped as to what the correct course of action for me is. I could try to manually remove invalid parts of my query, but then again I don't want to be patching queries to accommodate a stubborn optimizer if I don't have to... maybe I stumbled upon a bug?

One more thing, while I'm already writing this message: maybe someone can explain why, for the above (working) query, and given a primary key on (deviceid, locid), Postgres decides to do a Bitmap Index Scan followed by a Bitmap Heap Scan, rather than a simple Index Scan on the perfectly matching index? I can even simplify this problem: when I issue

select * from kvstore where deviceid = 7 AND locid = 1410929

my primary key index is used in an Index Scan. As soon as I make this a range query (select * from kvstore where deviceid = 7 AND locid >= 1410929), Postgres decides to do a Bitmap Index Scan followed by a Bitmap Heap Scan.
I am somewhat confused by this behavior, but here it is more my curiosity asking than a burning need (as in the first case). If someone could explain what might be happening there, I'd be very happy, though! :)

Thanks in advance for any help you might be able to provide!

Daniel

PS: Sorry about the double post; I accidentally hit send before :-/
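[Editor's note: the practical workaround is the one Daniel mentions himself — simplify the predicate before sending it. A sketch, assuming the operators in the original query were >= and <=:]

```sql
-- The second OR branch (locid >= 1690469 AND locid <= 1690468) is an empty
-- range, so the query is equivalent to the single satisfiable branch, which
-- the planner can serve from the (deviceid, locid) primary key index:
SELECT *
FROM kvstore
WHERE deviceid = 7
  AND locid BETWEEN 1410929 AND 1690468;
```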
Re: [GENERAL] alter column to inet get error.
On 09/03/2010 09:38 AM, A. Kretschmer wrote:
> Try this with an explicit cast:
>
> test=*# alter table ip alter column ip type inet using ip::inet;
> ALTER TABLE

Thanks guys, that seems to do the trick. PostgreSQL ROCKS!!!

--
Stephen Clark
NetWolves
Sr. Software Engineer III
Phone: 813-579-3200  Fax: 813-882-0209
Email: steve.cl...@netwolves.com
www.netwolves.com
Re: [GENERAL] On-disk size of db increased after restore
On Fri, 2010-09-03 at 09:41 -0400, Tom Lane wrote:
>> This is 8.4.4 btw...
>
> OK, so the bug is fixed, but you still have fillfactor = 0 on the
> affected table.

I'm confused; I'm still seeing a bug here: I cannot restore a dump effectively... Running CLUSTER or VACUUM FULL does not make any sense to me here.

Regards,
--
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
PostgreSQL RPM Repository: http://yum.pgrpms.org
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz
Re: [GENERAL] On-disk size of db increased after restore
Devrim GÜNDÜZ dev...@gunduz.org writes:
> I'm confused. I'm still seeing a bug in here: I cannot restore a dump
> effectively... Running CLUSTER or VACUUM FULL does not make any sense to me
> in here.

Oh, wait. What you need is this patch:

    2010-06-06 23:01  itagaki

	* doc/src/sgml/ref/create_table.sgml,
	src/backend/access/common/reloptions.c (REL8_4_STABLE): Ensure
	default-only storage parameters for TOAST relations to be
	initialized with proper values.  Affected parameters are
	fillfactor, analyze_threshold, and analyze_scale_factor.
	Especially uninitialized fillfactor caused inefficient page usage
	because we built a StdRdOptions struct in which fillfactor is zero
	if any reloption is set for the toast table.

	In addition, we disallow toast.autovacuum_analyze_threshold and
	toast.autovacuum_analyze_scale_factor, because we didn't actually
	support them; they are always ignored.

	Report by Rumko on pgsql-bugs on 12 May 2010.  Analysis by Tom Lane
	and Alvaro Herrera.  Patch by me.  Backpatch to 8.4.

which I now realize went in *post* 8.4.4. We're really overdue for a new set of back-branch releases...

			regards, tom lane
Re: [GENERAL] alter column to inet get error.
In response to Steve Clark:
>> Try this with an explicit cast:
>
> Thanks guys, that seems to do the trick. Postgresql ROCKS!!!

Yeah, definitely! You are welcome,

Andreas
--
Andreas Kretschmer
Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99
Re: [GENERAL] Compiling extension C with MingW in windows, Error...
On Fri, 3 Sep 2010, Magnus Hagander mag...@hagander.net wrote:
> You need to link against postgres.exe to get access to these symbols. I
> don't recall if the mingw linker allows you to just specify the EXE file
> these days, but I think it does. If not, you'll need to create an import
> library from the EXE and link to that (the binary distribution only ships
> with import libraries for MSVC, but mingw can't use standard windows import
> libraries, so you need to create your own there)

Magnus, how can I link against postgres.exe? I mean the gcc line. I have written other functions that worked fine. This is the function that worked; with this code I built a foo.dll and added it to PostgreSQL:

#include "postgres.h"
#include <string.h>
#include "fmgr.h"

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

/* by value */
PG_FUNCTION_INFO_V1(add_one);

Datum
add_one(PG_FUNCTION_ARGS)
{
    int32 arg = PG_GETARG_INT32(0);

    PG_RETURN_INT32(arg + 1);
}

*---*
*-Edwin Quijada
*-Developer DataBase
*-JQ Microsistemas
*-Soporte PostgreSQL
*-www.jqmicrosistemas.com
*-809-849-8087
*---*
Re: [GENERAL] Connection question
- Original Message - From: Craig Ringer cr...@postnewspapers.com.au To: Bayless Kirtley bk...@cox.net Cc: List, Postgres pgsql-general@postgresql.org Sent: Thursday, September 02, 2010 10:15 PM Subject: Re: [GENERAL] Connection question On 2/09/2010 11:59 PM, Bayless Kirtley wrote: Thanks a lot Craig. The register is connecting through localhost so it's not that. I guess I'm not surprised about the Windows thing. I suppose we'll just have to live with it then. I wouldn't be too sure yet. Did you run the test where you reboot the POS system with the ethernet unplugged? Does it still connect to the database OK? I didn't think XP broke TCP/IP connections to *localhost* when an ethernet interface went up or down, unless the connection was to a local IP associated with that interface. If you're connected to 127.0.0.1 from 127.0.0.1 it shouldn't, AFAIK, matter if you lose your wifi/ethernet. Come to think of it, you might want to check to make sure the local Ethernet interface is set to use DHCP. If it's using a static IP, Windows might permit that IP to be used to talk to the local host even when there's no link on the interface. So - to be sure, you can disable the Ethernet interface in the network control panel before your reboot for testing. Anyway, if you really can't make it work properly, there's a simple if mildly clumsy workaround. You can prevent XP from realizing it's lost its connection to the router by placing another device between the XP machine(s) and the router, like a plain old cheap switch. That way Ethernet link on the XP machines is never lost when you reboot the router - only between the router and the switch, where the XP machines can't tell. When the router goes down they lose their DHCP server, but that doesn't matter if it comes up again before they try to renew their DHCP leases, and for local-to-local traffic that's about all they'll be relying on the router for. You could even avoid that by setting static IP addresses. 
--
Craig Ringer
Tech-related writing at http://soapyfrogs.blogspot.com/

Hmmm, more to consider then. I have not done the test you suggested; I guess I pounced on "Windows losing TCP/IP" prematurely. It has been a while, but I think I set them up with static IP addresses initially. I did check to be certain it is using localhost, though. I will catch a slow day soon, try your ideas, and report the results back.

Thanks Craig,
Bayless
[GENERAL] funkiness with '1999-12-31 19:00:00-05'::timestamp with time zone
Hi all,

I've come across a puzzling situation with a table having a timestamp with time zone column. This column is full of values displaying exactly as '1999-12-31 19:00:00-05', but for some reason Postgres is treating some of these identical-seeming timestamps as being different.

If I update all these timestamps by adding an interval of '1 DAYS' to all rows, Postgres recognizes all the values as being the same. If I repeat this experiment using a timestamp without time zone type, Postgres also recognizes all the timestamps as being the same.

When I pg_dump the timestamps_test table, I see a normal-looking dump:

COPY timestamps_test (ts) FROM stdin;
1999-12-31 19:00:00-05
1999-12-31 19:00:00-05
1999-12-31 19:00:00-05
…

and when I reload this pg_dump file back into the same database, Postgres again recognizes that all the timestamps are the same (i.e. SELECT COUNT(DISTINCT(ts)) returns 1). I've attached a plain-text pg_dump of this table.

Here's a log of how I created this timestamps_test table, from a source table full of these '1999-12-31 19:00:00-05' timestamps. Any ideas what might be causing this?

test=# CREATE TABLE timestamps_test (ts timestamp with time zone NOT NULL);
CREATE TABLE
test=# INSERT INTO timestamps_test (ts) SELECT DISTINCT(updated) FROM myschema.strange_table;
INSERT 0 119
test=# SELECT COUNT(DISTINCT(ts)) FROM timestamps_test;
 count
-------
   119
(1 row)

test=# SELECT DISTINCT(ts) FROM timestamps_test LIMIT 10;
           ts
------------------------
 1999-12-31 19:00:00-05
 1999-12-31 19:00:00-05
 1999-12-31 19:00:00-05
 1999-12-31 19:00:00-05
 1999-12-31 19:00:00-05
 1999-12-31 19:00:00-05
 1999-12-31 19:00:00-05
 1999-12-31 19:00:00-05
 1999-12-31 19:00:00-05
 1999-12-31 19:00:00-05
(10 rows)

test=# SELECT MAX(ts) = MIN(ts), MAX(ts) - MIN(ts) FROM timestamps_test;
 ?column? | ?column?
----------+----------
 f        | 00:00:00
(1 row)

test=# UPDATE timestamps_test SET ts = ts + INTERVAL '1 DAYS';
UPDATE 119
test=# SELECT COUNT(DISTINCT(ts)) FROM timestamps_test;
 count
-------
     1
(1 row)

test=# SELECT DISTINCT(ts) FROM timestamps_test LIMIT 10;
           ts
------------------------
 2000-01-01 19:00:00-05
(1 row)

test=# SELECT version();
                                  version
---------------------------------------------------------------------------
 PostgreSQL 8.4.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit
(1 row)

test=# SELECT name, setting FROM pg_settings WHERE name IN ('TimeZone', 'lc_collate', 'lc_ctype', 'lc_time', 'DateStyle');
    name    |  setting
------------+------------
 DateStyle  | ISO, MDY
 lc_collate | C
 lc_ctype   | C
 lc_time    | C
 TimeZone   | US/Eastern
(5 rows)

Thanks for any ideas,
Josh

[Attachment: plain-text pg_dump of timestamps_test]

--
-- PostgreSQL database dump
--

SET statement_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;

CREATE TABLE timestamps_test (
    ts timestamp with time zone NOT NULL
);

ALTER TABLE public.timestamps_test OWNER TO postgres;

COPY timestamps_test (ts) FROM stdin;
1999-12-31 19:00:00-05
1999-12-31 19:00:00-05
1999-12-31 19:00:00-05
[... all 119 rows identical; attachment truncated here in the archive ...]
Re: [GENERAL] funkiness with '1999-12-31 19:00:00-05'::timestamp with time zone
Josh Kupershmidt schmi...@gmail.com writes:
> I've come across a puzzling situation with a table having a timestamp with
> time zone column. This column is full of values displaying exactly as
> '1999-12-31 19:00:00-05', but for some reason Postgres is treating some of
> these identical-seeming timestamps as being different.

Is this installation using float or integer timestamps? If the former, it might be interesting to look at the subtraction result

	ts - '1999-12-31 19:00:00-05'::timestamptz

I'm thinking some of them might be different by submicrosecond amounts.

			regards, tom lane
Re: [GENERAL] funkiness with '1999-12-31 19:00:00-05'::timestamp with time zone
On Fri, Sep 3, 2010 at 2:58 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> Is this installation using float or integer timestamps? If the former,
> it might be interesting to look at the subtraction result
>	ts - '1999-12-31 19:00:00-05'::timestamptz
> I'm thinking some of them might be different by submicrosecond amounts.

Ah yes, this is likely why. pg_config says:

CONFIGURE = ... '--disable-integer-datetimes' ...

But I'm having trouble seeing for sure whether there are submicrosecond parts of these timestamps. I just see a bunch of '00:00:00' values with your query:

test=# SELECT ts - '1999-12-31 19:00:00-05'::timestamptz FROM timestamps_test LIMIT 5;
 ?column?
----------
 00:00:00
 00:00:00
 00:00:00
 00:00:00
 00:00:00
(5 rows)

And SELECT EXTRACT(microseconds FROM ts) FROM timestamps_test also just gives me zeroes. Is there a way for me to see for sure?

Josh
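[Editor's note: the build's timestamp representation can also be checked from SQL, without shell access to pg_config, via a read-only server setting:]

```sql
-- "off" means float (double precision) timestamps, as in this build;
-- "on" means 64-bit integer microsecond timestamps.
SHOW integer_datetimes;
```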
[GENERAL] How can I use parameters in plain sql
How can I use parameters in plain SQL like SQL Server? A FICTIONAL example that works in SQL Server:

declare @i int;
set @i = 1;
select * from mytable where id = @i;
Re: [GENERAL] funkiness with '1999-12-31 19:00:00-05'::timestamp with time zone
Josh Kupershmidt schmi...@gmail.com writes:
> On Fri, Sep 3, 2010 at 2:58 PM, Tom Lane t...@sss.pgh.pa.us wrote:
>> I'm thinking some of them might be different by submicrosecond amounts.
>
> Ah yes, this is likely why. pg_config says
> CONFIGURE = ... '--disable-integer-datetimes' ...
> But I'm having trouble seeing for sure whether there are submicrosecond
> parts of these timestamps.

Experimenting, I can do this:

regression=# create table t1 (ts timestamptz);
CREATE TABLE
regression=# insert into t1 select '1999-12-31 19:00:00.001-05'::timestamptz;
INSERT 0 1
regression=# insert into t1 select '1999-12-31 19:00:00.00-05'::timestamptz;
INSERT 0 1
regression=# select * from t1;
           ts
------------------------
 1999-12-31 19:00:00-05
 1999-12-31 19:00:00-05
(2 rows)

regression=# select extract(epoch from ts - '1999-12-31 19:00:00-05'::timestamptz) from t1;
    date_part
------------------
 1.0761449337e-07
                0
(2 rows)

This timestamp (2000-01-01 00:00 GMT) is actually the zero value internally for Postgres timestamps, so in principle a float timestamp has precision far smaller than microseconds for values near this. We don't make any great effort to expose that, though. It looks like the closest value that timestamptz_in makes different from zero is

regression=# select extract(epoch from '1999-12-31 19:00:00.001-05' - '1999-12-31 19:00:00-05'::timestamptz);
      date_part
----------------------
 1.45519152283669e-11
(1 row)

			regards, tom lane
Re: [GENERAL] How can I use parameters in plain sql
On Fri, Sep 3, 2010 at 2:45 PM, John Adams john_adams_m...@yahoo.com wrote:
> How can I use parameters in plain sql like sql server. FICTIONAL
> example that works for sql server:
> declare @i int;
> set @i = 1;
> select * from mytable where i...@i;

postgresql doesn't support variables in plain sql. psql has some client
side managed variables, and you can of course use pl/pgsql.

merlin
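[Side note, not from the thread:] besides psql variables and pl/pgsql, the usual home for this kind of @-variable is driver-level bind parameters in the client. A minimal sketch using Python's DB-API (sqlite3 from the stdlib, purely so the example is self-contained and runnable; against PostgreSQL, psycopg2 follows the same pattern with %s placeholders):

```python
import sqlite3

# Stand-in schema; 'mytable' mirrors the table name from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)", [(1, "a"), (2, "b")])

# The driver binds the value for you; no string interpolation, no injection risk.
i = 1
rows = conn.execute("SELECT * FROM mytable WHERE id = ?", (i,)).fetchall()
print(rows)  # [(1, 'a')]
```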
Re: [GENERAL] funkiness with '1999-12-31 19:00:00-05'::timestamp with time zone
On Fri, Sep 3, 2010 at 3:24 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> regression=# select extract(epoch from ts - '1999-12-31 19:00:00-05'::timestamptz) from t1;
>     date_part
> ------------------
>  1.0761449337e-07
>  0
> (2 rows)
>
> This timestamp (2000-01-01 00:00 GMT) is actually the zero value
> internally for Postgres timestamps, so in principle a float timestamp
> has precision far smaller than microseconds for values near this.

EXTRACT(epoch ...) was what I was looking for:

SELECT EXTRACT(epoch FROM ts - '1999-12-31 19:00:00-05'::timestamptz)
FROM timestamps_test LIMIT 5;
      date_part
----------------------
 1.4120666068199e-309
 1.4154982781624e-309
 1.41550281692099e-309
 1.41591466059161e-309
 1.41591524669472e-309
(5 rows)

Thanks for the help, Tom.

Josh
Re: [GENERAL] funkiness with '1999-12-31 19:00:00-05'::timestamp with time zone
[ trivia warning ]

I wrote:
> We don't make any great effort to expose that though. It looks like the
> closest value that timestamptzin makes different from zero is
>
> regression=# select extract(epoch from '1999-12-31 19:00:00.001-05' - '1999-12-31 19:00:00-05'::timestamptz);
>       date_part
> ----------------------
>  1.45519152283669e-11
> (1 row)

Actually, it looks like the precision is being limited by the rotation
from EST zone. In GMT zone I can do this:

regression=# select extract(epoch from '2000-01-01 00:00:00.001'::timestamptz - '2000-01-01 00:00:00');
 date_part
-----------
 1e-67
(1 row)

and it could go a lot smaller except there's an arbitrary limit on the
length of input string that timestamptzin will take. If float
timestamps weren't deprecated it might be worth trying to make this
behave less surprisingly.

regards, tom lane
Re: [GENERAL] funkiness with '1999-12-31 19:00:00-05'::timestamp with time zone
Josh Kupershmidt schmi...@gmail.com writes:
> EXTRACT(epoch ...) was what I was looking for:
> SELECT EXTRACT(epoch FROM ts - '1999-12-31 19:00:00-05'::timestamptz)
> FROM timestamps_test LIMIT 5;
>       date_part
> ----------------------
>  1.4120666068199e-309
>  1.4154982781624e-309
>  1.41550281692099e-309
>  1.41591466059161e-309
>  1.41591524669472e-309
> (5 rows)

Wow. You must have gotten those with the help of some arithmetic,
because timestamptzin would never have produced them. I found out I can do

regression=# select extract(epoch from ('2000-01-01 00:00:00'::timestamptz + '0.1'::interval) - '2000-01-01 00:00:00');
 date_part
-----------
 1e-209
(1 row)

but I wonder what it was you actually did.

regards, tom lane
Re: [GENERAL] funkiness with '1999-12-31 19:00:00-05'::timestamp with time zone
On Fri, Sep 3, 2010 at 3:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> Wow. You must have gotten those with the help of some arithmetic,
> because timestamptzin would never have produced them. I found out I can do
>
> regression=# select extract(epoch from ('2000-01-01 00:00:00'::timestamptz + '0.1'::interval) - '2000-01-01 00:00:00');
>  date_part
> -----------
>  1e-209
> (1 row)
>
> but I wonder what it was you actually did.

I wonder myself :-) I encountered these timestamps while going through
some C code I inherited which uses libpq to load several tables (such
as myschema.strange_table in the original example) using COPY FROM
STDIN. I don't think any timestamp arithmetic was involved. The code
was supposed to copy in legitimate timestamps, but instead loaded all
these '1999-12-31 19:00:00-05' values, and I'm still trying to figure
out how/why.

Josh
Re: [GENERAL] funkiness with '1999-12-31 19:00:00-05'::timestamp with time zone
Josh Kupershmidt schmi...@gmail.com writes:
> On Fri, Sep 3, 2010 at 3:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
>> but I wonder what it was you actually did.
> I wonder myself :-) I encountered these timestamps while going through
> some C code I inherited which uses libpq to load several tables (such
> as myschema.strange_table in the original example) using COPY FROM
> STDIN. I don't think any timestamp arithmetic was involved. The code
> was supposed to copy in legitimate timestamps, but instead loaded all
> these '1999-12-31 19:00:00-05' values, and I'm still trying to figure
> out how/why.

Interesting. I can't imagine how you could have produced these with
plain COPY, since that would go through timestamptzin. Was it by any
chance a binary COPY? If so I could believe that funny timestamps could
get in. Maybe some confusion over endianness of the binary data, for
instance.

regards, tom lane
Re: [GENERAL] funkiness with '1999-12-31 19:00:00-05'::timestamp with time zone
On Fri, Sep 3, 2010 at 4:01 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> Interesting. I can't imagine how you could have produced these with
> plain COPY, since that would go through timestamptzin. Was it by any
> chance a binary COPY? If so I could believe that funny timestamps could
> get in. Maybe some confusion over endianness of the binary data, for
> instance.

Exactly, the code is using COPY ... TO STDOUT WITH BINARY along with
COPY ... FROM STDIN WITH BINARY.

Josh
Re: [GENERAL] funkiness with '1999-12-31 19:00:00-05'::timestamp with time zone
Josh Kupershmidt schmi...@gmail.com writes:
> On Fri, Sep 3, 2010 at 4:01 PM, Tom Lane t...@sss.pgh.pa.us wrote:
>> Interesting. I can't imagine how you could have produced these with
>> plain COPY, since that would go through timestamptzin. Was it by any
>> chance a binary COPY?
> Exactly, the code is using COPY ... TO STDOUT WITH BINARY along with
> COPY ... FROM STDIN WITH BINARY.

OK; what you need to look at is how the client code is preparing the
timestamp values. What they should be is floats representing seconds
since 2000-01-01 00:00 GMT, sent in bigendian byte order.

regards, tom lane
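[Side note, not from the thread:] Tom's description of the wire format (an 8-byte big-endian double of seconds since 2000-01-01 00:00 GMT on a float-timestamp server) suggests one plausible, though here unconfirmed, explanation for the ~1.4e-309 values Josh saw: a client built for integer datetimes sending int64 *microseconds* in the same 8 bytes, which the float-timestamp server then reinterprets as a tiny subnormal double. A Python sketch of both encodings:

```python
import struct

# Correct encoding for a float-timestamp server: big-endian float8,
# seconds since 2000-01-01 00:00 GMT. ~9 years past the epoch:
seconds = 284_000_000.0
wire_ok = struct.pack(">d", seconds)
print(struct.unpack(">d", wire_ok)[0])   # 284000000.0

# Hypothetical mismatch: the same instant sent as int64 microseconds
# (the integer-datetimes wire format). Since 2.84e14 < 2**52, the top
# 12 bits are zero, so reinterpreted as a double it is a subnormal:
microseconds = 284_000_000_000_000
wire_bad = struct.pack(">q", microseconds)
print(struct.unpack(">d", wire_bad)[0])  # ~1.4e-309, like the thread's values
```

The magnitude match (1.4e-309 corresponds to roughly 2.8e14 microseconds, i.e. a timestamp around 2009) is suggestive, but the actual cause in Josh's inherited code is an assumption here, not something established in the thread.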
Re: [GENERAL] How can I use parameters in plain sql
On Fri, Sep 3, 2010 at 3:47 PM, John Adams john_adams_m...@yahoo.com wrote:
>> psql has some client side managed variables, and you can of course use
>> pl/pgsql.
> Do you mean I should use a pl/pgsql stored procedure or do I have to
> somehow mark the sql as pl/pgsql? How? Because in sql server it is all
> the same i.e. plain sql=tsql

pl/pgsql is only used in functions:

create function foo(i int) returns setof mytable as
$$
begin
  return query select * from mytable where id = i;
end;
$$ language plpgsql;

beyond that, plpgsql is reserved for fancy things that are tricky to do
with plain sql. it's got loops, robust error handling, etc.
http://www.postgresql.org/docs/8.4/static/plpgsql.html

merlin
[GENERAL] How to restore a Plan from a stored plan text?
I've used the following code to translate the PlannedStmt node to a
char string:

PlannedStmt *pltl = (PlannedStmt *) linitial(plantree_list);
Plan *pl = pltl->planTree;
char *s;
s = nodeToString(pl);

How can I restore this s back into a Plan? I noticed that using the
function parseNodeString() in backend/nodes/readfuncs.c doesn't work;
for example, there is no code translating into an Agg node. Should I
write my own code to parse this string back into a PlannedStmt node?
Thanks!
Re: [GENERAL] How to restore a Plan from a stored plan text?
sunpeng blueva...@gmail.com writes:
> I've used the following code to translate the PlannedStmt node to a
> char string:
> PlannedStmt *pltl = (PlannedStmt *) linitial(plantree_list);
> Plan *pl = pltl->planTree;
> char *s;
> s = nodeToString(pl);
> How to restore from this s to Plan?

You can't. The fact that there's nodeToString support for all Plan node
types is only intended as a debugging aid --- there's no intention that
it should be possible to serialize and deserialize plans this way.

You didn't say what it is you actually hope to accomplish, but maybe
asking plancache.c to store the plan for you would do.

regards, tom lane
Re: [GENERAL] How to restore a Plan from a stored plan text?
Thanks for your help! The motivation is that I try to find the most
used sub plan, cache the sub plan's execution result, and store the sub
plan itself on disk. Even after the sub plan's connection is closed, a
subsequent connection producing the same sub plan could use the stored
cached result.

For example, the first connection comes and, according to history
information, we find the most used sub plan; after execution, I
serialize this sub plan node into a text file and store the sub plan's
execution result on disk:

Plan *subPlan1;
char *s;
s = nodeToString(subPlan1);
/* then store s into a text file subPlan1.txt on disk,
   and store the sub plan's execution result */

Then the first connection is closed. Now the second connection comes;
if the server generates the same sub plan, I could just read the first
sub plan's result:

Plan *subPlan2;
char *s;                    /* read s from the text file subPlan1.txt on disk */
Plan *subPlan1 = deserialized(s);
bool equ = equal(subPlan1, subPlan2);   /* equal() also can't handle Plan nodes */
if (equ)
{
    /* then return the cached first connection's result */
}
...

So should I write the deserialized(s) code and another equal(void *,
void *) function that supports Plan nodes?

2010/9/3 Tom Lane t...@sss.pgh.pa.us
> You can't. The fact that there's nodeToString support for all Plan node
> types is only intended as a debugging aid --- there's no intention that
> it should be possible to serialize and deserialize plans this way.
>
> You didn't say what it is you actually hope to accomplish, but maybe
> asking plancache.c to store the plan for you would do.
>
> regards, tom lane
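[Side note, not from the thread:] if the serialized plan text from nodeToString() is stable for identical sub plans, it can serve directly as the cache key, sidestepping both the missing deserializer and a Plan-aware equal(). A language-neutral sketch of that idea, in Python purely for illustration (the real code would be C inside the backend, and plan-text stability across connections is an assumption, not something Tom confirmed):

```python
# Cache keyed by the serialized plan text itself: two identical sub plans
# serialize to the same string, so string equality stands in for equal().
class SubPlanResultCache:
    def __init__(self):
        self._results = {}

    def lookup(self, plan_text):
        """Return cached rows for this serialized sub plan, or None on a miss."""
        return self._results.get(plan_text)

    def store(self, plan_text, rows):
        self._results[plan_text] = rows

cache = SubPlanResultCache()
plan_text = "{SEQSCAN :scanrelid 1 ...}"   # stand-in for nodeToString() output
cache.store(plan_text, [(1, "row")])
print(cache.lookup(plan_text))  # [(1, 'row')]
```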
[GENERAL] How to let the created table visible to current process when using SPI_execute(create table ...)?
When Postmaster starts, I've forked another process AP, just like
syslogger, bgwriter, etc. In the process AP, if I can't find a table, I
would create one. The code is:

char *sqlCreate_DM_ = "create table DM_( ... );";
SPI_connect();
int ret = SPI_execute(sqlCreate_DM_, false, 1);
SPI_finish();

But after SPI_finish(), the created table DM_ is not visible to the
current process. How can I resolve this? Thanks!