Re: [GENERAL] Adding ip4r to Postgresql core?
On Wed, Aug 7, 2013 at 9:44 PM, Tom Lane t...@sss.pgh.pa.us wrote: Alvaro Herrera alvhe...@2ndquadrant.com writes: Tom Lane wrote: That's been proposed before, and rejected before, on the grounds that since it doesn't support IPv6 its days are numbered. Actually, that's not true. Ah, my information is obsolete. As a user of ip4r, yes, we use it with IPv6 just fine. Quoting from the linked readme file:

: IP4R therefore supports six distinct data types:
: ip4 - a single IPv4 address
: ip4r - an arbitrary range of IPv4 addresses
: ip6 - a single IPv6 address
: ip6r - an arbitrary range of IPv6 addresses
: ipaddress - a single IPv4 or IPv6 address
: iprange - an arbitrary range of IPv4 or IPv6 addresses

So the obvious question today is whether this isn't duplicative of the range datatype stuff. IOW, why wouldn't we be better off to invent inetrange and call it good? Actually, this misses the one area where ip4r is really helpful, and that is GiST support. If you want to have an exclusion constraint which specifies that no two CIDR blocks in a table can contain each other, you can do this easily with ip4r, but it takes a lot of work without it. iprange is mostly helpful in that area. Our current core data types have better cross-type casting support, but the lack of GiST support (a commutable overlaps operator, for example) is a really big limitation. My vote would be to focus on GiST support instead, but otherwise ip4r would be acceptable to me. -- Best Wishes, Chris Travers Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in. http://www.efficito.com/learn_more.shtml
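For anyone reading along, a minimal sketch of the exclusion-constraint case Chris describes, assuming ip4r is installed and using the && overlap operator and GiST opclass it ships (table and column names are made up):

```sql
CREATE EXTENSION ip4r;

-- No two stored blocks may overlap (which also rules out containment):
CREATE TABLE netblock (
    block iprange NOT NULL,
    EXCLUDE USING gist (block WITH &&)
);

INSERT INTO netblock VALUES ('10.0.0.0/16');
INSERT INTO netblock VALUES ('10.1.0.0/16');  -- fine: disjoint
INSERT INTO netblock VALUES ('10.0.1.0/24');  -- raises an error: contained in 10.0.0.0/16
```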
Re: [GENERAL] Self referencing composite datatype
On Aug 8, 2013, at 4:11, Sergey Konoplev gray...@gmail.com wrote:

create table node (
    id integer primary key,
    r integer,
    s integer,
    children integer[] element references node
);

so you could download 9.3rc2 and experiment with it. Now (on <=9.2.x) you can create the table without the FK

create table node (
    id integer primary key,
    r integer,
    s integer,
    children integer[]
);

and check integrity by triggers.

Or, instead of attempting to reference all child nodes from the parent, reference the parent node from each child node. That's been supported in PG versions like forever and can be queried fairly efficiently using recursive CTEs since PG 9.

Alban Hertroys -- If you can't see the forest for the trees, cut the trees and you'll find there is no forest.

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Self referencing composite datatype
On Wed, Aug 7, 2013 at 11:38 PM, Alban Hertroys haram...@gmail.com wrote: On Aug 8, 2013, at 4:11, Sergey Konoplev gray...@gmail.com wrote: create table node ( id integer primary key, r integer, s integer, children integer[] ); and check integrity by triggers. Or, instead of attempting to reference all child nodes from the parent, reference the parent node from each child node. That's been supported in PG versions like forever and can be queried fairly efficiently using recursive CTEs since PG 9. At that particular moment I thought it was about graphs. Later the OP mentioned a tree, so yes, it is better to use a parent reference here. -- Kind regards, Sergey Konoplev PostgreSQL Consultant and DBA http://www.linkedin.com/in/grayhemp +1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979 gray...@gmail.com
Re: [GENERAL] Self referencing composite datatype
On Wed, Aug 7, 2013 at 11:38 PM, Alban Hertroys haram...@gmail.com wrote:

On Aug 8, 2013, at 4:11, Sergey Konoplev gray...@gmail.com wrote:

create table node (
    id integer primary key,
    r integer,
    s integer,
    children integer[] element references node
);

so you could download 9.3rc2 and experiment with it. Now (on <=9.2.x) you can create the table without the FK

create table node (
    id integer primary key,
    r integer,
    s integer,
    children integer[]
);

and check integrity by triggers.

Or, instead of attempting to reference all child nodes from the parent, reference the parent node from each child node. That's been supported in PG versions like forever and can be queried fairly efficiently using recursive CTEs since PG 9.

If you do this, have a position number, and use that for ordering. You need some sort of ordinality here.

Best Wishes, Chris Travers

Alban Hertroys -- If you can't see the forest for the trees, cut the trees and you'll find there is no forest.

-- Best Wishes, Chris Travers Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in. http://www.efficito.com/learn_more.shtml
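A minimal sketch of that parent-reference design (all names are made up; the sort_order column provides the ordinality Chris mentions):

```sql
CREATE TABLE node (
    id         integer PRIMARY KEY,
    parent_id  integer REFERENCES node (id),
    sort_order integer NOT NULL DEFAULT 0   -- ordering among siblings
);

-- All nodes under node 1, depth-first, siblings in sort_order:
WITH RECURSIVE subtree AS (
    SELECT id, parent_id, ARRAY[sort_order] AS path
    FROM node
    WHERE id = 1
    UNION ALL
    SELECT n.id, n.parent_id, s.path || n.sort_order
    FROM node n
    JOIN subtree s ON n.parent_id = s.id
)
SELECT id, parent_id FROM subtree ORDER BY path;
```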
Re: [GENERAL] Replication Postgre Oracle
Hi, For replicating from Oracle to PostgreSQL, you could have a look at GoldenGate. It does not support PostgreSQL as the source database, though. Regards, Jayadevan

On Wed, Aug 7, 2013 at 1:54 PM, BOUVARD Aurélien aurelien.bouv...@supinfo.com wrote: Hi all, My company would like to configure replication between PostgreSQL 9.2.4 and Oracle Database (11g, and maybe 12c soon). We saw that Postgres Plus Database Replication could be a good solution for us. We are also thinking of developing a solution based on triggers and/or WAL, but we don't know if it's possible in our case: we have a huge amount of data (100 terabits) and it will increase soon (data warehouse context). So it would be very interesting for us to have some feedback about Postgres Plus or other concepts/solutions. Regards,
Re: [GENERAL] Incremental backup with RSYNC or something?
Hi Ben, Are you able to post these scripts?
Re: [GENERAL] How to prevent clear screen when query finish ?
On 2013-08-07 19:01, Adrian Klaver wrote: On 08/07/2013 08:53 AM, Condor wrote: Hello, sorry for the dumb question, but can anyone tell me how the hell I can stop the screen being cleared after an SQL query finishes in the console? There is probably some idea behind this, but to me it looks like a very... not good idea. When I run a query from the console like: SELECT * FROM table_x; I get the result... with (END), and when I press the q key to quit, the result disappears and I can't scroll back to it. If I need to check something again (change console, do something, get one line and need to scroll back) and want to see my old result again, I have to run the query again... I never had this problem on Slackware, but today one of my partners gave me a shell to his Ubuntu server to fix something in a database. The pager is probably not set, see here for more info: http://www.postgresql.org/docs/9.2/interactive/app-psql.html pager Controls use of a pager program for query and psql help output. If the environment variable PAGER is set, the output is piped to the specified program. Otherwise a platform-dependent default (such as more) is used. When the pager option is off, the pager program is not used. When the pager option is on, the pager is used when appropriate, i.e., when the output is to a terminal and will not fit on the screen. The pager option can also be set to always, which causes the pager to be used for all terminal output regardless of whether it fits on the screen. \pset pager without a value toggles pager use on and off. Thank you, one last question: how can I find where this ENV is set?
because: # env TERM=xterm SHELL=/bin/bash SSH_CLIENT=192.68.1.111 52614 22 SSH_TTY=/dev/pts/2 USER=root LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01; 35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36: MAIL=/var/mail/root PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games PWD=/root LANG=en_US.UTF-8 PS1=\h:\w\$ SHLVL=1 HOME=/root LANGUAGE=en_US LS_OPTIONS=--color=auto LOGNAME=root SSH_CONNECTION=192.68.1.111 52614 192.68.1.121 22 HISTTIMEFORMAT=[%Y-%m-%d %T] _=/usr/bin/env OLDPWD=/root and when I enter to db: my_db=# \set AUTOCOMMIT = 'on' PROMPT1 = '%/%R%# ' PROMPT2 = '%/%R%# ' PROMPT3 = ' ' VERBOSITY = 'default' VERSION = 'PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit' 
DBNAME = 'my_db' USER = 'postgres' HOST = '127.0.0.1' PORT = '5432' ENCODING = 'UTF8' my_db=# I can't see this variable PAPER but yes, \pset paper work for connection. Cheers, Hristo S.
Re: [GENERAL] How to prevent clear screen when query finish ?
I think you mean PAGER, not PAPER. I usually do this: PAGER=more psql This will set it for the connection, and it lets me use less as a pager by default elsewhere. You might also see what you can do to set it locally if you want to change it for everything. -- Best Wishes, Chris Travers Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in. http://www.efficito.com/learn_more.shtml
Re: [GENERAL] How to prevent clear screen when query finish ?
On 7 August 2013 18:01, Adrian Klaver adrian.kla...@gmail.com wrote: On 08/07/2013 08:53 AM, Condor wrote: http://www.postgresql.org/docs/9.2/interactive/app-psql.html pager Controls use of a pager program for query and psql help output. If the environment variable PAGER is set, the output is piped to the specified program. Otherwise a platform-dependent default (such as more) is used.

If you use less for a pager and want this behaviour system-wide, you'll want to set another env variable: setenv LESS -X (I use tcsh) or export LESS=-X if you use bash. That makes less behave properly (not clearing the screen) after closing man pages and such. -- If you can't see the forest for the trees, Cut the trees and you'll see there is no forest.
Re: [GENERAL] Performance of ORDER BY RANDOM to select random rows?
On Thu, Aug 08, 2013 at 12:01:17PM +1000, Victor Hooi wrote: I'm just wondering if this is still the case? Yes. Order by random() is and, most likely, will be slow. Not sure if there is any engine that could make it fast. I just ran those benchmarks on my system (Postgres 9.2.4), and using ORDER BY RANDOM did not seem substantially to generating random integers in Python and picking those out (and handling non-existent rows). I think you accidentally a word. depesz -- The best thing about modern society is how easy it is to avoid contact with it. http://depesz.com/
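The Python-side approach Victor mentions can be sketched like this. Pure Python, with the table stood in by a dict so the gap-handling logic is visible; in real code the lookup would be a SELECT ... WHERE id = %s against the database, and all names here are made up:

```python
import random

# Stand-in for a table whose ids have gaps (rows 3 and 7 were deleted).
rows = {i: "row %d" % i for i in range(1, 11) if i not in (3, 7)}
max_id = max(rows)

def pick_random_row(rows, max_id, max_tries=100):
    """Pick a random existing row, retrying when a candidate id hits a gap."""
    for _ in range(max_tries):
        candidate = random.randint(1, max_id)
        if candidate in rows:          # in real code: a single-row SELECT
            return rows[candidate]
    raise LookupError("too many misses; table may be mostly gaps")

print(pick_random_row(rows, max_id))
```

The retry loop is what "handling non-existent rows" amounts to: with few gaps it almost always succeeds on the first try, which is why it beats sorting the whole table by random().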
[GENERAL] setting high value for wal_keep_segments
Hi, I am going to sync a slave with my master, which is almost 500 GB. I am not using an archive directory; instead I am using WAL files for streaming. As it may take almost 3 hours, I am thinking of setting wal_keep_segments to 400, for which I have enough space available. Aside from the space issue, could there be any other problem with setting such a high value for wal_keep_segments? As this is production, I need to confirm. Thanks.
Re: [GENERAL] setting high value for wal_keep_segments
No problem if you have enough space. We set it to 4096 a year ago; everything is OK. jov

On 2013-8-8 at 9:26 PM, AI Rumman rumman...@gmail.com wrote: Hi, I am going to sync slave with my master which is almost 500 G. I am not using archive directory instead of I am using wal files for streaming. As it may take almost 3 hours, I am thinking of setting up 400 for wal_keep_segments where I have enough space available. Without the space issue, could there be any other problem in setting up such high value for wal_keep_segments? As this is production, I need to confirmed. Thanks.
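For anyone sizing this: with the default 16 MB WAL segment size, the space cost is easy to estimate:

```
# postgresql.conf sketch (default 16 MB segments assumed)
wal_keep_segments = 400   # retains up to 400 x 16 MB = 6.4 GB of WAL in pg_xlog
```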
Re: [GENERAL] How to prevent clear screen when query finish ?
On 08/08/2013 12:09 AM, Condor wrote: On 2013-08-07 19:01, Adrian Klaver wrote: On 08/07/2013 08:53 AM, Condor wrote: Thank you, last question: How I can find where is set this ENV ? because: I can't see this variable PAPER but yes, \pset paper work for connection. If you don't want to deal with ENV you could use a psqlrc file: http://www.postgresql.org/docs/9.2/static/app-psql.html Files Unless it is passed an -X or -c option, psql attempts to read and execute commands from the system-wide psqlrc file and the user's ~/.psqlrc file before starting up. (On Windows, the user's startup file is named %APPDATA%\postgresql\psqlrc.conf.) See PREFIX/share/psqlrc.sample for information on setting up the system-wide file. It could be used to set up the client or the server to taste (using the \set and SET commands). The location of the user's ~/.psqlrc file can also be set explicitly via the PSQLRC environment setting. Both the system-wide psqlrc file and the user's ~/.psqlrc file can be made psql-version-specific by appending a dash and the PostgreSQL major or minor psql release number, for example ~/.psqlrc-9.2 or ~/.psqlrc-9.2.5. The most specific version-matching file will be read in preference to a non-version-specific file. Cheers, Hristo S. -- Adrian Klaver adrian.kla...@gmail.com
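A hedged sketch of such a file (\setenv requires psql 9.2 or later; the pager choice here is just an example):

```
-- ~/.psqlrc
\setenv PAGER 'less -X'
\pset pager on
```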
[GENERAL] How to find transaction ID
Hi, I am trying some restore tools, can you advise how to find the latest transaction ID in PostgreSQL and the transaction ID at a particular Point-In-Time? regards
Re: [GENERAL] How to find transaction ID
From: ascot.m...@gmail.com ascot.m...@gmail.com To: PostgreSQL general pgsql-general@postgresql.org Cc: ascot.m...@gmail.com Sent: Thursday, 8 August 2013, 14:52 Subject: [GENERAL] How to find transaction ID Hi, I am trying some restore tools, can you advise how to find the latest transaction ID in PostgreSQL and the transaction ID at a particular Point-In-Time? Hmm, it's not entirely clear what you want. AFAIK select txid_current() should get you the current transaction id. The column xmin for a row from any table will get you the inserting transaction id, and xmax will get you the deleting (or attempted delete) transaction id. See http://www.postgresql.org/docs/9.0/static/functions-info.html
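Put together, a quick sketch (the table name is hypothetical):

```sql
SELECT txid_current();  -- id of the transaction running this statement

-- Per-row inserting (xmin) and deleting (xmax) transaction ids:
SELECT xmin, xmax, * FROM some_table LIMIT 5;
```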
[GENERAL] DB transactions when browser freezes
Hello to everybody, I started to upload (via phpPgAdmin) a huge file (20GB) to a local server in order to fill up a database. The uploading completed and the transactions started. But unfortunately, my browser crashed/froze in the middle of the transactions. I wanted to know: given that the uploading of the file completed, will the transactions keep executing on the server? Before the browser crashed there was a process that consumed 23.6% of the CPU for the transactions, and now I see the same process with approximately the same percentage. Should I assume that the transactions keep running? I am trying to avoid uploading the file all over again because it has already been running for 5 hours. Thank you in advance!! dafNi
Re: [GENERAL] DB transactions when browser freezes
On 08/08/2013 07:33 AM, dafNi zaf wrote: Hello to everybody, I started to upload (via phpPgAdmin) to a local server a huge file (20GB) in order to fill up a database. The uploding completed and the transactions started. But unfortunately, my browser crashed/freezed in the middle of the transactions. I wanted to know, given that the uploading of the file was completed, will the transactions keep executing on server?? Before the browser crashed there was a process that consumed 23.6% of the CPU for the transactions and now, I see the same process with approximately the same percentage. Should I assume that the transactions keep running? I am trying to avoid uploading the file all over again because it has already been running for 5 hours. tail the server log and verify. Thank you in advance!! dafNi -- Adrian Klaver adrian.kla...@gmail.com
Re: [GENERAL] How to find transaction ID
From: Glyn Astill glynast...@yahoo.co.uk To: ascot.m...@gmail.com ascot.m...@gmail.com; PostgreSQL general pgsql-general@postgresql.org Cc: Sent: Thursday, 8 August 2013, 15:20 Subject: Re: [GENERAL] How to find transaction ID From: ascot.m...@gmail.com ascot.m...@gmail.com To: PostgreSQL general pgsql-general@postgresql.org Cc: ascot.m...@gmail.com Sent: Thursday, 8 August 2013, 14:52 Subject: [GENERAL] How to find transaction ID Hi, I am trying some restore tools, can you advise how to find the latest transaction ID in PostgreSQL and the transaction ID at a particular Point-In-Time? Hmm, it's not clear entirely what you want. Afaik select txid_current() should get you the current transaction id. The column xmin for a row from any table will get you the inserting transaction id, and xmax will get you the deleting (or attempted delete) transaction id. See http://www.postgresql.org/docs/9.0/static/functions-info.html I guess I shouldn't have said that about xmin and xmax; you can't rely on those columns in various circumstances i.e. after a vacuum.
Re: [GENERAL] DB transactions when browser freezes
On 8 August 2013 16:33, dafNi zaf dza...@gmail.com wrote: Hello to everybody, I started to upload (via phpPgAdmin) to a local server a huge file (20GB) in order to fill up a database. The uploding completed and the transactions started. But unfortunately, my browser crashed/freezed in the middle of the transactions. I wanted to know, given that the uploading of the file was completed, will the transactions keep executing on server?? Before the browser crashed there was a process that consumed 23.6% of the CPU for the transactions and now, I see the same process with approximately the same percentage. Should I assume that the transactions keep running? I am trying to avoid uploading the file all over again because it has already been running for 5 hours. There is a chance that the database server is just still processing your request and that it will only figure out that there's a client error once it reaches the end of that 20GB file, after which the client returns an error and the database rolls back the transaction. Depending on how phpPgAdmin/PHP were implemented, that's not necessarily what'll happen though, so you might just get lucky and the transaction commits. Hard to tell, I know nothing about the internals of phpPgAdmin. What kind of file is it anyway? A database dump perhaps? -- If you can't see the forest for the trees, Cut the trees and you'll see there is no forest.
Re: [GENERAL] DB transactions when browser freezes
dafNi wrote: Should I assume that the transactions keep running? Never assume... or at least try and verify those assumptions when possible. To verify this assumption: connect to the DB directly as a super-user and run this (or something similar): SELECT procpid, current_query, client_addr, xact_start, query_start FROM pg_stat_activity ORDER BY xact_start ASC, client_addr; to see what activity is currently in progress. David J.
Re: [GENERAL] DB transactions when browser freezes
2013/8/8 dafNi zaf dza...@gmail.com: Hello to everybody, I started to upload (via phpPgAdmin) to a local server a huge file (20GB) in order to fill up a database. 20GB is a lot to be uploading from a browser, even in this day and age. Is the web server configured to accept uploads of that size?
Re: [GENERAL] DB transactions when browser freezes
I execute it periodically, and sometimes there is a transaction and other times it's idle: INSERT INTO traces VALUES (.) or IDLE in transaction So it's still running... even though there is some idle time. Thank you very much! On Thu, Aug 8, 2013 at 5:46 PM, David Johnston pol...@yahoo.com wrote: dafNi wrote: Should I assume that the transactions keep running? Never assume...or at least try and verify those assumptions when possible. To verify this assumption: Connect to the DB directly as a super-user and run this (or something similar): SELECT procpid, current_query, client_addr, xact_start, query_start FROM pg_stat_activity ORDER BY xact_start ASC, client_addr; to see what activity is currently in progress. David J.
Re: [GENERAL] DB transactions when browser freezes
It's a huge file with queries like: BEGIN; INSERT INTO traces VALUES (.); . . . COMMIT; Anyway, I managed to see that the transactions still occur, as David Johnston suggested. And luckily the browser is alive now, after having been frozen for an hour... thank you very much for the reply! On Thu, Aug 8, 2013 at 5:46 PM, Alban Hertroys haram...@gmail.com wrote: On 8 August 2013 16:33, dafNi zaf dza...@gmail.com wrote: Hello to everybody, I started to upload (via phpPgAdmin) to a local server a huge file (20GB) in order to fill up a database. The uploding completed and the transactions started. But unfortunately, my browser crashed/freezed in the middle of the transactions. I wanted to know, given that the uploading of the file was completed, will the transactions keep executing on server?? Before the browser crashed there was a process that consumed 23.6% of the CPU for the transactions and now, I see the same process with approximately the same percentage. Should I assume that the transactions keep running? I am trying to avoid uploading the file all over again because it has already been running for 5 hours. There is a chance that the database server is just still processing your request and that it will only figure out that there's a client error once it reaches the end of that 20GB file, after which the client returns an error and the database rolls back the transaction. Depending on how phpPgAdmin/PHP were implemented, that's not necessarily what'll happen though, so you might just get lucky and the transaction commits. Hard to tell, I know nothing about the internals of phpPgAdmin. What kind of file is it anyway? A database dump perhaps? -- If you can't see the forest for the trees, Cut the trees and you'll see there is no forest.
Re: [GENERAL] DB transactions when browser freezes
Yes, I altered the php.ini file in the /etc/php5/apache2/ directory in order to accept huge files. The uploading has completed and the transactions started. I can now see the transactions using either: ps aux | grep postgres (via the command line) or the solution David Johnston suggested. Thank you for the reply! dafNi On Thu, Aug 8, 2013 at 5:57 PM, Ian Lawrence Barwick barw...@gmail.com wrote: 2013/8/8 dafNi zaf dza...@gmail.com: Hello to everybody, I started to upload (via phpPgAdmin) to a local server a huge file (20GB) in order to fill up a database. 20GB is a lot to be uploading from a browser, even in this day and age. Is the web server configured to accept uploads of that size?
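For reference, the php.ini directives involved look roughly like this (the values are illustrative, not necessarily what was used here):

```
; php.ini sketch for very large uploads
upload_max_filesize = 25G
post_max_size = 25G
max_execution_time = 0   ; no time limit for the handling script
```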
Re: [GENERAL] setting high value for wal_keep_segments
On Thu, Aug 8, 2013 at 6:23 AM, AI Rumman rumman...@gmail.com wrote: Hi, I am going to sync slave with my master which is almost 500 G. I am not using archive directory instead of I am using wal files for streaming. As it may take almost 3 hours, I am thinking of setting up 400 for wal_keep_segments where I have enough space available. Without the space issue, could there be any other problem in setting up such high value for wal_keep_segments? As this is production, I need to confirmed. Another data point: I set up SR on two systems recently in production with the wal_keep_segments set to 1 (lots of logs were being generated), and the slaves were about 1TB each. No problems were experienced.
Re: [GENERAL] setting high value for wal_keep_segments
Yeah, I already set it like that and it works. Thanks. On Thu, Aug 8, 2013 at 11:59 AM, bricklen brick...@gmail.com wrote: On Thu, Aug 8, 2013 at 6:23 AM, AI Rumman rumman...@gmail.com wrote: Hi, I am going to sync slave with my master which is almost 500 G. I am not using archive directory instead of I am using wal files for streaming. As it may take almost 3 hours, I am thinking of setting up 400 for wal_keep_segments where I have enough space available. Without the space issue, could there be any other problem in setting up such high value for wal_keep_segments? As this is production, I need to confirmed. Another data point: I set up SR on two systems recently in production with the wal_keep_segments set to 1 (lots of logs were being generated), and the slaves were about 1TB each. No problems were experienced.
Re: [GENERAL] How to avoid Force Autovacuum
Vishalakshi Navaneethakrishnan nvishalak...@sirahu.com wrote: We have one production database server, with 6 DBs, Postgres 9.2.1. There were some fixes for autovacuum problems in 9.2.3. Some other fixes will be coming when 9.2.5 is released. Many of your problems are likely to go away by staying up-to-date on minor releases. http://www.postgresql.org/support/versioning/ autovacuum_vacuum_threshold = 5 By setting this so high, you are increasing the amount of work autovacuum will need to do when it does work on a table. A smaller value tends to give less bursty performance. Also, any small, frequently-updated tables may bloat quite a bit in 5 transactions. maintenance_work_mem = 2GB Each autovacuum worker will allocate this much RAM. If all of your autovacuum workers wake up at once, would losing 2GB for each one from your cache cause a significant performance hit? (Since you didn't say how much RAM the machine has, it's impossible to tell.) How can i avoid the autovacuum process ? Vacuuming is a necessary part of PostgreSQL operations, and autovacuum is almost always part of a good vacuum plan. The bug fixes in 9.2.3 will help avoid some of the most extreme problems, but you might also want to reduce the threshold so that it has less work to do each time it wakes up, reducing the impact. And also autovacuum executed in the template0 database also. What does running this in psql show?: \x on select * from pg_database where datname = 'template0'; select * from pg_stat_database where datname = 'template0'; -- Kevin Grittner EDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
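To reduce per-run work on one especially hot table, the thresholds can also be set per table rather than globally. A sketch (the table name and values are only an example):

```sql
ALTER TABLE busy_table SET (
    autovacuum_vacuum_threshold    = 50,
    autovacuum_vacuum_scale_factor = 0.05
);
```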
[GENERAL] system catalog to check if auto vacuum is disabled for a particular table
Guys, I am using PostgreSQL 9.2. How can I check whether a particular table has autovacuum manually disabled or not? Which system catalog can get me this information? Thanks
Re: [GENERAL] system catalog to check if auto vacuum is disabled for a particular table
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Prabhjot Sheena Sent: Thursday, August 08, 2013 2:36 PM To: pgsql-general@postgresql.org Subject: [GENERAL] system catalog to check if auto vacuum is disabled for a particular table Guys i am using postgresql 9.2. How can i check if a particular table has auto vacuum disabled manually or not. Which system catalog can get me this information? Thanks You can query reloptions (it has type text[]) in pg_class for your relname. If autovacuum was disabled, one of the elements of the array will be 'autovacuum_enabled=false'. Regards, Igor Neyman
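Concretely, something like this (a sketch) lists the tables where autovacuum has been disabled via per-table options:

```sql
SELECT relname, reloptions
FROM pg_class
WHERE 'autovacuum_enabled=false' = ANY (reloptions);
```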
Re: [GENERAL] Pl/Python runtime overhead
Thanks Sergey, This is going to help for sure. I'll also look at the URL. What I've been trying to understand is when the Python runtime is invoked during the function's execution (lifecycle?). Maybe looking at plpython's source may help me get an understanding of that. Regards Seref

On Thu, Aug 8, 2013 at 2:54 AM, Sergey Konoplev gray...@gmail.com wrote: On Wed, Aug 7, 2013 at 7:43 AM, Seref Arikan serefari...@kurumsalteknoloji.com wrote: When a pl/python based function is invoked, does it keep a python runtime running across calls to same function? That is, if I use connection pooling, can I save on the python runtime initialization and loading costs? You can use the following wrapping technique to cache the function's body, which will save you some resources and time. It stores main() in the SD (session data) built-in object and retrieves it once stored, so plpython does not need to process it every time the stored function is called.

CREATE OR REPLACE FUNCTION some_plpython_function()
RETURNS integer
LANGUAGE plpythonu
AS $function$
# An example of a function's body caching and error handling
sdNamespace = 'some_plpython_function'

if sdNamespace not in SD:
    def main():
        # The function is assumed to be cached in SD and reused
        result = None
        # Do whatever you need here
        return result
    # Cache body in SD
    SD[sdNamespace] = main

try:
    return SD[sdNamespace]()
except Exception, e:
    import traceback
    plpy.info(traceback.format_exc())
$function$;

I can also recommend you to cache query plans, as plpython does not do it itself. The code below also works with SD to store prepared plans and retrieve them. This allows you to avoid preparing the same query every time you execute it. Just like plpgsql does, but manually.

if SD.has_key('%s_somePlan' % sdNamespace):
    somePlan = SD['%s_somePlan' % sdNamespace]
else:
    somePlan = plpy.prepare(...)
    SD['%s_somePlan' % sdNamespace] = somePlan

Are there any documents/books etc you'd recommend to get a good understanding of extending postgres with languages like python?
I'd really like to get a good grip on the architecture of this type of extension, and possibly attempt to introduce a language of my own choosing. The docs I've seen so far are mostly too specific, making it a bit hard for me to see the forest for the trees. AFAIK, this one is the best: http://www.postgresql.org/docs/9.2/interactive/plpython.html. -- Kind regards, Sergey Konoplev PostgreSQL Consultant and DBA http://www.linkedin.com/in/grayhemp +1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979 gray...@gmail.com
[GENERAL] How To Install Extension Via Script File?
Hi all, I have a database that uses the ltree extension. I typically create a new database like so (as a normal user), using my script file:

CREATE DATABASE mydb WITH TEMPLATE template0 ENCODING 'UTF8';

And then su to postgres, log in and install the ltree extension on mydb. Then I log out of my psql instance and re-run the script (as a normal user) to create the tables and views on mydb. I comment out the table/view creation portion until I finish the first couple of steps, and then uncomment the tables and views on the 2nd run. Otherwise the script will fail because the ltree extension has to be installed as a superuser. I want a script something like:

CREATE DATABASE mydb WITH TEMPLATE template0 ENCODING 'UTF8';
\c mydb
CREATE EXTENSION ltree;
CREATE TABLE mytable(myfields);

rinse, repeat for additional tables and views. And to be able to run it from the Bash prompt (as securely as possible). I thought I could add the commands and run the create script by doing:

sudo -u postgres psql -U user -W -d mydb --file=/home/user/dev/mydb_create.sql

I thought that, running my script as the superuser, it would have the privileges necessary to install the extension, but I still got a permission denied error. I did create a password for my postgres user (hence the -W option). And this is on a local box. How can I run my script? Thanks! Don -- D.C. Parris, FMP, Linux+, ESL Certificate Minister, Security/FM Coordinator, Free Software Advocate http://dcparris.net/ https://www.xing.com/profile/Don_Parris GPG Key ID: F5E179BE
Re: [GENERAL] How To Install Extension Via Script File?
Don Parris wrote on 08.08.2013 23:13: And to be able to run it from the Bash prompt (as securely as possible). I thought I could add the commands and run the create script by doing: sudo -u postgres psql -U user -W -d mydb --file=/home/user/dev/mydb_create.sql I thought that, running my script as the superuser, it would have the privileges necessary to install the extension, but I still got a permission denied error.

You are not running the script as superuser, because you supplied -U user and thus the _script_ is executed as user. Only psql itself is started as postgres (the Linux user, not the DB user). I don't see the reason for using sudo in the first place, -U is enough. Leave out the sudo, and use:

psql -U postgres -W -d mydb --file=/home/user/dev/mydb_create.sql

Or if you do want to use sudo, then leave out the -U user parameter:

sudo -u postgres psql -d mydb --file=/home/user/dev/mydb_create.sql

Thomas -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How To Install Extension Via Script File?
On 08/08/2013 03:13 PM, Don Parris wrote: Hi all, I have a database that uses the ltree extension. I typically create a new database like so (as a normal user), using my script file: CREATE DATABASE mydb WITH TEMPLATE template0 ENCODING 'UTF8'; And then su to postgres, login and install the ltree extension on mydb. Then I logout of my psql instance and re-run the script (as a normal user) to create the tables views on mydb. I comment out the table/view creation portion until I finish the first couple steps, and then uncomment the tables and views on the 2nd run. Otherwise the script will fail because the ltree extension has to be installed as a superuser. I want a script something like: CREATE DATABASE mydb WITH TEMPLATE template0 ENCODING 'UTF8'; \c mydb CREATE EXTENSION ltree; CREATE TABLE mytable(myfields); rinse, repeat for additional tables and views. And to be able to run it from the Bash prompt (as securely as possible). I thought I could add the commands and run the create script by doing: sudo -u postgres psql -U user -W -d mydb --file=/home/user/dev/mydb_create.sql I thought that, running my script as the superuser, it would have the privileges necessary to install the extension, but I still got a permission denied error. I did create a password for my postgres user (hence the -W option). And this is on a local box. How can I run my script? Thanks! Don -- D.C. Parris, FMP, Linux+, ESL Certificate Minister, Security/FM Coordinator, Free Software Advocate http://dcparris.net/ https://www.xing.com/profile/Don_Parris GPG Key ID: F5E179BE

Have you tried adding the extension to template1? I find that works nicely, as it means CREATE DATABASE dbname gets the extension. That said, I'm wondering if you're actually having trouble accessing the extension subdirectory. Perhaps the server is running as a different user than the owner of the extensions?
[GENERAL] Postgres won't start
Linux Mint (from Ubuntu) version 9.1. Postgres will no longer start, but I cannot find out why. Command line:

$ /usr/lib/postgresql/9.1/bin/pg_ctl start -D /home/postgresql/9.1/main -l /var/log/postgresql/postgresql-9.1-main.log -s -w -o '-c config_file=/etc/postgresql/9.1/main/postgresql.conf'
pg_ctl: could not start server
Examine the log output.

There is no error message in the log:

2013-08-08 21:35:33 GMT DEBUG: postgres: PostmasterMain: initial environment dump:
2013-08-08 21:35:33 GMT DEBUG: -
2013-08-08 21:35:33 GMT DEBUG: MAIL=/var/mail/postgres
2013-08-08 21:35:33 GMT DEBUG: USER=postgres
2013-08-08 21:35:33 GMT DEBUG: LANGUAGE=en_GB:en
2013-08-08 21:35:33 GMT DEBUG: LC_TIME=C
2013-08-08 21:35:33 GMT DEBUG: SHLVL=1
2013-08-08 21:35:33 GMT DEBUG: HOME=/var/lib/postgresql
2013-08-08 21:35:33 GMT DEBUG: XDG_SESSION_COOKIE=4464da00797efbf61ba9be4b517969c3-1375997124.817437-835692463
2013-08-08 21:35:33 GMT DEBUG: LC_MONETARY=C
2013-08-08 21:35:33 GMT DEBUG: COLORTERM=mate-terminal
2013-08-08 21:35:33 GMT DEBUG: PG_GRANDPARENT_PID=15848
2013-08-08 21:35:33 GMT DEBUG: LOGNAME=postgres
2013-08-08 21:35:33 GMT DEBUG: _=/usr/lib/postgresql/9.1/bin/pg_ctl
2013-08-08 21:35:33 GMT DEBUG: TERM=xterm
2013-08-08 21:35:33 GMT DEBUG: PGLOCALEDIR=/usr/share/locale
2013-08-08 21:35:33 GMT DEBUG: PGSYSCONFDIR=/etc/postgresql-common
2013-08-08 21:35:33 GMT DEBUG: PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
2013-08-08 21:35:33 GMT DEBUG: LC_ADDRESS=en_GB.UTF-8
2013-08-08 21:35:33 GMT DEBUG: XDG_RUNTIME_DIR=/run/user/postgres
2013-08-08 21:35:33 GMT DEBUG: DISPLAY=:0
2013-08-08 21:35:33 GMT DEBUG: LC_TELEPHONE=en_GB.UTF-8
2013-08-08 21:35:33 GMT DEBUG: LANG=en_GB.UTF-8
2013-08-08 21:35:33 GMT DEBUG: XAUTHORITY=/home/olly/.Xauthority
2013-08-08 21:35:33 GMT DEBUG: SHELL=/bin/bash
2013-08-08 21:35:33 GMT DEBUG: LC_NAME=en_GB.UTF-8
2013-08-08 21:35:33 GMT DEBUG: LC_MEASUREMENT=en_GB.UTF-8
2013-08-08 21:35:33 GMT DEBUG: LC_IDENTIFICATION=en_GB.UTF-8
2013-08-08 21:35:33 GMT DEBUG: PWD=/var/lib/postgresql
2013-08-08 21:35:33 GMT DEBUG: LC_NUMERIC=C
2013-08-08 21:35:33 GMT DEBUG: LC_PAPER=en_GB.UTF-8
2013-08-08 21:35:33 GMT DEBUG: PGDATA=/home/postgresql/9.1/main
2013-08-08 21:35:33 GMT DEBUG: LC_COLLATE=en_GB.UTF-8
2013-08-08 21:35:33 GMT DEBUG: LC_CTYPE=en_GB.UTF-8
2013-08-08 21:35:33 GMT DEBUG: LC_MESSAGES=en_GB.UTF-8
2013-08-08 21:35:33 GMT DEBUG: -
2013-08-08 21:35:33 GMT DEBUG: TZ localtime gets max score 5200
2013-08-08 21:35:33 GMT DEBUG: TZ Singapore scores 0: at 1357776000 2013-01-10 08:00:00 std versus 2013-01-10 01:00:00 std
2013-08-08 21:35:33 GMT DEBUG: TZ GMT scores 0: at 1357776000 2013-01-10 00:00:00 std versus 2013-01-10 01:00:00 std
...[many lines of timezone info deleted]...
2013-08-08 21:35:34 GMT DEBUG: TZ Atlantic/St_Helena scores 0: at 1357776000 2013-01-10 00:00:00 std versus 2013-01-10 01:00:00 std
2013-08-08 23:35:34 CEST DEBUG: invoking IpcMemoryCreate(size=32399360)
2013-08-08 23:35:34 CEST DEBUG: removing file pg_notify/
2013-08-08 23:35:34 CEST DEBUG: max_safe_fds = 985, usable_fds = 1000, already_open = 5
2013-08-08 23:35:34 CEST DEBUG: logger shutting down
2013-08-08 23:35:34 CEST DEBUG: shmem_exit(0): 0 callbacks to make
2013-08-08 23:35:34 CEST DEBUG: proc_exit(0): 0 callbacks to make
2013-08-08 23:35:34 CEST DEBUG: exit(0)
2013-08-08 23:35:34 CEST DEBUG: shmem_exit(-1): 0 callbacks to make
2013-08-08 23:35:34 CEST DEBUG: proc_exit(-1): 0 callbacks to make

# cat /proc/sys/kernel/shmmax
33554432

I increased this to 64MB:

# cat /proc/sys/kernel/shmmax
67108864

but it made no difference.
Re: [GENERAL] Postgres won't start
On 08/08/2013 03:02 PM, Oliver Elphick wrote: Linux Mint (from Ubuntu) version 9.1. Postgres will no longer start, but I cannot find out why. So anything happen between the last time it started and now?: Upgrade of Postgres? Upgrade of Mint? Something else? Command line: $ /usr/lib/postgresql/9.1/bin/pg_ctl start -D /home/postgresql/9.1/main -l /var/log/postgresql/postgresql-9.1-main.log -s -w -o '-c config_file=/etc/postgresql/9.1/main/postgresql.conf' pg_ctl: could not start server -- Adrian Klaver adrian.kla...@gmail.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Postgres won't start
On 08/08/2013 03:17 PM, Oliver Elphick wrote: I tried to change the listen_addresses line in postgresql.conf, by adding an IPv6 address. On meeting problems I tried changing it back. What problems? Have you run ps to see if there is another instance of Postgres running? Currently it says: listen_addresses = 'localhost' -- Adrian Klaver adrian.kla...@gmail.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How To Install Extension Via Script File?
On 8/8/2013 2:13 PM, Don Parris wrote: I thought I could add the commands and run the create script by doing: sudo -u postgres psql -U user -W -d mydb --file=/home/user/dev/mydb_create.sql I thought that, running my script as the superuser, it would have the privileges necessary to install the extension, but I still got a permission denied error. Can the postgres user access /home/user/dev? That's aside from the rest of the potentially wrong stuff the other guys mentioned. -- john r pierce 37N 122W somewhere on the middle of the left coast -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How To Install Extension Via Script File?
On Thu, Aug 8, 2013 at 5:44 PM, Thomas Kellerer spam_ea...@gmx.net wrote: Don Parris wrote on 08.08.2013 23:13: And to be able to run it from the Bash prompt (as securely as possible). I thought I could add the commands and run the create script by doing: sudo -u postgres psql -U user -W -d mydb --file=/home/user/dev/mydb_create.sql I thought that, running my script as the superuser, it would have the privileges necessary to install the extension, but I still got a permission denied error. You are not running the script as superuser because you supplied -U user and thus the _script_ is executed as user. psql is started as postgres (the Linux user, not the DB user). My bad - I forgot to change the example above to accurately reflect the user... I did use the postgres user. I don't see the reason for using sudo in the first place, -U is enough: Fair enough. But I think you are onto something here below... Leave out the sudo, and use: psql -U postgres -W -d mydb --file=/home/user/dev/mydb_create.sql Or if you do want to use sudo, then leave out the -U user parameter: sudo -u postgres psql -d mydb --file=/home/user/dev/mydb_create.sql When I try a simple psql -U postgres -W - just to initiate the psql session, I get: psql: FATAL: Peer authentication failed for user postgres It's like my regular user cannot connect as the postgres user. However, this works (with me just typing my password for sudo): donp@wiesbaden:~$ sudo -u postgres psql -U postgres [sudo] password for donp: psql (9.1.9) Type help for help. postgres=# \q Maybe I have some permissions issues? Thanks, Don
Re: [GENERAL] How To Install Extension Via Script File?
On Thu, Aug 8, 2013 at 5:45 PM, Rob Sargent robjsarg...@gmail.com wrote: On 08/08/2013 03:13 PM, Don Parris wrote: Hi all, I have a database that uses the ltree extension. I typically create a new database like so (as a normal user), using my script file: CREATE DATABASE mydb WITH TEMPLATE template0 ENCODING 'UTF8'; And then su to postgres, login and install the ltree extension on mydb. Then I logout of my psql instance and re-run the script (as a normal user) to create the tables views on mydb. I comment out the table/view creation portion until I finish the first couple steps, and then uncomment the tables and views on the 2nd run. Otherwise the script will fail because the ltree extension has to be installed as a superuser. I want a script something like: CREATE DATABASE mydb WITH TEMPLATE template0 ENCODING 'UTF8'; \c mydb CREATE EXTENSION ltree; CREATE TABLE mytable(myfields); rinse, repeat for additional tables and views. SNIP Have you tried adding the extension to template1? I find that works nicely as it means CREATE DATABASE dbname gets the extension. That said, I'm wondering if you're actually having trouble accessing the extension subdirectory. Perhaps the server is running as a different user than the owner of the extensions? Thanks Rob, If I do that, and then create the DB, as I do, using template0 ENCODING UTF8, the extension does not appear to be installed on the new database. At least, when I tried that before, it did not appear to work. I had to install the extension on the database anyway. I have not had time to delve into how to resolve that - it hasn't really been all that important until now. Thanks! Don -- D.C. Parris, FMP, Linux+, ESL Certificate Minister, Security/FM Coordinator, Free Software Advocate http://dcparris.net/ https://www.xing.com/profile/Don_Parris http://www.linkedin.com/in/dcparris GPG Key ID: F5E179BE
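A likely explanation for the behaviour Don describes (an assumption based on how template databases work, not something he confirmed): CREATE DATABASE ... WITH TEMPLATE template0 deliberately clones the bare template0, bypassing template1, so an extension installed into template1 is only inherited when cloning from template1 (the default). A sketch of the template1 route:

```sql
-- As a superuser, once:
\c template1
CREATE EXTENSION IF NOT EXISTS ltree;

-- Databases cloned from template1 (the default template) inherit it:
CREATE DATABASE mydb;

-- Note: cloning from template0, as in the script above, starts from a
-- bare template and will not pick up anything installed into template1.
-- Specifying an encoding different from template1's still requires
-- TEMPLATE template0, in which case the extension must be installed
-- into the new database separately.
```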
Re: [GENERAL] How To Install Extension Via Script File?
On Thu, Aug 8, 2013 at 6:30 PM, John R Pierce pie...@hogranch.com wrote: On 8/8/2013 2:13 PM, Don Parris wrote: I thought I could add the commands and run the create script by doing: sudo -u postgres psql -U user -W -d mydb --file=/home/user/dev/mydb_create.sql I thought that, running my script as the superuser, it would have the privileges necessary to install the extension, but I still got a permission denied error. can the postgres user access /home/user/dev ? thats aside from the rest of potentially wrong stuff the other guys mentioned. Oops! That *could* create havoc, couldn't it? Still, should be easy enough to rectify.
Re: [GENERAL] Postgres won't start
To start with, it worked but the pg_hba.conf entry appeared to be wrong. I tried changing that and then the current problem started. I tried listen_addresses = '*'; then back to just 'localhost'. Since I have maximum logging enabled, I don't think it is getting as far as reading the configuration files - that is not mentioned in the log. There is no other instance of postgres running. On 9 August 2013 00:29, Adrian Klaver adrian.kla...@gmail.com wrote: On 08/08/2013 03:17 PM, Oliver Elphick wrote: I tried to change the listen_addresses line in postgresql.conf, by adding an IPv6 address. On meeting problems I tried changing it back. What problems? Have you run ps to see if there is another instance of Postgres running? Currently it says: listen_addresses = 'localhost' -- Adrian Klaver adrian.kla...@gmail.com -- Time is short - http://www.lfix.co.uk/disappearance.html
Re: [GENERAL] Postgres won't start
On 08/08/2013 04:02 PM, Oliver Elphick wrote: To start with, it worked but the pg_hba.conf entry appeared to be wrong. I tried changing that and then the current problem started. I tried listen_addresses = '*'; then back to just 'localhost'. Since I have maximum logging enabled, I don't think it is getting as far as reading the configuration files - that is not mentioned in the log. There is no other instance of postgres running. Hmmm. Two things, then I am tapped out for the time being: 1) Did you change the pg_hba.conf entry back? 2) What user are you running the pg_ctl command as? -- Adrian Klaver adrian.kla...@gmail.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Read data from WAL
On Mon, Jul 15, 2013 at 01:34:01PM +, Baldur Þór Emilsson wrote: Thank you all for your responses. I'm aware of xlogdump but I'm afraid it does not help me with reading the data in the WAL. It is mainly for debugging or educational purposes (citing the docs) and it outputs a lot of information about the WAL records but not their contents (e.g. it says where an INSERT wrote the data, but not what the data is). Please correct me if I'm mistaken. What I'm looking for is a way to read the data that is inserted into the database. I was hoping there was some known way of doing that, to save me the time it takes to patch Postgres to do it. There is no way to decode the WAL data, as the decoding information is in the database and not easily available to an external tool. -- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. + -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How to avoid Force Autovacuum
On Thu, Aug 8, 2013 at 11:18 AM, Kevin Grittner kgri...@ymail.com wrote: There were some fixes for autovacuum problems in 9.2.3. Some other fixes will be coming when 9.2.5 is released. Many of your problems are likely to go away by staying up-to-date on minor releases. By setting this so high, you are increasing the amount of work autovacuum will need to do when it does work on a table. A smaller value tends to give less bursty performance. Also, any small, frequently-updated tables may bloat quite a bit in 5 transactions. Each autovacuum worker will allocate this much RAM. If all of your autovacuum workers wake up at once, would losing 2GB for each one from your cache cause a significant performance hit? (Since you didn't say how much RAM the machine has, it's impossible to tell.) What does running this in psql show?:

\x on
select * from pg_database where datname = 'template0';
select * from pg_stat_database where datname = 'template0';

In addition to Kevin's notes, I think it is also worth looking at the result of the query below.

select name, setting from pg_settings where name ~ 'vacuum' and setting <> reset_val;

-- Kind regards, Sergey Konoplev PostgreSQL Consultant and DBA http://www.linkedin.com/in/grayhemp +1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979 gray...@gmail.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Weird error when setting up streaming replication
I'm going through all my usual steps for setting up streaming replication on a new pair of servers: modify configs as appropriate, rsync data from master to slave, etc. I have this all automated with chef, and it has been pretty bulletproof for a while. However, today I ran into this when starting the slave on this new pair:

 * Starting PostgreSQL 9.2 database server
 * The PostgreSQL server failed to start. Please check the log output:
2013-08-08 23:47:30 GMT LOG: database system was interrupted; last known up at 2013-08-08 23:22:40 GMT
2013-08-08 23:47:30 GMT LOG: entering standby mode
2013-08-08 23:47:30 GMT LOG: WAL file is from different database system
2013-08-08 23:47:30 GMT DETAIL: WAL file database system identifier is 5909892614333033983, pg_control database system identifier is 5909892824786287231.
2013-08-08 23:47:30 GMT LOG: invalid primary checkpoint record
2013-08-08 23:47:30 GMT LOG: invalid secondary checkpoint record
2013-08-08 23:47:30 GMT PANIC: could not locate a valid checkpoint record
2013-08-08 23:47:30 GMT LOG: startup process (PID 10600) was terminated by signal 6: Aborted
2013-08-08 23:47:30 GMT LOG: aborting startup due to startup process failure

And I've been stumped. I've completely nuked my data dirs and started over and gotten the same result, but with different identifier numbers (as I would expect). Any ideas? Thanks! QH
Re: [GENERAL] Weird error when setting up streaming replication
On Fri, Aug 9, 2013 at 8:55 AM, Quentin Hartman qhart...@direwolfdigital.com wrote: 2013-08-08 23:47:30 GMT LOG: WAL file is from different database system 2013-08-08 23:47:30 GMT DETAIL: WAL file database system identifier is 5909892614333033983, pg_control database system identifier is 5909892824786287231. It looks like you are unable to find a valid checkpoint record when replaying WAL because your new system has been initialized with a fresh initdb, as the errors above indicate. You should build your new node from a base backup or a snapshot of the data folder of the node you are trying to replace. -- Michael -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How To Install Extension Via Script File?
Don Parris parri...@gmail.com writes: When I try a simple psql -U postgres -W - just to initiate the psql session, I get: psql: FATAL: Peer authentication failed for user postgres It's like my regular user cannot connect as the postgres user. You're right, it can't, if you've selected peer authentication in pg_hba.conf. You'd need to use some other auth method, perhaps password-based auth, if you want this to work. Read up on auth methods in the fine manual. However, this works (with me just typing my password for sudo): donp@wiesbaden:~$ sudo -u postgres psql -U postgres [sudo] password for donp: Sure, because then psql is launched as the postgres OS user, and peer auth will let that user connect as the postgres DB user. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
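For illustration, the pg_hba.conf change Tom alludes to might look like the following (a sketch; the exact file location and surrounding lines vary by distribution, and md5 is just one of the available password-based methods):

```
# TYPE  DATABASE  USER      METHOD
#
# Before - peer: the connecting OS user must match the DB user:
# local   all     postgres  peer
#
# After - md5: any OS user may connect as postgres with the password:
local   all       postgres  md5
```

After editing, reload the server (e.g. pg_ctl reload, or the distribution's service reload command) for the change to take effect.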
Re: [GENERAL] Performance of ORDER BY RANDOM to select random rows?
On Wed, Aug 7, 2013 at 7:01 PM, Victor Hooi victorh...@yahoo.com wrote: also seems to suggest that using ORDER BY RANDOM() will perform poorly on Postgres. I'm just wondering if this is still the case? I just ran those benchmarks on my system (Postgres 9.2.4), and using ORDER BY RANDOM() did not seem substantially slower than generating random integers in Python and picking those out (and handling non-existent rows). Has Postgres's behaviour for ORDER BY RANDOM() changed sometime recently?

Unfortunately, it has not. However, there is always a workaround. You can get random results fast with a WITH RECURSIVE query.

WITH RECURSIVE r AS (
    WITH b AS (SELECT min(id), max(id) FROM table1)
    (
        SELECT id, min, max, array[]::integer[] AS a, 0 AS n
        FROM table1, b
        WHERE id > min + (max - min) * random()
        LIMIT 1
    )
    UNION ALL
    (
        SELECT t.id, min, max, a || t.id, r.n + 1 AS n
        FROM table1 AS t, r
        WHERE t.id > min + (max - min) * random()
          AND t.id <> all(a) AND r.n + 1 < 10
        LIMIT 1
    )
)
SELECT t.id FROM table1 AS t, r WHERE r.id = t.id;

The general idea is that we get a random value between min(id) and max(id) and then take the first row with an id bigger than this value. We repeat this until we get 10 such rows, checking that each id has not been retrieved earlier. Admittedly, the probability of one or another value appearing in the result depends on the distribution of id values in the table, but in most cases I have faced it works well. I had an idea to play with pg_stats.histogram_bounds to work around the described issue, but it was never so critical for the tasks I solved. -- Kind regards, Sergey Konoplev PostgreSQL Consultant and DBA http://www.linkedin.com/in/grayhemp +1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979 gray...@gmail.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
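The probe-and-take-next idea in the recursive query above can be mimicked in plain Python to see the mechanics (and the skew) outside the database; table1 is replaced by an in-memory list of ids, and the function name and limits are illustrative:

```python
import bisect
import random

def random_rows(ids, n=10, max_tries=1000):
    # Mimic the recursive query: draw a uniform value between min(id)
    # and max(id), then take the first id strictly greater than it,
    # skipping ids already picked. As in the query, ids sitting just
    # after large gaps are more likely to be chosen.
    ids = sorted(ids)
    lo, hi = ids[0], ids[-1]
    picked = []
    for _ in range(max_tries):
        if len(picked) >= n:
            break
        probe = lo + (hi - lo) * random.random()
        i = bisect.bisect_right(ids, probe)  # index of first id > probe
        if i < len(ids) and ids[i] not in picked:
            picked.append(ids[i])
    return picked
```

With a gap-free id column this is close to uniform; with gaps, the same distribution caveat Sergey mentions applies, which is where pg_stats.histogram_bounds could in principle help.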
Re: [GENERAL] Postgres won't start
Oliver Elphick o...@lfix.co.uk writes: Linux Mint (from Ubuntu) version 9.1. Postgres will no longer start, but I cannot find out why. Command line: $ /usr/lib/postgresql/9.1/bin/pg_ctl start -D /home/postgresql/9.1/main -l /var/log/postgresql/postgresql-9.1-main.log -s -w -o '-c config_file=/etc/postgresql/9.1/main/postgresql.conf' pg_ctl: could not start server Examine the log output. There is no error message in the log: This log file seems suspiciously incomplete. Is it the file mentioned in the pg_ctl -l switch? If so, that file would only be used until the syslogger process took over logging (which we know it did since we see its exit messages in there). To find out what happened after that, you need to look wherever your postgresql.conf is telling PG to log. I wonder whether we shouldn't change the syslogger to emit something to stderr when it takes over logging, saying logging is now redirected to someplace. This isn't the first case I've seen of people forgetting to look in the right place. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Postgres won't start
On 9 August 2013 01:02, Oliver Elphick wrote: Since I have maximum logging enabled, I don't think it is getting as far as reading the configuration files - that is not mentioned in the log. I regularly run into problems when some editor adds a UTF-8 BOM (byte order mark) to pg_hba.conf or postgresql.conf, which renders the files unreadable for PostgreSQL. Regards, Brar
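A BOM like the one Brar describes is easy to detect and remove programmatically; a small sketch (the function name and approach are illustrative, not part of any PostgreSQL tooling):

```python
import codecs

def strip_bom(path):
    # Return True if the file started with a UTF-8 BOM and it was
    # removed, False if there was nothing to do. A BOM at the start
    # of pg_hba.conf or postgresql.conf makes the first directive
    # unparseable for the server.
    with open(path, 'rb') as f:
        data = f.read()
    if data.startswith(codecs.BOM_UTF8):
        with open(path, 'wb') as f:
            f.write(data[len(codecs.BOM_UTF8):])
        return True
    return False
```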