[GENERAL] Re: passing schema name and table name as parameter functions in postgresql 9.2
I got it: I solved it by using 'quote_ident' instead of 'quote_literal'. -- View this message in context: http://postgresql.1045698.n5.nabble.com/passing-schema-name-and-table-name-as-parameter-functions-in-postgresql-9-2-tp5758130p5758260.html Sent from the PostgreSQL - general mailing list archive at Nabble.com. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
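For illustration, a minimal sketch of the kind of function being discussed (the function and parameter names are hypothetical, not taken from the thread). quote_ident() quotes its argument as an identifier, which is what a schema or table name needs; quote_literal() produces a string literal, which is why the earlier attempt failed:

CREATE OR REPLACE FUNCTION count_rows(p_schema text, p_table text)
RETURNS bigint AS $$
DECLARE
    result bigint;
BEGIN
    -- quote_ident() safely double-quotes the schema and table names
    -- before they are spliced into the dynamic statement
    EXECUTE 'SELECT count(*) FROM '
            || quote_ident(p_schema) || '.' || quote_ident(p_table)
    INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- usage: SELECT count_rows('public', 'my_table');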
Re: [GENERAL] PostgreSQL Synchronous Replication in production
Colin Sloss wrote: > I have been testing the differences between asynchronous and synchronous hot > standby streaming > replication on PostgreSQL 9.2.4. There is some push towards synchronous > replication, but I am finding > some serious problems, and wonder how other people deal with them. > > Action: > The Slave is unreachable (postgres is stopped, or machine is turned off) > > Result: > Transactions stay open, and add up until connection is allowed between Master > and Slave again. > > My guess: > Create a script that detects when transactions are being opened, but not > committed on slave. Alter > postgresql.conf to comment out synchronous_standby_names and pg_ctl reload. > This should boot the > server back to asynchronous, more or less. > > Caveats: > I haven't tested this yet, and I'm not sure what would happen to the > transactions that started while > synchronous replication was active. Any guesses? > > The whole idea of my solution was to have no single point of failure. This > seems to create two > exclusive points of failure, each needing a completely separate reaction. My > original proposal was > asynchronous replication, with xlogs being stored on shared storage, and DRBD > replication to prevent > it from being a single point of failure. I have never seen it go over 15kbs > behind in my setup, which > still results in a very speedy failover. Is it just me, or is that seeming > better than just > synchronous replication? > > Another caveat I found is that setting up slaves becomes more complicated. > You have to set up the > Master in asynchronous style, and then switch it to synchronous only when the > timing is right. > Otherwise the transactions will sit there until everything is ready. > > Sorry for the onslaught of questions, I don't expect all of them satisfied. > Please share any > resolutions to these issues which you guys have discovered. One simple guideline first: If you use synchronous replication, you'll have to have at least two standby servers or the overall availability of your system will suffer. The differences between synchronous and asynchronous replication are mostly: a) With synchronous replication you cannot lose a committed transaction during failover. b) Synchronous replication will slow down your system; the higher the network latency between the servers, the slower it will get. You should work on getting the requirements defined: - In the case of failover, can you afford to lose a few committed transactions? - Are you ready to pay the price for synchronous replication (second standby if you don't want availability to suffer, slower database system, more complicated setup and failover procedures)? For synchronous replication, keep the servers close together with a strong network inbetween. To protect against catastrophes (if that's a requirement), you should use another asynchronous standby in a distant location. Yours, Laurenz Albe -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
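As a rough sketch of the settings involved (the standby names and paths are illustrative, not from the thread): synchronous replication on the master is controlled by synchronous_standby_names, and the fall-back-to-asynchronous step described above amounts to emptying that list and reloading.

# postgresql.conf on the master
synchronous_standby_names = 'standby1, standby2'   # first listed, connected standby acts as the synchronous one
synchronous_commit = on                            # default; commits wait for the sync standby

# to drop back to asynchronous replication:
#   set synchronous_standby_names = ''  (or comment it out)
#   then reload:  pg_ctl reload -D /path/to/data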
[GENERAL] Open bytea files (images, docs, excel, PDF) stored in Postgresql using a Microsoft Access form
Hi, I've got a clients table in PostgreSQL. Each client has different documents (more than 7,000 files in total, in different formats: JPG, XLS, DOC, PDF...) stored in a bytea field in another PostgreSQL table. My intention is to create a form using Microsoft Access (or some other software) to be able to manage those files (add more files, modify files or delete files). What should I do so that Access can understand (open) each file or store new files? Regards, Aitor
[GENERAL] Function tracking
Hi all I'm looking for suggestions on the best way to track the updates to a function. We have two databases, Dev & Live, so I want to update Live with just the functions that have been modified in the Dev database. Is there another, easier way to track the updates than manually recording it in a document? (I'm trying to eliminate human interference.) Many thanks in advance Rebecca
Re: [GENERAL] Function tracking
Hello 2013/6/7 Rebecca Clarke : > Hi all > > I'm looking for suggestions on the best way to track the updates to a > function. > > We have two databases, Dev & Live, so I want to update Live with just the > functions that have been modified in the Dev database. > Is there another, easier way to track the updates than manually recording it > in a document? (I'm trying to eliminate human interference). > There are a few tools http://stackoverflow.com/questions/4804779/how-to-check-difference-between-two-databases-in-postgressql http://pgdiff.sourceforge.net/ But I prefer editing files for storing schema and function definitions. And I use git. I dislike modifying objects directly via tools like pgAdmin and similar. Regards Pavel > > Many thanks in advance > > Rebecca > > > > > > > -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
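One possible way to script function definitions out of the Dev database so they live as files under version control (a sketch; it assumes ordinary, non-aggregate functions in the public schema):

-- emit a CREATE OR REPLACE FUNCTION statement for every plain function
-- in a schema; redirect the output of  psql -A -t  to a file and commit it
SELECT pg_get_functiondef(p.oid) || ';'
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'public'
  AND NOT p.proisagg            -- pg_get_functiondef cannot handle aggregates
ORDER BY p.proname;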
[GENERAL] Strange ranking with dict_xsyn
Hi everybody I'm trying to use the dict_xsyn contrib module to implement query expansion. I'm baffled by what seems like incorrect behaviour, and would appreciate some help. Here is a simple example using the packaged example "xsyn_sample.rules": speel=# CREATE EXTENSION dict_xsyn; CREATE EXTENSION speel=# ALTER TEXT SEARCH DICTIONARY xsyn (RULES='xsyn_sample'); ALTER TEXT SEARCH DICTIONARY speel=# ALTER TEXT SEARCH CONFIGURATION english ALTER MAPPING FOR asciiword WITH xsyn,english_stem; ALTER TEXT SEARCH CONFIGURATION speel=# SELECT to_tsquery('english', 'supernova'); --dict_xsyn is working to_tsquery -- 'supernova' & 'sn' & 'sne' & '1987a' (1 row) speel=# select ts_rank(to_tsvector('english', 'supernova'), to_tsquery('english', 'cute|supernova')); --no surprise ts_rank - 0.0486342 (1 row) speel=# select ts_rank(to_tsvector('english', 'supernova'), to_tsquery('english', 'supernova')); --unexpected ts_rank --- 1e-20 (1 row) speel=# select to_tsvector('english', 'supernova') @@ to_tsquery('english', 'supernova'); --no surprise ?column? --- t (1 row) speel=# select ts_rank(to_tsvector('english', 'cute supernova'), to_tsquery('english', 'supernova')); --unexpected ts_rank --- 1e-20 (1 row) speel=# select ts_rank(to_tsvector('english', 'cute supernova'), to_tsquery('english', 'cute')); --no surprise ts_rank --- 0.0607927 (1 row) speel=# select ts_rank(to_tsvector('english', 'supernova'), to_tsquery('english', 'cute & supernova')); --unexpected, was expecting 0 ts_rank --- 1e-20 (1 row) speel=# select to_tsvector('english', 'supernova') @@ to_tsquery('english', 'cute & supernova'); --no surprise ?column? --- f (1 row) The ranking seems like a bug, and has been causing some problems for my application. I would expect the ranking to be the same (or similar) if a single word query matches one of the two words in the document. It seems like a query with an entry in the extended synonym dictionary somehow creates this extremely small rank value, which is below any reasonable threshold that might be used to reduce the number of results in a full text query. I was also expecting a rank of 0 when the @@ operator returns false. In the above examples there are ranks of 1e-20 where the @@ operator gave true and false respectively, which seems unintuitive (and wrong) to me. I'm not sure if 1e-20 is supposed to mean "as good as 0". As a comparison, for a search configuration without dict_xsyn, the results are mostly as expected (but of course without the query expansion): speel=# select ts_rank(to_tsvector('simple', 'supernova'), to_tsquery('simple', 'supernova')); ts_rank --- 0.0607927 (1 row) speel=# select ts_rank(to_tsvector('simple', 'cute supernova'), to_tsquery('simple', 'supernova')); ts_rank --- 0.0607927 (1 row) speel=# select ts_rank(to_tsvector('simple', 'supernova'), to_tsquery('simple', 'cute|supernova')); --only slightly different; that's ok ts_rank --- 0.0303964 (1 row) speel=# select ts_rank(to_tsvector('simple', 'supernova'), to_tsquery('simple', 'cute & supernova')); --mm, that 1e-20 again :-( ts_rank --- 1e-20 (1 row) Thank you for any help anyone can provide. Friedel -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Slave promotion failure
Le 2013-06-06 à 18:40, Michael Paquier a écrit : > On Fri, Jun 7, 2013 at 1:37 AM, François Beausoleil > wrote: > I can't seem to promote the slave: > > $ sudo -u postgres touch /var/lib/postgresql/9.1/main/recovery.done > # log is silent > This has no effect. recovery.conf is renamed to recovery.done internally by > the server. If recovery.done is present in data folder with recovery.conf at > the moment of promotion, recovery.done is removed before file renaming. What > you can do to use a trigger file for promotion is setting up trigger_file in > recovery.conf, then promotion will be kicked once file has been created. I believe I know what my mistake is: I set trigger_file to /var/lib/postgresql/9.1/main/recovery.done -- and PostgreSQL doesn't seem to like that name. I should set to another name and retry. It's still strange that pg_ctl promote didn't work though. Maybe because recovery.done existed at the time I tried. I'll try again today, with better names. Thanks! François smime.p7s Description: S/MIME cryptographic signature
Re: [GENERAL] User postgres unable to revoke privileges?
Le 2013-06-06 à 17:59, Tom Lane a écrit : > =?iso-8859-1?Q?Fran=E7ois_Beausoleil?= writes: >> regress=# ALTER DEFAULT PRIVILEGES FOR ROLE dataanalysts IN SCHEMA public >> REVOKE SELECT ON TABLES FROM dataanalysts; >> ERROR: permission denied for schema public > >> I'm logged in as postgres, the database superuser. Why am I getting a >> permission denied? > > I suspect you already revoked public CREATE privilege in schema public. Ha, yes, you are right. > Note where the fine manual says: > > schema_name > The name of an existing schema. Each target_role must have CREATE > privileges for each specified schema. > > There was some debate previously about whether that restriction was a > good idea at all; and given this example, it seems like we definitely > shouldn't require it during a REVOKE. I may not have read that section carefully enough. I'll try again, by adding postgres back with create privileges on the public schema. Thanks! François smime.p7s Description: S/MIME cryptographic signature
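The sequence being described would look roughly like this (a sketch based on the manual text quoted above, which requires the target role to have CREATE on the schema):

-- give the target role CREATE on the schema again...
GRANT CREATE ON SCHEMA public TO dataanalysts;

-- ...so that the default-privileges change is permitted
ALTER DEFAULT PRIVILEGES FOR ROLE dataanalysts IN SCHEMA public
    REVOKE SELECT ON TABLES FROM dataanalysts;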
Re: [GENERAL] Streaming replication with sync slave, but disconnects due to missing WAL segments
Hi Jeff. Thanks for the clarification. I'll adjust wal_keep_segments for the expected biggest table in the backup. Best regards, Mads From: Jeff Janes To: "mads.tand...@schneider-electric.com" , Cc: Albe Laurenz , "pgsql-general@postgresql.org" Date: 06-06-2013 18:33 Subject: Re: [GENERAL] Streaming replication with sync slave, but disconnects due to missing WAL segments Sent by: pgsql-general-ow...@postgresql.org On Wed, Jun 5, 2013 at 11:26 PM, wrote: Hi Thanks for your reply. Do you know of any options that I could give pg_dump/psql to avoid creating one big transaction? I'm using the plain text format for pg_dump. For the plain text format, it is already not one big transaction, unless you pass -1 to psql. However, the load of any individual table will still be a single transaction, so for a very large table it will still be a very long transaction. Using pg_dump with --inserts could get around this, but it would probably be better to fix the fundamental problem by increasing wal_keep_segments or something of that nature. Cheers, Jeff -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
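A sketch of the adjustment being discussed (the number is illustrative only; it should cover the WAL generated while the largest table is being loaded):

# postgresql.conf on the master
# each segment is 16 MB, so 512 segments retains roughly 8 GB of WAL for the standby
wal_keep_segments = 512

# note on the pg_dump/psql point above: a plain-format restore is only wrapped
# in a single transaction if you ask for it explicitly, e.g.
#   psql -1 -f dump.sql mydb     # -1 is short for --single-transaction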
[GENERAL] execute function before DROP EXTENSION
Hello, I just wanted to know if there is a way to automatically execute a cleanup function before DROP EXTENSION. In previous versions we had an uninstall script which could be modified to call the function before dropping the schema created by the contrib module. Just wondering if I can achieve this in 9.1 onwards using DROP EXTENSION. -- Beena Emerson
Re: [GENERAL] Function tracking
2013/6/7 Pavel Stehule > Hello > > 2013/6/7 Rebecca Clarke : > > Hi all > > > > I'm looking for suggestions on the best way to track the updates to a > > function. > > > > We have two databases, Dev & Live, so I want to update Live with just the > > functions that have been modified in the DEV databas3e. > > Is there another, easier way to track the updates than manually > recording it > > in a document? (I'm trying to eliminate human interference). > > > > There is a few tools > > > http://stackoverflow.com/questions/4804779/how-to-check-difference-between-two-databases-in-postgressql > http://pgdiff.sourceforge.net/ > > But I prefer editing files for storing schema and function > definitions. And I use a git. I dislike direct object modifying via > tools like pgAdmin and similar. > Same here. -- // Dmitriy.
[GENERAL] Full text search, SQL functions, and the planner
Hi, To maintain an externally defined interface, I'd like to define a function hasword(haystack, needle) that (essentially) returns 1 when to_tsvector(haystack) @@ to_tsquery(needle), 0 otherwise. I've tried CREATE OR REPLACE FUNCTION ivo_hasword(haystack TEXT, needle TEXT) RETURNS INTEGER AS $func$ SELECT CASE WHEN to_tsvector('english', $1) @@ plainto_tsquery($2) THEN 1 ELSE 0 END $func$ LANGUAGE SQL STABLE; It seems the planner inlines the function body, as intended: # explain select * from rr.resource where 1=ivo_hasword(res_title, 'optical'); ... Seq Scan on resource (cost=0.00..2269.72 rows=69 width=924) Filter: (1 = CASE WHEN (to_tsvector('english'::regconfig, res_title) @@ plainto_tsquery('optical'::text)) THEN 1 ELSE 0 END) -- but it doesn't use the index on res_title, as it could, as exhibited by this equivalent query: # explain select * from rr.resource where to_tsvector('english'::regconfig, res_title) @@ plainto_tsquery('optical'::text); ... Bitmap Heap Scan on resource (cost=21.96..731.76 rows=252 width=924) Recheck Cond: (to_tsvector('english'::regconfig, res_title) @@ plainto_tsquery('optical'::text)) -> Bitmap Index Scan on resource_res_title (cost=0.00..21.89 rows=252 width=0) Index Cond: (to_tsvector('english'::regconfig, res_title) @@ plainto_tsquery('optical'::text)) Indeed, the index isn't used either when I'm writing the 1 = CASE WHEN statement into the query directly (this is on postgres 9.1). Sure enough, if I define the function like this: CREATE OR REPLACE FUNCTION bool_hasword(haystack TEXT, needle TEXT) RETURNS BOOLEAN AS $func$ SELECT to_tsvector('english', $1) @@ plainto_tsquery($2) $func$ LANGUAGE SQL STABLE; (i.e., returning a boolean instead of the lousy integer), the index is used. So -- is there a way to let the planner look through the CASE? Or some way of turning the boolean to an integer that the planner can see through? The CAST(... AS INTEGER) that fortunately would even produce the right numbers appears to be opaque to the planner. Anticipating the sane and logical answer: The boolean function doesn't really help me; this is about handing through that function directly to ADQL (http://www.ivoa.net/documents/latest/ADQL.html) as a user defined function, and those user defined functions cannot be boolean-valued. Cheers, Markus -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Open bytea files (images, docs, excel, PDF) stored in Postgresql using a Microsoft Access form
Le vendredi 07 juin 2013 à 11:35 +0200, Aitor Gil Martin a écrit : > Hi, > > I've got a clients table in PostgreSQL. Each client has different > documents (more than 7.000 files in total in different extensions JPG, > XLS,DOC,PDF…) stored in a bytea field in another PostgreSQL Table. My > intention is to create a form using Microsoft Access (or some other > software) to be able to manage those files (Add more files, modify > files or delete files). > > What should I do in order Access could understand (open) each file or > store new files? I do this from a browser now, but the general idea is: use this table structure: create table ( id_document serial, content bytea, extension text) In your form, display a list of links, one for each file, pointing to a directory you can write to, with the file's id and extension. So if the file id is 7001 and the extension is pdf, the path is: your/directory/7001.pdf Write the file's data to disk, obviously using the same name for the file as in the link, once the user clicks on the link, using the on_click event: the file will be served to the user. -- Salutations, Vincent Veyron http://marica.fr/site/demonstration Progiciel de gestion des sinistres assurance et des dossiers contentieux pour le service juridique -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Open bytea files (images, docs, excel, PDF) stored in Postgresql using a Microsoft Access form
Ok. I got the general idea. Thanks Vincent. My next question is: How do I convert the bytea content stored in Postgresql into the real file so I can put it in a local directory? (and the opposite process: store a file on a path to a postgresql database) I know this could be more an access issue but I'm quite newbie... Regards, -Mensaje original- De: Vincent Veyron [mailto:vv.li...@wanadoo.fr] Enviado el: viernes, 07 de junio de 2013 14:41 Para: Aitor Gil Martin CC: pgsql-general@postgresql.org Asunto: Re: [GENERAL] Open bytea files (images, docs, excel, PDF) stored in Postgresql using a Microsoft Access form Le vendredi 07 juin 2013 à 11:35 +0200, Aitor Gil Martin a écrit : > Hi, > > I,ve got a clients table in PostgreSQL. Each client has different > documents (more than 7.000 files in total in different extensions JPG, > XLS,DOC,PDF...) stored in a bytea field in another PostgreSQL Table. My > intention is to create a form using Microsoft Access (or some other > software) to be able to manage those files (Add more files, modify > files or delete files). > > What should I do in order Access could understand (open) each file or > store new files? I do this from a navigator now, but the general idea is : use this table structure: create table ( id_document serial, content bytea, extension text) In your form, display a list of links, one for each file, pointing to a directory you can write to, with the file's id and extension. So if file id is 7001, extension is pdf, path is : your/directory/7001.doc Write the file's data to disk, obviously using the same name for the file as in the link, once the user clicks on the link, using the on_click event : file will be served to the user. -- Salutations, Vincent Veyron http://marica.fr/site/demonstration Progiciel de gestion des sinistres assurance et des dossiers contentieux pour le service juridique -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Open bytea files (images, docs, excel, PDF) stored in Postgresql using a Microsoft Access form
Hello 2013/6/7 Aitor Gil Martin : > Ok. I got the general idea. Thanks Vincent. > My next question is: > How do I convert the bytea content stored in Postgresql into the real file so > I can put it in a local directory? (and the opposite process: store a file on > a path to a postgresql database) > I know this could be more an access issue but I'm quite newbie... > Regards, some useful links, you should to use lo_import, lo_exports functions. http://snipplr.com/view/62576/convert-oid-to-bytea/ http://postgresql.1045698.n5.nabble.com/GENERAL-Large-Object-to-Bytea-Conversion-td1870627.html http://www.postgresql.org/docs/9.1/static/lo-funcs.html Regards Pavel > > -Mensaje original- > De: Vincent Veyron [mailto:vv.li...@wanadoo.fr] > Enviado el: viernes, 07 de junio de 2013 14:41 > Para: Aitor Gil Martin > CC: pgsql-general@postgresql.org > Asunto: Re: [GENERAL] Open bytea files (images, docs, excel, PDF) stored in > Postgresql using a Microsoft Access form > > Le vendredi 07 juin 2013 à 11:35 +0200, Aitor Gil Martin a écrit : >> Hi, >> >> I,ve got a clients table in PostgreSQL. Each client has different >> documents (more than 7.000 files in total in different extensions JPG, >> XLS,DOC,PDF...) stored in a bytea field in another PostgreSQL Table. My >> intention is to create a form using Microsoft Access (or some other >> software) to be able to manage those files (Add more files, modify >> files or delete files). >> >> What should I do in order Access could understand (open) each file or >> store new files? > > I do this from a navigator now, but the general idea is : > > use this table structure: > > create table ( > id_document serial, > content bytea, > extension text) > > In your form, display a list of links, one for each file, pointing to a > directory you can write to, with the file's id and extension. So if file > id is 7001, extension is pdf, path is : your/directory/7001.doc > > Write the file's data to disk, obviously using the same name for the > file as in the link, once the user clicks on the link, using the > on_click event : file will be served to the user. > > > -- > Salutations, Vincent Veyron > http://marica.fr/site/demonstration > Progiciel de gestion des sinistres assurance et des dossiers contentieux pour > le service juridique > > > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
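Building on the links above, a server-side sketch of writing a single bytea value out to a file with the large-object functions (the table and column names are hypothetical, matching the structure sketched earlier; lo_export writes a file on the database server and requires superuser):

DO $$
DECLARE
    doc   bytea;
    loid  oid;
    fd    integer;
BEGIN
    SELECT content INTO doc FROM documents WHERE id_document = 7001;

    loid := lo_create(0);             -- create an empty large object
    fd   := lo_open(loid, 131072);    -- 131072 = INV_WRITE
    PERFORM lowrite(fd, doc);         -- copy the bytea into the large object
    PERFORM lo_close(fd);

    PERFORM lo_export(loid, '/tmp/7001.pdf');   -- path on the server, not the client
    PERFORM lo_unlink(loid);          -- drop the temporary large object
END
$$;

For the opposite direction, lo_import('/path/to/file') returns the oid of a new large object whose contents can then be copied into a bytea column, as described in the links above.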
Re: [GENERAL] compiling postgresql 9.2.4 on fedora 17 64 bit takes very long time
c k writes: > Yesterday we configured a new server with Fedora 17 64 bit and updated it's > kernel to 3.8. I have to compile and install postgresql with python. So I > executed commands > ./configure --with-python > make > but make is running for last 16 hours and still not completed. It is > showing same messages again and again like checking pthreads.h usability > etc. I have downloaded latest source from postgresql website. Wild guess, but: check your system clock, maybe it's set far in the past. There are places where the build will act strange if the source file timestamps are newer than the system clock reading. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Full text search, SQL functions, and the planner
Marc Dahn writes: > So -- is there a way to let the planner look through the CASE? No. It would actually be wrong to simplify the expression in the way you're imagining, since "1 = case when bool_expr then 1 else 0 end" does not give the same result as the plain bool_expr if the latter yields null. If you're sufficiently desperate, though, you might consider some hack like this: regression=# CREATE OR REPLACE FUNCTION b_hasword(haystack TEXT, needle TEXT) RETURNS boolean AS $func$ SELECT to_tsvector('english', $1) @@ plainto_tsquery($2) $func$ LANGUAGE SQL STABLE; CREATE FUNCTION regression=# create function inteqbool(int,bool) returns bool as regression-# $$select $1::bool = $2 $$ language sql stable; CREATE FUNCTION regression=# CREATE OPERATOR = (procedure = inteqbool, leftarg=int, rightarg=bool); CREATE OPERATOR regression=# explain select * from resource where 1=b_hasword(res_title, 'optical'); QUERY PLAN Bitmap Heap Scan on resource (cost=4.20..14.38 rows=7 width=32) Recheck Cond: (to_tsvector('english'::regconfig, res_title) @@ plainto_tsquery('optical'::text)) -> Bitmap Index Scan on resource_to_tsvector_idx (cost=0.00..4.20 rows=7 width=0) Index Cond: (to_tsvector('english'::regconfig, res_title) @@ plainto_tsquery('optical'::text)) (4 rows) which relies on the fact that the planner *will* simplify "boolexpr = true" to just "boolexpr", so that after a couple of rounds of inlining and simplification we get to just the bare @@ expression. But aside from the time spent doing that, this approach could have unpleasant side effects in the form of causing "int = bool" expressions to be accepted generally, thus masking errors. Might be better to fix your client-side code. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] PostgreSQL Synchronous Replication in production
On Fri, Jun 7, 2013 at 3:22 AM, Albe Laurenz wrote: > Colin Sloss wrote: >> I have been testing the differences between asynchronous and synchronous hot >> standby streaming >> replication on PostgreSQL 9.2.4. There is some push towards synchronous >> replication, but I am finding >> some serious problems, and wonder how other people deal with them. >> >> Action: >> The Slave is unreachable (postgres is stopped, or machine is turned off) >> >> Result: >> Transactions stay open, and add up until connection is allowed between >> Master and Slave again. >> >> My guess: >> Create a script that detects when transactions are being opened, but not >> committed on slave. Alter >> postgresql.conf to comment out synchronous_standby_names and pg_ctl reload. >> This should boot the >> server back to asynchronous, more or less. >> >> Caveats: >> I haven't tested this yet, and I'm not sure what would happen to the >> transactions that started while >> synchronous replication was active. Any guesses? >> >> The whole idea of my solution was to have no single point of failure. This >> seems to create two >> exclusive points of failure, each needing a completely separate reaction. My >> original proposal was >> asynchronous replication, with xlogs being stored on shared storage, and >> DRBD replication to prevent >> it from being a single point of failure. I have never seen it go over 15kbs >> behind in my setup, which >> still results in a very speedy failover. Is it just me, or is that seeming >> better than just >> synchronous replication? >> >> Another caveat I found is that setting up slaves becomes more complicated. >> You have to set up the >> Master in asynchronous style, and then switch it to synchronous only when >> the timing is right. >> Otherwise the transactions will sit there until everything is ready. >> >> Sorry for the onslaught of questions, I don't expect all of them satisfied. >> Please share any >> resolutions to these issues which you guys have discovered. > > One simple guideline first: > If you use synchronous replication, you'll have to have at least two standby > servers or the overall availability of your system will suffer. > > The differences between synchronous and asynchronous replication are mostly: > a) With synchronous replication you cannot lose a committed transaction >during failover. > b) Synchronous replication will slow down your system; the higher the >network latency between the servers, the slower it will get. > > You should work on getting the requirements defined: > - In the case of failover, can you afford to lose a few committed > transactions? > - Are you ready to pay the price for synchronous replication > (second standby if you don't want availability to suffer, slower > database system, more complicated setup and failover procedures)? > > For synchronous replication, keep the servers close together with > a strong network inbetween. > To protect against catastrophes (if that's a requirement), you should > use another asynchronous standby in a distant location. yeah -- well put. Synchronous replication exists because in some cases even the loss of a single transaction is unacceptable. So it's not really fair to compare vs asynchronous which is more of a 'best effort' system; it in no way guarantees that every transaction is recoverable. That small fudge allows for big optimizations in terms of process simplicity and performance. 
merlin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Function tracking
On 6/7/2013 6:30 AM, Rebecca Clarke wrote: I'm looking for suggestions on the best way to track the updates to a function. We have two databases, Dev & Live, so I want to update Live with just the functions that have been modified in the DEV databas3e. Is there another, easier way to track the updates than manually recording it in a document? (I'm trying to eliminate human interference). Script the changes out and put them in source control. The small amount of effort and discipline required will pay off big time. -- Stephen -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Function tracking
2013/6/7 Rebecca Clarke : > Hi all > > I'm looking for suggestions on the best way to track the updates to a > function. > > We have two databases, Dev & Live, so I want to update Live with just the > functions that have been modified in the DEV databas3e. > Is there another, easier way to track the updates than manually recording it > in a document? (I'm trying to eliminate human interference). You might want to take a look at Sqitch ( http://sqitch.org/ ), which is a little complex to set up, but works along similar lines to GIT. As others have said, you definitely need to use some kind of source control to track changes. With the systems I've managed, I've ensured that releases are tagged in the source control system with a release number, and that database script files exist for each release (if needed) which are applied when the release is rolled out to the respective environment. It's still a bit of a manual process and requires a bit of (self) discipline, but has worked quite well for me. Regards Ian Barwick -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Function tracking
Rebecca Clarke wrote on 07.06.2013 12:30: I'm looking for suggestions on the best way to track the updates to a function. We have two databases, Dev & Live, so I want to update Live with just the functions that have been modified in the DEV database. Is there another, easier way to track the updates than manually recording it in a document? (I'm trying to eliminate human interference). We are pretty happy using Liquibase for all our schema migrations. For stored procedures/functions it's best to use a changeSet that includes a SQL file and is marked as runOnChange=true. The Liquibase changelog files are then stored in Subversion. A little shell script applies the changes to any environment we want. Thomas -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
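A minimal sketch of what such a changeSet can look like using Liquibase's SQL-formatted changelog (the author, id and function body are placeholders):

--liquibase formatted sql

--changeset rebecca:update_my_function runOnChange:true splitStatements:false
CREATE OR REPLACE FUNCTION my_function()
RETURNS void AS $$
BEGIN
    -- the real function body goes here; runOnChange:true means the changeset
    -- is re-applied whenever the contents of this file change
    NULL;
END;
$$ LANGUAGE plpgsql;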
Re: [GENERAL] Slave promotion failure
Le 2013-06-07 à 07:00, François Beausoleil a écrit : > > Le 2013-06-06 à 18:40, Michael Paquier a écrit : > >> On Fri, Jun 7, 2013 at 1:37 AM, François Beausoleil >> wrote: >> I can't seem to promote the slave: >> >> $ sudo -u postgres touch /var/lib/postgresql/9.1/main/recovery.done >> # log is silent >> This has no effect. recovery.conf is renamed to recovery.done internally by >> the server. If recovery.done is present in data folder with recovery.conf at >> the moment of promotion, recovery.done is removed before file renaming. What >> you can do to use a trigger file for promotion is setting up trigger_file in >> recovery.conf, then promotion will be kicked once file has been created. > > I believe I know what my mistake is: I set trigger_file to > /var/lib/postgresql/9.1/main/recovery.done -- and PostgreSQL doesn't seem to > like that name. I should set to another name and retry. > > It's still strange that pg_ctl promote didn't work though. Maybe because > recovery.done existed at the time I tried. > > I'll try again today, with better names. Okay, here's my new recovery.conf: standby_mode = on restore_command = '/usr/local/omnipitr/bin/omnipitr-restore -D /var/lib/postgresql/9.1/main/ --source gzip=/var/backups/seevibes/wal/dbanalytics.production/ --remove-unneeded --temp-dir /var/tmp/omnipitr -l /var/log/omnipitr/restore.log --error-pgcontroldata hang --pgcontroldata-path /usr/lib/postgresql/9.1/bin/pg_controldata "%f" "%p"' trigger_file = '/var/lib/postgresql/9.1/main/trigger-promotion' archive_cleanup_command = '/usr/local/omnipitr/bin/omnipitr-cleanup --log /var/log/omnipitr/cleanup.log --archive gzip=/var/backups/seevibes/wal/dbanalytics.production "%r"' Notice trigger_file has a better name. I touch the file using: sudo -u postgres touch /var/lib/postgresql/9.1/main/trigger-promotion and nothing happens: no messages appear in the log, PostgreSQL continues to apply WAL records. I've just retried pg_ctl promote, and that too didn't do anything. I'm really at a loss to explain what happens. Bye, François smime.p7s Description: S/MIME cryptographic signature
Re: [GENERAL] Function tracking
> From: Pavel Stehule > To: Rebecca Clarke > Cc: pgsql-general@postgresql.org > Sent: Friday, 7 June 2013, 11:44 > Subject: Re: [GENERAL] Function tracking > > Hello > > 2013/6/7 Rebecca Clarke : >> Hi all >> >> I'm looking for suggestions on the best way to track the updates to a >> function. >> >> We have two databases, Dev & Live, so I want to update Live with just > the >> functions that have been modified in the DEV database. >> Is there another, easier way to track the updates than manually recording > it >> in a document? (I'm trying to eliminate human interference). >> > > There are a few tools > > http://stackoverflow.com/questions/4804779/how-to-check-difference-between-two-databases-in-postgressql > http://pgdiff.sourceforge.net/ > > But I prefer editing files for storing schema and function > definitions. And I use git. I dislike modifying objects directly via > tools like pgAdmin and similar. > I agree, things can get a bit chaotic with everyone using pgAdmin. We do something similar with a set of script files in source control. In addition, some sort of automated deployment process helps. My solution is probably overkill, but we have to deploy over a number of Slony nodes in a big two-phase commit. I have a controlled deployment process that checks the changes against a small list of things I don't want the devs doing, checks for any errors by testing against a special clone, and then records the actual effects of the changes in the scripts (i.e. drop cascaded etc.) before anything is actually deployed. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Function tracking
> From: Rebecca Clarke >To: pgsql-general@postgresql.org >Sent: Friday, 7 June 2013, 11:30 >Subject: [GENERAL] Function tracking > > > >Hi all > > >I'm looking for suggestions on the best way to track the updates to a function. > > >We have two databases, Dev & Live, so I want to update Live with just the >functions that have been modified in the DEV databas3e. >Is there another, easier way to track the updates than manually recording it >in a document? (I'm trying to eliminate human interference). > -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] What's a good way to improve this query?
Hello again, El jueves 6 de junio de 2013 a las 12:23, Jorge Arévalo escribió: > Hello, > > > El miércoles 5 de junio de 2013 a las 20:31, Paul Ramsey escribió: > > > Well, your objects are larger than the page size, so you're getting them > > out of the toast tables, not directly out of main storage. You may also > > have your type declared as 'main' storage, which means it's zipped up, so > > it's being unzipped before you can access it, that's also an overhead. > > Good to know. I'll check it. Checked. Storage strategy is 'extended'. If I understood well, that means that the data is compressed and, if it's still too long, it's moved out of main storage. Maybe a 'external' strategy would perform faster (no need to unzip), with the penalty of increased storage space. Am I right? At the end, if I want more speed while fetching data, I should change the storage strategy of the raster type. That would probably require rebuild the entire database. I don't think it worths (unless it performs *really* faster) > > > > For metadata retrieval, the thing to do is store the metadata at the head > > of the object (which I'm not looking into pgraster to see if you do, but > > I'll assume for now) and then use PG_DETOAST_DATUM_SLICE in the metadata > > accessor function, so that you only pull the bytes you want, rather than > > detoasting the whole object just to get the header information. > > Ok. I'll check the PostGIS Raster functions too. > Yes, PG_DETOAST_DATUM_SLICE is used in all metadata functions. > > > > You may be causing further pain by having all the metadata functions > > separate, so that in fact the object is being read 9 separate times by your > > different functions. It'll float into cache quickly enough, but the > > uncompression step at each access will still be there. You might want to > > stuff the query through a sampling profiler (OSX Shark!) and confirm, but I > > would guess you'll find a lot of cycles spinning in zlib for this query. > > Yes, you're right. Actually, replacing the calls with a general ST_Metadata > call and unpacking the record at client side, it's really faster. Many thanks! > > And many thanks about Shark too. Looks great. > > Best regards, > Jorge > So, I think the only thing I can do is try to rewrite the queries, if possible. But if I need the metadata of all tiles in order to know individual pixel sizes and dimensions, there's not much space for improvement. I guess things like choosing a tile size that fits in a postgres page size may help. Or creating a cache with precalculated values at client-side. Any other suggestions to improve the experience in data visualization using PostGIS Raster as raster storage system are welcome. Many thanks again, Paul. -- Jorge > > Paul > > > > -- > > Paul Ramsey > > http://cleverelephant.ca > > http://postgis.net > > > > > > On Wednesday, June 5, 2013 at 11:22 AM, Jorge Arévalo wrote: > > > > > Hello, > > > > > > I'm running this PostGIS Raster query > > > > > > select > > > st_scalex(rast), > > > st_scaley(rast), > > > st_skewx(rast), > > > st_skewy(rast), > > > st_width(rast), > > > st_height(rast), > > > rid, > > > st_upperleftx(rast), > > > st_upperlefty(rast), > > > st_numbands(rast) > > > from > > > my_postgis_raster_table > > > > > > > > > > > > I want to remark that, even when 'rast' is a complex type and can be > > > really big, I'm getting just metadata. Not the whole 'rast' column. > > > Anyway, the average dimensions of a 'rast' column in like 600x400 pixels > > > (8 bits per pixel). 
So, not so big (about 234 KB per rast object). > > > > > > My table has 1257 rows, and this query takes about 45 secs to execute > > > (45646 msecs). I think it's too slow. I'm just getting metadata, not the > > > whole 'rast' object, as said. > > > > > > This is the explain analyze output > > > > > > Seq Scan on my_postgis_raster_table (cost=0.00..198.85 rows=1257 > > > width=36) (actual time=86.867..51861.495 rows=1257 loops=1) > > > Total runtime: 51863.919 ms > > > > > > > > > > > > So, basically a sequential scan. As expected, I guess (I'm not a postgres > > > expert, so sorry if I'm talking nonsense) > > > > > > I've calculated the effective transfer rate for this table > > > > > > SELECT pg_size_pretty(CAST(pg_relation_size('my_postgis_raster_table') / > > > 45646 * 1000 as int8)) AS bytes_per_second; > > > > > > As it's 27KB. Isn't it a slow rate? Is there any kind of index I could > > > create to speed this query? Maybe use some kind of cache system? > > > > > > Many thanks in advance, > > > > > > -- > > > Jorge Arevalo > > > Freelance developer > > > > > > http://www.krop.com/jorgearevalo > > > http://about.me/jorgeas80 > > > > > > Enviado con Sparrow (http://www.sparrowmailapp.com/?sig) > > > > > > > > > > > > > > > -- > > > Sent via pgsql-general mailing list (pgsql-general
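For reference, a per-column variant of the storage change discussed above (switching the rast column from the default 'extended' to 'external', i.e. stored out of line but not compressed) is a single ALTER rather than a database rebuild; note it only affects newly written values:

ALTER TABLE my_postgis_raster_table ALTER COLUMN rast SET STORAGE EXTERNAL;
-- existing rows keep their old representation until they are rewritten,
-- e.g. by an UPDATE of the column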
Re: [GENERAL] Slave promotion failure
Le 2013-06-07 à 12:00, François Beausoleil a écrit : > > Le 2013-06-07 à 07:00, François Beausoleil a écrit : > >> >> Le 2013-06-06 à 18:40, Michael Paquier a écrit : >> >>> On Fri, Jun 7, 2013 at 1:37 AM, François Beausoleil >>> wrote: >>> I can't seem to promote the slave: >>> >>> $ sudo -u postgres touch /var/lib/postgresql/9.1/main/recovery.done >>> # log is silent >>> This has no effect. recovery.conf is renamed to recovery.done internally by >>> the server. If recovery.done is present in data folder with recovery.conf >>> at the moment of promotion, recovery.done is removed before file renaming. >>> What you can do to use a trigger file for promotion is setting up >>> trigger_file in recovery.conf, then promotion will be kicked once file has >>> been created. >> >> I believe I know what my mistake is: I set trigger_file to >> /var/lib/postgresql/9.1/main/recovery.done -- and PostgreSQL doesn't seem to >> like that name. I should set to another name and retry. >> >> It's still strange that pg_ctl promote didn't work though. Maybe because >> recovery.done existed at the time I tried. >> >> I'll try again today, with better names. > > Okay, here's my new recovery.conf: > > standby_mode = on > restore_command = '/usr/local/omnipitr/bin/omnipitr-restore -D > /var/lib/postgresql/9.1/main/ --source > gzip=/var/backups/seevibes/wal/dbanalytics.production/ --remove-unneeded > --temp-dir /var/tmp/omnipitr -l /var/log/omnipitr/restore.log > --error-pgcontroldata hang --pgcontroldata-path > /usr/lib/postgresql/9.1/bin/pg_controldata "%f" "%p"' > trigger_file = '/var/lib/postgresql/9.1/main/trigger-promotion' > archive_cleanup_command = '/usr/local/omnipitr/bin/omnipitr-cleanup --log > /var/log/omnipitr/cleanup.log --archive > gzip=/var/backups/seevibes/wal/dbanalytics.production "%r"' > > Notice trigger_file has a better name. I touch the file using: > > sudo -u postgres touch /var/lib/postgresql/9.1/main/trigger-promotion > > and nothing happens: no messages appear in the log, PostgreSQL continues to > apply WAL records. > > I've just retried pg_ctl promote, and that too didn't do anything. I'm really > at a loss to explain what happens. I answered my question. I use OmniPITR, and I forgot to include the --finish-recovery flag, pointing to the trigger file. Bye, François smime.p7s Description: S/MIME cryptographic signature
Re: [GENERAL] PostgreSQL Synchronous Replication in production
On Thu, Jun 6, 2013 at 5:23 AM, Richard Huxton wrote: > > If you have a business e.g. selling books or train tickets or some such > then you might decide it's better to have a simpler more robust setup from > the point of view of providing continuous service to end-customers. In the > (hopefully rare) event of a crash irreparably losing some transactions > apologise to your customers and recompense them generously. > Unfortunately you probably no longer know who to apologize to, what for, or how much to compensate them! I guess when you reconcile your cc credits from the cc company recorded in their system to the sales in your recovered system, you will have evidence of the discrepancies. Now I can't use an ecommerce without pondering all the ways something can go wrong, and how to minimize/address them. Cheers, Jeff