Re: [GENERAL] around fields with psql
On 2012-02-10, Steve Clark scl...@netwolves.com wrote: Is there a way with psql to get column output to be data1,data2,...,datan assuming you are trying to be compatible with CSV: copy ( your_query_here ) to stdout with csv header ; -- ⚂⚃ 100% natural -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
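A minimal sketch of the suggestion above, assuming a hypothetical table t with columns data1..data3:

```sql
create table t (data1 text, data2 text, data3 text);
insert into t values ('a', 'b', 'c');

-- CSV output, one header line then one line per row
copy (select * from t) to stdout with csv header;
-- data1,data2,data3
-- a,b,c
```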
Re: [GENERAL] default database selector
On 2012-02-07, Dave Potts dave.po...@pinan.co.uk wrote: I am running Ubuntu 11.04 and have Postgres 8.4 and 9.1 installed. By default, when I run psql it connects to Postgres 8.4; if I set export PGCLUSTER=9.1/main it connects to 9.1. Q. How can I connect to 9.1 by default without having to set PGCLUSTER? I looked in /etc/postgresql-common; there did not seem to be anything obvious in there for choosing which version to use. man 7 postgresql-common
Re: [GENERAL] last entry per person
On 2012-02-03, garry ga...@scholarpack.com wrote: I have a table which holds a user name and their results in exams. There can be multiple entries per user. I am trying to return the last entry for each user. I can get the last entry in the table using the order by/limit method, but how would this be applied per user? My table definition is like the following: gradeid serial primary key, user text, grade char(1), entered timestamp

select distinct on ("user") * from exams order by "user", entered desc;
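A minimal sketch of the DISTINCT ON pattern, assuming a table named exams matching the definition above. Note that DISTINCT ON requires the ORDER BY to lead with the same expression, and that user is a reserved word, so it must be quoted:

```sql
create table exams (
    gradeid serial primary key,
    "user"  text,        -- quoted: user is a reserved word
    grade   char(1),
    entered timestamp
);

insert into exams ("user", grade, entered) values
    ('alice', 'B', '2012-01-10'),
    ('alice', 'A', '2012-02-01'),
    ('bob',   'C', '2012-01-15');

-- one row per user: the most recently entered grade
select distinct on ("user") *
from exams
order by "user", entered desc;
```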
Re: [GENERAL] Let-bindings in SQL statements
On 2012-01-26, David Johnston pol...@yahoo.com wrote: Is it possible to do the equivalent of let-bindings in a pure SQL function? I have a SELECT that invokes now() multiple times. It would be nicer to do it only once and reuse the value. Something like this: There is no need. now() is tagged as stable; it will only be executed once. The planner will figure this out for you.
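A quick way to see this for yourself: now() returns the transaction start time, so every call within a statement (indeed, within a transaction) yields the same value. A small sketch:

```sql
begin;
select now() = now() as same_within_statement;         -- true
select pg_sleep(1);
select now() = transaction_timestamp() as same_in_tx;  -- still true
commit;
```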
Re: [GENERAL] Defining Role Privileges
On 2012-02-08, Carlos Mennens carlos.menn...@gmail.com wrote: ALTER ROLE tom ENCRYPTED PASSWORD 'md5081bea17b5503506d29531af33cc6f4e'; \password tom Is there a downside to using the \password psql command? Is it also encrypted like the statement above? How do you create roles and do you do it manually or have some kind of template? I checked that a few weeks ago: with the latter version (\password), the password is translated into a form similar to the former version, and your chosen new password is sent over the wire encrypted.
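For context, a hedged sketch of what \password effectively does: it computes the MD5 digest of the new password concatenated with the role name on the client side, then issues an ALTER ROLE just like the one quoted above, so the cleartext password never crosses the wire:

```sql
-- \password tom  is roughly equivalent to running (hash computed client side):
ALTER ROLE tom ENCRYPTED PASSWORD 'md5081bea17b5503506d29531af33cc6f4e';
```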
Re: [GENERAL] Let-bindings in SQL statements
On 2012-01-26, David W Noon dwn...@ntlworld.com wrote: Try using CURRENT_TIMESTAMP instead. In fact, CURRENT_TIMESTAMP is more traditional SQL than now(). I don't have an ANSI standard handy, so I cannot be certain when now() was added, if ever; but I have been using CURRENT TIMESTAMP (space instead of vinculum) under DB2 for 20 years or more. The planner will rewrite CURRENT_TIMESTAMP to now() :)
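You can observe the rewrite directly; a small sketch:

```sql
explain verbose select current_timestamp;
-- the plan's output list shows now(), e.g.:
--   Result  ...
--     Output: now()

select current_timestamp = now();  -- true
```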
Re: [GENERAL] easy function or trigger to UPPER() all alpha data
mgo...@isstrucksoftware.net mgo...@isstrucksoftware.net wrote: We need to ensure that our data is in upper case only in the db. Is there an easy way to do this via a function without having to name each column separately? Usually I like to explain why it's not possible before giving the game away, but I see that others have already explained that. Here's a trigger function that should do what you want:

create or replace function upper_row() returns trigger language plpgsql as $$
begin
  -- I consider this a hack. no warranty express or implied
  execute 'select ('|| quote_literal(upper(new::text)) ||'::'||
          quote_ident(TG_TABLE_SCHEMA) ||'.'|| quote_ident(TG_TABLE_NAME) || ').*'
     into new;
  return new;
end;
$$;

What it does is convert NEW into a string, uppercase the string, then convert the string back into a record and put the result back into NEW. I have tested it with ASCII text and it seems to work fine. Any datatypes whose textual form is case sensitive will be affected; numbers and timestamps should be unaffected, but note that this trigger will mangle BYTEA data. Because it uses EXECUTE it's not particularly efficient should you do any bulk updates; other than that the overhead should not be too much.
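A minimal usage sketch for the trigger function above, assuming a hypothetical table people:

```sql
create table people (id int, first text, last text);

create trigger people_upper
    before insert or update on people
    for each row execute procedure upper_row();

insert into people values (1, 'alice', 'smith');
select * from people;   -- text columns come back as ALICE, SMITH
```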
Re: [GENERAL] URGENT: temporary table not recognized?
On 2012-01-06, Phoenix Kiula phoenix.ki...@gmail.com wrote: On Fri, Jan 6, 2012 at 6:53 PM, Steve Crawford scrawf...@pinpointresearch.com wrote: Thanks Steve. The file has 350 million lines. Sed, Awk etc are a little painful when the file is 18GB with that many lines. On files of that size they're a lot nicer than an interactive editor. It's not like you need to find space on the disk for an edited copy:

( echo "copy table_name from stdin other_parameters;"
  cat bigfile | sed script_or_scriptfile
  echo '\.'
) | psql connection_parameters

I'd want Postgresql to ignore the line altogether when something is missing. Is this an option we can use, or are rules hoisted on us? The COPY command is optimised and intended for use with data that is known to be good, or at least acceptable to the database.
Re: [GENERAL] 9.1.1 crash
Mike Blackwell wrote: The following are the relevant log entries from a recent crash of v9.1.1 running on an older RHEL Linux box. This is the first crash we've experienced in a lot of years of running Pg. Any assistance in how to determine what might have caused this is welcome. 2012-02-10 13:55:59 CST [15949]: [37-1] @ LOG: 0: server process (PID 32670) was terminated by signal 11: Segmentation fault [...] It is difficult to find out anything after the crash if the problem cannot be reproduced. If you happen to have changed the core file ulimit setting away from the default of zero, you should have a core file in the data directory which can be used to create a backtrace showing where the server crashed. And even that only really helps with a debug build. Other than that, you could make sure that hard disk and memory have no problems (you write that it is an older box). You can try to find out what the server was doing at the time and whether you can reproduce it. Crashes are also often caused by nonstandard C functions that have been loaded into the database. Yours, Laurenz Albe
Re: [GENERAL] psql latex and newlines
On vr, 2012-02-10 at 19:25 -0500, Bruce Momjian wrote: On Mon, Dec 12, 2011 at 07:15:12PM +0100, Wim Bertels wrote: Hallo, psql latex output format needs to differentiate between a newline and a tabularnewline. The problem arises when you have a field value that contains a newline character: when this field is not the first column, all the data after this newline ends up in the first column. You can try this out by writing a function or table and then adding 'enters' or newlines in the COMMENT on that function or table. The \pset recordsep doesn't solve this, since the mistakes take place within the same record. Can you give me a self-contained test case I can use so I can use it to fix the code? Hi Bruce, I have attached some files: 1 sql file, 1 corresponding output file, 1 full latex file using the output of the above file. I don't see an easy search/replace fix for this problem; maybe using the \multirow and \multicolumn options in latex is the most general solution, as all the others seem to need concrete dimensions (such as width). http://www.google.com/search?client=ubuntu&channel=fs&q=newline+tabular+cell+latex&ie=utf-8&oe=utf-8 http://andrewjpage.com/index.php?/archives/43-Multirow-and-multicolumn-spanning-with-latex-tables.html There is also a problem with long lines, as you can test by uncommenting the t4bruce3 function. Info about the versions:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 10.04.3 LTS
Release:        10.04
Codename:       lucid

wim@zwerfkat:~/wet/gb/postgresql/debug$ psql -V
psql (PostgreSQL) 8.4.10
contains support for command-line editing

mvg,
Wim Bertels

[attachment: full latex file using the output]

\documentclass[12pt,a4paper]{report}
\usepackage[latin1]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\author{Wim Bertels}
\begin{document}

\begin{tabular}{l | l | l}
\textit{Schema} & \textit{Name} & \textit{Description} \\
\hline
latex\_test & t4bruce1 & This function and so on enter\\ another enter,\\ just one more.\\ \\
latex\_test & t4bruce2 & This function and so on enter\\ another enter,\\ just one more,\\ so now have even one more.\\ \\
\end{tabular}
\noindent
% with a \\ replaced by a newline, doesn't help
\begin{tabular}{l | l | l}
\textit{Schema} & \textit{Name} & \textit{Description} \\
\hline
latex\_test & t4bruce1 & This function and so on enter\newline another enter,\\ just one more.\\ \\
latex\_test & t4bruce2 & This function and so on enter\\ another enter,\\ just one more,\\ so now have even one more.\\ \\
\end{tabular}
\noindent
%\begin{tabular}{l | l | l}
%\textit{Schema} & \textit{Name} & \textit{Description} \\
%\hline
%latex\_test & t4bruce1 & This function and so on enter\\ another enter,\\ just one more.\\ \\
%latex\_test & t4bruce2 & This function and so on enter\\ another enter,\\ just one more,\\ so now have even one more.\\ \\
%latex\_test & t4bruce3 & This function and so on enter\\ another enter,\\ just one more,this is a very long line without an enter, just one more,this is a very long line without an enter\\ so now have even one more.\\ \\
%\end{tabular}
%
%\noindent
\end{document}

[attachment: corresponding output file]

DROP SCHEMA
CREATE SCHEMA
SET
CREATE FUNCTION
COMMENT
CREATE FUNCTION
COMMENT
\begin{tabular}{l | l | l}
\textit{Schema} & \textit{Name} & \textit{Description} \\
\hline
latex\_test & t4bruce1 & This function and so on enter\\ another enter,\\ just one more.\\ \\
latex\_test & t4bruce2 & This function and so on enter\\ another enter,\\ just one more,\\ so now have even one more.\\ \\
\end{tabular}
\noindent
(2 rows) \\

[attachment: sql file]

\pset format latex
\o out.tex
DROP SCHEMA IF EXISTS latex_test CASCADE;
CREATE SCHEMA latex_test;
SET SEARCH_PATH TO latex_test;
CREATE OR REPLACE FUNCTION t4bruce1(i integer) RETURNS void AS
$_$
DECLARE
BEGIN
END;
$_$ LANGUAGE 'plpgsql' VOLATILE;
COMMENT ON FUNCTION t4bruce1(i integer) IS
$_$This function and so on enter
another enter,
just one more.
$_$;
CREATE OR REPLACE FUNCTION t4bruce2(v varchar) RETURNS varchar AS
$_$
DECLARE
BEGIN
RETURN v;
END;
$_$ LANGUAGE 'plpgsql' VOLATILE;
COMMENT ON FUNCTION t4bruce2(v varchar) IS
$_$This function and so on enter
another enter,
just one more,
so now have even one more.
$_$;
/*
CREATE OR REPLACE FUNCTION t4bruce3(v varchar) RETURNS varchar AS
$_$
DECLARE
BEGIN
RETURN v;
END;
$_$ LANGUAGE 'plpgsql' VOLATILE;
COMMENT ON FUNCTION t4bruce3(v varchar) IS
$_$This function and so on enter
another enter,
just one more,this is a very long line without an enter, just one more,this is a very long line without an enter
so now have even one more.
$_$;
*/
SELECT n.nspname as Schema,
  p.proname as Name,
  pg_catalog.obj_description(p.oid, 'pg_proc') as Description
FROM pg_catalog.pg_proc p
  LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
  LEFT
Re: [GENERAL] easy function or trigger to UPPER() all alpha data
On Wed, Feb 8, 2012 at 10:51 AM, Andreas Kretschmer akretsch...@spamfence.net wrote: mgo...@isstrucksoftware.net mgo...@isstrucksoftware.net wrote: We need to ensure that our data is in upper case only in the db. Is there an easy way to do this via a function without having to name each column separately? You can define a TRIGGER for such tasks (before insert or update), but you have to name each column (maybe not within triggers written in pl/perl, i'm not sure ...) you can skirt the restriction with some hstore (ab)use...

create or replace function all_upper() returns trigger as $$
begin
  select populate_record(new, hstore(array_agg(key), array_agg(upper(value))))
    into new
    from each(hstore(new));
  return new;
end;
$$ language plpgsql;

create trigger on_foo_insert before insert on foo
    for each row execute procedure all_upper();

postgres=# insert into foo values (1, 'abc', 'def');
INSERT 0 1
Time: 3.388 ms
postgres=# select * from foo;
 a |  b  |  c
---+-----+-----
 1 | ABC | DEF
(1 row)

of course, if some non text datatype is sensitive to case in its textual formatting, this might break. merlin
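The example above assumes the hstore module is installed and a table like the following (the column names a, b, c are implied by the sample output):

```sql
create extension hstore;   -- 9.1+; on 9.0 run the contrib install script instead
create table foo (a int, b text, c text);
```

hstore(new) turns the row into key/value pairs, each() unnests them, and populate_record() folds the uppercased values back into the row type, so no column is ever named explicitly.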
Re: [GENERAL] easy function or trigger to UPPER() all alpha data
On Mon, Feb 13, 2012 at 11:42 AM, mgo...@isstrucksoftware.net wrote: Thank you very much. This is most helpful. you're welcome. Keep in mind the hstore features you need start with postgres 9.0, and it's an extension (aka contrib) you have to add to the database. (also as Andreas noted, please try to keep responses on-list). merlin
Re: [GENERAL] around fields with psql
On Sat, Feb 11, 2012 at 2:03 AM, Jasen Betts ja...@xnet.co.nz wrote: On 2012-02-10, Steve Clark scl...@netwolves.com wrote: Is there a way with psql to get column output to be data1,data2,...,datan assuming you are trying to be compatible with CSV: copy ( your_query_here ) to stdout with csv header ; yeah -- that's the best way if you want actual csv; from psql you'd probably want to do \copy:

postgres=# \copy (select 1, '', 'ab,c') to stdout csv header
?column?,?column?,?column?
1,"","ab,c"

note that per csv rules columns are only quoted when needed to prevent ambiguous parsing. also, double quotes in your field will be escaped. merlin
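A small sketch of the quoting and escaping rules mentioned above: plain fields stay unquoted, fields containing the delimiter are quoted, and embedded double quotes are escaped by doubling:

```sql
\copy (select 'plain', 'has,comma', 'has"quote') to stdout csv
-- plain,"has,comma","has""quote"
```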
Re: [GENERAL] around fields with psql
On 02/13/2012 02:13 PM, Merlin Moncure wrote: On Sat, Feb 11, 2012 at 2:03 AM, Jasen Betts ja...@xnet.co.nz wrote: On 2012-02-10, Steve Clark scl...@netwolves.com wrote: Is there a way with psql to get column output to be data1,data2,...,datan assuming you are trying to be compatible with CSV: copy ( your_query_here ) to stdout with csv header ; yeah -- that's the best way if you want actual csv; from psql you'd probably want to do \copy: postgres=# \copy (select 1, '', 'ab,c') to stdout csv header ?column?,?column?,?column? 1,"","ab,c" note that per csv rules columns are only quoted when needed to prevent ambiguous parsing. also, double quotes in your field will be escaped. merlin

Thanks to all that replied.
--
Stephen Clark
*NetWolves*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com
[GENERAL] What is the life of a postgres back end process?
Are postgres back end processes connection specific? In other words, can we assume / trust that they will be terminated and cleaned up when we close a connection, and that they will not live on and be reused by other connections? What is a good way to test that this is the case, in a way that accounts for differences in load?
--
Eliot Gable
We do not inherit the Earth from our ancestors: we borrow it from our children. ~David Brower
I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime. ~David Brower
Esse oportet ut vivas, non vivere ut edas. (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero
Re: [GENERAL] What is the life of a postgres back end process?
On Mon, 2012-02-13 at 17:30 -0500, Eliot Gable wrote: Are postgres back end processes connection specific? In other words, can we assume / trust that they will be terminated and cleaned up when we close a connection and that they will not live on and be reused by other connections? Yes, one backend per connection. When you close the connection, the backend process should go away. Under some circumstances, that might not always happen immediately if the backend is in the middle of doing some work. Regards, Jeff Davis
Re: [GENERAL] What is the life of a postgres back end process?
On 02/13/2012 02:45 PM, Jeff Davis wrote: On Mon, 2012-02-13 at 17:30 -0500, Eliot Gable wrote: Are postgres back end processes connection specific? In other words, can we assume / trust that they will be terminated and cleaned up when we close a connection and that they will not live on and be reused by other connections? Yes, one backend per connection. When you close the connection, the backend process should go away. Under some circumstances, that might not always happen immediately if the backend is in the middle of doing some work. Regards, Jeff Davis But to amplify on Jeff's comment, he is referring to the actual final connection to the PostgreSQL server. The OP did not actually specify who we are and what led to the question. An end client completing its work and need for a connection (or even specifically terminating the connection) may not actually release and close the connection to the server for a variety of reasons: PHP persistent connections, Java connection pooling, or one or more layers of external connection pooling services like pgbouncer are just three common scenarios. If you suspect a connection is not being closed properly you can run select * from pg_stat_activity; to view current database connections. To view information about connections from other than the current user you will need database superuser privileges. Cheers, Steve
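A quick sketch of how one might verify the one-backend-per-connection behaviour from two psql sessions:

```sql
-- session 1: note the pid of this connection's backend process
select pg_backend_pid();

-- session 2: the pid from session 1 is listed while it is connected
select procpid from pg_stat_activity;   -- column is renamed to pid in 9.2+

-- after session 1 disconnects, re-run the query in session 2:
-- the old pid is gone, and a fresh connection gets a new pid
```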
Re: [GENERAL] Why this regexp matches?!
Alban Hertroys haram...@gmail.com writes: On 4 Feb 2012, at 9:46, hubert depesz lubaczewski wrote: select 'depesz depeszx depesz' ~ E'^(.*)( \\1)+$'; Apparently something odd is going on between the wildcard, the repetitive part and the back-reference. That could be just us not seeing what's wrong with the expression, or be an actual bug. FYI, I've made some progress on characterizing the cause of this bug, as per comments at the upstream bug report: https://sourceforge.net/tracker/index.php?func=detail&aid=1115587&group_id=10894&atid=110894 There are actually two distinct bugs involved, and I don't yet have a patch for the case depesz illustrates. regards, tom lane