[SQL] How compare current_setting(..) ?
Hi there,

I'd like to check that my log_line_prefix is set to '%t'. I suppose I can check it with the following statement:

SELECT current_setting( 'log_line_prefix' )
WHERE current_setting( 'log_line_prefix' ) != '%t'

But it returns a row every time, with '%t', even when log_line_prefix is set to '%t'. What's wrong?

TIA,
Sabin
Re: [SQL] How compare current_setting(..) ?
On Thu, Feb 22, 2007 at 12:25:42PM +0200, Sabin Coanda wrote:
> I'd like to check that my log_line_prefix is set to '%t'.
> I suppose I can check it with the following statement:
> SELECT current_setting( 'log_line_prefix' )
> WHERE current_setting( 'log_line_prefix' ) != '%t'
>
> But it returns a row every time, with '%t', even when log_line_prefix
> is set to '%t'.
> What's wrong?

Works here. Might log_line_prefix have leading or trailing spaces? What does the following return?

SELECT '<' || current_setting('log_line_prefix') || '>',
       length(current_setting('log_line_prefix'));

--
Michael Fuhr
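If stray whitespace turns out to be the culprit, trimming the setting before comparing it would make the check tolerant of it. A minimal sketch (not from the original thread), assuming the built-in btrim() is acceptable here:

SELECT current_setting( 'log_line_prefix' )
WHERE btrim(current_setting( 'log_line_prefix' )) != '%t';

This returns a row only when the setting, ignoring leading and trailing spaces, differs from '%t'.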
[SQL] pg_dump fails (timestamp out of range)
Apologies for cross-posting (already sent to pgadmin-support), but I am totally stuck with this:

I run an ecommerce system on a webserver, which I want to move to a different machine. However, I am stalled because pg_dump fails with the following error:

pg_dump: ERROR: timestamp out of range
pg_dump: SQL command to dump the contents of table "server_hit_bin" failed: PQendcopy() failed.
pg_dump: Error message from server: ERROR: timestamp out of range
pg_dump: The command was: COPY public.server_hit_bin (server_hit_bin_id, content_id, internal_content_id, hit_type_id, server_ip_address, server_host_name, bin_start_date_time, bin_end_date_time, number_hits, total_time_millis, min_time_millis, max_time_millis, last_updated_stamp, last_updated_tx_stamp, created_stamp, created_tx_stamp) TO stdout;

The table contains about 50,000 records; 6 of its columns are timestamps.

SELECT bin_end_date_time fails with the same error. SELECT last_updated_tx_stamp fails with a different error: "server closed the connection unexpectedly".

The server has crashed many times, presumably due to a hardware fault, so I presume that the table got corrupted.

Any ideas what I can do to dump/restore the database would be much appreciated.

--
Regards,
Tarlika Elisabeth Schmitz
Re: [SQL] how to generate a list of distinct scalar values from a column which type is array
On Wed, 21 Feb 2007 19:21:09 +0100, Sergio Andreozzi wrote:
> Dear all,
>
> given a column whose type is for instance varchar(20)[], is there any SQL
> command that lets me generate the list of distinct scalar values?
>
> e.g.:
> col1
> row 1: (aaa, bb, c)
> row 2: (, )
> row 3: (aaa, )
>
> the query should return:
>
> aaa
> bb
> c
>
> if not, I guess the approach is to use a stored procedure. Any code
> snippet/suggestion?

You can write a set-returning function that, for each row, splits the array into its elements with a loop from array_lower() to array_upper() and returns each element. Then you can simply run SELECT DISTINCT * FROM your_function().

I hope this may help you.

Andreas
--
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net
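As a rough sketch of such a set-returning function (not from the original message; the table "a" and array column "c" are borrowed from later examples in this thread, the function name a_elements is made up, and PL/pgSQL is assumed to be installed):

CREATE OR REPLACE FUNCTION a_elements()
RETURNS SETOF varchar AS $$
DECLARE
    r record;
    i integer;
BEGIN
    FOR r IN SELECT c FROM a LOOP
        -- walk every element of the array in this row
        FOR i IN coalesce(array_lower(r.c, 1), 1) .. coalesce(array_upper(r.c, 1), 0) LOOP
            RETURN NEXT r.c[i];
        END LOOP;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

SELECT DISTINCT v FROM a_elements() AS v WHERE v IS NOT NULL;

The coalesce() calls simply make the inner loop a no-op when a row holds an empty or NULL array.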
Re: [SQL] [GENERAL] Easy question, about the numeric format
On Thu, 22 Feb 2007 12:20:12 +0100, Rafa Comino wrote:
> Hi everybody,
> I have this query:
> SELECT 20.00::numeric(38,2)
> and postgre gives me 20, I need postgre to give me 20.00

Works for me (version 8.1):

test=*# SELECT 20.00::numeric(38,2);
 numeric
---------
   20.00
(1 row)

test=*# SELECT 20.00::numeric(38,20);
         numeric
------------------------
 20.00000000000000000000
(1 row)

Btw.: http://stoned.homeunix.org/~itsme/postgre/

Andreas
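If a client or driver keeps stripping the trailing zeros despite the numeric(38,2) cast, formatting the value as text pins the display down. A small illustrative sketch, not from the original thread:

SELECT to_char(20.00::numeric(38,2), 'FM999999990.00');

This yields the text '20.00' regardless of how the client chooses to render numeric values.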
Re: [SQL] how to generate a list of distinct scalar values from a column which type is array
On Wed, 21 Feb 2007 19:21:09 +0100, Sergio Andreozzi wrote:
> Dear all,
>
> given a column whose type is for instance varchar(20)[], is there any SQL
> command that lets me generate the list of distinct scalar values?
>
> e.g.:
> col1
> row 1: (aaa, bb, c)
> row 2: (, )
> row 3: (aaa, )
>
> the query should return:
>
> aaa
> bb
> c

Okay, next solution:

test=*# select * from a;
     c
------------
 {aaa,bb,c}
 {,}
 {aaa,}
(3 rows)

test=*# select distinct c[s] from a, generate_series(1,3) s where c[s] is not null;
  c
-----
 aaa
 bb
 c


(5 rows)

You need to know the greatest upper bound of the arrays, in this case 3, for the generate_series() call.

Andreas
Re: [SQL] how to generate a list of distinct scalar values from a column which type is array
On Thu, 22 Feb 2007 14:25:36 +0100, A. Kretschmer wrote:
> test=*# select * from a;
>      c
> ------------
>  {aaa,bb,c}
>  {,}
>  {aaa,}
> (3 rows)
>
> test=*# select distinct c[s] from a, generate_series(1,3) s where c[s] is not null;
>   c
> -----
>  aaa
>  bb
>  c
>
>
> (5 rows)
>
> You need to know the greatest upper bound of the arrays, in this case 3,
> for the generate_series() call.

select distinct c[s]
from a, generate_series(1, (select max(array_upper(c,1)) from a)) s
where c[s] is not null;

;-)

Andreas
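For what it's worth, on PostgreSQL 8.4 and later (newer than the releases discussed in this thread) the built-in unnest() function avoids the generate_series trick entirely. A sketch using the same table and column names:

SELECT DISTINCT u
FROM (SELECT unnest(c) AS u FROM a) AS t
WHERE u IS NOT NULL;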
Re: [SQL] How compare current_setting(..) ?
Sorry, it was my fault. The setting was '%t' followed by a space. I fixed it and it works well.

Many thanks!

"Michael Fuhr" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> On Thu, Feb 22, 2007 at 12:25:42PM +0200, Sabin Coanda wrote:
>> I'd like to check that my log_line_prefix is set to '%t'.
>> I suppose I can check it with the following statement:
>> SELECT current_setting( 'log_line_prefix' )
>> WHERE current_setting( 'log_line_prefix' ) != '%t'
>>
>> But it returns a row every time, with '%t', even when log_line_prefix is
>> set to '%t'.
>> What's wrong?
>
> Works here. Might log_line_prefix have leading or trailing spaces?
> What does the following return?
>
> SELECT '<' || current_setting('log_line_prefix') || '>',
>        length(current_setting('log_line_prefix'));
>
> --
> Michael Fuhr
Re: [SQL] pg_dump fails (timestamp out of range)
T E Schmitz <[EMAIL PROTECTED]> writes:
> However, I am stalled because pg_dump fails with the following error:
> pg_dump: ERROR: timestamp out of range
> pg_dump: SQL command to dump the contents of table "server_hit_bin"
> failed: PQendcopy() failed.

You should treat this as a corrupt-data exercise: you need to identify and fix (or delete) the offending row(s). In this case you might try tests like "bin_end_date_time > '1 Jan '" and so on to see if you can determine exactly which rows are bad.

			regards, tom lane
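A sketch of the kind of range test Tom suggests, for anyone in the same situation (illustrative only; the cutoff dates and the ctid in the commented DELETE are placeholders, not values from this thread). Selecting only the key columns keeps the damaged timestamp out of the output:

SELECT ctid, server_hit_bin_id
FROM server_hit_bin
WHERE bin_end_date_time < '1990-01-01'
   OR bin_end_date_time > '2038-01-01';

-- once a damaged row is identified, it can be removed by its ctid, e.g.
-- DELETE FROM server_hit_bin WHERE ctid = '(12345,6)';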
Re: [SQL] DISTINCT ON not working... RESOLVED
This has been resolved -- although I still think it may be a bug in Postgres. I'm confused as hell, it's Friday, and it's hot though... So I'll have to think about it over the weekend and let you know if I can make sense of it.

Thanks all for your suggestions.

Cheers,
~p

-----Original Message-----
From: Phillip Smith [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, 20 February 2007 09:25
To: 'pgsql-sql@postgresql.org'
Subject: DISTINCT ON not working...?

Hi all,

Strange one - I have a nightly export / import routine that exports from one database and imports to another. It has been working fine for several months, but last night it died on a unique constraint. To cut out all the details, this is the code that is causing the problem:

SELECT DISTINCT ON (ean)
       code,
       CASE WHEN ean IS NULL OR valid_barcode(ean) = false
            THEN null
            ELSE ean
       END AS ean
FROM   TMPTABLE
WHERE  code NOT IN (SELECT code FROM stock_deleted)
AND    ean IS NOT NULL

That is the code that generates the error on the unique constraint against the ean column. If I play with that and run this:

SELECT DISTINCT ON (ean)
       CASE WHEN ean IS NULL OR valid_barcode(ean) = false
            THEN null
            ELSE ean
       END AS ean,
       count(*)
FROM   TMPTABLE
WHERE  code NOT IN (SELECT code FROM stock_deleted)
AND    ean IS NOT NULL
GROUP BY ean

I get several thousand rows returned, all with a count(*) of 1, except one row:

3246576919422	2

DISTINCT ON should eliminate one of those rows that is making that 2 - as I said, it's been working fine for several months, and it is still doing it correctly for approximately 100 other rows that have duplicate ean codes. Can anyone give me a hand to work out why this one is doubling up?!

Cheers,
~p
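For anyone chasing the same symptom, a diagnostic along these lines can show whether the two offending source rows really carry identical ean values or merely values that print the same (illustrative only; TMPTABLE, code, ean and valid_barcode() are taken from the message above, and the literal is the ean it reports):

SELECT code,
       '<' || ean || '>'  AS raw_ean,   -- delimiters expose stray whitespace
       length(ean)        AS len,
       valid_barcode(ean) AS valid
FROM   TMPTABLE
WHERE  ean LIKE '%3246576919422%';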