[GENERAL] Unable to execute \copy from Client Application
Hello All,

My application is a C++ app. I'm executing the \copy command from C++ as follows:

strPgSqlQuerybuf << "\\copy hnb_registration FROM '/root/Sujay/hnbfile.txt' USING DELIMITERS ','";

(The second \ is to keep the compiler happy.) When this command reaches PostgreSQL, it gives a syntax error:

ERROR: syntax error at or near "\" at character 1
STATEMENT: \copy hnb_registration FROM '/root/Sujay/hnbfile.txt' USING DELIMITERS ','

Can someone please help! Thanks in advance.

Regards,
Sujay

--
View this message in context: http://postgresql.1045698.n5.nabble.com/Unable-to-execute-copy-from-Client-Application-tp5488816p5488816.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Unable to execute \copy from Client Application
On Thu, 2012-02-16 at 00:07 -0800, sujayr06 wrote:
> My application is a C++ app. I'm executing the \copy command from C++ as follows:
> strPgSqlQuerybuf << "\\copy hnb_registration FROM '/root/Sujay/hnbfile.txt' USING DELIMITERS ','";
> When this command reaches PostgreSQL, it gives a syntax error:
> ERROR: syntax error at or near "\" at character 1
> Can someone please help!

\copy is a psql metacommand, not a SQL command. You cannot use it in your application. You need to use the libpq API.

--
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
Re: [GENERAL] Drop big index
Vojtech Rylko wrote:
> I need to drop some b-tree indexes because they are not used anymore. Sizes of
> the indexes vary between 700 MB and 7 GB. I tried a common DROP INDEX ..., but
> this query ran so long and blocked the table that I had to interrupt it. Is
> there any way to drop large indexes in a non-blocking or /faster/ way?

Unfortunately not (yet). PostgreSQL 9.2 will have DROP INDEX CONCURRENTLY.

Yours,
Laurenz Albe
[GENERAL] Re: [GENERAL] connection on Windows 7
vossistemas wrote:
> I installed PostgreSQL 8.3 on Windows 7 as a server. From an XP workstation I am
> trying to connect, and the following message appears:
> FATAL: no pg_hba.conf entry for host 192.168.1.51, user Vilson, database postgres, SSL off
> On the Windows 7 server the configuration is:
> postgresql.conf: listen_addresses = '*'
> pg_hba.conf: host all all 192.168.1.0/24
> Server IP: 192.168.1.48; workstation IP: 192.168.1.51. From the workstation I
> connect to the server IP 192.168.1.48. I have already configured this on several
> machines without any problem. This is the first time I am setting it up on
> WINDOWS 7 ULTIMATE.

Can you post the complete pg_hba.conf? It could be that the line you quoted does not take effect because there is a line above it that applies first.

Yours,
Laurenz Albe
Re: [GENERAL] Drop big index
2012/2/15 Vojtěch Rylko <vojta.ry...@seznam.cz>:
> this query performed so long and blocked table so I had to interrupt it.
> Is there any way how to drop large indexes in non-blocking or /faster/ way?

Usually the problem is not the size of the index, but some other running transaction that holds a read lock on the table and prevents the DROP INDEX from getting an exclusive lock. Once the exclusive lock is granted, the drop itself is usually very fast.

Run 'select * from pg_stat_activity' and see if there are any "IDLE in transaction" connections. It's normal to have these for a second or a few, but longer idle transactions usually indicate an application bug -- it started a transaction but forgot to roll back or commit. These are problematic for exactly this reason: locks can't be released until the transaction finishes.

Regards,
Marti
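On a 9.1-era server (where pg_stat_activity still exposes the procpid and current_query columns; they were renamed in 9.2), the check Marti describes might look like this sketch:

```sql
-- Sessions idle inside an open transaction, which may be holding locks
-- that block DROP INDEX from acquiring its exclusive lock.
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_start;
```

Long-lived rows here are the ones worth investigating; once the offending transaction commits or rolls back, the DROP INDEX can proceed.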
Re: [GENERAL] Unable to execute \copy from Client Application
Hello Guillaume,

Please find the usage of \copy and the corresponding API:

strPgSqlQuerybuf << "\\copy hnb_registration FROM '/root/Sujay/hnbfile.txt' USING DELIMITERS ','";

pPostGresQueryResult = PQexecParams(pPgConnection,
                                    strPgSqlQuerybuf.str().c_str(),
                                    0,    // zero params
                                    NULL, // let the backend deduce param types
                                    NULL,
                                    NULL, // don't need param lengths since text
                                    NULL, // default to all text params
                                    0);
ePostGresCmdExecStatus = PQresultStatus(pPostGresQueryResult);

Let me know if the usage has to be corrected.

Regards,
Sujay
Re: [GENERAL] Unable to execute \copy from Client Application
On Thu, 2012-02-16 at 01:05 -0800, sujayr06 wrote:
> Please find the usage of \copy and the corresponding API.
> strPgSqlQuerybuf << "\\copy hnb_registration FROM '/root/Sujay/hnbfile.txt' USING DELIMITERS ','";
> pPostGresQueryResult = PQexecParams(pPgConnection, strPgSqlQuerybuf.str().c_str(), ...);
> Let me know if the usage has to be corrected.

Once again, you cannot use the \copy metacommand. Use the COPY API from libpq (http://www.postgresql.org/docs/9.1/interactive/libpq-copy.html).

--
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
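As an illustration of what Guillaume is pointing at, a sketch along these lines is what the libpq COPY API looks like in C++. This is not from the thread and is untested (it needs a live connection and links against libpq); the table name and file path are taken from the original post, everything else is illustrative:

```cpp
// Sketch only: stream a client-side file to the server with the libpq
// COPY API, instead of the psql-only \copy metacommand.
#include <libpq-fe.h>
#include <fstream>
#include <string>

bool copyFile(PGconn *conn)
{
    // Plain SQL COPY ... FROM STDIN runs on the server, but reads data
    // that the client pushes over the connection.
    PGresult *res = PQexec(conn,
        "COPY hnb_registration FROM STDIN WITH DELIMITER ','");
    if (PQresultStatus(res) != PGRES_COPY_IN) { PQclear(res); return false; }
    PQclear(res);

    std::ifstream in("/root/Sujay/hnbfile.txt");
    std::string line;
    while (std::getline(in, line)) {
        line += '\n';                      // COPY rows are newline-terminated
        if (PQputCopyData(conn, line.c_str(), (int)line.size()) != 1)
            return false;
    }
    if (PQputCopyEnd(conn, NULL) != 1)     // NULL => no error, finish the COPY
        return false;

    res = PQgetResult(conn);               // final status of the COPY command
    bool ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
    PQclear(res);
    return ok;
}
```

The key difference from the failed PQexecParams() attempt is that the SQL sent to the server is plain COPY; the file is read by the client and streamed with PQputCopyData().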
[GENERAL] Optimize sort before groupping
Hi.

Sometimes the order of rows read from an index allows the planner to skip a sort node, but sometimes the planner doesn't deduce it. In the example below, the ordering from the index "NumerStacji_NumerKierunkowy_KodBłędu_LP" is preserved through the merge join and could be used by the grouping node, but the planner doesn't see it.

First query and explain:

SELECT "NKA", "NTA",
       count(nullif(b."Pierwszy zwrot" = b."DataPliku", false)) as "CDR",
       min(b."Data") || ',' || max(b."Data") as "Biling",
       b."KodBłędu",
       sum((b."Pierwszy zwrot" = b."DataPliku" and not b."Darmowe")::integer) as "Odpłatne",
       max(r."LP")
FROM "Bladpol2" b
     left join "Rejestr stacji do naprawy" r
       on (b."NTA" = r."Numer stacji"
           and b."NKA" = r."Numer kierunkowy"
           and b."KodBłędu" = r."Kod Błędu"
           and replace(b."Data", ':', '.')::cube r."Zakres")
WHERE b."KodBłędu" similar to '74|80|81' and r."Wartość" is null
GROUP BY b."NTA", b."NKA", b."KodBłędu", r."LP"
HAVING not bool_and(b."Darmowe")
ORDER BY max(b."Data") desc
LIMIT 4000;

QUERY PLAN
Limit  (cost=191422.67..191432.67 rows=4000 width=42) (actual time=57136.554..57161.084 rows=4000 loops=1)
  ->  Sort  (cost=191422.67..192855.26 rows=573034 width=42) (actual time=57136.546..57145.420 rows=4000 loops=1)
        Sort Key: (max((b.Data)::text))
        Sort Method: top-N heapsort  Memory: 660kB
        ->  GroupAggregate  (cost=122756.63..154273.50 rows=573034 width=42) (actual time=49821.500..56670.665 rows=64064 loops=1)
              Filter: (NOT bool_and(b.Darmowe))
              ->  Sort  (cost=122756.63..124189.21 rows=573034 width=42) (actual time=49821.318..51902.438 rows=865978 loops=1)
                    Sort Key: b.NTA, b.NKA, b.KodBłędu, r.LP
                    Sort Method: external sort  Disk: 42824kB
                    ->  Merge Left Join  (cost=19.16..60017.63 rows=573034 width=42) (actual time=0.337..26655.744 rows=865978 loops=1)
                          Merge Cond: (((b.NTA)::text = (r.Numer stacji)::text) AND ((b.NKA)::text = (r.Numer kierunkowy)::text) AND ((b.KodBłędu)::text = (r.Kod Błędu)::text))
                          Join Filter: ((replace((b.Data)::text, ':'::text, '.'::text))::cube r.Zakres)
                          Filter: (r.Wartość IS NULL)
                          ->  Index Scan using Bladpol2_nta_nka_kod_błędu_btree on Bladpol2 b  (cost=0.00..46593.65 rows=1452312 width=38) (actual time=0.152..8513.305 rows=1439383 loops=1)
                                Filter: ((KodBłędu)::text ~ '^(?:74|80|81)$'::text)
                          ->  Index Scan using NumerStacji_NumerKierunkowy_KodBłędu_LP on Rejestr stacji do naprawy r  (cost=0.00..1405.83 rows=70476 width=47) (actual time=0.014..3146.595 rows=1045687 loops=1)
Total runtime: 57183.476 ms

In the second query the planner skips the sort before grouping:

SELECT "NKA", "NTA",
       count(nullif(b."Pierwszy zwrot" = b."DataPliku", false)) as "CDR",
       min(b."Data") || ',' || max(b."Data") as "Biling",
       b."KodBłędu",
       sum((b."Pierwszy zwrot" = b."DataPliku" and not b."Darmowe")::integer) as "Odpłatne",
       max(r."LP")
FROM "Bladpol2" b
     left join "Rejestr stacji do naprawy" r
       on (b."NTA" = r."Numer stacji"
           and b."NKA" = r."Numer kierunkowy"
           and b."KodBłędu" = r."Kod Błędu"
           and replace(b."Data", ':', '.')::cube r."Zakres")
WHERE b."KodBłędu" similar to '74|80|81' and r."Wartość" is null
GROUP BY b."NTA", b."NKA", b."KodBłędu" --, r."LP"
HAVING not bool_and(b."Darmowe")
ORDER BY max(b."Data") desc
LIMIT 4000;

QUERY PLAN
Limit  (cost=91667.54..91677.54 rows=4000 width=42) (actual time=32004.992..32029.539 rows=4000 loops=1)
  ->  Sort  (cost=91667.54..92030.62 rows=145232 width=42) (actual time=32004.983..32013.844 rows=4000 loops=1)
        Sort Key: (max((b.Data)::text))
        Sort Method: top-N heapsort  Memory: 660kB
        ->  GroupAggregate  (cost=19.16..82252.30 rows=145232 width=42) (actual time=1.954..31534.246 rows=63759 loops=1)
              Filter: (NOT bool_and(b.Darmowe))
              ->  Merge Left Join  (cost=19.16..60017.63 rows=573034 width=42) (actual time=0.339..26669.766 rows=865978 loops=1)
                    Merge Cond: (((b.NTA)::text = (r.Numer stacji)::text) AND ((b.NKA)::text = (r.Numer kierunkowy)::text) AND ((b.KodBłędu)::text = (r.Kod Błędu)::text))
                    Join Filter: ((replace((b.Data)::text, ':'::text, '.'::text))::cube r.Zakres)
                    Filter: (r.Wartość IS NULL)
                    ->  Index Scan using Bladpol2_nta_nka_kod_błędu_btree on Bladpol2 b  (cost=0.00..46593.65 rows=1452312 width=38) (actual time=0.145..8622.003 rows=1439383 loops=1)
                          Filter: ((KodBłędu)::text ~ '^(?:74|80|81)$'::text)
                    ->  Index Scan using NumerStacji_NumerKierunkowy_KodBłędu_LP on Rejestr stacji do naprawy r  (cost=0.00..1405.83 rows=70476 width=47) (actual time=0.014..3113.398 rows=1045687 loops=1)
Total runtime: 32045.317 ms

--
pasman
Re: [GENERAL] Dump functions alone
> Any help in getting function argument names is appreciated. Thank you

Take a look at pg_catalog.pg_get_function_arguments(oid).

regards,
jan
Re: [GENERAL] Drop big index
Albe Laurenz <laurenz.a...@wien.gv.at> wrote:
> Vojtech Rylko wrote:
>> Is there any way how to drop large indexes in non-blocking or /faster/ way?
> Unfortunately not (yet). PostgreSQL 9.2 will have DROP INDEX CONCURRENTLY.

Really? Great! I have a use case for that: a partial index, re-created daily (the index is based on the date and covers the last N days), and sometimes we have trouble with this.

Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect. (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly." (unknown)
Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°
[GENERAL] Dynamic update of a date field
Dear all,

I have a question concerning a default value / trigger function which is supposed to update/fill a field called time_stamp whenever a row is inserted. Let's say we have a table:

CREATE TABLE dummy (year smallint, month smallint, day smallint, time_stamp date);

I would like to update time_stamp dynamically, without knowledge of the table name, using the values placed in the columns year, month, day. The trick is that I have ~2000 tables which I populate with some time information, so either I could somehow feed those values to the to_timestamp() function in the DEFAULT definition of the field (while creating the table), or create a trigger function which doesn't require the table name (or retrieves it dynamically) and which is executed whenever a row is added to any table. As I am new to postgres/plpgsql, any suggestions are more than welcome.

Thank you in advance,
Jan Musial
Re: [GENERAL] [postgis-users] ST_AsJpeg
2012/2/16 Sandro Santilli <s...@keybit.net>:
> I don't think there's much to discuss. I'm sure a patch to psql would be welcome.

Sorry, I did not realize that the solution is straightforward :-)

--Stefan
Re: [GENERAL] Dynamic update of a date field
Musial, Jan (GIUB) <jan.mus...@giub.unibe.ch> wrote:
> Dear all, I have a question concerning default value/trigger function which
> supposed to update/fill field called time_stamp whenever a row is inserted.
> Let say that we have a table: CREATE TABLE dummy (year

You can use 'default now()' or an insert trigger.

> smallint,month smallint,day smallint,time_stamp date); I would like to

That's silly: use one (and only one) field, timestamp (or timestamptz). Never, ever use multiple columns for the same information!

Andreas
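A minimal sketch of what Andreas is suggesting (the table and column names here are made up for illustration, not taken from the thread): one timestamp column replaces the separate year/month/day columns, and the parts can be recovered whenever they are needed.

```sql
-- Sketch with a hypothetical table: a single timestamptz column,
-- filled automatically on INSERT via a default.
CREATE TABLE dummy (ts timestamptz DEFAULT now());

-- The individual parts can always be extracted when needed:
SELECT extract(year FROM ts)  AS year,
       extract(month FROM ts) AS month,
       extract(day FROM ts)   AS day
FROM dummy;
```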
Re: [GENERAL] [GENERA]: Postgresql-9.1.1 synchronous replication issue
On Wednesday, February 15, 2012 10:21:02 pm Venkat Balaji wrote:
> Adrian, Thanks a lot !

>> So in this case you are not waiting for confirmation of the commit being
>> flushed to disk on the standby. In that case you are bypassing the primary
>> reason for sync replication. The plus is that transactions on the master will
>> complete faster and do so in the absence of the standby. The minus is that you
>> are in sort of an in-between state.

> I understand. My worry and requirement is to ensure the master is not disturbed
> for any reason. In sync rep, the biggest worry is that if the standby server is
> unavailable and down for a longer time, the master hangs and stays in that state
> until the standby comes back up, or replication must be broken temporarily
> (until the standby comes back up) so that the master runs without interruption.
> This is a costly exercise on production from a downtime perspective.

So just use regular streaming replication without sync rep. You get record-based transaction shipping without having to wait for the standby. You will need to make sure that wal_keep_segments is big enough to cover any downtime on the standby (you would need that for sync rep too).

>> Personally, I take sync replication to be basically an all-or-nothing
>> proposition. By setting it up you are saying you want, at minimum, two database
>> clusters to be in sync at any point in time, all the time (except for startup).
>> If that is not possible then you are really looking for async replication.

> Yeah. We will need to make a decision accordingly. Thanks again, VB

--
Adrian Klaver
adrian.kla...@gmail.com
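The plain-streaming setup Adrian describes might look like this in postgresql.conf on the master. This is a sketch with illustrative values, not configuration from the thread (9.1-era parameter names):

```
# postgresql.conf on the master -- illustrative values
wal_level = hot_standby
max_wal_senders = 3
wal_keep_segments = 256        # keep enough WAL to cover standby downtime

# synchronous_standby_names left empty => asynchronous replication,
# so the master never waits on (or hangs because of) the standby.
```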
Re: [GENERAL] Dynamic update of a date field
On Thursday, February 16, 2012 6:05:40 am Musial, Jan (GIUB) wrote:
> I have a question concerning a default value/trigger function which is supposed
> to update/fill a field called time_stamp whenever a row is inserted. [...]
> As I am new to postgres/plpgsql, any suggestions are more than welcome.

I would agree with Andreas, there is no need to replicate the date/time information across fields. You can pull that information out of a timestamp.

If the timestamp field name is going to be the same on all the tables, you can create a generic function in plpgsql and point a trigger on each table at it. The INSERT value can be handled by a default value on the field. It is the UPDATE you will need a function for. So something like:

CREATE OR REPLACE FUNCTION public.ts_update()
RETURNS trigger AS
$Body$
BEGIN
    NEW.ts_update := now();
    RETURN NEW;
END;
$Body$
LANGUAGE 'plpgsql' VOLATILE;

and the associated trigger:

CREATE TRIGGER some_table_ts_update
BEFORE UPDATE ON some_table
FOR EACH ROW EXECUTE PROCEDURE public.ts_update();

--
Adrian Klaver
adrian.kla...@gmail.com
Re: [GENERAL] Drop big index
On 16.2.2012 9:53, Marti Raudsepp wrote:
> Usually the problem is not with the size of the index -- but some other running
> transaction that holds a read lock on the table [...] locks can't be released
> until the transaction finishes.

Thanks! It was caused by an "IDLE in transaction" session. My nightmare is solved: dropping the 7 GB index took 2353 ms.

Regards,
Vojtěch R.
[GENERAL] How to dereference 2 dimensional array?
I would like to construct an hstore array from a 2-dimensional array. For example,

'{{f1, 1, f2, 123, f3, ABC, ...}, {f1, 2, f2, 345, f3, DEF, ...}}'

should be converted to 2 hstore values:

f1=>1, f2=>123, f3=>ABC, ...
f1=>2, f2=>345, f3=>DEF, ...

create or replace function convertHStore(p1 text[][]) RETURNS hstore[]

The hstore function requires text[] to convert an array to hstore, so I need to be able to dereference a 2-dimensional array element. Inside this custom plpgsql function, p1[1] is not valid syntax to dereference the 1st element of p1. Does anyone know how to solve this problem?

Thank you,
Choon Park
Re: [GENERAL] Dynamic update of a date field
Dear Adrian & Andreas,

Thank you very much for these pieces of advice. I ended up creating an insert-trigger function, which does the trick. As far as the redundant time information within my database is concerned, this is a data-import issue: it is simply easier for me to import year, month, and day separately than to create a timestamp and drop the columns.

All the best,
Jan

From: Adrian Klaver [adrian.kla...@gmail.com]
Sent: Thursday, 16 February 2012 15:56
To: pgsql-general@postgresql.org
Cc: Musial, Jan (GIUB)
Subject: Re: [GENERAL] Dynamic update of a date field

> I would agree with Andreas, there is no need to replicate the date/time
> information across fields. You can pull that information out of a timestamp.
> [...]
[GENERAL] Check if backup is in progress
Hi,

is there any way to check that Postgres 9.1 is currently in backup mode? Imagine the situation that my DB is running and a backup is started remotely by

# select pg_start_backup('backup', true);

but the remote site crashes and can't stop the backup. Is there any way to check whether something like that has happened, or to configure a timeout for the backup procedure? Currently the only solution that comes to my mind is to try to start another backup; if that fails, I know one is already in progress.

Thanks,
Stano
Re: [GENERAL] Unable to execute \copy from Client Application
On 02/16/12 1:05 AM, sujayr06 wrote:
> Please find the usage of \copy and the corresponding API.
> strPgSqlQuerybuf << "\\copy hnb_registration FROM '/root/Sujay/hnbfile.txt' USING DELIMITERS ','";

The \ commands are all metacommands of the psql utility; none of them are SQL that can be sent as a query. You need to use the plain SQL 'COPY ... FROM STDIN' command, and use PQputCopyData() to send the data (e.g., read your /root/Sujay/hnbfile.txt and send it through that API).

ugh, /root ?!? Why is ROOT running *ANY* SQL at all??

--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast
[GENERAL] How to recover data from cluster
PostgreSQL 8.4 on Windows crashed. After that, Windows disk repair was used to repair the hard drive. The Data/base directory from the crashed server contains a lot of files, and all of the files are readable.

PostgreSQL 8.4 was reinstalled on a new server and the data directory was set to the directory from the crashed server. pgAdmin shows that there is only a public schema without any tables. Actually there were two schemas with a lot of tables.

How can I fix this so the old data is accessible?

Andrus.
Re: [GENERAL] How to recover data from cluster
On 02/16/2012 11:18 AM, Andrus wrote:
> PostgreSql 8.4 in windows crashes. After that Windows disk repairing was used to
> repair hard drive. After that Data/base directory from crashed server contains
> lot of files, all files are readable.

There is "readable" and there is "not corrupt" :(

> PostgreSql 8.4 was reinstalled in new server and data directory was set to
> directory from crashed server. pgAdmin shows that there is only public schema
> without any tables. Actually there was two schemas with lot of tables.

What does the Postgres log show when you start the database and try to access the tables?

> How to fix this so old data is accessible?

--
Adrian Klaver
adrian.kla...@gmail.com
Re: [GENERAL] How to recover data from cluster
On Thursday, February 16, 2012 09:18:34 PM Andrus wrote:
> PostgreSql 8.4 in windows crashes. [...] pgAdmin shows that there is only public
> schema without any tables. Actually there was two schemas with lot of tables.
> How to fix this so old data is accessible?

It sounds like your new installation is not in fact using the old data directory, but a new, empty one.
Re: [GENERAL] How to recover data from cluster
On 02/16/2012 11:37 AM, Alan Hodgson wrote:
> It sounds like your new installation is not in fact using the old data
> directory, but a new empty one.

To add: are you sure that pgAdmin is pointed at the right server?

--
Adrian Klaver
adrian.kla...@gmail.com
[GENERAL] Re: [GENERAL] connection on Windows 7
2012/2/15 vossistemas <vossiste...@ibest.com.br>
> I installed PostgreSQL 8.3 on Windows 7 as a server. From an XP workstation I am
> trying to connect, and the following message appears:
> FATAL: no pg_hba.conf entry for host 192.168.1.51, user Vilson, database postgres, SSL off
> On the Windows 7 server the configuration is:
> postgresql.conf: listen_addresses = '*'
> pg_hba.conf: host all all 192.168.1.0/24
> Server IP: 192.168.1.48; workstation IP: 192.168.1.51. From the workstation I
> connect to the server IP 192.168.1.48. I have already configured this on several
> machines without any problem. This is the first time I am setting it up on
> WINDOWS 7 ULTIMATE.
>
> Vilson Zin
> VOS Software Ltda

It seems that you forgot the METHOD column, usually 'md5':

# host all all 192.168.1.0/24 md5
#                             ^

Regards,
diego
Re: [GENERAL] Check if backup is in progress
On Thu, Feb 16, 2012 at 18:53, sodik <stanislav.miklik...@gmail.com> wrote:
> is there any way how to check that postgres 9.1 is currently in backup mode?

You can check for the existence of the backup_label file in your data directory. If it's there, then a backup is in progress.

If your monitoring system is on a different server from the database, you can create a custom procedure to check for this file and call it via SQL.

Regards,
Marti
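One possible shape for the server-side check Marti suggests is sketched below. It is an assumption, not from the thread: it relies on the built-in pg_stat_file() function (superuser-only), which raises an error when the file does not exist, so the sketch traps that condition in a plpgsql wrapper.

```sql
-- Sketch: returns true if backup_label exists in the data directory,
-- i.e. an exclusive base backup is (or appears to be) in progress.
CREATE OR REPLACE FUNCTION backup_in_progress() RETURNS boolean AS $$
BEGIN
    PERFORM pg_stat_file('backup_label');  -- errors if the file is missing
    RETURN true;
EXCEPTION WHEN undefined_file THEN
    RETURN false;
END;
$$ LANGUAGE plpgsql;
```

A monitoring system could then simply run `SELECT backup_in_progress();` over a normal connection.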
[GENERAL] running multiple versions
Hi,

Is it possible to run more than one instance of PostgreSQL on Windows? Can you run different versions simultaneously? If so, can you run different minor versions, or only different major versions? If possible, how do you do this?

Thanks,
H.F.
Re: [GENERAL] running multiple versions
On 02/16/12 2:11 PM, Heine Ferreira wrote:
> Is it possible to run more than one instance of Postgresql on Windows? Can you
> run different versions simultaneously? If possible how do you do this?

Install to different directories, and configure them to use different port numbers.

--
john r pierce
Re: [GENERAL] running multiple versions
Hi,

On Fri, 2012-02-17 at 00:11 +0200, Heine Ferreira wrote:
> Is it possible to run more than one instance of Postgresql on Windows?

Yes.

> Can you run different versions simultaneously?

Yes.

> If so can you run different minor versions or only different major versions?

Only different major versions.

> If possible how do you do this?

You need to have a data directory per version, and you need to set a different port number -- which is mostly what you'll get when using the PostgreSQL Windows installer.

--
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
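Concretely, a two-instance setup like the one described above might look like this. The paths, versions, and port numbers are illustrative only, not from the thread:

```
# Illustrative sketch: two major versions side by side on one Windows box.

# Instance 1 (e.g. 9.0), data directory C:\pgdata\9.0, its postgresql.conf:
port = 5432

# Instance 2 (e.g. 9.1), data directory C:\pgdata\9.1, its postgresql.conf:
port = 5433
```

Clients then pick the instance by port, e.g. `psql -p 5433` to reach the second one.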
Fwd: Re: [GENERAL] Dynamic update of a date field
On 2/16/12 7:27 AM, Andreas Kretschmer wrote:
> Musial, Jan (GIUB) <jan.mus...@giub.unibe.ch> wrote:
>> smallint,month smallint,day smallint,time_stamp date); I would like to
> That's silly, use one (and only one) field, timestamp (or timestamptz).
> Never, ever use multiple columns for the same information!

Would it not be advantageous to replicate information in the above form if you wanted to, say, get all records in the month of May, and therefore create an index on the month field? I would think that would be more efficient than creating a functional index on a timestamp. And if you're not too picky, that would give you an index that couldn't be created on a timestamptz field, since extracting fields from one is not immutable.

-ds
Re: Fwd: Re: [GENERAL] Dynamic update of a date field
On 02/16/12 2:34 PM, David Salisbury wrote:
> Would it not be advantageous to replicate information in the above form
> if you wanted to, say, get all records in the month of May, and
> therefore create an index on the month field? I would think that would
> be more efficient than creating a functional index on a timestamp. And
> if you're not too picky, that would give an index that couldn't be done
> on a timestamptz field, as that's mutable.

an index on a timestamp will work just fine on date_trunc('month', timestampfield)

-- 
john r pierce                            N 37, W 122
santa cruz ca                            mid-left coast
Re: [GENERAL] How to dereference 2 dimensional array?
Hi,

this could be a starting point for discussion:

CREATE OR REPLACE FUNCTION public.arraysToHstore (IN a TEXT[], OUT c hstore[])
RETURNS hstore[] AS
$BODY$
DECLARE
  i INT;
  elements INT;
  dim INT;
BEGIN
  elements := array_length(a,2);  -- # of elements in each dimension
  dim := array_length(a,1);       -- # of dimensions
  -- rewrite the multidimensional array into a single-dimensional one
  a := string_to_array(array_to_string(a, '|', 'NULL'), '|', 'NULL');
  -- loop through all dimensions to build the hstore array
  -- (0 is the first element of the rewritten array)
  FOR i IN 0..(dim - 1) LOOP
    -- build each hstore array element from a slice of array a;
    -- this window moves forward as the loop advances
    c[i+1] := hstore(a[(i*elements)+1 : (i+1)*elements]);
  END LOOP;
END
$BODY$
LANGUAGE plpgsql SECURITY DEFINER IMMUTABLE STRICT;

--usage example
SELECT (public.arraysToHstore('{{f1, 1, f3, 123, f4, ABC}, {f5, 2, f6, 345, f7, DEF}}')::hstore[])[2]

This works for me (PostgreSQL 9.1.2). You can pass whatever text array
you want (any size, but 2 dimensions only) and you will receive a
one-dimensional hstore array with the number of elements corresponding
to the input array's dimensions.

Regards,
Bartek

2012/2/16 ChoonSoo Park <luisp...@gmail.com>
> I would like to construct hstore array from 2 dimensional array. For example,
> '{{f1, 1, f2, 123, f3, ABC, ...}, {f1, 2, f2, 345, f3, DEF, ...}}'
> Should be converted to 2 hstore values
> f1=>1, f2=>123, f3=>ABC, ...
> f1=>2, f2=>345, f3=>DEF, ...
>
> create or replace function convertHStore(p1 text[][]) RETURNS hstore[]
>
> hstore function requires text[] to convert array to hstore. Therefore I
> should be able to dereference 2 dimensional array element. Inside this
> custom plpgsql function, p1[1] is not valid syntax to dereference the
> 1st element in p1.
>
> Anyone knows how to solve this problem?
>
> Thank you,
> Choon Park
Re: [GENERAL] running multiple versions
On Thursday, February 16, 2012 2:17:33 pm Guillaume Lelarge wrote:
> Hi,
>
>> If so can you run different minor versions or only different major versions?
>
> Only major versions.

This is a limitation of the Postgres Windows Installer? Compiling from
source would allow running different minor versions.

-- 
Adrian Klaver
adrian.kla...@gmail.com
Re: [GENERAL] running multiple versions
On Thu, 2012-02-16 at 15:00 -0800, Adrian Klaver wrote:
> On Thursday, February 16, 2012 2:17:33 pm Guillaume Lelarge wrote:
>>> If so can you run different minor versions or only different major versions?
>>
>> Only major versions.
>
> This is a limitation of the Postgres Windows Installer? Compiling from
> source would allow running different minor versions.

Yes, that's right, except that I wouldn't call that a limitation.

-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
Re: [GENERAL] How to dereference 2 dimensional array?
On Thu, Feb 16, 2012 at 9:48 AM, ChoonSoo Park <luisp...@gmail.com> wrote:
> I would like to construct hstore array from 2 dimensional array. For example,
> '{{f1, 1, f2, 123, f3, ABC, ...}, {f1, 2, f2, 345, f3, DEF, ...}}'
> Should be converted to 2 hstore values
> f1=>1, f2=>123, f3=>ABC, ...
> f1=>2, f2=>345, f3=>DEF, ...
>
> create or replace function convertHStore(p1 text[][]) RETURNS hstore[]
>
> hstore function requires text[] to convert array to hstore. Therefore I
> should be able to dereference 2 dimensional array element. Inside this
> custom plpgsql function, p1[1] is not valid syntax to dereference the
> 1st element in p1.
>
> Anyone knows how to solve this problem?

This is a good use of the 9.1 SLICE feature:

CREATE FUNCTION slice_hstore(text[]) RETURNS SETOF hstore AS
$$
DECLARE
  x text[];
BEGIN
  FOREACH x SLICE 1 IN ARRAY $1
  LOOP
    RETURN NEXT hstore(x);
  END LOOP;
END;
$$ LANGUAGE plpgsql;

postgres=# select slice_hstore('{{g1, 1, f2, 123, f3, ABC}, {f1, 2, f2, 345, f3, DEF}}'::text[]);
            slice_hstore
-------------------------------------
 "f2"=>"123", "f3"=>"ABC", "g1"=>"1"
 "f1"=>"2", "f2"=>"345", "f3"=>"DEF"
(2 rows)

postgres=# select array(select slice_hstore('{{g1, 1, f2, 123, f3, ABC}, {f1, 2, f2, 345, f3, DEF}}'::text[]));
                                       ?column?
---------------------------------------------------------------------------------------
 {"\"f2\"=>\"123\", \"f3\"=>\"ABC\", \"g1\"=>\"1\"","\"f1\"=>\"2\", \"f2\"=>\"345\", \"f3\"=>\"DEF\""}
(1 row)

merlin
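For anyone who wants to sanity-check the slicing logic outside the database, here is a small Python sketch (the function name is made up for illustration) of what the FOREACH ... SLICE 1 loop does: walk the outer dimension and turn each inner list of alternating keys and values into a map, just as hstore(text[]) does with a one-dimensional array.

```python
def rows_to_maps(a):
    """Mimic FOREACH x SLICE 1 IN ARRAY a LOOP return next hstore(x):
    each inner list holds alternating keys and values."""
    out = []
    for row in a:  # one slice per element of the outer dimension
        if len(row) % 2 != 0:
            raise ValueError("key/value list must have an even length")
        # pair up row[0]:row[1], row[2]:row[3], ...
        out.append({row[i]: row[i + 1] for i in range(0, len(row), 2)})
    return out

maps = rows_to_maps([["g1", "1", "f2", "123", "f3", "ABC"],
                     ["f1", "2", "f2", "345", "f3", "DEF"]])
# one dict per outer slice, e.g. maps[1] is {"f1": "2", "f2": "345", "f3": "DEF"}
```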
Re: [GENERAL] running multiple versions
On 02/16/12 3:00 PM, Adrian Klaver wrote:
> This is a limitation of the Postgres Windows Installer? Compiling from
> source would allow running different minor versions.

probably the windows service names are the sticking point. you'd need to
create a custom service, which could be done with the 'sc' command (a
Windows built-in utility).

E:\> sc qc postgresql-8.4
[SC] GetServiceConfig SUCCESS

SERVICE_NAME: postgresql-8.4
        TYPE               : 10  WIN32_OWN_PROCESS
        START_TYPE         : 2   AUTO_START
        ERROR_CONTROL      : 1   NORMAL
        BINARY_PATH_NAME   : D:/postgres/8.4/bin/pg_ctl.exe runservice -N postgresql-8.4 -D D:/postgres/8.4/data -w
        LOAD_ORDER_GROUP   :
        TAG                : 0
        DISPLAY_NAME       : postgresql-8.4 - PostgreSQL Server 8.4
        DEPENDENCIES       : RPCSS
        SERVICE_START_NAME : .\postgres

so, if I was to install 8.4.somethingelse into a different directory, I'd
need to create a separate service for it, like...

sc create postgres-8.4.5 binPath= "D:/postgres/8.4.5/bin/pg_ctl.exe runservice -N postgresql-8.4.5 -D D:/postgres/8.4.5/data -w" type= own start= auto error= normal DisplayName= "postgresql-8.4.5 - PostgreSQL Server 8.4.5" depend= RPCSS obj= .\postgres

(that's all one gnarly command line)

then

sc start postgres-8.4.5

-- 
john r pierce                            N 37, W 122
santa cruz ca                            mid-left coast
Re: [GENERAL] running multiple versions
On Thursday, February 16, 2012 3:03:45 pm Guillaume Lelarge wrote:
> On Thu, 2012-02-16 at 15:00 -0800, Adrian Klaver wrote:
>> On Thursday, February 16, 2012 2:17:33 pm Guillaume Lelarge wrote:
>>>> If so can you run different minor versions or only different major versions?
>>> Only major versions.
>> This is a limitation of the Postgres Windows Installer? Compiling from
>> source would allow running different minor versions.
>
> Yes, that's right, except that I wouldn't call that a limitation.

Well from a purely functional standpoint:

1) A user asked if it could be done.
2) The Postgres program allows it to be done.
3) The Windows Installer does not.

That looks like a limit to me :) I am not saying it is good or bad, just
that it is an option that does not exist.

-- 
Adrian Klaver
adrian.kla...@gmail.com
Re: Fwd: Re: [GENERAL] Dynamic update of a date field
On 02/16/2012 02:45 PM, John R Pierce wrote:
> On 02/16/12 2:34 PM, David Salisbury wrote:
>> Would it not be advantageous to replicate information in the above form
>> if you wanted to, say, get all records in the month of May, and
>> therefore create an index on the month field? I would think that would
>> be more efficient than creating a functional index on a timestamp. And
>> if you're not too picky, that would give an index that couldn't be done
>> on a timestamptz field, as that's mutable.
>
> an index on a timestamp will work just fine on date_trunc('month', timestampfield)

You can try but PostgreSQL will respond:
    ...functions in index expression must be marked IMMUTABLE...

The current month returned by extract or date_trunc depends on the
current time zone. New York will see March 3-hours ahead of us
left-coasters.

Note: storing the month in a separate field does not solve this problem -
it just shuffles it around and requires additional mechanisms to update
that field when the timestamp field changes.

You can, if it is appropriate to the situation, specify the time zone in
which you are interested:

... (date_trunc('month', timestampfield at time zone 'posix/America/Los_Angeles')) ...

Cheers,
Steve
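Steve's time-zone point is easy to reproduce client-side. A quick Python check with the stdlib zoneinfo (assuming the tz database is available) shows the same absolute instant truncating to different months in different zones, which is exactly why extract/date_trunc on a timestamptz cannot be IMMUTABLE:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One absolute instant: 2012-03-01 01:30 UTC.
t = datetime(2012, 3, 1, 1, 30, tzinfo=timezone.utc)

# The month you get back depends on the zone you evaluate in.
month_utc = t.astimezone(timezone.utc).month                    # March
month_la = t.astimezone(ZoneInfo("America/Los_Angeles")).month  # still February
```

The same instant is already March in UTC but still late February on the left coast, so "month of this timestamptz" is not a pure function of the stored value.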
[GENERAL] Create duplicate of existing operator
Hi,

Is it possible to use CREATE OPERATOR to make behave like and? In
general, for the built-in operators, is it possible to see their CREATE
OPERATOR statements?

Cheers,
Andy

-- 
Andy Chambers
Software Engineer
MCNA Dental Plans
200 West Cypress Creek Road
Suite 500
Fort Lauderdale, FL 33309
954-730-7131 X186 (Office)
954-628-3347 (Fax)
1-800-494-6262 X141 (Toll Free)
achamb...@mcna.net glip...@mcna.net (Email)
www.mcna.net (Website)

CONFIDENTIALITY NOTICE: This electronic mail may contain information
that is privileged, confidential, and/or otherwise protected from
disclosure to anyone other than its intended recipient(s). Any
dissemination or use of this electronic mail or its contents by persons
other than the intended recipient(s) is strictly prohibited. If you have
received this communication in error, please notify the sender
immediately by reply e-mail so that we may correct our internal records.
Please then delete the original message. Thank you.
Re: [GENERAL] Create duplicate of existing operator
Andy Chambers <achamb...@mcna.net> writes:
> Is it possible to use CREATE OPERATOR to make behave like and?

Hmm ... AND is not really an operator, but a primitive expression
construct. So I was about to say no, but then it occurred to me you
could do something like (untested):

create function nonstandard_and(bool,bool) returns bool
  as 'select $1 and $2' language sql;

create operator (procedure = nonstandard_and, leftarg = bool, rightarg = bool);

This would be completely unworkable unless the implementation function
is an inline-able SQL function, because otherwise the planner will not
recognize that your means AND, resulting in spectacularly bad
optimization. But with that, maybe you could get away with it.

I would not recommend it though, because you'll be paying through the
nose (inlining isn't a remarkably cheap operation) for what
fundamentally is gratuitously nonstandard, unportable SQL syntax with no
obvious redeeming value. It's a cute hack, but it's just a hack, not
something I'd recommend for production.

regards, tom lane
Re: [GENERAL] MD5 salt in pg_authid password hashes
On 2012-02-16 04:18, Adrian Klaver wrote:
> When you alter the role name you are told the password has been
> cleared. It would be fairly easy to wrap the rename and the setting of
> the password in a transaction.

But this shouldn't be necessary. I don't get why the salt has to be
linked with the role name. This problem would be a non-issue with a
random salt.

> Encrypting Passwords Across A Network
>
> The MD5 authentication method double-encrypts the password on the
> client before sending it to the server. It first MD5-encrypts it based
> on the user name, and then encrypts it based on a random salt sent by
> the server when the database connection was made. It is this
> double-encrypted value that is sent over the network to the server.
> Double-encryption not only prevents the password from being
> discovered, it also prevents another connection from using the same
> encrypted password to connect to the database server at a later time.

I must be missing something here... the *client* applies the salt,
because he knows it, and then sends the salted hash? Then what's the
point of using a salt at all? The second encryption layer just protects
the communication channel, and has nothing to do with what I'm concerned
with. It's redundant if a secure channel already exists (SSL or TLS or
whatever). But I have to admit that I didn't read the source for this
part, so I may indeed be missing something.

> Encrypting Data Across A Network
>
> SSL connections encrypt all data sent across the network: the
> password, the queries, and the data returned. The pg_hba.conf file
> allows administrators to specify which hosts can use non-encrypted
> connections (host) and which require SSL-encrypted connections
> (hostssl). Also, clients can specify that they connect to servers only
> via SSL. Stunnel or SSH can also be used to encrypt transmissions.

Just so. But this still leaves the question why the hashing/salting in
PG works differently than just about anywhere else.

The client isn't supposed to know or care about the salt. Normally,
salting is a purely server-side protection against attackers who would
generate lookup tables for common password hashes, in the hope of
getting their hands on a list of actual password hashes. If the salt is
as predictable as a user/role name, it's nowhere near good enough to
protect against such an attack. At best, it might increase the size of
the lookup tables by 2 or 3 orders of magnitude, which is no challenge
at all with a good word list (backup, dba, slony, postgres, master,
...). Compare this to a 4-byte random salt.

regards,
   stefan

-- 
LOAD"Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn!",8,1
RUN!
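To make the role-name-as-salt point concrete: the stored md5 verifier is the string 'md5' followed by md5(password || rolename), so the only per-hash variation comes from the role name. A small Python sketch (the helper name is made up for illustration) shows why renaming a role invalidates the stored hash:

```python
import hashlib

def pg_md5_verifier(password: str, role_name: str) -> str:
    """What ends up in pg_authid for an md5 password:
    'md5' || md5(password || rolename) -- the role name is the whole salt."""
    return "md5" + hashlib.md5((password + role_name).encode()).hexdigest()

before = pg_md5_verifier("s3cret", "alice")
after = pg_md5_verifier("s3cret", "alice_renamed")
# Same password, different role name => different stored hash,
# which is why a rename clears the password.
```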
[GENERAL] Set returning functions in select column list
* Apologies if anyone receives this twice. I previously sent it from
another address and it did not appear to go through.

Can someone explain how set returning functions in a select clause work?
It seems that it is doing some sort of implicit cross join.

deliverance_development=# select id, generate_series(1, 3) from users;
 id | generate_series
----+-----------------
  0 |               1
  0 |               2
  0 |               3
  1 |               1
  1 |               2
  1 |               3
(6 rows)

But if multiple set returning functions that return the same number of
rows are in the same select it doesn't further cross join it.

deliverance_development=# select id, generate_series(1, 3), generate_series(4, 6) from users;
 id | generate_series | generate_series
----+-----------------+-----------------
  0 |               1 |               4
  0 |               2 |               5
  0 |               3 |               6
  1 |               1 |               4
  1 |               2 |               5
  1 |               3 |               6
(6 rows)

But if the set returning functions return a different number of rows
then it goes back to a cross join.

deliverance_development=# select id, generate_series(1, 3), generate_series(4, 5) from users;
 id | generate_series | generate_series
----+-----------------+-----------------
  0 |               1 |               4
  0 |               2 |               5
  0 |               3 |               4
  0 |               1 |               5
  0 |               2 |               4
  0 |               3 |               5
  1 |               1 |               4
  1 |               2 |               5
  1 |               3 |               4
  1 |               1 |               5
  1 |               2 |               4
  1 |               3 |               5
(12 rows)

I really don't understand what is going on here. I have checked Google
and the PostgreSQL docs, but it appears either I do not know the key
words to search for or it is sparsely documented.

Jack
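The pattern in the three outputs above matches a least-common-multiple rule (this is the historical behaviour of set-returning functions in a SELECT list): each set cycles independently, and the query produces lcm(set sizes) rows per input row. A small Python sketch of that rule, with a made-up helper name:

```python
from math import gcd

def srf_expand(*sets):
    """Simulate the row expansion: each set cycles independently for
    lcm(len(s) for s in sets) rows (per input row)."""
    lcm = 1
    for s in sets:
        lcm = lcm * len(s) // gcd(lcm, len(s))
    # column i of output row r is sets[i][r mod len(sets[i])]
    return [tuple(s[r % len(s)] for s in sets) for r in range(lcm)]

# Two length-3 sets stay in lock-step: lcm(3, 3) = 3 rows.
same = srf_expand([1, 2, 3], [4, 5, 6])
# A length-3 and a length-2 set only line up after lcm(3, 2) = 6 rows,
# reproducing the "cross join" effect in the last example above.
mixed = srf_expand([1, 2, 3], [4, 5])
```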
Re: [GENERAL] running multiple versions
Run them in different locations with different addresses (5432 & 5433
for example).

See this thread: http://archives.postgresql.org/pgsql-admin/2008-02/msg00084.php

Brent Wood
GIS/DBA consultant
NIWA
+64 (4) 4 386-0300

________________________________
From: pgsql-general-ow...@postgresql.org
[pgsql-general-ow...@postgresql.org] on behalf of Heine Ferreira
[heine.ferre...@gmail.com]
Sent: Friday, February 17, 2012 11:11 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] running multiple versions

> Hi
>
> Is it possible to run more than one instance of Postgresql on Windows?
> Can you run different versions simultaneously? If so can you run
> different minor versions or only different major versions? If possible
> how do you do this?
>
> Thanks
>
> H.F.

--
Please consider the environment before printing this email.
NIWA is the trading name of the National Institute of Water & Atmospheric Research Ltd.
Re: [GENERAL] running multiple versions
On 02/16/12 3:10 PM, John R Pierce wrote:
> so, if I was to install 8.4.somethingelse into a different directory,
> I'd need to create a separate service for it, like...
>
> sc create postgres-8.4.5 binPath= "D:/postgres/8.4.5/bin/pg_ctl.exe
> runservice -N postgresql-8.4.5 -D D:/postgres/8.4.5/data -w" type= own
> start= auto error= normal DisplayName= "postgresql-8.4.5 - PostgreSQL
> Server 8.4.5" depend= RPCSS obj= .\postgres

let me also point out here, I was careful to avoid spaces in the
postgres path. I don't know how, or even if, you can quote the paths
properly in the binPath= argument to 'sc create'.

the obj= is the username that the service runs as, .\ meaning local as
opposed to domain.

-- 
john r pierce                            N 37, W 122
santa cruz ca                            mid-left coast
[GENERAL] cast type bytea to double precision
Hi All,

I have a database column whose type is bytea. It contains floats
converted to a byte array (4 bytes per float), and the encoding is
Escape. I am able to get the corresponding bytea string using the
substring function. My question is how can I convert a bytea string to a
float inside an SQL function?

Earlier I converted to float on the C# side: I used the
dataReader.GetByte method to retrieve bytes and then converted to float
using the BitConverter.ToSingle (.NET built-in class) method. Now I
can't use an intermediate component such as the Npgsql driver. I want
SQL to directly convert the bytea into floats and return the
corresponding number when a query is executed from a 3rd party
application.

Thanks and Regards
Amila Jayasooriya
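For reference, the client-side conversion described above boils down to reinterpreting 4 raw bytes as an IEEE-754 single. A Python sketch of that byte-level step (assuming little-endian byte order, which is what BitConverter.ToSingle reads on x86 hardware):

```python
import struct

# Pack a float into the 4-byte form stored in the bytea column
# (assumption: little-endian IEEE-754 single precision).
raw = struct.pack('<f', 1.5)         # b'\x00\x00\xc0?'

# ...and decode 4 raw bytes back into a float, the analogue of
# BitConverter.ToSingle on the C# side:
value, = struct.unpack('<f', raw)
```

Doing the same inside the server would require a function built up from get_byte() arithmetic or a small C extension; the sketch only shows the byte-level decoding such a function has to perform.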