Re: [GENERAL] greatest/least semantics different between oracle and postgres
> Er ... your example doesn't actually seem to involve greatest() or
> least()?

So sorry, it's been a long day, I misread. Yes, greatest/least definitely
does work on Oracle as the OP said. Apologies again.

---(end of broadcast)---
TIP 4: Have you searched our list archives?
   http://archives.postgresql.org/
Re: [GENERAL] greatest/least semantics different between oracle and postgres
"paul rivers" <[EMAIL PROTECTED]> writes:
> But I question the original poster's report of Oracle's behavior. I don't
> have 9.2.0.8 to test, but on 9.2.0.7:

Er ... your example doesn't actually seem to involve greatest() or least()?

			regards, tom lane
Re: [GENERAL] greatest/least semantics different between oracle and postgres
Ben <[EMAIL PROTECTED]> writes: > On Jun 29, 2007, at 9:15 PM, Tom Lane wrote: >> Hmm ... I fear Oracle's behavior is more correct, because if any >> argument is null (ie, unknown), then who can say what the greatest or >> least value is? It's unknown (ie, null). But I suspect our behavior >> is more useful. Comments? > I agree with you. I don't know what the spec says, but it seems clear > Oracle is doing the proper thing and Postgres is doing the useful thing. GREATEST/LEAST aren't in the spec, so there's not much help there. Except ... if they ever do get added to the spec, what do you think the spec will say? The odds it'd contradict Oracle seem about nil. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] greatest/least semantics different between oracle and postgres
I believe the spec says nulls are ignored for min/max. Postgres is, as far
as I know, behaving according to spec.

But I question the original poster's report of Oracle's behavior. I don't
have 9.2.0.8 to test, but on 9.2.0.7:

SQL> select f1, case when f1 is not null then 'not null' else 'null' end if from t;

        F1 IF
---------- --------
         1 not null
         2 not null
           null

SQL> select max(f1) from t;

   MAX(F1)
----------
         2

SQL> select version from v$instance;

VERSION
-----------------
9.2.0.7.0

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of Ben
> Sent: Friday, June 29, 2007 10:18 PM
> To: Tom Lane
> Cc: PostgreSQL General ((EN))
> Subject: Re: [GENERAL] greatest/least semantics different between oracle
> and postgres
>
> On Jun 29, 2007, at 9:15 PM, Tom Lane wrote:
>
> > Hmm ... I fear Oracle's behavior is more correct, because if any
> > argument is null (ie, unknown), then who can say what the greatest or
> > least value is? It's unknown (ie, null). But I suspect our behavior
> > is more useful. Comments?
>
> I agree with you. I don't know what the spec says, but it seems clear
> Oracle is doing the proper thing and Postgres is doing the useful thing.
Re: [GENERAL] greatest/least semantics different between oracle and postgres
On Jun 29, 2007, at 9:15 PM, Tom Lane wrote:

> Hmm ... I fear Oracle's behavior is more correct, because if any
> argument is null (ie, unknown), then who can say what the greatest or
> least value is? It's unknown (ie, null). But I suspect our behavior
> is more useful. Comments?

I agree with you. I don't know what the spec says, but it seems clear
Oracle is doing the proper thing and Postgres is doing the useful thing.
[GENERAL] Sun Cluster and PGSQL
Hello Folks,

I hope that you are well. I was just reading this article:

http://www.sun.com/featured-articles/2007-0627/feature/index.jsp?intcmp=hp2007jun27_cluster_read

Thought that it may be of interest to you. Any thoughts?

Cheers, Aly.

-- 
Aly Dharshi
[EMAIL PROTECTED]

 "A good speech is like a good dress
  that's short enough to be interesting
  and long enough to cover the subject"
Re: [GENERAL] greatest/least semantics different between oracle and postgres
On Sat, Jun 30, 2007 at 00:15:42 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Andrej Ricnik-Bay" <[EMAIL PROTECTED]> writes:
> > On 6/30/07, Bruno Wolff III <[EMAIL PROTECTED]> wrote:
> >> I was recently doing some stuff with greatest() on oracle (9.2.0.8.0) and
> >> noticed that it returned null if ANY of the arguments were null. Out of
> >> curiosity I checked postgres' definition of that function and found that it
> >> returns null only if ALL of the arguments are null.
>
> > W/o knowing the SQL standard (just from what I'd perceive
> > as sensible) I'd say Oracle is broken. :}
>
> Hmm ... I fear Oracle's behavior is more correct, because if any
> argument is null (ie, unknown), then who can say what the greatest or
> least value is? It's unknown (ie, null). But I suspect our behavior
> is more useful. Comments?

In my case I would have preferred Postgres' behavior. I wanted to take the
max of values coming from two columns by taking the greatest of two
subselects. I ended up rewriting the query to take the max of a union. The
annoying thing was I didn't have a good way to use coalesce, as I wanted to
get a null if both subselects were empty. Also, what value could I have used
in a coalesce to guarantee still getting the maximum?

I think having it work like aggregates and ignoring null values is more
convenient. However, if the feature was added for Oracle compatibility,
then not working the same way is an issue. I was just hoping that the fact
that the semantics differ between Oracle and Postgres would get noted
somewhere, so people porting would have a better chance of becoming aware
of the issue.
Re: [GENERAL] greatest/least semantics different between oracle and postgres
On 6/30/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> Hmm ... I fear Oracle's behavior is more correct, because if any
> argument is null (ie, unknown), then who can say what the greatest or
> least value is? It's unknown (ie, null). But I suspect our behavior
> is more useful. Comments?
>
> 			regards, tom lane

But in min/max scenarios NULL values are simply ignored, too, no?

Cheers,
Andrej

-- 
Please don't top post, and don't use HTML e-Mail :} Make your quotes concise.
http://www.american.edu/econ/notes/htmlmail.htm
Re: [GENERAL] greatest/least semantics different between oracle and postgres
"Andrej Ricnik-Bay" <[EMAIL PROTECTED]> writes: > On 6/30/07, Bruno Wolff III <[EMAIL PROTECTED]> wrote: >> I was recently doing some stuff with greatest() on oracle (9.2.0.8.0) and >> noticed that it returned null if ANY of the arguments were null. Out of >> curiosity I checked postgres' definition of that function and found that it >> returns null only if ALL of the arguments are null. > W/o knowing the SQL standard (just from what I'd perceive > as sensible) I'd say Oracle is broken. :} Hmm ... I fear Oracle's behavior is more correct, because if any argument is null (ie, unknown), then who can say what the greatest or least value is? It's unknown (ie, null). But I suspect our behavior is more useful. Comments? regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
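[For the archives, the difference the thread is discussing can be sketched directly in SQL. The PostgreSQL results follow its documented NULL-ignoring behavior; the Oracle result is as the original poster reported it, not verified here.]

```sql
-- PostgreSQL: NULL arguments are ignored; the result is NULL
-- only when ALL arguments are NULL.
SELECT greatest(1, 2, NULL);        -- 2
SELECT greatest(NULL, NULL);        -- NULL

-- Oracle 9.2 (per the report that started this thread): any
-- NULL argument makes the whole result NULL.
-- SELECT greatest(1, 2, NULL) FROM dual;   -- NULL
```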
Re: [GENERAL] greatest/least semantics different between oracle and postgres
On 6/30/07, Bruno Wolff III <[EMAIL PROTECTED]> wrote:
> The following is just FYI. I was recently doing some stuff with greatest()
> on oracle (9.2.0.8.0) and noticed that it returned null if ANY of the
> arguments were null. Out of curiosity I checked postgres' definition of
> that function and found that it returns null only if ALL of the arguments
> are null.

W/o knowing the SQL standard (just from what I'd perceive
as sensible) I'd say Oracle is broken. :}

-- 
Cheers,
Andrej
Re: [GENERAL] Looking for help regarding getting the latest inserted sequence value.
On Thursday 28 June 2007 01:31:33 Mavinakuli, Prasanna (STSD) wrote: > .And getting the max(id) from the table. Instead of that, use select currval('sequence'). currval will " Return the value most recently obtained by nextval for this sequence in the current session. (An error is reported if nextval has never been called for this sequence in this session.) Notice that because this is returning a session-local value, it gives a predictable answer whether or not other sessions have executed nextval since the current session did. " (or so the docs tell me: http://www.postgresql.org/docs/8.2/interactive/functions-sequence.html). jan -- -- Jan de Visser [EMAIL PROTECTED] Baruk Khazad! Khazad ai-menu! -- ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
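[A minimal sketch of the session-local currval() approach; the table and column names are made up for illustration. Note that PostgreSQL 8.2 also supports INSERT ... RETURNING, which answers the original question in a single statement.]

```sql
CREATE TABLE items (id serial PRIMARY KEY, name text);

-- Two steps, but safe under concurrency: currval() reports the
-- value nextval() last returned in THIS session only.
INSERT INTO items (name) VALUES ('foo');
SELECT currval('items_id_seq');

-- One step, on PostgreSQL 8.2 and later:
INSERT INTO items (name) VALUES ('bar') RETURNING id;
```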
Re: [GENERAL] varchar(n) VS text
Kev <[EMAIL PROTECTED]> writes: > On Jun 26, 12:16 am, [EMAIL PROTECTED] (Tom Lane) wrote: >> "Pierre Thibaudeau" <[EMAIL PROTECTED]> writes: >>> I am puzzling over this issue: >>> 1) Is there ever ANY reason to prefer "varchar(n)" to "text" as a column >>> type? >> >> In words of one syllable: no. > If you have any input from untrusted sources (like in a web app) > wouldn't it be easier to attack the server if you had a text field, > like by sending a couple TB of data over that pgsql then needs to > store, on a server that doesn't have that amount of space? Well, the hard limit on a text field (or any other field) is 1Gb, so it'd not be quite as easy as that, even assuming that the webapp doesn't fall over first. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
[GENERAL] greatest/least semantics different between oracle and postgres
The following is just FYI. I was recently doing some stuff with greatest() on oracle (9.2.0.8.0) and noticed that it returned null if ANY of the arguments were null. Out of curiosity I checked postgres' definition of that function and found that it returns null only if ALL of the arguments are null. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] [pgsql] Simple backup utility like mysqldump?
Bjorn Boulder wrote:
> Hello People,
> I'm running PostgreSQL 8.1.1 on my freebsd box. I'm curious if PostgreSQL
> has a utility for backing up small databases like mysqldump or Oracle's
> export utility.
>
> -b

Perhaps google "Postgresql dump" -- the very first result is
http://www.postgresql.org/docs/8.1/static/backup.html

Oisin Glynn
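[The pg_dump/psql round trip described on that page looks roughly like this; the database and user names below are placeholders.]

```shell
# Dump a single database to a plain-SQL script:
pg_dump -U postgres mydb > mydb.sql

# Restore it into a freshly created database:
createdb -U postgres mydb_restored
psql -U postgres mydb_restored < mydb.sql
```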
[GENERAL] [pgsql] Simple backup utility like mysqldump?
Hello People, I'm running PostgreSQL 8.1.1 on my freebsd box. I'm curious if PostgreSQL has a utility for backing up small databases like mysqldump or Oracle's export utility. -b ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] a JOIN on same table, but 'slided over'
It _is_ the optimised version, as you can see from the explain plans posted
in the other mail; the planner shows that the cost is drastically less than
the 'distinct on' version.

For smaller data-sets the 'distinct on' version might seem faster, but for
reasonably larger datasets its performance deteriorates exponentially...
This is because of the nested loops involved in the plan.

I increased your data-set to 10240 rows by executing the following query
10 times:

insert into test select id+(select max(id) from test), thread, info from test;

On such a data-set (which is not very large by any means), the standard SQL
version executes in almost a second; on the other hand, I had to cancel the
EXPLAIN ANALYZE of the 'distinct on' query after letting it run for over
three minutes!!!

postgres=# explain analyze
postgres-# select t1.id as id, t2.id as "id+1",
postgres-#        t1.thread as thread, t2.thread as "thread+1",
postgres-#        t1.info as info, t2.info as "info+1"
postgres-# from test as t1, test as t2
postgres-# where t2.id = ( select min(id) from test as t3 where t3.id > t1.id )
postgres-# order by t1.id asc;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Sort  (cost=2971.36..2996.96 rows=10240 width=24) (actual time=1004.031..1030.116 rows=10239 loops=1)
   Sort Key: t1.id
   Sort Method: external sort  Disk: 416kB
   ->  Merge Join  (cost=840.48..2289.28 rows=10240 width=24) (actual time=834.218..956.595 rows=10239 loops=1)
         Merge Cond: (t2.id = ((subplan)))
         ->  Index Scan using test_id_key on test t2  (cost=0.00..332.85 rows=10240 width=12) (actual time=0.060..24.503 rows=10240 loops=1)
         ->  Sort  (cost=840.48..866.08 rows=10240 width=12) (actual time=834.129..854.776 rows=10240 loops=1)
               Sort Key: ((subplan))
               Sort Method: quicksort  Memory: 928kB
               ->  Seq Scan on test t1  (cost=0.00..158.40 rows=10240 width=12) (actual time=0.196..797.752 rows=10240 loops=1)
                     SubPlan
                       ->  Result  (cost=0.04..0.05 rows=1 width=0) (actual time=0.062..0.064 rows=1 loops=10240)
                             InitPlan
                               ->  Limit  (cost=0.00..0.04 rows=1 width=4) (actual time=0.047..0.050 rows=1 loops=10240)
                                     ->  Index Scan using test_id_key on test t3  (cost=0.00..121.98 rows=3413 width=4) (actual time=0.038..0.038 rows=1 loops=10240)
                                           Index Cond: (id > $0)
                                           Filter: (id IS NOT NULL)
 Total runtime: 1052.802 ms
(18 rows)

Time: 1056.740 ms

postgres=# explain analyze
postgres-# select
postgres-#     distinct on (t1.id)
postgres-#     t1.*, t2.*
postgres-# from
postgres-#     test t1
postgres-#     join test t2 on t2.id > t1.id
postgres-# order by t1.id asc, t2.id asc;
Cancel request sent
ERROR:  canceling statement due to user request
postgres=#

On 6/26/07, Rafal Pietrak <[EMAIL PROTECTED]> wrote:
> OK. Have tried this one; looks like close to 6 times slower than the
> 'non-standard' phrase with 'distinct on'.
>
> On the small dataset that I've included in my original post (ten rows
> of data within TEST), I've run both queries through EXPLAIN ANALYSE,
> with the following result summary (for clarity, I've cut away the
> details from EXPLAIN output):
>
> ---STANDARD
>  Total runtime: 10.660 ms
> ---DISTINCT-ON
>  Total runtime: 1.479 ms
> ---
>
> Would there be ways to optimise the standard query to get the
> performance closer to the non-standard one?
>
> -R
>
> On Tue, 2007-06-26 at 18:05 +0530, Gurjeet Singh wrote:
> > Hi Rafal,
> >
> > Just a note that this is not standard SQL... 'distinct on' is an
> > extension to SQL provided by postgres.
> >
> > Following query utilizes the standard SQL to get the same results:
> >
> > select t1.id as id, t2.id as "id+1",
> >        t1.thread as thread, t2.thread as "thread+1",
> >        t1.info as info, t2.info as "info+1"
> > from test as t1, test as t2
> > where t2.id = ( select min(id) from test as t3 where t3.id > t1.id );
> >
> > HTH
> > --
> > [EMAIL PROTECTED]
> > [EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com
> >
> > 17°29'34.37"N 78°30'59.76"E - Hyderabad
> > 18°32'57.25"N 73°56'25.42"E - Pune
> >
> > Sent from my BlackLaptop device
> >
> > On 6/26/07, Rafal Pietrak <[EMAIL PROTECTED]> wrote:
> >     Marvelous! Thenx!
> >
> >     -R
> >
> >     On Tue, 2007-06-26 at 10:06 +0200, hubert depesz lubaczewski wrote:
> >     > On 6/26/07, Rafal Pietrak <[EMAIL PROTECTED]> wrote:
> >     >     Is there an SQL construct to get it?
> >     >
> >     > select
> >     >     distinct on (t1.id)
> >     >     t1.*, t2.*
> >     > from
> >     >     test t1
> >     >     join test t2 on t2.id > t1.id
> >     > order by t1.id asc, t2.id asc
Re: [GENERAL] varchar(n) VS text
On Jun 26, 12:16 am, [EMAIL PROTECTED] (Tom Lane) wrote: > "Pierre Thibaudeau" <[EMAIL PROTECTED]> writes: > > I am puzzling over this issue: > > 1) Is there ever ANY reason to prefer "varchar(n)" to "text" as a column > > type? > > In words of one syllable: no. If you have any input from untrusted sources (like in a web app) wouldn't it be easier to attack the server if you had a text field, like by sending a couple TB of data over that pgsql then needs to store, on a server that doesn't have that amount of space? Er...I guess the web server would then be the cap? Or whatever other tiers you had between the client and pgsql? Kev ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] what's wrong with this conf file?
The server is a dual Xeon with 4GB RAM and 10k RPM RAID 1. There is no
workload; we are running a test conversion, hence autovacuum is off (I
tried with it on too, to no avail). Pg version is now 8.2.

Here's my pg sql config file, unabridged:

hba_file = '/etc/postgresql/8.2/main/pg_hba.conf'	# host-based authentication file
ident_file = '/etc/postgresql/8.2/main/pg_ident.conf'	# ident configuration file
external_pid_file = '/var/run/postgresql/8.2-main.pid'	# write an extra PID file
port = 5432				# (change requires restart)
max_connections = 100			# (change requires restart)
unix_socket_directory = '/var/run/postgresql'	# (change requires restart)
ssl = true
shared_buffers = 512MB
work_mem = 1024MB
maintenance_work_mem = 1024MB
max_fsm_pages = 30
checkpoint_segments = 60		# me: increased from 3 based on warnings
effective_cache_size = 2048MB
stats_row_level = on
autovacuum = off			# enable autovacuum subprocess?
autovacuum_naptime = 120min		# time between autovacuum runs
datestyle = 'iso, mdy'
lc_messages = 'C'			# locale for system error message
lc_monetary = 'C'			# locale for monetary formatting
lc_numeric = 'C'			# locale for number formatting
lc_time = 'C'				# locale for time formatting
escape_string_warning = off

Here's my actual database schema, up to names:

taskcomments_csv has ~10 million rows: id (a sequence-bound identity
column), taskid (bigint), usernick (varchar(40)), and comment (text).
Index on usernick.

taskcomments is the same, but has userid (int) instead of usernick, and
is empty. Index on userid.

users has 7k rows: nick (varchar(40)), id (int).

The following query:

INSERT INTO taskcomments (comment, userid, taskid)
SELECT comment, users.uid, taskid
FROM taskcomments_csv
INNER JOIN users ON taskcomments_csv.usernick = users.nick

ran for 9 hours before I killed it, taking 1GB RAM and 95% CPU in the
ps -eo report, and doing some painfully slow (~1KB per several seconds)
HD writing per df output (i.e. it's not an HDD bottleneck).

Obviously EXPLAIN ANALYZE is not an option; EXPLAIN for the select
predicts a runtime of less than an hour. There's virtually nothing else
running on the server, and no other workload on the database.

Can someone tell me what is wrong with this, and how do I fix it? Or how
else do I diagnose it? I understand I can do it via updates within the
same table, and that's what I am about to do, but for other tables that
is not an option, because as I have already learned, dropping a varchar
column after conversion and running VACUUM FULL to get rid of its data
can take days.
Re: [GENERAL] Interval overflow?
On Fri, 2007-06-29 at 17:31 -0400, Tom Lane wrote: > It looks like his case is overflowing the int8 microseconds field of > the interval. On my machine, the seconds field is double so it does not > overflow, but interval_out tries to convert the computed hours value > to int32, and *that* overflows. > > The best we can do for Jeff is throw an error in interval addition. That sounds reasonable to me. > In the float case it could be wished that we could print any result > we can store ... > Agreed. Regards, Jeff Davis ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] How does one implement lists, or arrays, in pl/pgsql?
How does one implement a simple, general-purpose, assignable array (or list) in pl/pgsql? From what I've found/read, it appears that you can declare static, read-only arrays. I'm guessing (and please correct me if I'm wrong) that PG does not support modifiable arrays. Rather, one might consider defining a temporary table and using SQL to insert/select to/from it. True?
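[For the archives: PL/pgSQL array variables are in fact assignable. A minimal sketch, with function and variable names invented for illustration:]

```sql
CREATE OR REPLACE FUNCTION collect_squares(n int) RETURNS int[] AS $$
DECLARE
    result int[] := '{}';          -- start from an empty array
BEGIN
    FOR i IN 1 .. n LOOP
        result := array_append(result, i * i);   -- grow it in place
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- SELECT collect_squares(4);  yields  {1,4,9,16}
```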
Re: [GENERAL] LC_CTYPE and matching accented chars
Alvaro Herrera wrote:
> Martin Langhoff wrote:
>> # this is apparently the right way to
>> # select base character based on the "equivalence class"
>> # as defined in the LC_CTYPE
>> =# select * from test where value ~ 'mart[=i=]n';
>
> I think it would be much easier if you did something like
>
> select * from test where lower(to_ascii(value)) = lower(to_ascii('martín'));
>
> When to_ascii doesn't work (for example because it doesn't work in UTF8)
> you may want to use convert() to recode the text to latin1 or latin9.

Well, with the example above to_ascii doesn't work:

  select to_ascii(value) from test;
  ERROR:  encoding conversion from UTF8 to ASCII not supported

And neither does convert:

  select convert(value using utf8_to_ascii) from test;
  ERROR:  character 0xc3 of encoding "MULE_INTERNAL" has no equivalent in "SQL_ASCII"

  select convert('martín' using utf8_to_ascii) from test;
  ERROR:  character 0xc3 of encoding "MULE_INTERNAL" has no equivalent in "SQL_ASCII"

Any other alternatives?

BTW, this is on Pg 8.2, as seen on Ubuntu Feisty PPC:

  $ dpkg -l postgresql-8.2
  ii  postgresql-8.2  8.2.4-0ubuntu0.7.04

cheers,
martín

-- 
 Martin @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington
 WEB: http://catalyst.net.nz/    PHYS: Level 2, 150-154 Willis St
 OFFICE: +64(4)916-7224  UK: 0845 868 5733 ext 7224  MOB: +64(21)364-017
      Make things as simple as possible, but no simpler - Einstein
[GENERAL] SQL problem..
Hello,

I'm an Access/SQL novice and I have an SQL problem. I have the following table Price:

FuelID  PriceDate   Price
LPG     1/05/2007   0,2
LPG     13/05/2007  0,21
SPS     2/05/2007   1,1
SPS     15/05/2007  1,08

And I have to make the following query:

FuelID  PriceDate_from  PriceDate_to  Price
LPG     1/05/2007       13/05/2007    0,2
SPS     2/05/2007       15/05/2007    1,1
LPG     13/05/2007                    0,21
SPS     15/05/2007                    1,08

I tried this:

SELECT FuelID, min(FuelDate) AS PriceDate_from, max(FuelDate) AS PriceDate_to
FROM Price
GROUP BY FuelID;

Problem is, when I put Price in the select, I get the error 'Price not part of an aggregate function' :s

Either way, it doesn't work: I only get one PriceDate_from and one PriceDate_to if I use min & max, while there should be several from...to dates for a particular fuel. How can I solve this?
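[One portable way to get the "next date per fuel" is a correlated subquery rather than GROUP BY. A sketch, assuming PriceDate is stored as a real date type; the subquery returns NULL for the latest row of each fuel, matching the blank PriceDate_to in the desired output.]

```sql
SELECT p.FuelID,
       p.PriceDate AS PriceDate_from,
       (SELECT MIN(p2.PriceDate)
          FROM Price p2
         WHERE p2.FuelID = p.FuelID
           AND p2.PriceDate > p.PriceDate) AS PriceDate_to,
       p.Price
  FROM Price p
 ORDER BY p.PriceDate, p.FuelID;
```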
[GENERAL] table disk space usage query?
Hello all - I was looking for a way to find out how much disk space each table is using. I stumbled upon this page ( http://www.ffnn.nl/pages/articles/linux/postgresql-tips-and-tricks.php ) which gave me a query to show the number of disk pages per object. Given that a page is 8kb, I added these calculated columns to the query: SELECT relname, reltuples, relpages, relpages * 8 AS relpagesKB, (relpages * 8 )/1024 AS relpagesMB FROM pg_class ORDER BY relpages DESC ; Is this correct? ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
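[The relpages arithmetic is right in principle (pages are 8kB by default), but relpages is only refreshed by VACUUM/ANALYZE, so it can be stale. If the server is 8.1 or later, the dedicated size functions read the actual file sizes; a sketch:]

```sql
SELECT relname,
       pg_size_pretty(pg_relation_size(oid)) AS table_size
  FROM pg_class
 WHERE relkind = 'r'              -- ordinary tables only
 ORDER BY pg_relation_size(oid) DESC;
```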
[GENERAL] Execution variability
Hi all.

I understand this can be a ridiculous question for most of you. The very same query on the very same db shows very variable timings. I'm the only client on an unpopulated server, so I'd expect rather constant timing. Instead, for a while the query becomes very slow: the CPU reaches 60 to 70% and the time needed is about 1.5 minutes, again with just me on it. Normally the same query raises the usage to a mere 5% to 7%, with timings within tenths of a second.

I have disabled both autovacuum and the "stats_start_collector" in an attempt to remove possible reasons for the slowdown. No luck.

Do you have any hints?

-- 
Vincenzo Romano
-= NotOrAnd.IT Information Technologies =-
tel +39 0823 454163 | cel +39 339 8083886 | fax +39 02 700506964
Re: [GENERAL] Why does postgres take time to shut down sometimes?
Hi,

I have an application (multi-threaded C++) on the Solaris 9 platform, and it spawns approximately 18 Postgres connections. When I try to shut down Postgres (after the application is taken down), I have noticed that it sometimes takes more than one minute (the default for 'assuming' a graceful shutdown). Any idea why it is that way? Also, if anyone can detail the steps the Postgres postmaster executes when a shutdown is called, that would be great.

Thanks and regards,
Yateen

Vishnu Joshi
Starent Networks (India) Private Limited,
17, Phase I, Infotech Park, Hinjewadi, Pune - 411057
Phone-91-20-22930100 Fax-91-20-22934723
[GENERAL] Looking for help regarding getting the latest inserted sequence value.
Hello All,

We are looking for your help. The scenario we need to address is: there are 2 threads, each in a separate transaction, and both insert into the same table, with the sequence-number field incremented automatically for each of them. The problem we are facing is that we need to get back the id inserted for that particular record, as it is used in some other places.

Right now we are doing it in 2 steps: inserting the record into the table, and then getting max(id) from the table. Now the problem is, if another thread also does an insertion and commits its transaction, both threads can get back the same id, which is not desirable in our case.

It would be really helpful to know the form of a query which inserts a record and also returns the latest inserted ID for that record in a single query.

Thanks,
Prasanna.
[GENERAL] installing pljava on windows xp
Hi,

When I try to install PL/Java 1.3 on PostgreSQL 8.2 (or 8.1), I get an error stating that it cannot load pljava.dll from the location specified in the config file (while the DLL is clearly there). I have followed the PostgreSQL and PL/Java manual installation instructions exactly, with no luck. This error is listed as fixed on the PL/Java wiki, yet I'm still experiencing it.

Any ideas?

Pouria
Re: [GENERAL] growing disk usage problem: alternative solution?
In article <[EMAIL PROTECTED]>, Vivek Khera <[EMAIL PROTECTED]> wrote: % % On Jun 26, 2007, at 3:31 PM, Bill Moran wrote: % % > VACUUM FULL and REINDEX are not required to maintain disk usage. % > Good old- % > fashoned VACUUM will do this as long as your FSM settings are high % > enough. % > % % I find this true for the data but not necessarily for indexes. The % other week I reindexed a couple of O(100,000,000) row tables and % shaved about 20Gb of index bloat. Those tables are vacuumed % regularly, but we do a large data purge every few weeks. I think % that causes some issues. I'm running 8.1. If you have an index on some monotonically increasing field (i.e., a sequence or date), and you purge by deleting from the low end of this index, then that space won't be reclaimed by vacuum. Vacuum full won't help, either. You (only) need to rebuild the affected indices. -- Patrick TJ McPhee North York Canada [EMAIL PROTECTED] ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Move a database from one server to other
In article <[EMAIL PROTECTED]>, Nicholas Barr <[EMAIL PROTECTED]> wrote:

% Only copy the data directory if both servers are offline and not running
% and if both servers use the same version of postgres. This method is not
% recommended AFAIK.

It _is_ recommended for setting up a warm-standby server (it's the only
way). I copy database clusters around all the time. If the database is
shut down, there's no harm in it, and it's usually faster and always
simpler than dump/restore. Copying a snapshot of a running system has
only a teeny little bit of harm in it, and you don't even have to shut
down the db.

-- 
Patrick TJ McPhee
North York  Canada
[EMAIL PROTECTED]
[GENERAL]
Good day, webmaster.

I want to ask some help from you. My problem is this: I have already installed Postgres on a certain computer. I have already created tables and put the necessary data in them. One day, my operating system bugged down (I am using Windows XP). How could I possibly retrieve the data in my database? I have not created a backup. If you have any idea with regard to this problem of mine, I would appreciate it a lot. Thanks... god bless...
Re: [GENERAL] Interval overflow?
Michael Glaesemann <[EMAIL PROTECTED]> writes:
> On Jun 29, 2007, at 16:07 , Jeff Davis wrote:
>> Is this expected behavior or a bug?

> Bug. In general the range checking in the date time code can
> definitely be improved.

Apparently Jeff's using --enable-integer-datetimes; what I see is

regression=# select '20 hours'::interval + '20 hours'::interval;
     ?column?
------------------
 2147483647:00:00
(1 row)

It looks like his case is overflowing the int8 microseconds field of the
interval. On my machine, the seconds field is a double, so it does not
overflow, but interval_out tries to convert the computed hours value to
int32, and *that* overflows.

The best we can do for Jeff is to throw an error in interval addition.
In the float case, it could be wished that we could print any result we
can store ...

			regards, tom lane
Re: [GENERAL] Interval overflow?
On Jun 29, 2007, at 16:07 , Jeff Davis wrote: Is this expected behavior or a bug? Bug. In general the range checking in the date time code can definitely be improved. Michael Glaesemann grzm seespotcode net ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
[GENERAL] Interval overflow?
Is this expected behavior or a bug?

=> select version();
                                    version
--------------------------------------------------------------------------------
 PostgreSQL 8.2.4 on i386-portbld-freebsd6.2, compiled by GCC cc (GCC) 3.4.6 [FreeBSD] 20060305
(1 row)

=> select '20 hours'::interval + '20 hours'::interval;
         ?column?
--------------------------
 -1124095576:01:49.551616
(1 row)

Regards,
Jeff Davis
Re: [GENERAL] [ADMIN] i need a rad/ide open source for work with postgresql
Look at: http://zeoslib.sourceforge.net/

Adam

P.S. And remember that you can use libpq.dll too -- the fastest way to work with PostgreSQL.

Mario Jose Canto Barea writes:
> Why can you make a good relational database server like postgresql 8.1,
> and not make an open-source RAD/IDE for programming with postgresql 8.1,
> like Delphi/C++Builder/Progress 4GL?
>
> thanks
Re: [GENERAL] date time function
On Jun 29, 2007, at 13:17 , John D. Burger wrote: I can't anything in the docs that explain how intervals print out. They seem to show like this: > select now() - '1990-01-01'; ?column? --- 6388 days 13:06:26.3605600595 Without being anchored with a timestamp, we have no way to know how long a given month is in the result, so it plays it safe by returning everything in days. or like this: > select now() - current_date; ?column? - 14:06:46.119788 unless you use age(), which supposedly also returns an interval: > select age(now(), '1990-01-01'); age - 17 years 5 mons 28 days 14:08:04.524803 Why do the first and third intervals print out differently? The timestamp[tz]_age functions currently don't use the same algorithm the timestamp_mi code does. This should probably be reconciled in the future so results are consistent. But age() is documented as simply producing an interval - where is the magic that makes the first and third results above look different? src/backend/utils/adt/timestamp.c Ah, wait a minute - does this have to do with the varying number of days in different months? Yes. Michael Glaesemann grzm seespotcode net ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] date time function
"John D. Burger" <[EMAIL PROTECTED]> writes: > Why do the first and third intervals print out differently? The underlying storage is months, days, and seconds --- "1 year" is the same as "12 months", but not the same as "365 days". IIRC plain timestamp subtraction produces an interval with days and seconds but the month part is always 0. You can use justify_days to convert days to months (at an assumed 30 days/month). I'm not sure exactly what age() does but it's probably producing a value that has nonzero months to start with. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] db replication
andrew quaresma wrote: > i developing an aplication with a postgresql+postgis... i need to replicate > the database to various pda, as well as insure the synchronization between > all repliques... > > can someone with experience tell me what is the best free solution to my > problem?... There is hardly any experience with this out there, as such a solution does not exist :) You might abuse dblink or Slony to hack something together. -- Best regards, Hannes Dorbath
Re: [GENERAL] date time function
I can't find anything in the docs that explains how intervals print out. They seem to show like this:

> select now() - '1990-01-01';
 ?column?
--------------------------------
 6388 days 13:06:26.3605600595

or like this:

> select now() - current_date;
 ?column?
-----------------
 14:06:46.119788

unless you use age(), which supposedly also returns an interval:

> select age(now(), '1990-01-01');
 age
------------------------------------------
 17 years 5 mons 28 days 14:08:04.524803

Why do the first and third intervals print out differently? I see this in the docs for age: "Subtract arguments, producing a 'symbolic' result that uses years and months". But age() is documented as simply producing an interval - where is the magic that makes the first and third results above look different? Ah, wait a minute - does this have to do with the varying number of days in different months? Thanks. - John D. Burger MITRE
Re: [GENERAL] Standby servers and incrementally updated backups
On Jun 29, 2007, at 10:15 AM, Jim Nasby wrote: On Jun 25, 2007, at 4:54 PM, Erik Jones wrote: On Jun 25, 2007, at 4:40 PM, Simon Riggs wrote: On Mon, 2007-06-25 at 16:00 -0500, Erik Jones wrote: On Jun 25, 2007, at 3:40 PM, Simon Riggs wrote: If I'm correct, then for large databases wherein it can take hours to take a base backup, is there anything to be gained by using incrementally updated backups? If you are certain there are parts of the database not touched at all between backups. The only real way to be sure is to take file level checksums, or you can trust file dates. Many backup solutions can do this for you. Wait, um, what? I'm still not clear on why you would want to run a backup of an already caught up standby server. Sorry, misread your question. While you are running a warm standby config, you will still want to take regular backups for recoverability and DR. These are additional backups, i.e they are not required to maintain the warm standby. You can backup the Primary, or you can backup the Standby, so most people will choose to backup the Standby to reduce the overhead on the Primary. Ok, yeah, that's what I was thinking and is where we are headed in the next month or so here at work: we already have a standby running and will be adding a second standby server that we will be using for snapshot backups (packaged with the pertinent wal files...) as well as periodically bringing the second standby up to run dumps from just to cover all of our bases and also to be able to take our main primary server down for maintenance and still have both a production and standby running. I guess I was really just wanting to make sure I wasn't missing some other big usage for incremental backups from the standby. Note that (currently) once you bring a standby up you can't go back to standby mode without restoring the filesystem level backup you started with and replaying everything. Right, got that. 
Erik Jones Software Developer | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate & market in style. Visit us online at http://www.myemma.com ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Standby servers and incrementally updated backups
On Jun 25, 2007, at 4:54 PM, Erik Jones wrote: On Jun 25, 2007, at 4:40 PM, Simon Riggs wrote: On Mon, 2007-06-25 at 16:00 -0500, Erik Jones wrote: On Jun 25, 2007, at 3:40 PM, Simon Riggs wrote: If I'm correct, then for large databases wherein it can take hours to take a base backup, is there anything to be gained by using incrementally updated backups? If you are certain there are parts of the database not touched at all between backups. The only real way to be sure is to take file level checksums, or you can trust file dates. Many backup solutions can do this for you. Wait, um, what? I'm still not clear on why you would want to run a backup of an already caught up standby server. Sorry, misread your question. While you are running a warm standby config, you will still want to take regular backups for recoverability and DR. These are additional backups, i.e they are not required to maintain the warm standby. You can backup the Primary, or you can backup the Standby, so most people will choose to backup the Standby to reduce the overhead on the Primary. Ok, yeah, that's what I was thinking and is where we are headed in the next month or so here at work: we already have a standby running and will be adding a second standby server that we will be using for snapshot backups (packaged with the pertinent wal files...) as well as periodically bringing the second standby up to run dumps from just to cover all of our bases and also to be able to take our main primary server down for maintenance and still have both a production and standby running. I guess I was really just wanting to make sure I wasn't missing some other big usage for incremental backups from the standby. Note that (currently) once you bring a standby up you can't go back to standby mode without restoring the filesystem level backup you started with and replaying everything. 
-- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Need help with generic query
No idea on the function, but why not have a 'master' ticket table and have the ones in each schema inherit from it? Then you could query all tables by just querying the master table.

On Jun 20, 2007, at 5:55 AM, David Abrahams wrote:

Background: I have a number of schemas all of which contain a "ticket" table having the same columns. The goal of the function xticket1 below is to collect all ticket rows satisfying some condition from all those schemas, and additionally label each one by adding a new column containing the name of the schema it belongs to.

-- Create a temporary table with the right layout
-- for our function's return type (know a better way?)
CREATE TEMP TABLE tix ( LIKE master.ticket );
ALTER TABLE tix ADD COLUMN schema_name text;

CREATE OR REPLACE FUNCTION xticket1(condition TEXT) RETURNS SETOF tix AS $BODY$
DECLARE
    scm RECORD;
    result RECORD;
BEGIN
    FOR scm IN SELECT schema_name FROM public.instance_defs LOOP
        FOR result IN EXECUTE 'SELECT ' || scm.schema_name || '.ticket.*,'
            || ' ''' || scm.schema_name || ''' AS schema_name'
            || ' FROM ' || scm.schema_name || '.ticket'
            || ' WHERE ' || condition
        LOOP
            RETURN NEXT result;
        END LOOP;
    END LOOP;
    RETURN;
END;
$BODY$ LANGUAGE plpgsql;

The problem is, psql is complaining: ERROR: wrong record type supplied in RETURN NEXT. I don't know why, and I don't know how to get psql to give me useful debugging info that would help me discover why. Can someone help? Thanks

-- Jim Nasby [EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
Re: [GENERAL] pg_catalog.pg_get_serial_sequence() returns NULL
On Jun 26, 2007, at 10:28 AM, Sergey Karin wrote: I use PG 8.1.5. I execute the following commands in psql:

create table t_table (gid serial, name varchar);
select pg_catalog.pg_get_serial_sequence('t_table', 'gid');
 pg_get_serial_sequence
------------------------
 public.t_table_gid_seq

create table t_table_1() inherits (t_table);
\d t_table_1
           Table "public.t_table_1"
 column |       Type        | Modifiers
--------+-------------------+--------------------------------------------------------
 gid    | integer           | not null default nextval('t_table_gid_seq'::regclass)
 name   | character varying |
Inherits: t_table

select pg_catalog.pg_get_serial_sequence('t_table_1', 'gid');
 pg_get_serial_sequence
------------------------

The issue is that t_table_1.gid isn't actually a SERIAL column... it's an int that pulls a default value from the t_table_gid_seq sequence. Note that that sequence is the one defined for t_table. In this case, for right now you're stuck hard-coding the sequence name in, or referring to the parent table :(. How can I get the name of the sequence used by the gid column of the partition table? And why does pg_get_serial_sequence() return NULL for t_table_1? It also returns NULL if I create a new sequence manually and use ALTER TABLE ALTER COLUMN SET DEFAULT nextval (::regclass). In http://www.postgresql.org/docs/8.1/static/datatype.html#DATATYPE-SERIAL I can read that type SERIAL is equivalent to: CREATE SEQUENCE tablename_colname_seq; CREATE TABLE tablename ( colname integer DEFAULT nextval('tablename_colname_seq') NOT NULL ); Thanks in advance Sergey Karin -- Jim Nasby [EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
Re: [GENERAL] commit transaction failed
I don't know about the error, but I think there's far more efficient ways to do what you're doing see below: On Jun 20, 2007, at 1:25 AM, [EMAIL PROTECTED] wrote: I m having a problem while calling the procedure in prostgresql 8.2 from adoconnection, It gets executed for some time and after 5-10 call it gives error startTransaction failed or CommitTransaction Failed. CREATE OR REPLACE FUNCTION sp_getnewfiles(IN strserverid character varying, IN nmaxcount integer, OUT stroutrecno character varying) AS $BODY$ DECLARE cur RECORD; i integer; BEGIN i:=0; LOCK TABLE inputtable IN ROW EXCLUSIVE MODE NOWAIT; Why are you locking the table? You likely don't need to. I suspect that at most you just need a serialized transaction. FOR cur IN select recno from InputTable where FileState=0 order by recno limit nMaxCount for update LOOP if i=0 then strOutRecNo:='recno='; else strOutRecNo:=strOutRecNo || ' or recno='; end if; strOutRecNo:=strOutRecNo||cur.recno; Rather than a giant OR clause, have you considered an IN list? I'd look at populating an array of values, and then using array_to_string to turn that into a list of numbers. update inputtable set filestate=1,serverid=strServerID where recno=cur.recno; i:=i+1; END LOOP; EXCEPTION WHEN no_data_found THEN --DO NOTHING WHEN OTHERS THEN --rollback; RAISE EXCEPTION 'some error'; Why test for other exceptions if you're just going to re-raise them? Having said all that, I think a function is simply the wrong way to go about this. Instead I think you want is: UPDATE input_table SET file_state = 1, server_id = ... WHERE file_state = 0 RETURNING * ; (Sorry, my brain/fingers don't do camel case. :P) -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
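The reviewer's suggestion above - collect the ids and emit one IN list (what array_to_string() does on the SQL side) rather than concatenating a growing OR chain - can be sketched in illustrative Python. The column name and values are just placeholders from the thread's example:

```python
# Sketch of the two ways of building the predicate string. The OR
# chain grows one "or recno=N" clause per row; the IN list is a single
# clause with a comma-separated value list.
def or_chain(recnos):
    return " or ".join(f"recno={r}" for r in recnos)

def in_list(recnos):
    return "recno in (" + ",".join(str(r) for r in recnos) + ")"

ids = [101, 102, 105]
print(or_chain(ids))   # recno=101 or recno=102 or recno=105
print(in_list(ids))    # recno in (101,102,105)
```

As the reply also notes, for this particular job the whole string-building loop disappears if you use a single `UPDATE ... WHERE file_state = 0 RETURNING *` instead.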
Re: [GENERAL] CREATE FUNCTION ... performance boost?
Jan Danielsson wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA512 Hello all, In order to speed up commonly used queries, I use prepared statements. I assume that 'prepare' tells some database query planner to take a look at a query, and do all preparations for it, then store those preparations somewhere for when it's needed. You might well be slowing things down. PREPARE/EXECUTE can't replan as you change the parameters to the query. Now, in many cases that won't matter, e.g. "SELECT * FROM foo WHERE x=? AND y=?" where there's a unique index on x or y. What you are gaining is the cost of planning on each call. When I use CREATE FUNCTION to create a function, does that also get "planned" automatically (plsql, specifically). My gut feeling is that it doesn't (mainly since it can be any external language). Well, plpgsql will have its plans "compiled" on the first call. That will last as long as the connection. The reason I'm asking is because I'd like to have permanently stored "prepared" statements in a database. On even intervals, I connect to it, and perform a number of queries. Many of them are the same statements being run over and over. And these prepared statements really are "hard coded", so they would do fine in my database initialization script. I assume that "functions" aren't what I am looking for.. But is is possible to store preparations some way so they survive past connection/disconnections? Nope. Plans last only until the end of a session. What you might be able to do is use a connection pool/proxy to maintain connections to the db while you connect to the proxy. That way the connection can be made to last longer. Before you go to too much effort though, I'd check that you're really going to gain a useful performance boost. -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] CREATE FUNCTION ... performance boost?
On 6/29/07, Jan Danielsson <[EMAIL PROTECTED]> wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA512 Hello all, In order to speed up commonly used queries, I use prepared statements. I assume that 'prepare' tells some database query planner to take a look at a query, and do all preparations for it, then store those preparations somewhere for when it's needed. When I use CREATE FUNCTION to create a function, does that also get "planned" automatically (plsql, specifically). My gut feeling is that it doesn't (mainly since it can be any external language). by the way, query plans are exposed through spi interface so all pl languages have ability to store plans. pl/sql saves plans as well as static sql in pl/pgsql. ymmv on various other pls. merlin ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] CREATE FUNCTION ... performance boost?
On 6/29/07, Jan Danielsson <[EMAIL PROTECTED]> wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA512 Hello all, In order to speed up commonly used queries, I use prepared statements. I assume that 'prepare' tells some database query planner to take a look at a query, and do all preparations for it, then store those preparations somewhere for when it's needed. When I use CREATE FUNCTION to create a function, does that also get "planned" automatically (plsql, specifically). My gut feeling is that it doesn't (mainly since it can be any external language). 'execute' is pretty much the fastest possible way to execute a query (or, even better, PQexecPrepared via C)...it's less cpu cycles than a function although barely. I would suggest maybe rethinking your query into a view and querying that with or without prepare. merlin ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] CREATE FUNCTION ... performance boost?
Hello all, In order to speed up commonly used queries, I use prepared statements. I assume that 'prepare' tells some database query planner to take a look at a query, and do all preparations for it, then store those preparations somewhere for when it's needed. When I use CREATE FUNCTION to create a function, does that also get "planned" automatically (plsql, specifically)? My gut feeling is that it doesn't (mainly since it can be any external language). The reason I'm asking is because I'd like to have permanently stored "prepared" statements in a database. On even intervals, I connect to it, and perform a number of queries. Many of them are the same statements being run over and over. And these prepared statements really are "hard coded", so they would do fine in my database initialization script. I assume that "functions" aren't what I am looking for.. But is it possible to store preparations some way so they survive past connection/disconnections? -- Kind regards, Jan Danielsson
Re: [GENERAL] db replication
On Fri, Jun 29, 2007 at 03:42:32PM +0100, andrew quaresma wrote: > hi.. > > i developing an aplication with a postgresql+postgis... i need to replicate > the database to various pda, as well as insure the synchronization between > all repliques... > > can someone with experience tell me what is the best free solution to my > problem?... AFAIK there isn't one. PDA replication requires disconnected multimaster asynchronous replication, and I don't know of a project that has delivered that yet. A -- Andrew Sullivan | [EMAIL PROTECTED] This work was visionary and imaginative, and goes to show that visionary and imaginative work need not end up well. --Dennis Ritchie ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Query optimization (select single record and join)
Orest Kozyar wrote: What I am wondering is whether the database first eliminate all rows in tableA that don't meet the criteria before performing the join, or does it perform the join first then eliminate all records that don't meet the criteria? If you use "EXPLAIN SELECT ..." then PostgreSQL will tell you how it's going to execute your query. If you use "EXPLAIN ANALYZE SELECT ..." it will tell you what it actually did too. See manuals and google for details and discussion. Oh, and if you don't know about VACUUM and ANALYZE you'll probably want to read up on those too. -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] xpath_string namespace issue...
I'm not sure what I'm missing here... :) select xpath_string($xml$ baz $xml$ ,'//f:bar/text()') This does not give me back "baz" as I was expecting it to... How does one clue-in the xpath functions to the namespaces in the XML document?
[GENERAL] Query optimization (select single record and join)
I have the following sql: CREATE OR REPLACE FUNCTION foo (in x integer) RETURNS float AS $$ SELECT max(tableB.columnC) FROM tableA inner join tableB on (tableA.columnA = tableB.columnB) WHERE tableA.columnA = x ... (additional code to select which of the many "foo" records referencing the same row in the foreign table is the one we want) What I am wondering is whether the database first eliminates all rows in tableA that don't meet the criteria before performing the join, or does it perform the join first and then eliminate all records that don't meet the criteria? Thanks, Orest
[GENERAL] db replication
hi.. i'm developing an application with postgresql+postgis... i need to replicate the database to various PDAs, as well as ensure the synchronization between all replicas... can someone with experience tell me what is the best free solution to my problem?... thanks... andy...
Re: [GENERAL] Simple backup utility like mysqldump?
On Jun 29, 12:32 am, Bjorn Boulder <[EMAIL PROTECTED]> wrote: > Hello People, > > I'm running PostgreSQL 8.1.1 on my freebsd box. > > I'm curious if PostgreSQL has a utility for backing up small databases > like mysqldump or Oracle's export utility. > > -b See: http://www.postgresql.org/docs/8.2/static/backup.html ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] AutoVacuum Behaviour Question
Bruce McAlister wrote: > Denis Gasparin wrote: > > RESET SESSION command is available only in 8.2 branch, isn't it? > > I tried to issue the command in a 8.1 server and the answer was: ERROR: > > unrecognized configuration parameter "session" > > I had a look in our configuration and there is a "session" option that > comes up if you type "RESET " and press TAB twice. However, if you do > try to run it, it comes back with: > > blueface-crm=# RESET session ; > ERROR: unrecognized configuration parameter "session" If you continue tab-completing you'll notice that it's actually "reset session authorization" which is a different animal completely. > We're running PostgreSQL 8.2.4. Alvaro said it was only going to be > available in 8.3. I hope they backport it to 8.2 though, No way. > 90% of our connections are via connection pooled drivers, so being > able to reset the session after each transaction would be a handy way > to ensure the environment is sane for the next transaction :) Seems you'll be one of the early adopters of 8.3 solely for this reason ;-) -- Alvaro Herrerahttp://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] OFFSET and LIMIT - performance
And what about using cursors and MOVE? Which is faster - OFFSET/LIMIT or CURSOR/MOVE? Best Regards, Kaloyan Iliev Tom Lane wrote: "Jan Bilek" <[EMAIL PROTECTED]> writes: I'm using PGDB with JDBC. In my app i need to select only a portion of all available rows. I know i can do it two ways: 1. I can use OFFSET and LIMIT SQL statements or 2. I can select all rows and then filter the requested portion in Java. My question - Does the second way significantly affect performance, especially when used with JDBC? Network transmission costs alone would make the second way a loser. Large OFFSETs are pretty inefficient because the backend generates and discards the rows internally ... but at least it never converts them to external form or ships them to the client. Rows beyond the LIMIT are not generated at all. regards, tom lane
Re: [GENERAL] Create user
Ashish Karalkar wrote: > I am trying to create a user and i dont understand why it > is showing me any massage even after giving parameter -e to > the command. > > command : > > C:\Program Files\PostgreSQL\8.2\bin>createuser ashish -S -d \ > -R -l -P -E -e -U postgres Use -q instead of -e. Yours, Laurenz Albe ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Create user
Ashish Karalkar wrote: > Hello All, > > I am trying to create a user and i dont understand why it is showing > me any massage even after giving parameter -e to the command. > > command : > > C:\Program Files\PostgreSQL\8.2\bin>createuser ashish -S -d -R -l -P -E > -e -U postgres > Enter password for new role: > Enter it again: > Password: > > The following massage appers, I dont want to display this massage > > CREATE ROLE ashish ENCRYPTED PASSWORD > 'md5e34280fa67e9a7d842044e45f2d5d325' NOSU > PERUSER CREATEDB NOCREATEROLE INHERIT LOGIN; > CREATE ROLE > > is there any option? Remove the -e option. That turns on the SQL echoing. Regards, Dave. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Create user
am Fri, dem 29.06.2007, um 13:31:03 +0530 mailte Ashish Karalkar folgendes: > Hello All, > > I am trying to create a user and i dont understand why it is showing me any > massage even after giving parameter -e to the command. Maybe you should use -q: -q --quiet Do not display a response. Andreas -- Andreas Kretschmer Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header) GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] OFFSET and LIMIT - performance
On Thu, 2007-06-28 at 11:36 -0700, David Wall wrote: > > Network transmission costs alone would make the second way a loser. > > > > Large OFFSETs are pretty inefficient because the backend generates and > > discards the rows internally ... but at least it never converts them to > > external form or ships them to the client. Rows beyond the LIMIT are > > not generated at all. > > > Some of this would depend on the query, too, I suspect, since an ORDER > BY would require the entire result set to be determined, sorted and then > the limit/offset could take place. In 8.3 a LIMIT clause will be evaluated at the same time as ORDER BY, so that the full sort cost is avoided. This means that queries with LIMIT are more likely to return in constant time, whether you have no ORDER BY, an ORDER BY on an index, or an ORDER BY with no index. So indexes specifically to provide a fast ORDER BY/LIMIT are no longer required. Courtesy of Greg Stark. -- Simon Riggs EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
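The "LIMIT evaluated at the same time as ORDER BY" behavior described above is a top-N sort: keep only LIMIT-many rows in a bounded heap while scanning, instead of sorting the whole result set. This illustrative Python sketch shows the idea (it is not the backend's implementation):

```python
# Top-N sort sketch: for ORDER BY col LIMIT 10, a bounded heap scans
# all n rows but only ever holds 10, so the cost is roughly
# O(n log LIMIT) rather than the O(n log n) of a full sort.
import heapq
import random

rows = [random.random() for _ in range(100_000)]

top10_full = sorted(rows)[:10]          # full sort, then take 10
top10_heap = heapq.nsmallest(10, rows)  # bounded-heap top-N
assert top10_full == top10_heap         # same answer, cheaper to get
```

This is why, as the post says, a query with LIMIT can return in near-constant memory and without a dedicated index just to serve the ORDER BY.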
[GENERAL] Create user
Hello All, I am trying to create a user and I don't understand why it is showing me this message even after giving parameter -e to the command. command : C:\Program Files\PostgreSQL\8.2\bin>createuser ashish -S -d -R -l -P -E -e -U postgres Enter password for new role: Enter it again: Password: The following message appears; I don't want to display this message: CREATE ROLE ashish ENCRYPTED PASSWORD 'md5e34280fa67e9a7d842044e45f2d5d325' NOSUPERUSER CREATEDB NOCREATEROLE INHERIT LOGIN; CREATE ROLE is there any option? Thanks in advance Ashish...
Re: [GENERAL] a JOIN on same table, but 'slided over'
Thank you All for this extensive help! BTW: google helps, once you know that the construct is called "correlated subquery" - there is no way to get an answer before one knows the question :) Thenx again! -R On Thu, 2007-06-28 at 23:23 +0530, Gurjeet Singh wrote: > On 6/28/07, Alban Hertroys <[EMAIL PROTECTED]> wrote: > > This is called a 'correlated subquery'. Basically the subquery > is > performed for each record in the top query. > > Google gave me this: > > http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=/sqlp/rbafycorrs.htm > > I think the sub-section titled "Example: Correlated subquery in a > WHERE Clause" is appropriate to explain our query at hand. > > Simply put, correlated queries are like nested FOR loops of any high > level programming language. > > 1. FOR( record R in result of outer-query ) > 2. execute inner query, using any R.colname1 > 3. compare R.colname2 with the result of the correlated-subquery > 4. produce R in output, iff the above comparison succeeded > > Line 2 can be treated as another FOR loop, where every record of > inner-query is being processed, and comparing the local expressions > with a column (or expression) that comes from outer query. > > The comparison in step 3 can be against any expression, with columns > or against a pure constant too! > > For example, the following query produces the name of all the > employees, who manage at least one other employee. > > select empno, ename > from emp e1 > where exists (select 1 >from emp e2 >where e2.mgr = e1.empno); > > The only thing I would add for our query is that, that the outer > SELECT of our query produces a cartesian product (no join-condition > between t1 and t2), but only one row from t2 qualifies for the join, > since the WHERE condition is on a unique column, and the correlated > subquery returns just the required value (lowest of the IDs that are > greater than current t1.ID being processed). 
> > I know the above one-line-paragraph may sound a bit cryptic for > someone new to correlated subqueries, but if you understand the > example in the link above, then this would start making some sense. > > > And there's probably more to find. Interestingly enough > wikipedia > doesn't seem to have an article on the subject. > > > > > > Regards, > -- > [EMAIL PROTECTED] > [EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com > > 17°29'34.37"N 78°30'59.76"E - Hyderabad * > 18°32'57.25"N 73°56'25.42"E - Pune > > Sent from my BlackLaptop device ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
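The nested-FOR-loop reading of a correlated subquery described in this thread can be written out directly. This is an illustrative Python sketch of the EXISTS example (employees who manage at least one other employee); the sample rows are made up for demonstration:

```python
# Correlated subquery as nested loops: for each outer row e1, the
# inner query is re-evaluated with e1's empno bound, and e1 is output
# only if the comparison (here: EXISTS) succeeds.
emp = [
    {"empno": 1, "ename": "alice", "mgr": None},  # sample data, made up
    {"empno": 2, "ename": "bob",   "mgr": 1},
    {"empno": 3, "ename": "carol", "mgr": 1},
]

managers = []
for e1 in emp:                                     # outer query row
    # inner (correlated) query: uses e1["empno"] from the outer row
    if any(e2["mgr"] == e1["empno"] for e2 in emp):
        managers.append(e1["ename"])

print(managers)   # → ['alice']
```

Step 1 in the list above is the outer `for`, step 2 is the `any(...)` over the inner rows, and steps 3-4 are the `if`/`append` pair.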
Re: [GENERAL] AutoVacuum Behaviour Question
Denis Gasparin wrote: > Martijn van Oosterhout ha scritto: >> On Thu, Jun 28, 2007 at 11:12:19AM +0100, Bruce McAlister wrote: >> >>> I just want to verify that I understand you correctly here, do you mean >>> that the temporary table is created by specific sql, for example, create >>> temp table, then perform some actions on that temp table, then, either >>> you remove the temp table, or, if you close the session/connection the >>> postmaster will clean up the temp table? What happens if you're using >>> connection pools, i mean are those sessions deemed "closed" after the >>> queries complete, when the pool connections are persistent. >>> >> >> Yes, the temp table is private to the session and will be removed once >> the session closes, if not sooner. As for connection pools, IIRC there >> is a RESET SESSION command which should also get rid of the temporary >> tables. >> > RESET SESSION command is available only in 8.2 branch, isn't it? > I tried to issue the command in a 8.1 server and the answer was: ERROR: > unrecognized configuration parameter "session" > I had a look in our configuration and there is a "session" option that comes up if you type "RESET " and press TAB twice. However, if you do try to run it, it comes back with: blueface-crm=# RESET session ; ERROR: unrecognized configuration parameter "session" We're running PostgreSQL 8.2.4. Alvaro said it was only going to be available in 8.3. I hope they backport it to 8.2 though, 90% of our connections are via connection pooled drivers, so being able to reset the session after each transaction would be a handy way to ensure the environment is sane for the next transaction :) > Is there available a patch for the 8.1 version of postgresql? > > Thank you, > Denis ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] AutoVacuum Behaviour Question
Martijn van Oosterhout ha scritto: > On Thu, Jun 28, 2007 at 11:12:19AM +0100, Bruce McAlister wrote: > >> I just want to verify that I understand you correctly here, do you mean >> that the temporary table is created by specific sql, for example, create >> temp table, then perform some actions on that temp table, then, either >> you remove the temp table, or, if you close the session/connection the >> postmaster will clean up the temp table? What happens if you're using >> connection pools, i mean are those sessions deemed "closed" after the >> queries complete, when the pool connections are persistent. >> > > Yes, the temp table is private to the session and will be removed once > the session closes, if not sooner. As for connection pools, IIRC there > is a RESET SESSION command which should also get rid of the temporary > tables. > RESET SESSION command is available only in 8.2 branch, isn't it? I tried to issue the command in a 8.1 server and the answer was: ERROR: unrecognized configuration parameter "session" Is there available a patch for the 8.1 version of postgresql? Thank you, Denis
Re: [GENERAL] Execution variability
On Thursday 28 June 2007 16:08:06 Alvaro Herrera wrote: > Vincenzo Romano escribió: > > The values are here below. I suppose that the "hashed" > > ones imply a default value. > > Correct (widely known as "commented out") > > > By the way, it seems that the problem arises with only one query, > > while the other ones behave almost the same all the time. > > Let's see the EXPLAIN ANALYZE. Does it involve the big table? I need to check. The "bad guy" is a function and I have to get the body out in order to make the PG explain it. -- Vincenzo Romano -- Maybe Computer will never become as intelligent as Humans. For sure they won't ever become so stupid. [VR-1988] ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/