[GENERAL] equivalent in postgres of over(partition) function in db2
Hi, I have the following query in DB2:

insert into xyz (a,b)
select 10, MIN(CM_LEDGERITEM.tril_gid) OVER(PARTITION BY CM_LEDGERITEM.MMPAYEE, CM_PROMOTION.MMCOMPTYPE, CM_LEDGERITEM.MODEL)
from

I am not able to find a corresponding construct for min(...) over(partition by col1, col2, col3 ...) in Postgres. What would be its equivalent in Postgres? Regards, Vinita Bansal ---(end of broadcast)--- TIP 8: explain analyze is your friend
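[Editorial note: PostgreSQL of this era has no window functions (they arrived in 8.4, where the DB2 query works as written), but the same result can be had by computing the per-group minimum in a derived table and joining it back. A sketch only — the original query is cut off at "from", so the join condition and the `promo_id` column used below are hypothetical placeholders:]

```sql
INSERT INTO xyz (a, b)
SELECT 10, m.min_gid
FROM CM_LEDGERITEM li
JOIN CM_PROMOTION pr ON pr.promo_id = li.promo_id          -- hypothetical join condition
JOIN (SELECT li2.MMPAYEE, pr2.MMCOMPTYPE, li2.MODEL,
             MIN(li2.tril_gid) AS min_gid
      FROM CM_LEDGERITEM li2
      JOIN CM_PROMOTION pr2 ON pr2.promo_id = li2.promo_id -- same hypothetical condition
      GROUP BY li2.MMPAYEE, pr2.MMCOMPTYPE, li2.MODEL) m
  ON  m.MMPAYEE    = li.MMPAYEE
  AND m.MMCOMPTYPE = pr.MMCOMPTYPE
  AND m.MODEL      = li.MODEL;
```

The inner subquery plays the role of the `OVER (PARTITION BY ...)` clause: it produces one row per (MMPAYEE, MMCOMPTYPE, MODEL) group, and the outer join fans that minimum back out to every row of the group.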
Re: [GENERAL] stats collector appears to be dying
Shelby Cain [EMAIL PROTECTED] writes:
> I'm having an issue with what appears to be the stats collector process dying on Postgresql 8.0.1 running on Windows XP w/sp1.
> 2005-02-23 14:59:17 FATAL: could not read from statistics collector pipe: No such file or directory
> 2005-02-23 14:59:17 LOG: statistics collector process (PID 2748) was terminated by signal 1

Odd. We probably shouldn't take the message entirely at face value because of the error code mapping that happens under Windows. Looking at backend/port/win32/error.c, there seem to be several Windows codes that are translated to ENOENT:

ERROR_FILE_NOT_FOUND
ERROR_PATH_NOT_FOUND
ERROR_INVALID_DRIVE
ERROR_NO_MORE_FILES
ERROR_BAD_NETPATH
ERROR_BAD_NET_NAME
ERROR_BAD_PATHNAME
ERROR_FILENAME_EXCED_RANGE

I wonder if any of those throws more light on it ... regards, tom lane
Re: [GENERAL] [PATCHES] A way to let Vacuum warn if FSM settings are low.
Ron Mayer [EMAIL PROTECTED] writes:

> + if (needed > MaxFSMPages)
> +     ereport(WARNING,
> +             (errmsg("max_fsm_pages(%d) is smaller than total pages needed(%.0f)",
> +                     MaxFSMPages, needed)));

An unconditional WARNING seems a bit strong to me for a case that is not necessarily wrong. Depending on the needs of the installation, this might be a perfectly acceptable situation --- for example if you have lots of large read-mostly tables. On the other side of the coin, the test could pass (ie no warning) in situations where in fact MaxFSMPages is too small, because what we are comparing it to is the number of pages requested for relations that are being tracked. If MaxFSMRelations is too small then we can't really tell whether MaxFSMPages is adequate.

> + if (numRels > MaxFSMRelations)
> +     ereport(WARNING,
> +             (errmsg("max_fsm_relations(%d) is smaller than the number of relations (%d)",
> +                     MaxFSMRelations, numRels)));

This part is just plain dead code, since it's not possible for numRels to exceed MaxFSMRelations. I think it might be useful to warn when numRels == MaxFSMRelations, since if you don't have even one spare fsmrel slot then you probably have too few (it's unlikely you got it on the nose). But I don't know how to produce a warning about MaxFSMPages that's worth anything. regards, tom lane
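[Editorial note: the settings under discussion can be inspected and checked manually from psql. A sketch of the manual version of the proposed check — the exact VACUUM VERBOSE output format varies by release:]

```sql
-- Current free-space-map limits:
SHOW max_fsm_pages;
SHOW max_fsm_relations;

-- VACUUM VERBOSE reports, per table, how many pages contain useful free
-- space; comparing the total against max_fsm_pages (and the number of
-- vacuumed relations against max_fsm_relations) is the hand-rolled
-- equivalent of the warning the patch tries to automate.
VACUUM VERBOSE;
```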
Re: [GENERAL] multiple runs of the same query cause out of memory - WAS [Re: capturing/viewing sort_mem utilization on a per query basis]
Lonni J Friedman [EMAIL PROTECTED] writes:
> We have found that this query can indeed cause terrible things to happen to postgresql - it can run several times with response times of a few seconds, and then the next time cause postgres to go out of control on memory usage. (I've had to reboot my machine twice to get control.) On a beefier machine with more memory and two CPUs, the query worked okay for a few dozen repetitions (with a few in parallel), then eventually one instance would cause the postgres process to spiral out of control and consume more and more memory at the rate of a megabyte every second or so. I'm not quite sure where to go from here, but this is definitely reproducible now. Help?!

That's a bit hard to believe --- if nothing is changing, the query should get processed the same way every time. Can you package up a test case for other people to look at? regards, tom lane
Re: [GENERAL] multiple runs of the same query cause out of memory - WAS [Re: capturing/viewing sort_mem utilization on a per query basis]
On Wed, 23 Feb 2005 19:56:56 -0500, Tom Lane [EMAIL PROTECTED] wrote:
>> We have found that this query can indeed cause terrible things to happen to postgresql - it can run several times with response times of a few seconds, and then the next time cause postgres to go out of control on memory usage. (I've had to reboot my machine twice to get control.) On a beefier machine with more memory and two CPUs, the query worked okay for a few dozen repetitions (with a few in parallel), then eventually one instance would cause the postgres process to spiral out of control and consume more and more memory at the rate of a megabyte every second or so. I'm not quite sure where to go from here, but this is definitely reproducible now. Help?!
> That's a bit hard to believe --- if nothing is changing, the query should get processed the same way every time.

It's definitely not the same every time after this additional testing.

> Can you package up a test case for other people to look at?

I could try, but I'm not sure what you'd need. Right now, I can only replicate this behavior with a snapshot of this one customer's DB. Running on the same schema, with little or no data, it always completes fine, so it looks to be partially data driven in nature, or perhaps it's the volume of data. -- ~ L. Friedman [EMAIL PROTECTED] LlamaLand http://netllama.linux-sxs.org
[GENERAL] multiple runs of the same query cause out of memory - WAS [Re: capturing/viewing sort_mem utilization on a per query basis]
Tom, Following up on this thread from a few weeks ago: http://archives.postgresql.org/pgsql-general/2005-02/msg00106.php

We have found that this query can indeed cause terrible things to happen to postgresql - it can run several times with response times of a few seconds, and then the next time cause postgres to go out of control on memory usage. (I've had to reboot my machine twice to get control.) On a beefier machine with more memory and two CPUs, the query worked okay for a few dozen repetitions (with a few in parallel), then eventually one instance would cause the postgres process to spiral out of control and consume more and more memory at the rate of a megabyte every second or so. I'm not quite sure where to go from here, but this is definitely reproducible now. Help?! -- ~ L. Friedman [EMAIL PROTECTED] LlamaLand http://netllama.linux-sxs.org
Re: [GENERAL] Emacs and postgres
Sean Davis [EMAIL PROTECTED] writes:
> This is a bit off-topic. Does anyone know of an interface between emacs and psql? I currently use it as my default editor and do my share of save-and-then-\i. I just gave pgEdit a try and liked many aspects of it, but I still like Emacs as an editor and wondered if anyone else has tricks/plugins for emacs users.

You could just set EDITOR to emacsclient, run M-x server-start, and use \e. Personally I just use psql directly under something like shell-mode. That doesn't give me completion, but it gives me history and editing. -- greg
[GENERAL] Seguimos buscando webdeveloper...
[Translated from Spanish:] Believe it or not, we are still looking for a web developer; a good salary and an excellent work environment are on offer. Patojo, you told me you had someone — what happened? If anyone else is interested (the salary is in $), please, please send me an email... Cheers...
Re: [GENERAL] Recovering db from cracked server
Richard Huxton wrote:
> You're close - the only thing it was complaining about was the missing postgres database. That's because it defaults to using the same database as the username. Try postgres -D copy_of_old_data_dir template1 Or use the name of your old database.

Woohoo! Thanks a million; I've got my data back. Don Doumakes Email: doumakes at loganet.net Do not reply to [EMAIL PROTECTED]
Re: [GENERAL] Ways to speed up dumpreload
Max out your checkpoint_segments; 128 or 256 is good if you have plenty of spare space for the pg_xlog directory. You will also want to increase checkpoint_timeout to something large — I like 900 seconds personally. If you can put pg_xlog on a separate physical RAID, you're best off. Increase the amount of RAM your index operations can use (sort_mem or work_mem depending on PG version) to a very large number, and then reset that to a normal value when you're done. Then just let pg_dump + pg_restore do their work. -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Vivek Khera, Ph.D. Khera Communications, Inc. Internet: khera@kciLink.com Rockville, MD +1-301-869-4449 x806 AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/
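[Editorial note: gathered into a postgresql.conf fragment, the advice above looks roughly like this. The checkpoint values are the ones suggested in the post; the sort_mem figure is an arbitrary illustration, and the parameter is named sort_mem before 8.0 and work_mem from 8.0 on. Restore your normal values once the reload finishes:]

```
# Temporary settings for a bulk dump/reload; revert afterwards.
checkpoint_segments = 128      # or 256, given spare pg_xlog disk space
checkpoint_timeout  = 900      # seconds
sort_mem            = 262144   # kB (illustrative); called work_mem in 8.0+
```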
[GENERAL] Tracking down cause of duplicated oid in table.
We have a table that we use to store aggregated data; nightly we add a row for the previous day, and update all the rows in the table with the new information. The table is then vacuumed full. Our setup is: Linux kernel 2.6.4, SCSI drives, hardware RAID 10 (4 drives), Reiserfs, PostgreSQL 7.4.5. The table looks like:

qualitysmith=# \d admin_report_materialized_view
        Table "public.admin_report_materialized_view"
            Column            |  Type   | Modifiers
------------------------------+---------+------------------------------------------
 id                           | integer | not null default nextval('public.admin_report_materialized_view_id_seq'::text)
 new_order_count              | integer | not null default 0
 out_of_area_count            | integer | not null default 0
 fake_order_count             | integer | not null default 0
 fake_zip_count               | integer | not null default 0
 in_area_zip_count            | integer | not null default 0
 customer_estimates           | integer | not null default 0
 total_estimate_count         | integer | not null default 0
 cancelled_appointment_count  | integer | not null default 0
 gross_estimate_count         | integer | not null default 0
 total_repair_estimate_count  | integer | not null default 0
 total_replace_estimate_count | integer | not null default 0
 active_contractors_count     | integer | not null default 0
 estimate_percentage          | numeric | not null default 0
 trades_id                    | integer | not null
 master_day                   | date    | not null
Indexes:
    "admin_report_materialized_view_pkey" primary key, btree (id)
    "admin_report_materialized_view_unique_day_trades_id" unique, btree (master_day, trades_id)

Yesterday the following popped up in our log files:

Feb 22 06:12:34 bigbrother postgres[9832]: [162-1] WARNING: index admin_report_materialized_view_pkey contains 18677 row versions, but table contains 18674 row versions
Feb 22 06:12:34 bigbrother postgres[9832]: [162-2] HINT: Rebuild the index with REINDEX.
Feb 22 06:12:34 bigbrother postgres[9832]: [163-1] WARNING: index admin_report_materialized_view_unique_day_trades_id contains 18677 row versions, but table contains 18674 row
Feb 22 06:12:34 bigbrother postgres[9832]: [163-2] versions
Feb 22 06:12:34 bigbrother postgres[9832]: [163-3] HINT: Rebuild the index with REINDEX.

REINDEX then gives:

qualitysmith=# reindex index admin_report_materialized_view_pkey;
ERROR: could not create unique index
DETAIL: Table contains duplicated values.

So digging through the table I found the following rows:

qualitysmith=# select tableoid, oid, ctid, xmin, xmax, cmin, cmax, id from admin_report_materialized_view where oid = '104649735';
 tableoid  |    oid    |  ctid   |   xmin    |   xmax    |   cmin    |   cmax    |  id
-----------+-----------+---------+-----------+-----------+-----------+-----------+------
 104471713 | 104649735 | (13,42) | 704184382 | 704967810 | 704967810 |        15 | 2284
 104471713 | 104649735 | (14,68) | 704184382 |        15 |        15 | 707367279 | 2284

And pg_filedump -i -f -R 13 $PGDATA/base/29539533/104471713 gives (partial):

Item 42 -- Length: 96 Offset: 4160 (0x1040) Flags: USED
XID: min (704184382) CMIN|XMAX: 704967810 CMAX|XVAC: 15
Block Id: 470 linp Index: 65 Attributes: 16 Size: 28
infomask: 0x2912 (HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID|UPDATED)
1040: 3e00f929 82f4042a 0f00 d601 ..)...*
1050: 41001000 12291c00 07d43c06 ec08 A).
1060: 1070: 1080: 1090: 0800 0a00 e006

pg_filedump -i -f -R 14 $PGDATA/base/29539533/104471713 gives (partial):

Item 68 -- Length: 96 Offset: 1440 (0x05a0) Flags: USED
XID: min (704184382) CMIN|XMAX: 15 CMAX|XVAC: 707367279
Block Id: 14 linp Index: 68 Attributes: 16 Size: 28
infomask: 0x2912 (HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID|UPDATED)
05a0: 3e00f929 0f00 6f91292a 0e00 ..)o.)*
05b0: 44001000 12291c00 07d43c06 ec08 D).
05c0: 05d0: 05e0: 05f0: 0800 0a00 e006

I am not positive my calculations here are correct. Finding the appropriate place in the CLOG for the XMIN I come up with file 029F byte offset 2400F, which is 10.
For XMAX (704967810) I come up with file 02A0 offset 13D20 which has value 20. (I don't know what these mean so they are included for anyone who may.) Ok, so trying to hazard a best guess as to what happened. At this point it
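[Editorial note: for readers hitting the same symptom (REINDEX failing on duplicated values), a sketch of the usual manual cleanup. Take a fresh dump of the table first — this is a corrupted-table situation. The ctid below is just the one from the output above; deciding which physical row version is the stale one requires exactly the xmin/xmax inspection done in this post:]

```sql
-- Find primary-key values that appear more than once:
SELECT id, count(*)
FROM admin_report_materialized_view
GROUP BY id
HAVING count(*) > 1;

-- After verifying which physical row version is stale,
-- delete that version by its ctid:
DELETE FROM admin_report_materialized_view WHERE ctid = '(14,68)';

-- The unique index rebuild should then succeed:
REINDEX INDEX admin_report_materialized_view_pkey;
```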
[GENERAL] maximum size
Hello, could you be so kind as to help me with the following requests? I need the maximum size of the following types:

varchar(n)
numeric(n,p)
text
bytea

Could you please give me the names of articles or publications, and where I can find them, for these requests? I want to transfer an Oracle 8i database to PostgreSQL 7.3.2. Is there an article or publication about this? Thanks, MOLINET S.
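[Editorial note: for reference, varchar, text and bytea are all variable-length ("varlena") types in PostgreSQL, so an individual value is capped at about 1 GB rather than by a small fixed length, and numeric allows a declared precision of up to 1000 digits. A quick sanity check from psql — the sizes here are arbitrary illustrations:]

```sql
-- text/varchar/bytea values are limited by the ~1 GB varlena cap,
-- so a multi-megabyte value is fine:
SELECT length(repeat('x', 10000000));   -- 10 million characters

-- numeric accepts a declared precision up to 1000 digits:
CREATE TEMP TABLE limits_demo (n numeric(1000, 500));
```

The authoritative source for these limits is the "Data Types" chapter of the PostgreSQL documentation for the release in question.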
[GENERAL] Help with tools...
I've tried some tools such as pgEditor and EMS PostgreSQL Manager, but I need a really good PostgreSQL and database editor to use on Linux or Windows. Options?
[GENERAL] Pg 8.0rc5 to 8.0.1 update
Hi, just a quick question: do I need to do an initdb to upgrade a cluster from v8.0rc5 to v8.0.1, or can I just do a make install? TIA, Ben
Re: [GENERAL] Help with tools...
Cristian Prieto [EMAIL PROTECTED] writes:
> I've tried some tools such as pgEditor and EMS PostgreSQL Manager, but I need a really good PostgreSQL and database editor to use on Linux or Windows. Options?

Have you taken a look at pgAdmin III? http://www.pgadmin.org/ What sort of features are you looking for? Jason