Re: [GENERAL] stored function - array parameter - how many element in array ?
On Jul 13 08:28, Claire McLister wrote: Have you considered using a set instead? We had a similar need and were using an array as a parameter. That turned out to be taking too long. Recently we changed it to a set and it seems to work faster, although I must admit I haven't timed it yet. Did you experience the same results when you used the indexable (integer) array type supplied by the intarray contrib module? Furthermore, there are lots of useful procedures and operators which supply just about any functionality you'll ever need with arrays. It's also worth noting that these libraries are written in C with quite optimized algorithms. Regards. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
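A minimal sketch of counting array elements, as asked in the subject (assumes the intarray contrib module is installed; icount() comes from intarray, while array_upper() is built in):

```sql
-- Count elements in an integer array.
SELECT icount('{1,2,3}'::int[]);          -- intarray's element count
SELECT array_upper('{1,2,3}'::int[], 1);  -- built-in: upper bound of dimension 1
-- intarray also supplies indexable operators such as && (overlap)
-- and @@ (query_int matching).
```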
Re: [GENERAL] Timestamp vs timestamptz
* Agent M.: timestamp with time zone does not record the timezone you inserted it with; it simply stores the GMT version and converts to whatever timezone you like on demand. Are you sure? This behavior is not documented, and I can't reproduce it with PostgreSQL 8.1.4. -- Florian Weimer [EMAIL PROTECTED] BFK edv-consulting GmbH http://www.bfk.de/ Durlacher Allee 47 tel: +49-721-96201-1 D-76131 Karlsruhe fax: +49-721-96201-99
Re: [GENERAL] cant connect to the database, even after doing start
This is what is happening:

stop() {
    echo "Stopping ${NAME} service: "
    if [ "`uname`" = "Linux" ]; then
        /bin/sh -c "$PGCTL stop -D $PGDATA -s -m fast" > /dev/null 2>&1
    fi
    ret=$?
    if [ $ret -eq 0 ]; then
        echo_success
    else
        echo_failure
        if [ "`uname`" = "Linux" ]; then
            /bin/sh -c "$PGCTL stop -D $PGDATA -s -m immediate" > /dev/null 2>&1
        fi
    fi
    echo
}

You mean to say that

    /bin/sh -c "$PGCTL stop -D $PGDATA -s -m immediate" > /dev/null 2>&1

is causing the problem? What should be done, should it be removed?

From: [EMAIL PROTECTED] on behalf of Scott Marlowe
Sent: Thu 7/13/2006 9:27 PM
To: surabhi.ahuja
Cc: pgsql general
Subject: Re: [GENERAL] cant connect to the database, even after doing start

On Thu, 2006-07-13 at 01:20, surabhi.ahuja wrote: Hi all, I have a script to stop and start postmaster. However I have noticed this many times now: I stop postmaster using that script and then start it using a script. However, if I try to do psql dbname, it gives me an error saying that the postmaster is not up. Why is it that the postmaster went down on its own, even though I had done a start after stop?

How are you stopping the database? Let me guess that it's "pg_ctl -m immediate stop". pg_ctl --help tells us:

Shutdown modes are:
  smart       quit after all clients have disconnected
  fast        quit directly, with proper shutdown
  immediate   quit without complete shutdown; will lead to recovery on restart

Basically, -m immediate does a kill -9 on all the postgresql processes. It's an inch away from pulling the plug, except that lying hardware still gets to flush its caches. So, if you're stopping pgsql that way, then when it starts up, it does so in recovery mode, and it can't allow connections until recovery is finished. If you're stopping it some other way though, then things might be going wrong in some other way.
Re: [GENERAL] Timestamp vs timestamptz
On Thu, Jul 13, 2006 at 04:35:20PM -0700, Antimon wrote: Hi, I'm working on a web project with pgsql, i did use mysql before and stored epoch in database so i'm not familiar with these datatypes. What i wanna ask is, if i don't need to display timestamps in different timezones, shall i use timestamptz anyway? I mean, i'm gonna need timestamp columns on some tables for internal calculations and stuff like delaying actions, adding a row with a timestamp of 10 minutes later and check for them every minute, fetch elapsed ones and process, not to display them to users. The choice between timestamp and timestamptz depends on what you're using it for: timestamptz identifies a specific point in time. It will be adjusted before output to reflect the timezone of the person selecting it. timestamp is a representation of a wall clock. The difference is easy to show when you're dealing with daylight savings time. In central European time the date '2006-03-26 02:30:00' doesn't exist, yet you can store it in a timestamp, but not in a timestamptz. For timestamptz, the time jumps from 2006-03-26 02:00:00 +0100 to 2006-03-26 03:00:00 +0200, and calculations take this into account. Similarly, when daylight savings ends, a timestamptz can handle the fact that 2:30 am occurs twice, whereas timestamp won't. You can use the X AT TIME ZONE Y construct to convert between the two. Hope this helps, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate.
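A quick sketch of the distinction (illustrative only; assumes the session timezone is set to a central European zone such as Europe/Paris and that zone name is available):

```sql
SET timezone = 'Europe/Paris';
-- Plain timestamp stores the wall-clock value, even one that never occurred:
SELECT TIMESTAMP '2006-03-26 02:30:00';
-- timestamptz normalizes its input to a real point in time (stored as UTC):
SELECT TIMESTAMP WITH TIME ZONE '2006-03-26 01:30:00';
-- AT TIME ZONE converts between the two types:
SELECT TIMESTAMP '2006-03-26 01:30:00' AT TIME ZONE 'Europe/Paris';
```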
Re: [GENERAL] Timestamp vs timestamptz
Thanks for the replies, and thanks for mentioning the DST thing. So, I'm going to use tstz. I just don't want my data to be affected by timezone changes, DST, etc. I had a game server which had a timer system; when I delayed something it created an object with a timestamp of now + delaytime and checked for the timers in the main loop. It was a Windows server that automatically adjusted for DST, and everything stopped in the game :) All the timers were pointing to something like 1 hour and 13 ms later. I had to restart it. I just don't wanna have problems like this. Thanks again all for helping. Martijn van Oosterhout wrote: [full message quoted above, snipped]
[GENERAL] How to read cleartext user password from pgsql database
Hi, Is it possible to read the cleartext user password from the pgsql database? This link http://www.postgresql.org/docs/8.1/interactive/view-pg-user.html explains that the password always reads as ********. But I need to use the pgsql login/password as authentication info for another service. -- Thanks, Eugene Prokopiev
[GENERAL] table replication, without master-slave setup
Hello, I have a certain setup in which two computers are running nearly identical databases: identical setup, tables, users and permissions; only the contents differ. Now I'd like to keep them in sync, WITHOUT an extra machine, hence no master-slave setup. The problem is that either one could fail eventually. Either one machine or the other (with the same database) gets data, but not both at the same time. This is some sort of load balancing. Is there software out there that rsyncs database tables both ways? Or will I have to write scripts for this task? I've already taken a look at Slony, but it is unfortunately a master to multiple slaves, and I want both (or maybe in future three) machines to communicate with each other. That means one gets data and sends it to the other machines running the same db. Cheers, Alex -- shakespeare: /(bb|[^b]{2})/
Re: [GENERAL] How to read cleartext user password from pgsql database
On Fri, Jul 14, 2006 at 03:21:01PM +0400, Eugene Prokopiev wrote: Hi, Is it possible to read cleartext user password from pgsql database? In this link http://www.postgresql.org/docs/8.1/interactive/view-pg-user.html explained that password always reads as ********. But I need to use pgsql login/password as authentication info for another service. You can't get back the cleartext password, it's hashed. To see the hashed password you need to bypass the view; see pg_shadow. The docs should say something about how the hash is calculated. Hope this helps, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate.
Re: [GENERAL] How to read cleartext user password from pgsql database
Martijn van Oosterhout wrote: On Fri, Jul 14, 2006 at 03:21:01PM +0400, Eugene Prokopiev wrote: Is it possible to read cleartext user password from pgsql database? You can't get back the cleartext password, it's hashed. To see the hashed password you need to bypass the view, see pg_shadow.

From advice in some previous thread, I developed the following function to help me remember the password hash (note that digest() comes from the pgcrypto contrib module):

CREATE OR REPLACE FUNCTION public.authenticate_user(name, name)
RETURNS bool AS '
DECLARE
    ls_usename ALIAS FOR $1;
    ls_passwd  ALIAS FOR $2;
BEGIN
    RETURN EXISTS(
        SELECT 1 FROM pg_shadow
        WHERE ''md5''||encode(digest(ls_passwd||ls_usename, ''md5''), ''hex'') = passwd);
END;'
LANGUAGE 'plpgsql' VOLATILE;

So you can see that pg_shadow.passwd stores the md5 hash of the concatenated plaintext password and username. Regards, Berend Tober
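The same check can be sketched with the built-in md5() function, without pgcrypto ('foo' and 'alice' are made-up credentials for illustration):

```sql
-- pg_shadow.passwd holds 'md5' followed by md5(password || username).
SELECT 'md5' || md5('foo' || 'alice') AS computed_hash;
-- A superuser can compare it against the stored value:
SELECT passwd FROM pg_shadow WHERE usename = 'alice';
```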
[GENERAL] Problem with archive_command
Hi list, with the following archive_command (on Windows) archive_command = 'copy %p c:\\Archiv\\DBArchiv\\%f' I constantly have entries like the following in my log file: 2006-07-14 14:26:59 LOG: archive command copy pg_xlog\000100020037 c:\Archiv\DBArchiv\000100020037 failed: return code 1 2006-07-14 14:27:00 LOG: archive command copy pg_xlog\000100020037 c:\Archiv\DBArchiv\000100020037 failed: return code 1 2006-07-14 14:27:01 LOG: archive command copy pg_xlog\000100020037 c:\Archiv\DBArchiv\000100020037 failed: return code 1 2006-07-14 14:27:01 WARNING: transaction log file 000100020037 could not be archived: too many failures Thanks for any advice! Chris
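One thing worth trying (a guess, not a confirmed fix): quote both paths so cmd.exe's copy handles them reliably, e.g. in postgresql.conf:

```
archive_command = 'copy "%p" "C:\\Archiv\\DBArchiv\\%f"'
```

A return code of 1 from copy usually means the source or destination path could not be resolved, so also check that the destination directory exists and is writable by the service account.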
[GENERAL] How to see function triggers definition?
Hi, Could anyone help me with this? I want to see the definition of the trigger functions for a particular table of a database. How do I do this? How do I find out whether it is a PL/pgSQL block or implemented in C/C++? Thanks. Regards, Hiren.
Antw: Re: [GENERAL] Performance problem with query
Hi, somehow my reply yesterday got lost, but nevertheless here comes the explain analyze again. It's the explain of the operation that causes this huge performance discrepancy. Unfortunately I had to perform the explain analyze with an empty temp_table, because after the whole operation is done, I delete the data again to save some space.

                                 QUERY PLAN
-----------------------------------------------------------------------------
 Result  (cost=0.12..16.95 rows=390 width=108) (actual time=0.025..0.025 rows=0 loops=1)
   One-Time Filter: ((((split_part(($1)::text, '_'::text, 2))::smallint = 1) IS NOT TRUE) AND (((split_part(($2)::text, '_'::text, 2))::smallint = 22) IS NOT TRUE))
   InitPlan
     ->  Limit  (cost=0.00..0.04 rows=1 width=28) (never executed)
           ->  Seq Scan on temp_table  (cost=0.00..13.90 rows=390 width=28) (never executed)
     ->  Limit  (cost=0.00..0.04 rows=1 width=28) (actual time=0.005..0.005 rows=0 loops=1)
           ->  Seq Scan on temp_table  (cost=0.00..13.90 rows=390 width=28) (actual time=0.001..0.001 rows=0 loops=1)
     ->  Limit  (cost=0.00..0.04 rows=1 width=28) (actual time=0.002..0.002 rows=0 loops=1)
           ->  Seq Scan on temp_table  (cost=0.00..13.90 rows=390 width=28) (actual time=0.000..0.000 rows=0 loops=1)
   ->  Seq Scan on temp_table  (cost=0.00..13.90 rows=390 width=108) (actual time=0.000..0.000 rows=0 loops=1)
 Total runtime: 0.424 ms

 Result  (cost=0.08..16.90 rows=390 width=108) (actual time=0.005..0.005 rows=0 loops=1)
   One-Time Filter: ((split_part(($1)::text, '_'::text, 2))::smallint = 1)
   InitPlan
     ->  Limit  (cost=0.00..0.04 rows=1 width=28) (never executed)
           ->  Seq Scan on temp_table  (cost=0.00..13.90 rows=390 width=28) (never executed)
     ->  Limit  (cost=0.00..0.04 rows=1 width=28) (actual time=0.002..0.002 rows=0 loops=1)
           ->  Seq Scan on temp_table  (cost=0.00..13.90 rows=390 width=28) (actual time=0.001..0.001 rows=0 loops=1)
   ->  Seq Scan on temp_table  (cost=0.00..13.90 rows=390 width=108) (never executed)
 Total runtime: 0.267 ms

 Result  (cost=0.08..16.90 rows=390 width=108) (actual time=0.005..0.005 rows=0 loops=1)
   One-Time Filter: ((split_part(($1)::text, '_'::text, 2))::smallint = 22)
   InitPlan
     ->  Limit  (cost=0.00..0.04 rows=1 width=28) (never executed)
           ->  Seq Scan on temp_table  (cost=0.00..13.90 rows=390 width=28) (never executed)
     ->  Limit  (cost=0.00..0.04 rows=1 width=28) (actual time=0.002..0.002 rows=0 loops=1)
           ->  Seq Scan on temp_table  (cost=0.00..13.90 rows=390 width=28) (actual time=0.001..0.001 rows=0 loops=1)
   ->  Seq Scan on temp_table  (cost=0.00..13.90 rows=390 width=108) (never executed)
 Total runtime: 0.189 ms
(31 rows)
[GENERAL] databases hidden in phppgadmin
My apologies if this is not the correct group, but I did not find a 'phppgadmin' specific group. When I log in to phppgadmin, the list of databases does not include any databases that are owned by a 'group' (i.e. a role with NOLOGIN set). Databases owned by postgres or any specific user do show up. I have tried logging in as superuser, as well as a user belonging to the said group, but the databases do not show up. Is there a config option I need to set to allow this, or is it simply not possible? Thanks! --Darren Hromas
Re: [GENERAL] Physical block structure in PostgreSQL
In article [EMAIL PROTECTED], Spendius [EMAIL PROTECTED] wrote: % (I read the pages % http://www.postgresql.org/docs/8.1/interactive/storage.html % and saw things regarding files and pages that are usually 8k-big % etc. but % saw no further info about blocks - they speak of items here: what % is it ?) An item is the thing that's stored on the page. For instance, a database table is stored in a bunch of pages in some file. Each row in the table is stored as an item on a page, starting with a HeapTupleHeaderData. The structure of an item for an index page might be different, though. I found there was enough information in the section you cite to write a simple data dumping tool in an emergency a while ago. -- Patrick TJ McPhee North York Canada [EMAIL PROTECTED]
[GENERAL] Q: Table scans on set difference
What's happening here? I have two tables, encounter_properties_table with about 10 rows and xfiles with about 50 rows. The structures of these tables are as follows:

          Table "public.encounter_properties_table"
     Column     |           Type           | Modifiers
----------------+--------------------------+-----------
 timestamp      | timestamp with time zone | not null
 practice_id    | integer                  | not null
 patient_id     | bigint                   | not null
 properties     | text                     |
 modified_by    | bigint                   | not null
 client_version | integer                  |
 file_name      | character varying(255)   |
Indexes:
    "encounter_properties_table_pkey" primary key, btree (patient_id)
    "fn_ix" btree (file_name)

and

         Table "public.xfiles"
  Column  |          Type          | Modifiers
----------+------------------------+-----------
 filename | character varying(100) | not null
Indexes:
    "xfiles_ix1" btree (filename)

The following query shows that PostgreSQL 7.4 is doing table scans on both tables:

explain select file_name from encounter_properties_table where file_name not in (select filename from xfiles);

                                  QUERY PLAN
-------------------------------------------------------------------------------
 Seq Scan on encounter_properties_table  (cost=0.00..1030610198.10 rows=85828 width=58)
   Filter: (NOT (subplan))
   SubPlan
     ->  Seq Scan on xfiles  (cost=0.00..10755.44 rows=500944 width=59)
(4 rows)

I ran vacuum analyze on both tables. We aborted this query when it had not finished after 4 hours. We ran the same query on SQLServer 2005 with the same data and it took under one second to finish. Any ideas?
[GENERAL] Windows Local Security Policy Rights?
What are the exact specific Windows Local Security Policy rights REQUIRED for the user created to run the PostgreSQL 8 service? And what are the exact specific rights that this user absolutely CANNOT have? Thanks in advance. Sincerely, Arah Leonard
[GENERAL] apparent wraparound
After a system crash, postgresql 8.1.4 restarted but reported that I have an apparent wraparound: 2006-07-13 14:03:40 PDT [10092] LOG: database system was interrupted at 2006-07-13 13:22:19 PDT 2006-07-13 14:03:40 PDT [10092] LOG: checkpoint record is at 1DD/26283E18 2006-07-13 14:03:40 PDT [10092] LOG: redo record is at 1DD/26283E18; undo record is at 0/0; shutdown FALSE 2006-07-13 14:03:40 PDT [10092] LOG: next transaction ID: 169855318; next OID: 787933 2006-07-13 14:03:40 PDT [10092] LOG: next MultiXactId: 5475264; next MultiXactOffset: 13765525 2006-07-13 14:03:40 PDT [10092] LOG: database system was not properly shut down; automatic recovery in progress 2006-07-13 14:03:40 PDT [10092] LOG: record with zero length at 1DD/26283E68 2006-07-13 14:03:40 PDT [10092] LOG: redo is not required 2006-07-13 14:03:40 PDT [10092] LOG: could not truncate directory pg_multixact/offsets: apparent wraparound 2006-07-13 14:03:40 PDT [10092] LOG: could not truncate directory pg_multixact/members: apparent wraparound 2006-07-13 14:03:41 PDT [10093] [unknown]%[unknown] LOG: connection received: host=[local] 2006-07-13 14:03:41 PDT [10093] postgres%postgres FATAL: the database system is starting up 2006-07-13 14:03:41 PDT [10092] LOG: database system is ready 2006-07-13 14:03:41 PDT [10092] LOG: transaction ID wrap limit is 1243594092, limited by database csb-dev This is from: PostgreSQL 8.1.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.0.2 20050901 (prerelease) (SUSE Linux) I'm using autovacuum and it ran around 13:15 on database csb-dev. I see no errors in the logs and therefore assume that the vacuum was successful. There are two files left in pg_multixact: -rw--- 1 postgres postgres 16K 2006-07-13 14:13 pg_multixact/members/00D2 -rw--- 1 postgres postgres 144K 2006-07-13 14:13 pg_multixact/offsets/0053 The system crash occurred during scsi rescanning that was initiated by an admin. 
The machine has been extremely stable otherwise and I have no reason to suspect hardware flakiness. In http://www.mail-archive.com/pgsql-general@postgresql.org/msg76635.html Tom Lane implies that I can probably ignore these messages. So, the 64K questions: Can I really ignore this? Is there anything I can do to ascertain whether it's a false alarm? Thanks, Reece -- Reece Hart, Ph.D. [EMAIL PROTECTED], http://www.gene.com/ Genentech, Inc. 650-225-6133 (voice), -5389 (fax) Bioinformatics and Protein Engineering 1 DNA Way, MS-93 http://harts.net/reece/ South San Francisco, CA 94080-4990 [EMAIL PROTECTED], GPG:0x25EC91A0
Re: [GENERAL] table replication, without master-slave setup
You can use 'pgpool' (http://pgpool.projects.postgresql.org/) for that purpose. Shoaib Mir, EnterpriseDB. On 7/14/06, Alexander Bluem [EMAIL PROTECTED] wrote: [full message quoted above, snipped]
Antw: [GENERAL] Problem with archive_command
Hi list, well now I know why it doesn't work. As the copy command did not work for a while, the WAL files were gathering like mad and, as a beginner, I thought that in order to clean up some disk space I should delete them. Well, that was probably the most stupid thing to do, because now pgsql tries to copy non-existing WAL files back and forth, which, as you might imagine, does not work very well. Is there a way to reset pgsql regarding the WAL files and to tell it to start all over again making WAL files? I tried to restart the server, but without success. Please help. Chris

Christian Rengstl [EMAIL PROTECTED] 14.07.06 14.29 Uhr wrote: [original message quoted in full, snipped]
Re: [GENERAL] Performance problem with query
On 7/14/06, Christian Rengstl [EMAIL PROTECTED] wrote: Hi, somehow my reply yesterday got lost, but nevertheless here comes the explain analyze again. Unfortunately i had to perform the explain analyze with an empty temp_table, because after the whole operation is done, i delete the data again to save some space. That's not much help. Remember that explain analyze actually performs your query. So next time you run it, do explain analyze and post the results here. Just a quick tip: run analyze before you run your big query. merlin
Re: [GENERAL] Q: Table scans on set difference
G. Ralph Kuntz, MD wrote: What's happening here? I have two tables, encounter_properties_table with about 10 rows and xfiles with about 50 rows. The structures of these tables is as follows: file_name | character varying(255) | ... filename | character varying(100) | not null

These columns are of different types; you're forcing a typecast on every row comparison. I think the varchar(100)'s will be upscaled to varchar(255) on comparison. My advice: use the text type. It's more flexible (practically no size limit) and faster.

explain select file_name from encounter_properties_table where file_name not in (select filename from xfiles);

What about:

explain select file_name from encounter_properties_table where not exists (select file_name from xfiles where filename = file_name);

I often even use select 1 - a constant - because I'm not interested in the value, but apparently selecting a column is marginally faster than selecting a constant. Testing will prove it; I thought I'd mention the possibility.

I ran vacuum analyze on both tables. We aborted this query when it had not finished after 4 hours.

Probably due to the type cast. We used to run into this problem when using bigint index columns. We changed them into int (which was sufficient) and the speed went up a lot. Later we determined - with input from this list - that the cause wasn't the size of the column but the type casting required to match the constant integer values in our queries. In our case explicit casting of our constant values helped.

We ran the same query on SQLServer 2005 with the same data and it took under one second to finish.

-- Alban Hertroys [EMAIL PROTECTED] magproductions b.v. T: ++31(0)534346874 F: ++31(0)534346876 M: I: www.magproductions.nl A: Postbus 416 7500 AK Enschede // Integrate Your World //
Re: [GENERAL] Timestamp vs timestamptz
Florian Weimer [EMAIL PROTECTED] writes: * Agent M.: timestamp with time zone does not record the timezone you inserted it with- it simply stores the GMT version and converts to whatever timezone you like on demand. Are you sure? This behavior is not documented, and I can't reproduce it with PostgreSQL 8.1.4. Huh? Section 8.5.1.3. Time Stamps says For timestamp with time zone, the internally stored value is always in UTC (Universal Coordinated Time, traditionally known as Greenwich Mean Time, GMT). An input value that has an explicit time zone specified is converted to UTC using the appropriate offset for that time zone. If no time zone is stated in the input string, then it is assumed to be in the time zone indicated by the system's timezone parameter, and is converted to UTC using the offset for the timezone zone. and again in section 8.5.3. Time Zones: All timezone-aware dates and times are stored internally in UTC. They are converted to local time in the zone specified by the timezone configuration parameter before being displayed to the client. There's been some talk of modifying timestamptz to store the original timezone specification along with the actual value, but at the moment all it is is a seconds-since-the-epoch numeric value. regards, tom lane
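The documented behavior can be sketched in a psql session (illustrative; assumes the Europe/Berlin zone name is available):

```sql
SET timezone = 'UTC';
SELECT '2006-03-26 02:00:00+01'::timestamptz;  -- displayed as 2006-03-26 01:00:00+00
SET timezone = 'Europe/Berlin';
SELECT '2006-03-26 02:00:00+01'::timestamptz;  -- same stored instant, shown in local time
```

The stored value never changes between the two SELECTs; only the display conversion does, which is exactly the point Agent M was making.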
Re: [GENERAL] Problem with archive_command
On 7/14/06, Christian Rengstl [EMAIL PROTECTED] wrote: Hi list, well now i know why it doesn't work. As the copy command did not work for a while the wal-files where gathering like mad and, as a beginner, i thought in order to clean up some disk space i should delete them. Well, that was probably the most stupid thing to do, because now pgsql tries to copy non-existing wal-files back and forth, which, as you might imagine, does not work very well. Is there a way to reset pgsql regarding the wal files and to tell it to start all over again to make wal files? I tried to restart the server, but without success. Please help you deleted wal files out of pg_xlog from a running server? have a recent backup? merlin
Re: [GENERAL] databases hidden in phppgadmin
Darren [EMAIL PROTECTED] writes: When I login to phppgadmin, the list of databases does not include any databases that are owned by a 'group' (i.e. a role with NOLOGIN set). I'm betting that phppgadmin is using something like an inner join of pg_database and pg_user to produce its display. As of PG 8.1 they need to be using pg_roles instead ... and if I were them, I'd make it a LEFT JOIN so that databases don't disappear completely if the owner can't be found. regards, tom lane
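A sketch of the kind of query Tom is suggesting (column names are from the 8.1 system catalogs; this is not phppgadmin's actual source):

```sql
-- LEFT JOIN against pg_roles so databases owned by NOLOGIN roles,
-- or by missing owners, still appear in the listing.
SELECT d.datname, r.rolname AS owner
FROM pg_database d
LEFT JOIN pg_roles r ON r.oid = d.datdba;
```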
Re: Antw: [GENERAL] Problem with archive_command
On Fri, Jul 14, 2006 at 03:22:43PM +0200, Christian Rengstl wrote: Hi list, well now i know why it doesn't work. As the copy command did not work for a while the wal-files where gathering like mad and, as a beginner, i thought in order to clean up some disk space i should delete them. Well, that was probably the most stupid thing to do, because now pgsql tries to copy non-existing wal-files back and forth, which, as you might imagine, does not work very well. Is there a way to reset pgsql regarding the wal files and to tell it to start all over again to make wal files? I tried to restart the server, but without success. pg_resetxlog might get you out of your current woes. But yeah, that was a pretty dumb thing to do. Got a backup? Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate.
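If it comes to that, the recovery looks roughly like this (a sketch only; run as the postgres user with the server fully stopped, and expect any transactions in the deleted WAL to be lost):

```
pg_ctl stop -D "$PGDATA" -m fast
pg_resetxlog "$PGDATA"
pg_ctl start -D "$PGDATA"
```

After a pg_resetxlog the cluster may be subtly inconsistent, so the safe follow-up is to dump the data and reload it into a freshly initialized cluster.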
Antw: Re: [GENERAL] Problem with archive_command
I made a pg_dump. Maybe it would help to delete the files in the pg_xlog/archive_status directory... anyway, the server just has a sandbox status so far. Merlin Moncure [EMAIL PROTECTED] 14.07.06 17.18 Uhr wrote: [previous message quoted in full, snipped]
Re: [GENERAL] Is there a way to run tables in RAM?
On 13 Jul 2006 14:32:42 -0700, Karen Hill [EMAIL PROTECTED] wrote: Roy Souther wrote: I would like to know if there is anyway to move a section of some tables into RAM to work on them. I have large table, about 700MB or so and growing. I also have a bizarre collection of queries that run hundreds of queries on a small section of this table. These queries only look at about 100 or so records at a time and they run hundreds of queries on the data looking for patterns. This causes the program to run very slowly because of hard drive access time. Some times it needs to write changes back to the records it is working with. If you are using linux, create a ramdisk and then add a Postgresql tablespace to that. I don't think this will help much. While the ramdisk might be better than the o/s file cache, it just limits the o/s ability to give memory to other things. Any modern o/s essentially has a giant ram disk that runs all the time. It dynamically resizes it depending on what is going on at the time. It is smart enough to keep frequently used portions of a file in ram all the time and less frequently used portions on disk, to free up memory for sorting, etc. If fast write access is needed (no syncs), just create a temp table. Just let the operating system do its thing. If the table is thrashing, you have two choices: optimize the database to be more cache friendly, or buy more ram. merlin
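A minimal sketch of the temp-table suggestion (the table and column names here are made up for illustration):

```sql
-- Temp tables live only for the session and skip WAL archiving overhead,
-- so repeated scans of a small working set stay cheap.
CREATE TEMP TABLE working_set AS
    SELECT * FROM big_table WHERE record_id BETWEEN 1000 AND 1100;
CREATE INDEX working_set_ix ON working_set (record_id);
ANALYZE working_set;
-- ...run the pattern-hunting queries against working_set,
-- then write any changes back to big_table at the end.
```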
Re: [GENERAL] Timestamp vs timestamptz
* Tom Lane: Florian Weimer [EMAIL PROTECTED] writes: * Agent M.: timestamp with time zone does not record the timezone you inserted it with- it simply stores the GMT version and converts to whatever timezone you like on demand. Are you sure? This behavior is not documented, and I can't reproduce it with PostgreSQL 8.1.4. Huh? Section 8.5.1.3 "Time Stamps" says Oops, I misread what Agent M wrote--timestamp without time zone vs. timestamp with time zone. Sorry about that. -- Florian Weimer [EMAIL PROTECTED] BFK edv-consulting GmbH http://www.bfk.de/ Durlacher Allee 47 tel: +49-721-96201-1 D-76131 Karlsruhe fax: +49-721-96201-99 ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Q: Table scans on set difference
Alban Hertroys [EMAIL PROTECTED] writes: G. Ralph Kuntz, MD wrote: explain select file_name from encounter_properties_table where file_name not in (select filename from xfiles); What about: explain select file_name from encounter_properties_table ept where not exists (select 1 from xfiles where xfiles.filename = ept.file_name); If you only need the file name, an EXCEPT would probably work much better: select file_name from encounter_properties_table except select filename from xfiles; Another possibility is to abuse the outer join machinery: select file_name, ... from encounter_properties_table l left join xfiles r on l.file_name = r.filename where r.filename is null; Generally speaking, NOT IN performance is going to suck unless the sub-select is small enough to fit in a hashtable. You could consider increasing work_mem enough that it would fit, but with 500K filenames needed, that's probably not going to win. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
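The four formulations in this thread return the same rows as long as neither column contains NULLs (NOT IN behaves differently once the sub-select yields a NULL). A quick sketch using SQLite via Python's sqlite3, with made-up file names, to compare them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE encounter_properties_table (file_name TEXT);
CREATE TABLE xfiles (filename TEXT);
INSERT INTO encounter_properties_table VALUES ('a'), ('b'), ('c');
INSERT INTO xfiles VALUES ('b');
""")

q_not_in = ("SELECT file_name FROM encounter_properties_table "
            "WHERE file_name NOT IN (SELECT filename FROM xfiles)")
q_not_exists = ("SELECT file_name FROM encounter_properties_table e "
                "WHERE NOT EXISTS (SELECT 1 FROM xfiles x "
                "WHERE x.filename = e.file_name)")
q_except = ("SELECT file_name FROM encounter_properties_table "
            "EXCEPT SELECT filename FROM xfiles")
q_outer = ("SELECT e.file_name FROM encounter_properties_table e "
           "LEFT JOIN xfiles x ON e.file_name = x.filename "
           "WHERE x.filename IS NULL")

results = [sorted(r[0] for r in conn.execute(q))
           for q in (q_not_in, q_not_exists, q_except, q_outer)]
print(results[0])  # ['a', 'c'] -- all four agree, given no NULLs
```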
Re: [GENERAL] apparent wraparound
Reece Hart [EMAIL PROTECTED] writes: After a system crash, postgresql 8.1.4 restarted but reported that I have an apparent wraparound: ... 2006-07-13 14:03:40 PDT [10092] LOG: next MultiXactId: 5475264; next MultiXactOffset: 13765525 ... 2006-07-13 14:03:40 PDT [10092] LOG: could not truncate directory pg_multixact/offsets: apparent wraparound 2006-07-13 14:03:40 PDT [10092] LOG: could not truncate directory pg_multixact/members: apparent wraparound There are two files left in pg_multixact: -rw------- 1 postgres postgres 16K 2006-07-13 14:13 pg_multixact/members/00D2 -rw------- 1 postgres postgres 144K 2006-07-13 14:13 pg_multixact/offsets/0053

That's odd. Those files correspond to the next MultiXactId and MultiXactOffset, so there shouldn't have been any such complaint. [ looks at code... ] I wonder if this is happening because shared->latest_page_number hasn't been set up yet when we do the end-of-recovery checkpoint.

In http://www.mail-archive.com/pgsql-general@postgresql.org/msg76635.html Tom Lane implies that I can probably ignore these messages.

No, I was saying that the "invalid server process ID -1" was harmless. The "apparent wraparound" is a distinct issue, and I'd ask you the same question I asked Thomas: do you continue to get those log messages during subsequent checkpoints? regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] I need help creating a query
Nice, Richard, but you use max(startdate); how about the salary? I can't use max(salary); what if he got a pay cut?

My current solution is to write a nested query in the field list, like: SELECT worker.*, (select salary FROM position where worker_id=worker.worker_id and fecha='2006-04-01' LIMIT 1) as salary FROM worker; but I can only return 1 column from that subquery, and repeating the same subquery for each column needed (position, date and salary) seems a little too much. If I write a procedure, would postgres optimize the access?

On 7/13/06, Richard Broersma Jr [EMAIL PROTECTED] wrote: worker: worker_id, name position: position_id, worker_id, position, startdate, salary If I perform a query joining both tables, I can obtain all the workers and the positions they've had. SELECT name, startdate, position, salary FROM worker JOIN position USING(worker_id); worker1 | 2001-01-01 | boss | 99 worker2 | 2001-01-01 | cleaning | 100 worker2 | 2006-04-01 | programmer | 2 worker2 | 2006-07-04 | management | 25000 So far so good; now I need to obtain all the workers only with the position they had on a given date. If I wanted to know the positions on '2006-05-01' it would return worker1 | 2001-01-01 | boss | 99 worker2 | 2006-04-01 | programmer | 2 This is just a quick guess. I am not sure if the logic is correct but it could be a starting point. select P2.worker_id, P2.pdate, P1.position, P1.salary from position as P1 join (select worker_id, max(startdate) as pdate from position where startdate <= '2006-05-01' group by worker_id, position_id) as P2 on (P1.worker_id = P2.worker_id) and (P1.startdate = P2.pdate); Regards, Richard Broersma Jr.
Re: [GENERAL] I need help creating a query
How about if we make it simpler, only 1 table:

create table worker( name varchar(50), position varchar(50), startdate date, salary numeric(9,2));
insert into worker values ('Jon', 'boss', '2001-01-01', 1000.00);
insert into worker values ('Peter', 'cleaning', '2002-01-01', 100.00);
insert into worker values ('Peter', 'programming', '2004-01-01', 300.00);
insert into worker values ('Peter', 'management', '2006-01-01', 500.00);
select * from worker;

 name  |  position   | startdate  | salary
-------+-------------+------------+---------
 Jon   | boss        | 2001-01-01 | 1000.00
 Peter | cleaning    | 2002-01-01 |  100.00
 Peter | programming | 2004-01-01 |  300.00
 Peter | management  | 2006-01-01 |  500.00

I want to group by name, order by date desc and show the first grouped salary; maybe I should write an aggregate function that saves the first value and ignores the next ones. Is there already an aggregate function that does this? I haven't written any aggregate functions yet; can anybody spare some pointers?

On 7/14/06, Sergio Duran [EMAIL PROTECTED] wrote: Nice, Richard, but you use max(startdate); how about the salary? I can't use max(salary); what if he got a pay cut? My current solution is to write a nested query in the field list, like SELECT worker.*, (select salary FROM position where worker_id=worker.worker_id and fecha='2006-04-01' LIMIT 1) as salary FROM worker; but I can only return 1 column from that subquery and repeating the same subquery for each column needed (position, date and salary) seems a little too much. If I write a procedure, would postgres optimize the access? On 7/13/06, Richard Broersma Jr [EMAIL PROTECTED] wrote: worker: worker_id, name position: position_id, worker_id, position, startdate, salary If I perform a query joining both tables, I can obtain all the workers and the positions they've had.
SELECT name, startdate, position, salary FROM worker JOIN position USING(worker_id); worker1 | 2001-01-01 | boss | 99 worker2 | 2001-01-01 | cleaning | 100 worker2 | 2006-04-01 | programmer | 2 worker2 | 2006-07-04 | management | 25000 So far so good; now I need to obtain all the workers only with the position they had on a given date. If I wanted to know the positions on '2006-05-01' it would return worker1 | 2001-01-01 | boss | 99 worker2 | 2006-04-01 | programmer | 2 This is just a quick guess. I am not sure if the logic is correct but it could be a starting point. select P2.worker_id, P2.pdate, P1.position, P1.salary from position as P1 join (select worker_id, max(startdate) as pdate from position where startdate <= '2006-05-01' group by worker_id, position_id) as P2 on (P1.worker_id = P2.worker_id) and (P1.startdate = P2.pdate); Regards, Richard Broersma Jr.
Re: [GENERAL] cant connect to the database, even after doing start
On Fri, 2006-07-14 at 02:48, surabhi.ahuja wrote: this is what is happening:

stop() {
    echo "Stopping ${NAME} service: "
    if [ "`uname`" = "Linux" ]; then
        /bin/sh -c "$PGCTL stop -D $PGDATA -s -m fast" > /dev/null 2>&1
    fi
    ret=$?
    if [ $ret -eq 0 ]
    then
        echo_success
    else
        echo_failure
        if [ "`uname`" = "Linux" ]; then
            /bin/sh -c "$PGCTL stop -D $PGDATA -s -m immediate" > /dev/null 2>&1
        fi
    fi
    echo
}

You mean to say that /bin/sh -c "$PGCTL stop -D $PGDATA -s -m immediate" > /dev/null 2>&1 is causing the problem? What should be done; should it be removed?

It's not how I'd do it, certainly. Which branch gets run most of the time? Have you tested to make sure that the -m fast really runs and gets a chance to work? What script is this from? Is it a stock one that came with your distribution, or home grown? ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] I need help creating a query
On 15/07/2006, at 2:07 AM, Sergio Duran wrote: How about if we make it simpler, only 1 table: create table worker( name varchar(50), position varchar(50), startdate date, salary numeric(9,2)); insert into worker values ('Jon', 'boss', '2001-01-01', 1000.00); insert into worker values ('Peter', 'cleaning', '2002-01-01', 100.00); insert into worker values ('Peter', 'programming', '2004-01-01', 300.00); insert into worker values ('Peter', 'management', '2006-01-01', 500.00); select * from worker; [...] I want to group by name, order by date desc and show the first grouped salary [...] Is there already an aggregate function that does this?

Try this:

SELECT w2.*
FROM (
    SELECT name, MAX(startdate) AS startdate
    FROM worker
    GROUP BY name
) AS w1
JOIN worker AS w2 ON (w1.name = w2.name AND w1.startdate = w2.startdate);

Obviously you would use a real primary key instead of 'name' for the join constraint, but you get the idea.

-- Seeya...Q

Quinton Dolan - [EMAIL PROTECTED]
Gold Coast, QLD, Australia
Ph: +61 419 729 806
Re: [GENERAL] I need help creating a query
create table worker( name varchar(50), position varchar(50), startdate date, salary numeric(9,2)); insert into worker values ('Jon', 'boss', '2001-01-01', 1000.00); insert into worker values ('Peter', 'cleaning', '2002-01-01', 100.00); insert into worker values ('Peter', 'programming', '2004-01-01', 300.00); insert into worker values ('Peter', 'management', '2006-01-01', 500.00); select * from worker;

 name  |  position   | startdate  | salary
-------+-------------+------------+---------
 Jon   | boss        | 2001-01-01 | 1000.00
 Peter | cleaning    | 2002-01-01 |  100.00
 Peter | programming | 2004-01-01 |  300.00
 Peter | management  | 2006-01-01 |  500.00

I want to group by name, order by date desc and show the first grouped salary; maybe I should write an aggregate function that saves the first value and ignores the next ones. Is there already an aggregate function that does this? I haven't written any aggregate functions yet; can anybody spare some pointers?

This query doesn't take the max salary. First, the sub-select gives you the maximum start date for each employee that occurred on or before your given date '2005-01-01', regardless of whether they got a raise or a cut. Then we join the result of the sub-select to the main table to get the specific records that meet the criteria of the sub-select.

select W2.name, W1.position, W2.pdate, W1.salary
from worker as W1
join (select name, max(startdate) as pdate
      from worker
      where startdate <= '2005-01-01'
      group by name) as W2
  on (W1.name = W2.name) and (W1.startdate = W2.pdate);

 name  |  position   |   pdate    | salary
-------+-------------+------------+---------
 Jon   | boss        | 2001-01-01 | 1000.00
 Peter | programming | 2004-01-01 |  300.00

So with this query, we get what everyone's salary would be on the date of '2005-01-01', regardless of raises or cuts. Regards, Richard Broersma Jr. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
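Richard's groupwise-max approach can be checked against the sample data from the thread. A sketch using SQLite through Python's sqlite3 (PostgreSQL would accept the same query unchanged):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE worker (name TEXT, position TEXT, startdate TEXT, salary REAL);
INSERT INTO worker VALUES
  ('Jon',   'boss',        '2001-01-01', 1000.00),
  ('Peter', 'cleaning',    '2002-01-01',  100.00),
  ('Peter', 'programming', '2004-01-01',  300.00),
  ('Peter', 'management',  '2006-01-01',  500.00);
""")

# Sub-select finds each worker's latest start date on or before the cutoff;
# the join pulls the full row (position, salary) for exactly that date.
rows = conn.execute("""
SELECT w1.name, w1.position, w1.startdate, w1.salary
FROM worker AS w1
JOIN (SELECT name, max(startdate) AS pdate
      FROM worker
      WHERE startdate <= '2005-01-01'
      GROUP BY name) AS w2
  ON w1.name = w2.name AND w1.startdate = w2.pdate
ORDER BY w1.name
""").fetchall()
print(rows)
# [('Jon', 'boss', '2001-01-01', 1000.0), ('Peter', 'programming', '2004-01-01', 300.0)]
```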
Re: [GENERAL] I need help creating a query
This is what I did; I used plpgsql:

create or replace function first_accum(anyelement, anyelement) returns anyelement as $$
BEGIN
    IF $1 IS NOT NULL THEN
        RETURN $1;
    ELSE
        RETURN $2;
    END IF;
END;
$$ language plpgsql;

then I created the aggregate:

CREATE AGGREGATE first(sfunc = first_accum, basetype = anyelement, stype = anyelement);

first_accum is basically the same thing as coalesce, but CREATE AGGREGATE wasn't allowing me to use coalesce. Now I can get the salaries and positions each worker had on a given date:

SELECT name, first(startdate), first(salary) FROM worker JOIN position ON position.worker_id = worker.worker_id WHERE fecha = '2006-05-01' ORDER BY fecha DESC;

I'd appreciate some feedback; I hope there's a better way to do this (maybe without creating the plpgsql function but using an internal function).

On 7/14/06, Sergio Duran [EMAIL PROTECTED] wrote: Nice, Richard, but you use max(startdate); how about the salary? I can't use max(salary); what if he got a pay cut? My current solution is to write a nested query in the field list, like SELECT worker.*, (select salary FROM position where worker_id=worker.worker_id and fecha='2006-04-01' LIMIT 1) as salary FROM worker; but I can only return 1 column from that subquery and repeating the same subquery for each column needed (position, date and salary) seems a little too much. If I write a procedure, would postgres optimize the access? On 7/13/06, Richard Broersma Jr [EMAIL PROTECTED] wrote: worker: worker_id, name position: position_id, worker_id, position, startdate, salary If I perform a query joining both tables, I can obtain all the workers and the positions they've had. SELECT name, startdate, position, salary FROM worker JOIN position USING(worker_id); worker1 | 2001-01-01 | boss | 99 worker2 | 2001-01-01 | cleaning | 100 worker2 | 2006-04-01 | programmer | 2 worker2 | 2006-07-04 | management | 25000 So far so good; now I need to obtain all the workers only with the position they had on a given date.
If I wanted to know the positions on '2006-05-01' it would return worker1 | 2001-01-01 | boss | 99 worker2 | 2006-04-01 | programmer | 2 This is just a quick guess. I am not sure if the logic is correct but it could be a starting point. select P2.worker_id, P2.pdate, P1.position, P1.salary from position as P1 join (select worker_id, max(startdate) as pdate from position where startdate <= '2006-05-01' group by worker_id, position_id) as P2 on (P1.worker_id = P2.worker_id) and (P1.startdate = P2.pdate); Regards, Richard Broersma Jr.
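Sergio's first() idea can be prototyped outside the database. A minimal sketch using Python's sqlite3 (the class name and data values are invented for illustration) shows the same keep-the-first-non-null accumulator, and why it only helps if you control the order rows reach the aggregate:

```python
import sqlite3

class First:
    """Keep the first non-NULL value seen -- same idea as the plpgsql first_accum."""
    def __init__(self):
        self.value = None

    def step(self, v):
        if self.value is None:
            self.value = v

    def finalize(self):
        return self.value

# Direct check of the accumulator: the first non-NULL value wins, later ones
# are ignored.
f = First()
for v in (None, 500.00, 300.00, 100.00):
    f.step(v)
print(f.finalize())  # 500.0

# It can also be registered as a SQL aggregate. Note that, as with the
# PostgreSQL version, the result depends on the order rows reach the
# aggregate, which plain SQL does not guarantee -- the weak point of
# this approach.
conn = sqlite3.connect(":memory:")
conn.create_aggregate("first", 1, First)
```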
Re: [GENERAL] I need help creating a query
- Original Message - From: Sergio Duran [EMAIL PROTECTED] To: pgsql-general@postgresql.org Sent: Thursday, July 13, 2006 9:20 PM Subject: [GENERAL] I need help creating a query

Hello, I need a little help creating a query. I have two tables, worker and position; for simplicity's sake worker only has its ID and its name, and position has the ID of the worker, the name of his position, a date, and his salary. worker: worker_id, name position: position_id, worker_id, position, startdate, salary If I perform a query joining both tables, I can obtain all the workers and the positions they've had. SELECT name, startdate, position, salary FROM worker JOIN position USING(worker_id); worker1 | 2001-01-01 | boss | 99 worker2 | 2001-01-01 | cleaning | 100 worker2 | 2006-04-01 | programmer | 2 worker2 | 2006-07-04 | management | 25000 So far so good; now I need to obtain all the workers only with the position they had on a given date. If I wanted to know the positions on '2006-05-01' it would return worker1 | 2001-01-01 | boss | 99 worker2 | 2006-04-01 | programmer | 2

This should work:

select distinct on (W.worker_id) W.name, P.position, P.salary
from worker W, position P
where P.worker_id = W.worker_id
  and 'SOME DATE' >= P.startdate
order by W.worker_id, P.startdate desc

Cheers Marcin ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
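Marcin's DISTINCT ON is PostgreSQL-specific, but the same "latest position as of a date" result can be had portably with a correlated subquery. A sketch using SQLite via Python's sqlite3, with invented salary figures:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE worker (worker_id INTEGER, name TEXT);
CREATE TABLE position (worker_id INTEGER, position TEXT, startdate TEXT, salary REAL);
INSERT INTO worker VALUES (1, 'worker1'), (2, 'worker2');
INSERT INTO position VALUES
  (1, 'boss',       '2001-01-01', 99999),
  (2, 'cleaning',   '2001-01-01',   100),
  (2, 'programmer', '2006-04-01', 20000),
  (2, 'management', '2006-07-04', 25000);
""")

# For each worker, keep the position row whose startdate is the latest one
# on or before the cutoff -- the row DISTINCT ON ... ORDER BY startdate DESC
# would pick in PostgreSQL.
rows = conn.execute("""
SELECT w.name, p.position, p.salary
FROM worker w JOIN position p ON p.worker_id = w.worker_id
WHERE p.startdate = (SELECT max(p2.startdate) FROM position p2
                     WHERE p2.worker_id = w.worker_id
                       AND p2.startdate <= '2006-05-01')
ORDER BY w.name
""").fetchall()
print(rows)
# [('worker1', 'boss', 99999.0), ('worker2', 'programmer', 20000.0)]
```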
Re: [GENERAL] I need help creating a query
Ok, all the suggestions were good. I think I'll stick with Marcin Mank's query for now; I'll also try to work further with Richard Broersma's query later. Thank you guys, you were really helpful.

On 7/14/06, Marcin Mank [EMAIL PROTECTED] wrote: [...] This should work: select distinct on (W.worker_id) W.name, P.position, P.salary from worker W, position P where P.worker_id = W.worker_id and 'SOME DATE' >= P.startdate order by W.worker_id, P.startdate desc Cheers Marcin
Re: [GENERAL] Dynamic table with variable number of columns
On Wed, Jul 12, 2006 at 13:38:34 -0700, [EMAIL PROTECTED] wrote: Hi, Thanks again. One more question. Will the crosstab function work if I will not know the number/names of columns beforehand? Or do I need to supply column headings? I checked a bit into this, and the actual contrib name is tablefunc, not crosstab. It provides crosstab functions for up to 4 columns, but it isn't hard to make ones that handle more columns. You can read the included readme file at: http://developer.postgresql.org/cvsweb.cgi/pgsql/contrib/tablefunc/README.tablefunc?rev=1.14 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
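When the category set is known in advance, a crosstab is just conditional aggregation, which is essentially what tablefunc's crosstab() packages up; unknown column sets require building the query dynamically from a first pass over the distinct categories. A sketch using SQLite via Python's sqlite3 with made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw (row_name TEXT, category TEXT, value INTEGER);
INSERT INTO raw VALUES
  ('r1', 'a', 10), ('r1', 'b', 20),
  ('r2', 'a', 30), ('r2', 'b', 40);
""")

# Pivot the (row, category, value) triples into one column per category.
rows = conn.execute("""
SELECT row_name,
       max(CASE WHEN category = 'a' THEN value END) AS a,
       max(CASE WHEN category = 'b' THEN value END) AS b
FROM raw
GROUP BY row_name
ORDER BY row_name
""").fetchall()
print(rows)  # [('r1', 10, 20), ('r2', 30, 40)]
```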
Re: [GENERAL] apparent wraparound
Tom Lane wrote: I'd ask you the same question I asked Thomas: do you continue to get those log messages during subsequent checkpoints? No, I don't. The error did not reappear during ~2h of continuous inserts since my report, didn't reappear after a forced checkpoint (i.e., via psql), and did not reappear on a recent stop/start cycle. There was a period when my cron-driven vacuuming was broken and, in principle, I might have been susceptible to wraparound. However, I don't see how we could have had 1B transactions in that period. One other tidbit: a colleague inadvertently updated ~10M records. After this, I started getting errors like: number of page slots needed (2952496) exceeds max_fsm_pages (50) I restored from a backup, but still have: 'number of page slots needed (183248) exceeds max_fsm_pages (5)' (I reduced max_fsm_pages after the restore.) I'm not sure whether the vacuum and fsm info is relevant. -Reece -- Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] apparent wraparound
Reece -- The "number of slots needed exceeds" messages are telling you that the current FSM (Free Space Map) does not have enough space allocated to track all of the old tuples that are to be reused. I suspect that having such a situation would affect the wraparound issue, since you'd have dead wood which hasn't been recycled. You need to edit the postgresql.conf file and increase the max_fsm_pages and max_fsm_relations parameters and then restart postgres (I think you have to actually stop and restart, as opposed to a reload, but I could be wrong). You may end up needing to adjust the total amount of RAM allocated to shared memory to allow for as large an FSM as you'll need. That requires a system reboot. HTH, Greg Williamson DBA GlobeXplorer LLC -Original Message- From: [EMAIL PROTECTED] on behalf of Reece Hart Sent: Fri 7/14/2006 12:30 PM To: pgsql-general Cc: Subject: Re: [GENERAL] apparent wraparound Tom Lane wrote: I'd ask you the same question I asked Thomas: do you continue to get those log messages during subsequent checkpoints? No, I don't. The error did not reappear during ~2h of continuous inserts since my report, didn't reappear after a forced checkpoint (i.e., via psql), and did not reappear on a recent stop/start cycle. There was a period when my cron-driven vacuuming was broken and, in principle, I might have been susceptible to wraparound. However, I don't see how we could have had 1B transactions in that period. One other tidbit: a colleague inadvertently updated ~10M records. After this, I started getting errors like: number of page slots needed (2952496) exceeds max_fsm_pages (50) I restored from a backup, but still have: 'number of page slots needed (183248) exceeds max_fsm_pages (5)' (I reduced max_fsm_pages after the restore.) I'm not sure whether the vacuum and fsm info is relevant. -Reece -- Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0 ---(end of broadcast)--- TIP 4: Have you searched our list archives?
http://archives.postgresql.org ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
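The two parameters Greg mentions are set in postgresql.conf. A hedged sketch; these numbers are placeholders, not recommendations -- size max_fsm_pages from the "page slots needed" figure in the log messages:

```ini
# postgresql.conf -- illustrative values only
max_fsm_pages = 3000000       # must be at least the "page slots needed" reported by VACUUM
max_fsm_relations = 1000      # number of tables and indexes tracked in the FSM
```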
[GENERAL] How to access a table from one database to another database
Hai all, I have 2 databases named PAO and CAS. PAO contains 3 schemas named Public, pao, sts. CAS contains 4 schemas named Public, cao, sts, reports. Now I am in the PAO database; now I want to access table 'activity' in schema 'cas' in the CAS database. How is it possible? 2nd thing is... I have access to 2 servers, i.e. local and mainserver. How to access a table from one server to another server? Please tell me... because we need this one.

_Vivekananda.R | Software Engineer | CGDA Program. Infinite Computer Solutions India Pvt. Ltd. | Exciting Times ... Infinite Possibilities... SEI-CMMI level 5 | ISO 9001:2000 IT SERVICES | BPO | Telecom | Finance | Healthcare | Manufacturing | Energy Utilities | Retail Distribution | Government Tel +91-80-4133 -2 Ext:3006 | Mobile: 9986463365 | Fax +91-80-513-10853 | www.infics.com USA | United Kingdom | India | China | Singapore | Malaysia | Hong Kong _ Information transmitted by this e-mail is proprietary to Infinite Computer Solutions and / or its Customers and is intended for use only by the individual or the entity to which it is addressed, and may contain information that is privileged, confidential or exempt from disclosure under applicable law. If you are not the intended recipient or it appears that this mail has been forwarded to you without proper authority, you are notified that any use or dissemination of this information in any manner is strictly prohibited. In such cases, please notify us immediately at [EMAIL PROTECTED] and delete this email from your records.
Re: [GENERAL] How to access a table from one database to another
On Sat, 15 Jul 2006 10:26:55 +0530 VivekanandaSwamy R. [EMAIL PROTECTED] wrote: Hai all, I have 2 databases namee PAO and CAS. PAO contains 3 schemas named Public,pao,sts CAS contains 4 schemas named Public,cao,sts,reports Now i am in PAO database..now i want access table 'activity' in schema 'cas' in CAS database. How it is posible. 2nd thing is... i have 2 servers access i.e local and mainserver. How access table from one server to another server? please tel me...because we need this one Vivekananda, I hope you just want the first one. I think what you need is to re-think your database design. From the PostgreSQL manual: A PostgreSQL database cluster contains one or more named databases. Users and groups of users are shared across the entire cluster, but no other data is shared across databases. Any given client connection to the server can access only the data in a single database, the one specified in the connection request. http://www.postgresql.org/docs/8.1/static/ddl-schemas.html Now your CLIENT can connect to two different databases with two different connections and ditto for connecting to two different clusters. John Purser -- Always do right. This will gratify some people and astonish the rest. -- Mark Twain ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
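John's point -- one connection per database, with the client combining the results -- can be sketched with Python's sqlite3 standing in for the two PostgreSQL databases (the table 'activity' comes from the thread; 'orders', the columns, and the data are invented for illustration):

```python
import sqlite3

# Two separate databases, stand-ins for PAO and CAS; each gets its own
# connection, and the client joins the data itself.
pao = sqlite3.connect(":memory:")
cas = sqlite3.connect(":memory:")

pao.execute("CREATE TABLE orders (activity_id INTEGER, note TEXT)")
pao.execute("INSERT INTO orders VALUES (1, 'audit')")
cas.execute("CREATE TABLE activity (activity_id INTEGER, name TEXT)")
cas.execute("INSERT INTO activity VALUES (1, 'review')")

# Pull the lookup table from the second database, then match client-side.
activities = dict(cas.execute("SELECT activity_id, name FROM activity"))
joined = [(note, activities.get(aid))
          for aid, note in pao.execute("SELECT activity_id, note FROM orders")]
print(joined)  # [('audit', 'review')]
```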
Re: [GENERAL] How to access a table from one database to another
VivekanandaSwamy R. wrote: Hai all, I have 2 databases named PAO and CAS. PAO contains 3 schemas named Public, pao, sts. CAS contains 4 schemas named Public, cao, sts, reports. Now I am in the PAO database; now I want to access table 'activity' in schema 'cas' in the CAS database. How is it possible? 2nd thing is... I have access to 2 servers, i.e. local and mainserver. How to access a table from one server to another server? Please tell me... because we need this one.

This can be done with dblink, which is a contrib module. See FAQ item 4.17 for more info: http://www.postgresql.org/docs/faqs.FAQ.html -- Tony Caduto http://www.amsoftwaredesign.com Home of PG Lightning Admin for Postgresql 8.x ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] How to access a table from one database to another database
Possibly dblink, in the ./contrib directory, would help? I have never had to use it but it seems like it might be what you need. HTH, Greg Williamson DBA GlobeXplorer LLC

-Original Message- From: [EMAIL PROTECTED] on behalf of VivekanandaSwamy R. Sent: Fri 7/14/2006 9:56 PM To: pgsql-general@postgresql.org Subject: [GENERAL] How to access a table from one database to another database

Hai all, I have 2 databases named PAO and CAS. PAO contains 3 schemas named Public, pao, sts. CAS contains 4 schemas named Public, cao, sts, reports. Now I am in the PAO database; now I want to access table 'activity' in schema 'cas' in the CAS database. How is it possible? 2nd thing is... I have access to 2 servers, i.e. local and mainserver. How to access a table from one server to another server? Please tell me... because we need this one.
---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match