Re: [GENERAL] CentOS 4 RPMs for 8.2.4?
On Wed, 2007-07-25 at 19:32 +0300, Devrim GÜNDÜZ wrote:
> Hi,
>
> On Sat, 2007-07-21 at 15:57 -0700, Steve Wampler wrote:
> > I need the Java and Python interfaces supplied with (from 8.1.9):
> >
> > postgresql-jdbc-8.1.4-1.centos.1
> > postgresql-python-8.1.9-1.el4s1.1
>
> The actual problem is that I did not build jdbc packages for the
> latest releases.
>
> I just built the 8.2 srpm; I will test and publish it by the weekend.
> Keep an eye on Planet PostgreSQL -- I will blog after I push the
> packages to the FTP repos.

Er... can I ask where I can find the actual differences between the 8.1 and 8.2 branches? Like the OP, I too would like to get 8.2.4, since it's the latest stable version, and since I'm installing it for the first time I _should_ choose the latest stable branch. I've just downloaded the 8.2.4 RPMs for RHEL4 per the OP. Do I have to install everything, or will some parts do?

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] Database Select Slow
Hi all, I am facing a performance issue here. Whenever I do a count(*) on a table that contains about 300K records, it takes a few minutes to complete, whereas my other application, which counts more than 500K records, takes less than 10 seconds. I have indexed all the essential columns and it still does not improve the speed. All help and advice is appreciated. Thanks.
Re: [GENERAL] CREATE RULE on VIEW with INSERT after UPDATE does not work
"Peter Marius" <[EMAIL PROTECTED]> writes:
> I created a view on all entries with stop=null. The DB interaction
> should be done through the view, so I added rules for INSERT, UPDATE
> and DELETE. INSERT and UPDATE work fine, but the DELETE rule stops
> after the first UPDATE statement in the rule body; any further
> statements are ignored!

Once you change the stop field to not be null, the row's no longer part of the view, so there's nothing to update. You need to re-order the operations (and, probably, combine the two UPDATE commands) so that you don't remove the row from the view until the last step.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
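To make this suggestion concrete, here is an untested sketch of the ON UPDATE rule from elsewhere in the thread, re-ordered so the logging INSERTs run first and the two UPDATEs are combined into one final statement (table and column names are the original poster's):

```sql
CREATE RULE sru AS ON UPDATE TO myview DO INSTEAD (
    -- do all the logging while old.id still matches a row in the view
    INSERT INTO mylog (id) VALUES (1);
    INSERT INTO mylog (id) VALUES (2);
    INSERT INTO mylog (id) VALUES (3);
    INSERT INTO mylog (id) VALUES (4);
    -- remove the row(s) from the view only as the very last step
    UPDATE mytable SET stop = now()
        WHERE id IN (old.id, old.id + 1) AND stop IS NULL;
);
```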
Re: [GENERAL] Configuration starting point...
The out-of-the-box configs are pretty awful for you. Read some list archives (from this list and pgsql-performance) and also take a look at http://www.powerpostgresql.com/Downloads/annotated_conf_80.html

On Fri, 10 Aug 2007, Nathan Wilhelmi wrote:
> Hello - Just installed 8.2.4 on a Solaris 9 box. It's an 8-way (15000
> MHz sparc) with 32GB of RAM. We don't know the exact table structure
> or access patterns yet, although the first thing to be looked at is a
> Sesame triple store DB. I would expect this DB to be skewed more
> towards reads than writes. Based on this, are the out-of-the-box
> configs pretty good, or are there any recommended changes I should
> make to start with? Thanks! -Nate
[GENERAL] CREATE RULE on VIEW with INSERT after UPDATE does not work
Hi all, I have a table "mytable" to log the validity of data records with start and stop times. To see which records are still valid, I created a view on all entries with stop=null. The DB interaction should be done through the view, so I added rules for INSERT, UPDATE and DELETE. INSERT and UPDATE work fine, but the DELETE rule stops after the first UPDATE statement in the rule body; any further statements are ignored! Multiple statements are not the problem (log=1,2), and the first UPDATE statement also works (stop=now()). Is this a known problem? Am I doing something wrong? Is there any workaround for it? Thanks, Peter

PS: Here is the code for testing; mylog should contain 1,2,3,4:

DROP VIEW myview;
DROP TABLE mytable;
DROP TABLE mylog;

CREATE TABLE mylog(id int);
CREATE TABLE mytable(id serial, proc text, start timestamp(4), stop timestamp(4));
CREATE VIEW myview AS
    SELECT id, proc, start, stop FROM mytable WHERE stop IS null;

CREATE RULE sri AS ON INSERT TO myview DO INSTEAD
    INSERT INTO mytable (proc, start, stop) VALUES (new.proc, now(), null);
CREATE RULE srd AS ON DELETE TO myview DO INSTEAD
    UPDATE mytable SET stop = now() WHERE id = old.id AND stop IS null;
CREATE RULE sru AS ON UPDATE TO myview DO INSTEAD (
    INSERT INTO mylog (id) VALUES (1);
    INSERT INTO mylog (id) VALUES (2);
    UPDATE mytable SET stop = now() WHERE id = old.id AND stop IS null;
    INSERT INTO mylog (id) VALUES (3);
    UPDATE mytable SET stop = now() WHERE id = old.id+1 AND stop IS null;
    INSERT INTO mylog (id) VALUES (4);
);

-- Inserting some values works fine
INSERT INTO myview (proc) VALUES ('alpha');
INSERT INTO myview (proc) VALUES ('omega');
INSERT INTO myview (proc) VALUES ('gamma');
INSERT INTO myview (proc) VALUES ('delta');

-- Both table and view are identical
SELECT * FROM mytable ORDER BY id;
SELECT * FROM myview ORDER BY id;
SELECT * FROM mylog ORDER BY id;

-- Deleting a row works fine, too
DELETE FROM myview WHERE id = 4;

-- Row 4 is deleted
SELECT * FROM mytable ORDER BY id;
SELECT * FROM myview ORDER BY id;
SELECT * FROM mylog ORDER BY id;

-- !! The UPDATE rule does not work correctly !!
UPDATE myview SET proc='beta' WHERE id = 2;

-- Process 2 is updated, but there is no entry in the log
SELECT * FROM mytable ORDER BY id;
SELECT * FROM myview ORDER BY id;
SELECT * FROM mylog ORDER BY id;

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Configuration starting point...
Hello - Just installed 8.2.4 on a Solaris 9 box. It's an 8-way (15000 MHz sparc) with 32GB of RAM. We don't know the exact table structure or access patterns yet, although the first thing to be looked at is a Sesame triple store DB. I would expect this DB to be skewed more towards reads than writes. Based on this, are the out-of-the-box configs pretty good, or are there any recommended changes I should make to start with? Thanks! -Nate
[GENERAL] Deadlocks caused by a foreign key constraint
Hello. I am seeing a number of deadlocks caused by a foreign key constraint. Assume we have two tables, A and B. Table A has a field fk that references B.id as a foreign key constraint.

-- transaction #1
BEGIN;
...
INSERT INTO A(x, y, fk) VALUES (1, 2, 666);
...
END;

-- transaction #2
BEGIN;
UPDATE B SET z = z + 1 WHERE id = 666;
...
UPDATE B SET z = z + 1 WHERE id = 666;
...
UPDATE B SET z = z + 1 WHERE id = 666;
END;

You see, table A is only inserted into, and table B only has the z field of its single row updated. If we execute a lot of these transactions concurrently using multiple parallel threads, we sometimes get a deadlock:

DETAIL: Process 6867 waits for ShareLock on transaction 1259392; blocked by process 30444. Process 30444 waits for ShareLock on transaction 1259387; blocked by process 6867.
CONTEXT: SQL statement "SELECT 1 FROM ONLY "public"."B" x WHERE "id" = $1 FOR SHARE OF x"

If I delete the foreign key constraint, everything works fine. It seems Postgres thinks that the "UPDATE B SET z = z + 1 WHERE id = 666" query may modify the B.id field and touch A.fk, so it holds the shared lock on it. The question is: is it possible to KEEP this foreign key constraint but avoid the deadlocks?
Re: [GENERAL] UPDATES hang every 5 minutes
Ok, partial day results. Looks like my changes have not solved the problem, just spread it out a little more (as would be expected based on your responses). The delays are now shorter (about half) but occur more frequently (maybe once a minute). The params I used are:

bgwriter_lru_percent = 5.0
bgwriter_lru_maxpages = 100
bgwriter_all_percent = 5.0
bgwriter_all_maxpages = 100
wal_buffers = 20
checkpoint_segments = 10
shared_buffers = 196608

I'm going to do some reading today/tonight with the docs/links you have provided and give it another shot on Monday. Right now I'm leaning towards lowering my shared_buffers param, as you suggested as a third possibility. I will also take a look at the buffer cache data to get a better feel. It seems to me that the real solution is for me to stop using the database as an IPC system to pass somewhat time-critical data between processes; given the time constraints I'm working under, this unfortunately was the quickest route. Again, I can't say thanks enough for the great info you have provided. I'll continue to post any changes/results I make/see. Marc

---(end of broadcast)---
TIP 6: explain analyze is your friend
Timestamp in pg_dump output, was Re: [GENERAL] How I can know a back up database is up to date
On 09/08/2007 23:40, [EMAIL PROTECTED] wrote:
> My database is restored from a dump file every day. How do I know that
> this database is up to date (as it has no timestamp in any table)? If
> I create a file, I can see when I created it by looking at its
> properties. How can I do the same thing with a backed-up database?

Actually, it *would* be really handy if pg_dump included a timestamp in the plain-text output. The version I use regularly (Windows) doesn't... it simply says "PostgreSQL database dump", which is only helpful to a point. :-)

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---
Re: [GENERAL] UPDATES hang every 5 minutes
On Fri, 10 Aug 2007, Marc Rossi wrote:
> Thanks for the heads up. The box in question is a dual cpu (xeon dual
> cores) with 8 gig & a pair of 10k 146gb raid 1 arrays. I have the
> pg_xlog dir on one array (along with the OS) & the rest of the data
> on the other array by itself.

Yeah, that's kinda what I thought. Here's a quick way to think about the background writer parameters. Pages are 8KB, and the default bgwriter_delay will execute every 200ms, or 5 times per second. If you set one of the maxpages parameters to 100, that means that component of the background writer might write as much as 8KB*100*5=4MB/sec worth of data to the disk continuously. The config you were copying had the maxpages parameters set to 200 and 600, which meant that in a heavy update situation the background writer might be writing out (2+6)*4=32MB/second *on top of* all the other reads and writes going on. That's a pretty substantial additional load to add to just two pairs of disks. Also, anything written by the all-scan writer (which was by far the more aggressive one in that setup) has the potential to be a wasted write if that particular page gets changed again before the next checkpoint, which sounds highly likely given how you described your workload. Making those parameters too high will actually make the system less efficient--and that can make checkpoints worse.

> A couple of quick questions. On the fly I can change these params and
> use "pg_ctl reload" to put them in effect, correct?

I believe all of the ones you'll be touching can be adjusted that way. To confirm a change took, pop into psql and do "show <setting-name>" to see the current value.

> Also, I have my checkpoint_segments set to 10; if I were to lower
> this (say 5), would this possibly have the effect of checkpointing a
> little more often with less data? (right now I hit the
> checkpoint_timeout)

You're thinking in the right terms here. It's possible that by forcing more frequent checkpoints, the impact of each of them will be reduced. However, the net will be a less efficient system, because checkpointing is intensive. And having twice as many of them doubles how often you get this spike in response times. The other approach is to increase the timeout and the segments, so maybe you're only running into this every 10 minutes or more, which makes the problem less annoying, then try to keep the buffers clean between checkpoints using the background writer.

What's shared_buffers set to on this system? A third possibility is to reduce that and rely more on the operating system to buffer the data for you. If there's less data in the buffer cache, it will certainly take less time to write things out at checkpoint time; there are obviously downsides to that in terms of how fast regular queries execute.

It will take some experimentation here to get this right. Checkpoint problems like you're running into aren't something you knock out in a day if you're new to this. Try not to make too big a step at any time and expect you'll have to dig into this a bit. I would recommend you start by installing the contrib/pg_buffercache module against your database (read README.pg_buffercache for an intro), which lets you watch what's in the buffer cache at any time, so you can tell the balance of clean vs. dirty pages and what tables they're in. That will let you monitor how effective the background writer is being and estimate how bad the checkpoint is going to be before it happens. Here's a sample query to get you started:

SELECT c.relname, isdirty, count(*) AS buffers
FROM pg_class c
    INNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
    INNER JOIN pg_database d ON (b.reldatabase = d.oid
        AND d.datname = current_database())
GROUP BY c.relname, isdirty
ORDER BY 2 DESC;

I have something I'm working on that covers a lot of this topic at http://developer.postgresql.org/index.php/Buffer_Cache%2C_Checkpoints%2C_and_the_BGW but that's probably a little too low-level for you to chew on usefully right now.

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Database Select Slow
On 10.08.2007, at 06:58, .ep wrote:
> Hi, what if I need to do a count with a WHERE condition? E.g.,
>
> SELECT count(*) FROM customers
> WHERE cust_id = 'georgebush'
>   AND created_on > current_date - interval '1 week';
>
> Can I get the info about this from somewhere in the pg system tables
> as well? Queries like these are very common in most applications, so
> I'm hoping I can avoid the sequential scans!

If you have a qualified count(*), it goes to the index first, then checks whether the rows are live for your transaction. The problem is only the unqualified count, with

select count(*) from table_name;

without any qualification. Or, of course, if your qualifier is not selective enough and you get a couple of million rows back from a slow IO system... I try to do counts only if I know that the selectivity is good enough not to kill the performance. Or I use "please wait" pages in my application to tell the user that his request is being processed and not hung.

cug
Re: [GENERAL] [PROPOSAL] DML value format
On Aug 10, 2007, at 5:56, Alejandro Torras wrote:
> Is there some way to put values in an INSERT statement without taking
> care of apostrophes? For example:
>
> INSERT INTO persons VALUES ('Harry', 'O'Callaghan');

This is pretty much a solved problem: don't interpolate values into SQL statements. Use bind parameters (and prepared statements, if you have them) instead. For example:

sth = dbh.prepare "INSERT INTO persons (given_name, family_name) VALUES (?, ?)"
sth.execute("Harry", "O'Callaghan")

Michael Glaesemann grzm seespotcode net

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
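The same idea can be shown at the SQL level with PostgreSQL's PREPARE/EXECUTE, which keeps the data values out of the statement text entirely (a sketch; the given_name/family_name column layout is borrowed from the example above):

```sql
-- $1 and $2 are bind parameters: their contents are never parsed as
-- SQL, so apostrophes in the data need no special treatment.
PREPARE ins_person (text, text) AS
    INSERT INTO persons (given_name, family_name) VALUES ($1, $2);

-- When typed directly in psql the literal still needs its apostrophe
-- doubled, but a driver passing the value as a parameter would not.
EXECUTE ins_person('Harry', 'O''Callaghan');
```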
Re: Timestamp in pg_dump output, was Re: [GENERAL] How I can know a back up database is up to date
--- Raymond O'Donnell <[EMAIL PROTECTED]> wrote:
> On 09/08/2007 23:40, [EMAIL PROTECTED] wrote:
> > My database is restored from a dump file every day. How do I know
> > that this database is up to date (as it has no timestamp in any
> > table)?
>
> Actually, it *would* be really handy if pg_dump included a timestamp
> in the plain-text output. The version I use regularly (Windows)
> doesn't... it simply says "PostgreSQL database dump", which is only
> helpful to a point. :-)

If you need to, you can append your own timestamp to the dump file name. I rolled this functionality into a .bat file.

Regards, Richard Broersma Jr.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Cluster and MVCC
On Fri, 2007-08-10 at 10:02 -0400, Brad Nicholson wrote:
> I just want to confirm that the cluster/MVCC issues are due to
> transaction visibility. Assuming that no concurrent access is
> happening to a given table when the cluster command is issued (when
> it takes its visibility snapshot), it is safe to cluster that table.
> Correct?

Yes, as long as pre-existing transactions do not then access the clustered table. If they do, rows they should have seen will now not be visible, yet you won't get an error message to say so. You can check this by doing something similar to...

create temporary table xids as
    select transactionid from pg_stat_activity a, pg_locks l
    where a.procpid = l.pid and l.transactionid is not null;

cluster;

select 'Possible MVCC violation if ' || transactionid
       || ' touches clustered table'
from pg_locks
where transactionid in (select transactionid from xids);

drop table xids;

--
Simon Riggs EnterpriseDB http://www.enterprisedb.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] PITR for postgresql-7.3
On 8/10/07, Mary Ellen Fitzpatrick <[EMAIL PROTECTED]> wrote:
> Hi,
>
> We are running postgresql-7.3.3 and we had a hardware controller and
> disk failure on the system. And of course the database does not
> appear to be backed up anywhere.
>
> I was reading about PITR and was wondering if it is applicable to my
> version. We do have pg_xlog files and I am wondering if there is any
> way to recover the data.
>
> The error we get when trying to access a very important table is
> "Invalid page header in block 51 of vuser".
>
> Any help would be appreciated.
> (upgrades and backups lessons learned)

The PITR feature was introduced in PostgreSQL 8.0, so no help there. You are looking at data loss, unfortunately. How complete the loss is depends on how relatively intact the structures are and how much time and effort you are willing to spend fixing the problems causing errors like the above.

* did you recover part or all of the database volume?
* were you running with fsync off or on?

merlin
Re: [GENERAL] CREATE RULE on VIEW with INSERT after UPDATE does not work
Hi Tom, thanks for your answer. I had also thought of combining the statements, but my SQL knowledge is too small for that. I thought the example with "mylog" would demonstrate the problem better, but it misses the point. Below I have added the code for my real problem. What I want is a log of all starts and stops of validity: if a record is altered, I want the current one to be marked by setting the stop field to now(), and another row to be added to "mytable" with the same id as the previous one and start=now(). The code below only does the mark-deleted part, but does not insert a new record. There is no unique constraint on id; I can add lines manually. I also tried swapping the two lines in my rule; then a new row is inserted (good), but it is set to stop != null by the second statement (bad). Maybe someone can give me a hint about what's wrong with my code or my thinking? Thanks, Peter

PS: Here's the NEW code with the uncooperative update rule:

DROP VIEW myview;
DROP TABLE mytable;

CREATE TABLE mytable(id serial, proc text, start timestamp(4), stop timestamp(4));
CREATE VIEW myview AS
    SELECT id, proc, start, stop FROM mytable WHERE stop IS null;

CREATE RULE sri AS ON INSERT TO myview DO INSTEAD
    INSERT INTO mytable (proc, start, stop) VALUES (new.proc, now(), null);
CREATE RULE srd AS ON DELETE TO myview DO INSTEAD
    UPDATE mytable SET stop = now() WHERE id = old.id AND stop IS null;
CREATE RULE sru AS ON UPDATE TO myview DO INSTEAD (
    UPDATE mytable SET stop = now() WHERE id = old.id AND stop IS null;
    INSERT INTO myview (id, proc, start, stop) VALUES (old.id, old.proc, now(), null);
);

-- Inserting some values works fine
INSERT INTO myview (proc) VALUES ('alpha');
INSERT INTO myview (proc) VALUES ('omega');
INSERT INTO myview (proc) VALUES ('gamma');

-- Both table and view are identical
SELECT * FROM mytable ORDER BY id;
SELECT * FROM myview ORDER BY id;

-- !! The UPDATE rule does not work correctly !!
UPDATE myview SET proc='beta' WHERE id = 2;

-- Process 2 is updated, but there is no entry in the log
SELECT * FROM mytable ORDER BY id;
SELECT * FROM myview ORDER BY id;
Re: [GENERAL] PITR for postgresql-7.3
On 8/10/07, Mary Ellen Fitzpatrick <[EMAIL PROTECTED]> wrote:
> Merlin,
>
> I am willing to spend the time, as it is an important table. I am a
> newbie at this and it has fallen into my lap. From what the user
> tells me, it is only the one table. Not sure if fsync was running --
> how can I tell?

Check postgresql.conf in the database folder.

merlin
Re: Timestamp in pg_dump output, was Re: [GENERAL] How I can know a back up database is up to date
On 10/08/2007 18:40, Richard Broersma Jr wrote:
> If you need to, you can append your own timestamp to the dump file
> name.

Heh heh, I just gave this same advice in reply to the post that prompted this idea. :-) Thanks, Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---
Re: [GENERAL] Configuration starting point...
On Fri, 10 Aug 2007, Nathan Wilhelmi wrote:
> Are the out-of-the-box configs pretty good, or are there any
> recommended changes I should be making to start with?

The out-of-the-box configuration is wildly inappropriate for your system, and there are few examples of something appropriate to point you at--much of the information floating around is out of date for your class of hardware. See http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm for something current to get you started. That will lead you to additional resources you can drill into from there, and includes some disclaimers about what you should ignore in the guides that haven't been updated recently.

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
Re: [GENERAL] How I can know a back up database is up to date
On 09/08/2007 23:40, [EMAIL PROTECTED] wrote:
> My database is restored from a dump file every day. How do I know that
> this database is up to date (as it has no timestamp in any table)? If
> I create a file, I can see when I created it by looking at its
> properties. How can I do the same thing with a backed-up database?

Do the backup from a shell script that names the output file with the current date/time.

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 4: Have you searched our list archives? http://archives.postgresql.org/
[GENERAL] PITR for postgresql-7.3
Hi, We are running postgresql-7.3.3 and we had a hardware controller and disk failure on the system. And of course the database does not appear to be backed up anywhere. I was reading about PITR and was wondering if it is applicable to my version. We do have pg_xlog files and I am wondering if there is any way to recover the data. The error we get when trying to access a very important table is "Invalid page header in block 51 of vuser". Any help would be appreciated. (upgrades and backups lessons learned) -- Thanks, Mary Ellen
Re: [GENERAL] Database Select Slow
On Aug 10, 9:42 pm, [EMAIL PROTECTED] ("A. Kretschmer") wrote:
> am Fri, dem 10.08.2007, um 17:46:11 +0800 mailte carter ck folgendes:
> > Hi all,
> >
> > I am facing a performance issue here. Whenever I do a count(*) on a
> > table that contains about 300K records, it takes a few minutes to
> > complete. Whereas my other application which is counting > 500K
> > records just takes less than 10 seconds to complete.
> >
> > I have indexed all the essential columns and still it does not
> > improve the speed.
>
> Indexes don't help in this case; a 'select count(*)' forces a seq.
> scan. Do you really need this information? An estimate for the number
> of rows can be found in the system catalog (reltuples in pg_class,
> see http://www.postgresql.org/docs/current/interactive/catalog-pg-class.html)

Hi, what if I need to do a count with a WHERE condition? E.g.,

SELECT count(*) FROM customers
WHERE cust_id = 'georgebush'
  AND created_on > current_date - interval '1 week';

Can I get the info about this from somewhere in the pg system tables as well? Queries like these are very common in most applications, so I'm hoping I can avoid the sequential scans! Many thanks for any tips.
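A qualified count like the one above can use an ordinary index, so it need not fall back to the sequential scan that a bare count(*) forces. A hypothetical index matching both predicates of the example query (the index name is an assumption, not from the thread) would be:

```sql
-- cust_id first (equality match), created_on second (range match),
-- so the index can satisfy both conditions of the count(*) query
CREATE INDEX customers_cust_created_idx
    ON customers (cust_id, created_on);
```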
[GENERAL] [PROPOSAL] DML value format
Hi, Is there some way to put values in an INSERT statement without taking care of apostrophes? For example:

INSERT INTO persons VALUES ('Harry', 'O'Callaghan');

I think some kind of length marker could be used to help the parsing of the value. For example:

INSERT INTO persons VALUES ('Harry', @11:O'Callaghan);

I think this approach could help reduce SQL injections. Regards, A. Torras.
Re: [GENERAL] timestamp skew during 7.4 -> 8.2 upgrade
On Fri, Aug 10, 2007 at 10:11:29AM +0200, Louis-David Mitterrand wrote:
> So if I understand correctly, a timestamp_tz is ...

... stored as UTC in the backend

... sent to clients shifted by whatever timezone was requested by the client, by one of several mechanisms:

- "set timezone to ..." used by the client
- "select ... at time zone ..." used by the client
- the server timezone if neither of the above is used

> according to the host's timezone configuration? For example if I
> travel with my server and cross several timezones, my timestamp_tz's
> will display a different time (provided I run the tzselect utility in
> Linux)?

Yes, unless the client tells the server to send them shifted to a different timezone (see above).

Karsten

--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
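A small illustration of those mechanisms (an untested sketch; the timezone names are just examples):

```sql
-- the stored value is UTC; the display shift is chosen at output time
SET timezone = 'Europe/Paris';
SELECT TIMESTAMP WITH TIME ZONE '2007-04-01 00:00:00+02';
-- shown in the session's zone: 2007-04-01 00:00:00+02

SET timezone = 'UTC';
SELECT TIMESTAMP WITH TIME ZONE '2007-04-01 00:00:00+02';
-- the same instant, now shown as 2007-03-31 22:00:00+00

-- per-expression shift, independent of the session setting
SELECT TIMESTAMP WITH TIME ZONE '2007-04-01 00:00:00+02'
       AT TIME ZONE 'America/New_York';
-- a plain timestamp in New York local time (EDT on that date)
```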
Re: [GENERAL] Allowing LAN connections
Doh! It was the Vista firewall. I've got a couple of other services running on that machine and they worked; that's why I assumed it wasn't a firewall problem (using Vista's internal one). But it was -- thanks.

> Date: Thu, 9 Aug 2007 10:06:19 -0700
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Allowing LAN connections
>
> Jonas Gauffin wrote:
> > I've installed postgresql 8.2 on a Windows Vista machine and am
> > trying to connect to it from another one. The server has ip
> > 192.168.1.100 and the client 192.168.1.102
> > ...
> > Any suggestions?
>
> Yes. Let us know what client you are using to connect and post the
> error message you get on the client and any errors from the server
> log file.
>
> I don't use Windows anywhere, but I presume Vista could have some
> firewall software running that might interfere with external
> connections as well.
>
> Cheers,
> Steve
Re: [GENERAL] [SQL] Using function like where clause
On Mon, Aug 06, 2007 at 04:44:29PM -0300, Ranieri Mazili wrote:
> 1) Can I use a function that will return a string in a where clause
> like below?
> 2) Can I use a function that will return a string to return the list
> of columns that I want to show, like below?

Not in SQL. You can in PL/pgSQL.

depesz

--
quicksil1er: "postgres is excellent, but like any DB it requires a highly paid DBA. here's my CV!" :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)
[GENERAL] SQL question: checking all required items
Hi all, Given the following tables -

create table people (
    person_id text primary key,
    person_name text,
    [...etc...]
);

create table items (
    item_id text primary key,
    item_name text,
    is_required boolean,
    [...etc...]
);

create table items_for_people (
    person_id text,
    item_id text,
    primary key (person_id, item_id),
    foreign key (person_id) references people(person_id),
    foreign key (item_id) references items(item_id)
);

- how can I find those people who don't have _all_ of the items which are marked "required"? In other words, how do I select those rows in "people" which don't have a corresponding row in "items_for_people" for *each* row in "items" which has is_required=true?

Many thanks, Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---
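No reply to this appears in the digest, but this is the classic "relational division" problem, and the usual double-NOT-EXISTS formulation (an untested sketch against the tables above) looks like:

```sql
-- people for whom at least one required item has no matching row in
-- items_for_people, i.e. who do NOT have all the required items
SELECT p.*
FROM people p
WHERE EXISTS (
    SELECT 1
    FROM items i
    WHERE i.is_required
      AND NOT EXISTS (
          SELECT 1
          FROM items_for_people ifp
          WHERE ifp.person_id = p.person_id
            AND ifp.item_id = i.item_id
      )
);
```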
Re: [GENERAL] UPDATES hang every 5 minutes
On Thu, 9 Aug 2007, Marc Rossi wrote:
> as well as made changes to the bgwriter settings as shown below
> (taken from a post in the pgsql-performance list)
>
> bgwriter_lru_percent = 20.0   # 0-100% of LRU buffers scanned/round
> bgwriter_lru_maxpages = 200   # 0-1000 buffers max written/round
> bgwriter_all_percent = 10.0   # 0-100% of all buffers scanned/round
> bgwriter_all_maxpages = 600   # 0-1000 buffers max written/round

Be warned that these are settings from a much more powerful server than it sounds like you have, and I wouldn't be surprised to find your average performance tanks as a result. Making the background writer this aggressive will waste a lot of I/O, and unless you've got a lot of spare I/O to waste (which was the case on the source of this tuning) it can make your problem worse. I'd hesitate to recommend setting either percentage over 5% or either maxpages over 100 as a first step on a production system. You may be in for a bad day tomorrow.

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
Re: [GENERAL] timestamp skew during 7.4 -> 8.2 upgrade
On Thu, Aug 09, 2007 at 10:49:38AM -0500, Scott Marlowe wrote: > On 8/9/07, Louis-David Mitterrand > <[EMAIL PROTECTED]> wrote: > > Hi, > > > > After our 7.4 to 8.2 upgrade using debian tools, we realized that some > > of our timestamps with tz had shifted: > > > > For example '2007-04-01 00:00:00+02' became '2007-03-31 23:00:00+01' > > which is on a different month. Some of our applications were severely > > disturbed by that. > > > > Has anyone noticed that? Is there a way that we could have avoided > > it? > > Since timestamptz is stored as a GMT time, and then an offset is > applied on retrieval, I'd guess that with 8.2 you're using up to date > timezone files, and with 7.4 they were out of date and therefore > returning the wrong time. I.e. they had the wrong offset for a given > date. > > Not sure how you could avoid it off the top of my head, besides > keeping your 7.4 db tz data up to date. I sheepishly admit I never really understood the timestamp_tz mechanism in postgres, until that issue reared its head. So if I understand correctly, a timestamp_tz is UTC time shifted according to the host's timezone configuration? For example if I travel with my server and cross several timezones, my timestamp_tz's will display a different time (provided I run the tzselect utility in Linux)? Thanks, ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] Multiple operations on single rule, revisited
Hello, the following is a rework of what I wanted to achieve when posting yesterday. Since that post didn't seem to attract attention, I tried to do what I wanted to do differently. Now, creating a RULE for a view allows defining several operations for it. I was happy to discover that, and quickly rewrote my rule to look this way:

CREATE OR REPLACE RULE "_UPDATE_not_existing" AS
ON UPDATE TO trade_material_view
WHERE NOT material_exists(new.diameter, new.material_length, new.weight, new.loss, new.bar_type_id, new.metal_type_id)
DO (
    INSERT INTO material (id, diameter, material_length, weight, loss, bar_type_id, metal_type_id)
    VALUES (nextval('material_id_seq'::regclass), new.diameter, new.material_length, new.weight, new.loss, new.bar_type_id, new.metal_type_id);
    INSERT INTO trade_material (material_id)
    VALUES (currval('material_id_seq'::regclass));
);

material_exists is my own, boolean-returning, custom function. It works by checking whether a given material (with given characteristics) exists in the appropriate table. The problem is, PostgreSQL just performs the first INSERT and happily ignores the rest. As much as I hate swearing, I shouted one large WTF on this one... Can someone PLEASE explain to me what's happening? First insert works like a charm, second one is non-existent to Postgres. I can replace the second INSERT with any other command, including one deleting everything in say 'trade_material', and it doesn't work anyway. -- Best regards, Michal mailto:[EMAIL PROTECTED] ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] SQL question: checking all required items
On 8/10/07, Raymond O'Donnell <[EMAIL PROTECTED]> wrote: > - how can I find those people who don't have _all_ of the items which > are marked "required"? > > In other words, how do I select those rows in "people" which don't have > a corresponding row in "items_for_people" for *each* row in "items" > which has is_required=true? Without writing the exact query you need, I'll give you a couple of ways to solve the problem of finding things in one set that aren't in another.

select table1.id from table1 left join table2 on (table1.id=table2.id) where table2.id is null

OR

select table1.id from table1 where table1.id not in (select id from table2);

---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
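Applied to the schema from the original post, the same find-the-missing-rows idea can be written with a doubly-nested NOT EXISTS - a sketch only, not tested against the real tables:

```sql
-- People who are missing at least one item marked is_required,
-- i.e. who do NOT have all required items:
SELECT p.person_id, p.person_name
FROM people p
WHERE EXISTS (
    SELECT 1
    FROM items i
    WHERE i.is_required
      AND NOT EXISTS (
          SELECT 1
          FROM items_for_people ifp
          WHERE ifp.person_id = p.person_id
            AND ifp.item_id = i.item_id
      )
);
```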
Re: [GENERAL] SQL question: checking all required items
On 10/08/2007 21:29, Scott Marlowe wrote: select table1.id from table1 where table1.id not in (select id from table2); Duh! I should have thought of that; thanks for that, and apologies for the stupidity (blame it on the glass of wine I had with dinner!). Ray. --- Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland [EMAIL PROTECTED] --- ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] Cluster and MVCC
I just want to confirm that the cluster/MVCC issues are due to transaction visibility. Assuming that no concurrent access is happening to a given table when the cluster command is issued (when it takes its visibility snapshot), it is safe to cluster that table. Correct? -- Brad Nicholson 416-673-4106 Database Administrator, Afilias Canada Corp. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: Timestamp in pg_dump output, was Re: [GENERAL] How I can know a back up database is up to date
"Raymond O'Donnell" <[EMAIL PROTECTED]> writes: > Actually, it *would* be really handy if pg_dump included a timestamp in > the plain-text output. Use the "verbose" option. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Database Select Slow
On 8/10/07, carter ck <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I am facing a performance issue here. Whenever I do a count(*) on a table
> that contains about 300K records, it takes a few minutes to complete. Whereas
> my other application which is counting > 500K records just takes less than 10
> seconds to complete.
>
> I have indexed all the essential columns and still it does not improve the
> speed.

As previously mentioned, indexes won't help with a count(*) with no where clause. They might help with a where clause, if it's quite selective, but if you're grabbing a noticeable percentage of a table, pgsql will rightly switch to a seq scan. Here's some examples from my goodly sized stats db here at work:

\timing
explain select * from businessrequestsummary;
                                     QUERY PLAN
------------------------------------------------------------------------------------
 Seq Scan on businessrequestsummary  (cost=0.00..3280188.63 rows=67165363 width=262)
Time: 0.441 ms

gives me an approximate value of 67,165,363 rows.

explain select * from businessrequestsummary where lastflushtime > now() - interval '1 day';
                                     QUERY PLAN
------------------------------------------------------------------------------------
 Index Scan using businessrequestsummary_lastflushtime_dx on businessrequestsummary  (cost=0.00..466.65 rows=6661 width=262)
   Index Cond: (lastflushtime > (now() - '1 day'::interval))

says 6661 rows, takes 0.9 ms, and would use the index. To run the real queries I get much slower times. :) Now, to run the real count(*) queries:

select count(*) from businessrequestsummary where lastflushtime > now() - interval '1 day';
 count
--------
 274192
(1 row)
Time: 546.528 ms (data in the buffers makes it fast)

select count(*) from businessrequestsummary where lastflushtime > now() - interval '1 week';
  count
---------
 1700050
(1 row)
Time: 26291.155 ms

second run (data now in buffer):

select count(*) from businessrequestsummary where lastflushtime > now() - interval '1 week';
  count
---------
 1699689
(1 row)
Time: 2592.573 ms

Note the number changed, because this db is constantly being updated in real time with production statistics.
I'm not going to run a select count(*) on that db, because it would take about 30 minutes to run. It's got about 67 million rows in it. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] UPDATES hang every 5 minutes
> It seems to me that the real solution is for me to stop using the database as > an IPC system to pass somewhat time-critical data between processes. Given > the time constraints I'm working under this unfortunately was the quickest > route. At least for the first 5 minutes. :) I was wondering about that 1,500 updates/second. PostgreSQL is probably not the optimal solution for IPC. If you are trying to deal with passing this data among machines, memcached may be a good solution. Pretty easy to configure and clients for lots of languages. Blazing fast, too. Cheers, Steve ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] UPDATES hang every 5 minutes
Greg - Thanks for the heads up. The box in question is a dual cpu (xeon dual cores) with 8 gig & a pair of 10k 146gb raid 1 arrays. I have the pg_xlog dir on one array (along with the OS) & the rest of the data on the other array by itself. Given that this is a production system I'm going to tone things down a bit as you suggested prior to the open today. While I don't like the 10-20 second pauses every 5 minutes it's a system I need to have running and I'd rather not take the chance of bringing the system to its knees. A couple of quick questions. On the fly I can change these params and use "pg_ctl reload" to put them in effect, correct? That way I can play a little today and see what the effects are. Also, I have my checkpoint_segments set to 10; if I were to lower this (say 5) would this possibly have the effect of checkpointing a little more often with less data? (right now I hit the checkpoint_timeout). Thanks again, Marc - Original Message From: Greg Smith <[EMAIL PROTECTED]> To: Marc Rossi <[EMAIL PROTECTED]> Cc: pgsql-general@postgresql.org Sent: Friday, August 10, 2007 2:36:28 AM Subject: Re: [GENERAL] UPDATES hang every 5 minutes On Thu, 9 Aug 2007, Marc Rossi wrote: > as well as made changes to the bgwriter settings as shown below (taken > from a post in the pgsql-performance list) >bgwriter_lru_percent = 20.0 # 0-100% of LRU buffers scanned/round >bgwriter_lru_maxpages = 200 # 0-1000 buffers max written/round >bgwriter_all_percent = 10.0 # 0-100% of all buffers scanned/round >bgwriter_all_maxpages = 600 # 0-1000 buffers max written/round Be warned that these are settings from a much more powerful server than it sounds like you have, and I wouldn't be surprised to find your average performance tanks as a result. Making the background writer this aggressive will waste a lot of I/O, and unless you've got a lot of spare I/O to waste (which was the case on the source of this tuning) it can make your problem worse.
I'd hesitate to recommend setting either percentage over 5% or either maxpages>100 as a first step on a production system. You may be in for a bad day tomorrow. -- * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] timestamp skew during 7.4 -> 8.2 upgrade
Karsten Hilbert <[EMAIL PROTECTED]> writes: > On Fri, Aug 10, 2007 at 10:11:29AM +0200, Louis-David Mitterrand wrote: >> So if I understand correctly, a timestamp_tz is ... > ... stored as UTC in the backend > ... sent to clients shifted by whatever timezone was > requested by the client by one of several mechanisms: > - "set timezone to ..." used by the client > - "select ... at time zone ..." used by the client > - the server timezone if neither of the above is used The other point to be clear on is that the "shifting" is done according to whatever timezone rule files the server currently has. Since politicians keep changing daylight-savings rules, the same UTC date/time might be displayed differently after an update of the relevant rule file. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
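A quick way to see the display-time shifting Karsten and Tom describe, without changing anything stored (a sketch; it assumes the named zones exist in the server's current rule files):

```sql
-- The same stored instant, rendered under two session timezones:
SET timezone TO 'Europe/Paris';
SELECT '2007-04-01 00:00:00+02'::timestamptz;  -- shown as 2007-04-01 00:00:00+02

SET timezone TO 'UTC';
SELECT '2007-04-01 00:00:00+02'::timestamptz;  -- shown as 2007-03-31 22:00:00+00
```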
Re: [GENERAL] SQL question: checking all required items
On 8/10/07, Raymond O'Donnell <[EMAIL PROTECTED]> wrote: > On 10/08/2007 21:29, Scott Marlowe wrote: > > > > select table1.id from table1 where table1.id is not in (select id from > > table2); > > Duh! I should have thought of that thanks for that, and apologies > for the stupidity (blame it on the glass of wine I had with dinner!). It's only obvious after you've done it a few times... Show us the query when you're done, I'm sure there are enough folks who'd like to see your solution. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] CREATE RULE on VIEW with INSERT after UPDATE does not work
"Peter Marius" <[EMAIL PROTECTED]> writes: > I thought, the example with "mylog" would be better to > demonstrate the problem, but it's missing the point. > Below, if have added the code with my real problem. > CREATE RULE sru AS ON UPDATE TO myview DO INSTEAD > ( > UPDATE mytable SET stop = now() WHERE id = old.id AND stop IS null; > INSERT INTO myview (id, proc, start, stop) VALUES (old.id, old.proc, now(), > null); > ); AFAICS, all you need to do is swap the ordering of those two operations. It might help to understand that what you write as an INSERT/VALUES is really more like INSERT ... SELECT ... FROM myview WHERE ..., the WHERE condition being the same as was given in the "UPDATE myview" command that the rule rewrites. As soon as you change the stop value in the UPDATE mytable, the SELECT from the view will find nothing. regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Database Select Slow
On Fri, 10.08.2007 at 17:46:11 +0800, carter ck wrote: > Hi all, > > I am facing a performance issue here. Whenever I do a count(*) on a table > that contains about 300K records, it takes a few minutes to complete. Whereas > my other application which is counting > 500K records just takes less than > 10 seconds to complete. > > I have indexed all the essential columns and still it does not improve the > speed. Indexes don't help in this case; a 'select count(*)' forces a seq. scan. Do you really need this information? You can find an estimate for the number of rows in the system catalog (reltuples in pg_class, see http://www.postgresql.org/docs/current/interactive/catalog-pg-class.html) Andreas -- Andreas Kretschmer Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header) GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
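For example (a sketch - substitute your own table name; the figure is only as fresh as the last VACUUM or ANALYZE):

```sql
-- Planner's row estimate from the system catalog; approximate, not exact.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'your_table';
```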
Re: [GENERAL] Database Select Slow
In response to ".ep" <[EMAIL PROTECTED]>: > On Aug 10, 9:42 pm, [EMAIL PROTECTED] ("A. > Kretschmer") wrote: > > am Fri, dem 10.08.2007, um 17:46:11 +0800 mailte carter ck folgendes: > > > > > Hi all, > > > > > I am facing a performance issue here. Whenever I do a count(*) on a table > > > that contains about 300K records, it takes few minutes to complete. > > > Whereas > > > my other application which is counting > 500K records just take less than > > > 10 seconds to complete. > > > > > I have indexed all the essential columns and still it does not improve the > > > speed. > > > > Indexes don't help in this case, a 'select count(*)' forces a seq. scan. > > Do you realy need this information? An estimate for the number of rows > > can you find in the system catalog (reltuples in pg_class, > > seehttp://www.postgresql.org/docs/current/interactive/catalog-pg-class.html) > > > > Hi, what if I need to do a count with a WHERE condition? E.g., > > SELECT count(*) from customers where cust_id = 'georgebush' and > created_on > current_date - interval '1 week' ; > > Can I get the info about this from somewhere in the pg system tables > as well? Queries like these are very common in most applications, so > I'm hoping I can avoid the sequential scans! > > Many thanks for any tips. If you only need an estimate, you can do an "explain" of the query, and grep out the row count. The accuracy of this will vary depending on the statistics, but it's very fast and works with a query of any complexity. If you need fast, accurate counts, your best bet is to set up triggers on your tables to maintain counts in a separate table. This can be rather complex to set up, and you take a performance hit during inserts and updates, but I don't know of any other way to do it. -- Bill Moran http://www.potentialtech.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
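A rough sketch of the trigger approach Bill describes (all names here are made up for illustration; note that the single counter row becomes a point of contention under heavy concurrent writes):

```sql
-- One row per counted table:
CREATE TABLE row_counts (table_name text PRIMARY KEY, n bigint NOT NULL);
INSERT INTO row_counts VALUES ('customers', 0);

CREATE OR REPLACE FUNCTION maintain_customer_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE row_counts SET n = n + 1 WHERE table_name = 'customers';
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE row_counts SET n = n - 1 WHERE table_name = 'customers';
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customers_count_trig
    AFTER INSERT OR DELETE ON customers
    FOR EACH ROW EXECUTE PROCEDURE maintain_customer_count();

-- Fast, accurate count, no seq scan:
SELECT n FROM row_counts WHERE table_name = 'customers';
```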
Re: Timestamp in pg_dump output, was Re: [GENERAL] How I can know a back up database is up to date
On 10/08/2007 19:10, Tom Lane wrote: Use the "verbose" option. [/me tries it out] That'll do nicely - thanks. Ray. --- Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland [EMAIL PROTECTED] --- ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] CREATE RULE on VIEW with INSERT after UPDATE does not work
> AFAICS, all you need to do is swap the ordering of those two operations.
>
> It might help to understand that what you write as an INSERT/VALUES is
> really more like INSERT ... SELECT ... FROM myview WHERE ..., the WHERE
> condition being the same as was given in the "UPDATE myview" command
> that the rule rewrites. As soon as you change the stop value in the
> UPDATE mytable, the SELECT from the view will find nothing.
>
> regards, tom lane

Ok, but swapping the two statements leads to another problem. When executing these three statements, I want the beta-line to have stop=null.

INSERT INTO myview (proc) VALUES ('alpha');
INSERT INTO myview (proc) VALUES ('omega');
UPDATE myview SET proc='beta' WHERE id = 2;

But I always get this result, because the id is 2 in both rows:

 id | proc  |          start           |           stop
----+-------+--------------------------+--------------------------
  1 | alpha | 2007-08-11 02:32:04.7866 |
  2 | omega | 2007-08-11 02:32:04.793  | 2007-08-11 02:32:04.8127
  2 | beta  | 2007-08-11 02:32:04.8127 | 2007-08-11 02:32:04.8127

So maybe I need another condition in the update statement, but I don't know which one to use. Thanks in advance, Peter

PS: New code with swapped lines:

DROP VIEW myview;
DROP TABLE mytable;

CREATE TABLE mytable(id serial, proc text, start timestamp(4), stop timestamp(4));

CREATE VIEW myview AS SELECT id, proc, start, stop FROM mytable WHERE stop IS null;

CREATE RULE sri AS ON INSERT TO myview DO INSTEAD
    INSERT INTO mytable (proc, start, stop) VALUES (new.proc, now(), null);

CREATE RULE srd AS ON DELETE TO myview DO INSTEAD
    UPDATE mytable SET stop = now() WHERE id = old.id AND stop IS null;

CREATE RULE sru AS ON UPDATE TO myview DO INSTEAD
(
    INSERT INTO mytable (id, proc, start, stop) VALUES (old.id, new.proc, now(), null);
    UPDATE mytable SET stop = now() WHERE id = old.id AND stop IS null; -- AND ;
);

-- Insert some values works fine
INSERT INTO myview (proc) VALUES ('alpha');
INSERT INTO myview (proc) VALUES ('omega');
INSERT INTO myview (proc) VALUES ('gamma');

-- !!
The UPDATE_RULE does not work correctly !!

UPDATE myview SET proc='beta' WHERE id = 2;

SELECT * FROM mytable ORDER BY id,start;
SELECT * FROM myview ORDER BY id,start;

-- Psst! Heard about the new GMX MultiMessenger yet? It can talk to them all: http://www.gmx.net/de/go/multimessenger ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] LIKE conditions in PGSQL very, very slow!
Hi, I'm moving from the mysql camp and quite liking things like functions and such, but a lot of my functionality depends on queries such as

SELECT id, name, start_date
FROM customer
WHERE name LIKE 'eri%';

These kinds of queries are super fast in MySQL because "eri%" type conditions also use the index. Is this not the case with PG? Here's the EXPLAIN output:

CUSTDB=# explain select id,name,start_date from customer where name like 'eri%';
                          QUERY PLAN
--------------------------------------------------------------
 Seq Scan on customer  (cost=0.00..86032.18 rows=1 width=111)
   Filter: ((name)::text ~~ 'eri%'::text)
(2 rows)

Would appreciate any thoughts on how to make these kinds of queries faster. I found a message (http://archives.postgresql.org/pgsql-sql/1999-12/msg00218.php) but that's from 1999.

While we're at it, are compound indexes ok in PGSQL as well? In MySQL, the order of columns is important if it reflects my WHERE conditions in SQL. Should I follow the same structure in PGSQL? I tried looking at the manual but did not find a section that talks about indexing in detail. Would appreciate pointers. Thanks!

---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] LIKE conditions in PGSQL very, very slow!
On 8/10/07, .ep <[EMAIL PROTECTED]> wrote: > Hi, > > I'm moving from the mysql camp and quite liking things like functions > and such, but a lot of my functionality depends on queries such as > >SELECT id, name, start_date >FROM customer >WHERE name LIKE 'eri%'; > > These kinds of queries are super fast in MySQL because "eri%" type > conditions also use the index. Is this not the case with PG? See http://www.postgresql.org/docs/faqs.FAQ.html#item4.6 and http://www.postgresql.org/docs/8.2/static/indexes-opclass.html > While we're at it, are compound indexes ok in PGSQL as well? In MySQL, > the order of columns is important if it reflects my WHERE conditions > in SQL. Should I follow the same structure in PGSQL? I tried looking > at the manual but did not find a section that talks about indexing in > detail. Would appreciate pointers. Yes, order is important. This is true for most any database and multi-column indexes. See http://www.postgresql.org/docs/8.2/static/indexes-multicolumn.html Also read the rest of the docs on indexes here: http://www.postgresql.org/docs/8.2/static/indexes.html You can do some interesting things with indexes in pgsql, like partial and expression based indexes. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
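Concretely, the fix the linked FAQ entry describes for left-anchored LIKE in a non-C locale is an index built with the pattern operator class - a sketch (the index name is made up; the opclass matches the varchar column shown in the EXPLAIN output above):

```sql
-- Allows index scans for WHERE name LIKE 'eri%' even in a non-C locale:
CREATE INDEX customer_name_pattern_idx ON customer (name varchar_pattern_ops);
ANALYZE customer;
```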
Re: [GENERAL] Unable to connect to PostgreSQL server via PHP
On Thu, 09-08-2007 at 14:51 +, John Coulthard wrote:
> Hi
>
> I'm trying to set up a new webserver running php and pgsql. PHP was
> connecting to postgres but I needed to install the php-gd module and now I
> get the error...
>
> "PHP Warning: pg_connect() [<a href='function.pg-connect'>function.pg-connect</a>]: Unable to connect to
> PostgreSQL server: could not connect to server: Permission denied\n\tIs the
> server running on host "localhost" and accepting\n\tTCP/IP
> connections on port 5432?"
>
> and I'm at a loss can anyone tell me why it's not connecting?
>
> Thanks
>
> The bits I know are...
> http://xyala.cap.ed.ac.uk/php_info.php says php's configured for pgsql
>
> [EMAIL PROTECTED] telnet localhost 5432
> Trying 127.0.0.1...
> Connected to localhost.localdomain (127.0.0.1).
> Escape character is '^]'.
> Connection closed by foreign host.
> [EMAIL PROTECTED]
>
> [EMAIL PROTECTED] less /var/lib/pgsql/data/pg_hba.conf
> # TYPE   DATABASE   USER   CIDR-ADDRESS   METHOD
>
> # "local" is for Unix domain socket connections only
> #local   all        all                   ident sameuser
> local    all        all                   trust
> # IPv4 local connections:
> #host    all        all    127.0.0.1/32   ident sameuser
> host     all        all    127.0.0.1/32   trust
> # IPv6 local connections:
> #host    all        all    ::1/128        ident sameuser
> host     all        all    ::1/128        trust
>
> [EMAIL PROTECTED] grep 'listen' /var/lib/pgsql/data/postgresql.conf
> # "pg_ctl reload".
Some settings, such as listen_address, require
> #listen_addresses = 'localhost'   # what IP interface(s) to listen on;
> listen_addresses = '*'
> [EMAIL PROTECTED]
>
> [EMAIL PROTECTED] less /etc/php.d/pgsql.ini
> ; Enable pgsql extension module
> extension=pgsql.so
>
> the server I'm going to replace is running the same versions of PHP and
> postgres http://zeldia.cap.ed.ac.uk/php_info.php
> The /etc/php.ini files on the two machines are the same and the
> /var/lib/pgsql/data/postgresql.conf files are only different because I've
> set listen_addresses = '*' on the new server (xyala) to see if I can make it
> work.
>
> _
> Express yourself instantly with MSN Messenger! Download today it's FREE!
> http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/
>
> ---(end of broadcast)---
> TIP 5: don't forget to increase your free space map settings

Please show us your PHP source code so we can help you.

-- Regards, Julio Cesar Sánchez González www.sistemasyconectividad.com.mx blog: http://darkavngr.blogspot.com --- Now I am become Death, the destroyer of worlds. I am the Death that takes everything away, the source of the things to come. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq