Re: [GENERAL] COMMIT after an ERROR?
The failed query aborts the transaction, and every command that came before or after it is thrown out; nothing is saved. Any further commands are rejected until the transaction ends, and a COMMIT issued on an aborted transaction is treated as a ROLLBACK, so the COMMIT will fail. You don't have to issue a ROLLBACK explicitly unless you want to run more commands on that connection, though; when you disconnect from the DB it will automatically roll back the transaction. At 05:18 AM 10/12/01, J C Lawrence wrote: Does an error during a transaction invalidate the transaction? eg BEGIN SomeQueryThatSucceeds SomeQueryThatFails SomeOtherQueryThatSucceeds COMMIT Will the transaction successfully COMMIT, or will the COMMIT fail? Translation: Do I need to monitor for error conditions during a transaction and manually do the ROLLBACK in those cases? -- J C Lawrence -(*) Satan, oscillate my metallic sonatas. [EMAIL PROTECTED] He lived as a devil, eh? http://www.kanga.nu/~claw/ Evil is a name of a foeman, as I live. ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
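A minimal psql session sketching that behaviour (the table name t is hypothetical):

```sql
BEGIN;
INSERT INTO t VALUES (1);  -- succeeds
SELECT 1/0;                -- fails and aborts the transaction
INSERT INTO t VALUES (2);  -- rejected: the transaction is already aborted
COMMIT;                    -- acts as a ROLLBACK; neither INSERT is saved
```

Afterwards the table contains neither row.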
Re: [GENERAL] Multiple postgresql installations on one machine.
I may be wrong, but I don't think it's possible to have them on the same port. The problem is that the TCP/IP port defaults to the same as the UNIX sockets port, and UNIX sockets are local to the machine, so they are completely independent of IP addresses and can not be duplicated. It is possible to run two installs on different ports though, which is what I do at work. Postgres 6.x on port 5432, and 7.x on port 4321. You could probably use your OS's NAT layer to redirect anything that comes in on port 5432 of the second IP to port 4321 (look at the rewrite rules of the INPUT chain with iptables under Linux) if you don't want to add the "port=xxx" to the connect string of your applications though. At 07:04 AM 10/11/01, Lincoln Yeoh wrote: Hi, Is it possible to have multiple postgresql installations on one machine running on the same port but on different IP addresses? I've specified different IPs/hostnames for the virtual_host but the unix socket clashes when I try. Should I just move the unix sockets somewhere else with -k, what would the impact be? I can't seem to tell psql where to find the moved unix socket. Cheerio, Link. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
Re: [HACKERS] Re: [GENERAL] PHPBuilder article -- Postgres vs MySQL
Just a note, I've been using Postgres 7.02 and PHP 4.02 or 4.03 for about a month in a couple sites, and haven't experienced any problems with persistent connections. Problem might have been fixed in one of the point releases, or maybe I just don't have enough different db connections to trigger it. At 02:05 PM 11/15/00, Adam Lang wrote: >PHP 4 was having problems with persistent connections (not sure if with just >pgsql or all dbs). Maybe that is why they didn't use it. > >Adam Lang >Systems Engineer >Rutgers Casualty Insurance Company >- Original Message - >From: <[EMAIL PROTECTED]> >To: "carl garland" <[EMAIL PROTECTED]> >Cc: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>; ><[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>; ><[EMAIL PROTECTED]>; <[EMAIL PROTECTED]> >Sent: Wednesday, November 15, 2000 12:28 PM >Subject: Re: [HACKERS] Re: [GENERAL] PHPBuilder article -- Postgres vs MySQL > > > > > > On Wed, 15 Nov 2000, carl garland wrote: > > > > # >perhaps why, even at 5 clients, the page views he shows never went > > # >significantly above 10/sec? > > # > > # I think alot of it has to do with the web server/db setup not pg. They >are > > # using Apache/PHP and looking at their code every page has the additional > > # overhead of making the db connection. Now if they had used AOLserver >with > > # its persistent db connecction pooling scheme they may have faired better >;) > > > > I haven't actually looked at their code they used to test with to > > see if they are actually using it, but Apache/PHP has the ability to do > > persistent db connections. I would be surprised that someone like Tim ( > > who seems to have done a fair bit of php with db stuff) would not make use > > of such a feature. > > > > If you can point out an example of where they are not using this > > feature I will gladly stand corrected. > > > > > > | Joseph Scott The Office Of Water Programs | > > | [EMAIL PROTECTED] [EMAIL PROTECTED] | > >
Re: [GENERAL] Limit on number of queries from CGI or PHP (security)
I noticed a lot of people gave some good advice, but one thing they forgot to mention is PHP's addslashes() function. It does all the necessary special-character escaping for you, so the worst thing someone can do by entering bad data in your forms is bring up a page with bad results. It works like this: $pgResults = pg_exec($dbCon, "SELECT field1, field2 FROM table WHERE field1 = '" . addslashes($FormVar) . "'"); It's also a VERY good idea to do some basic sanity checking on all your form data before even starting to build a query string. Ie, if you are expecting $PageNumber to be an integer, then put something like if (!ereg("^[[:digit:]]+$", $PageNumber)) { print "Bad form data!"; exit; } at the top of your script. It's annoying to have to validate ALL your variables (especially when you get into forms that have 15-20 fields) but it's necessary if you don't want some script kiddy to come along and screw up your site. At 05:00 AM 10/17/00, Rikul Patel wrote: >Hi, > >Is there any way I can restrict number of queries to >only one? Here's the problem: > >If PHP script gets some data as input from user, and >PHP scripts tries to put this data into Postgresql, >what's keeping the user to modify the data in way to >have postgresql execute two queries. > >So instead of some PHP script generating query like >"select * from table where text='some text' or id=1", >some malicious user could make it generate "select * >from table where text='some text' or id=1;delete from >table" > >Thanks, >Rikul > >__ >Do You Yahoo!? >Yahoo! Messenger - Talk while you surf! It's FREE. >http://im.yahoo.com/
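To sketch what the escaping buys you, here is roughly what reaches the backend when the malicious input from the question is run through addslashes() first (table and field names are the hypothetical ones from the example):

```sql
-- user typed:  some text' or id=1;delete from table
-- query sent after addslashes() escapes the embedded quote:
SELECT field1, field2 FROM table WHERE field1 = 'some text\' or id=1;delete from table';
-- the whole payload stays inside the string literal, so no second
-- statement is executed; it is just treated as an (unmatched) value
```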
Re[2]: [GENERAL] WTF is going on with PG_VERSION?
Maybe it was moved for Postgres 7 (I'm still using 6.5.3 because it works and I'm too lazy to upgrade. :-) but in older versions the PG_VERSION file was in the data directory (ie, /usr/local/pgsql/data/ ) Try checking what you are using for a data dir ("locate pg_database" should tell you what dir it is) and move PG_VERSION in there. Of course, your data dir could be /var/lib/pgsql, I dunno what evils RedHat does to the default Postgres install path. At 01:49 AM 9/19/00, Alexey V. Borzov wrote: >Greetings. > >Monday, September 18, 2000, 10:38:37 PM, you wrote: > >> Sep 18 15:53:06 arbat logger: FATAL 1: File > '/var/lib/pgsql/PG_VERSION' does not exist or no read permission. > >> > >> Well, '/var/lib/pgsql/PG_VERSION' does exist, it has read permission > >> for user 'postgres' (I made it world readable, in fact, after I > >> discovered this). > >> > >> Now, two questions: > >> 1) Who the hell needs to read this file? > >> 2) Why can't he do it? > >THH> What version of PostgreSQL are you running? > >I forgot the most important part... >PostgreSQL 7.0.2 >And it runs on Linux 2.2.17 SMP (The box has two Intel Pentiums II) > >-- >Yours, Alexey V. Borzov
Re: [GENERAL] libperl.so
There is also a way to recompile a .a library into a shared library. Something like: ar x library.a ld -shared -o library.so *.o It's documented somewhere, a web search on "convert shared library" will probably turn up some detailed documentation. At 11:52 PM 8/4/00, Alex Pilosov wrote: >On Fri, 4 Aug 2000, Ron Peterson wrote: > > > I can't compile the perl module because libperl is not a shared > > library. So what I'd like to do is simply download the 5.6 perl source, > > compile, and go. > > > > I'm using Red Had 6.1, which has the perl-5.00503-6 rpm installed. Of > > course there are a jillion other packages which depend on this one. > > > > So my question is, what happens if I rpm -e perl, then make, make > > install the latest 5.6 source? Is my whole world going to fall apart? >The best idea for you is to install SRPM for 5.005, and then change the >configuration to build libperl.so, and manually copy libperl.so to >/usr/lib. > >-alex
Re: [GENERAL]
You are probably not telling Postgres to run in the background. You should use the -S parameter. IE: postgres -S -i -D /usr/local/pgsql/data For full info on the startup options, do a man postmaster from your shell. Or take a look at the administration docs. At 01:05 PM 7/14/00, Sean Alphonse wrote: >Pressing enter doesn't solve the problem. > >Sean. > > >___ > >On Fri, Jul 14, 2000 at 10:04:04AM -0500, Sean Alphonse wrote: > > Hello. > > I am using PostgreSQL version 7.0.2 with RedHat 6.1. When I start up the > > postmaster at the prompt, I get the following response and the session >hangs > > and doesn't return to the prompt but I am able to use my databases and > > PostgreSQL. Is this normal or correct? If not, what should I do to fix >this > > problem? > >Hit "enter". You'll find you actually do have a prompt. > >A > >-- >Andrew Sullivan Computer Services ><[EMAIL PROTECTED]>Burlington Public Library >+1 905 639 3611 x158 2331 New Street >Burlington, Ontario, Canada L7R 1J4
Re: [GENERAL] libpq connectivity
Someone mentioned that you had to link in libpq, but you are also using the wrong gcc command line. The -c switch tells gcc to compile only, producing an object file without linking, so undefined references are never checked and no actual executable program is created. You want something like this: $ gcc conn2.c -o conn2 -lpq At 01:53 AM 7/7/00, anuj wrote: >Hello, > >I am working on Linux RH 6.0 platform. >I want use PostgreSQL as a backend. 'C' language as a front-end. >I am not able to connect to each other. >I am using libpq. >The program is :- > >/*conn2.c*/ >#include <stdio.h> >#include "/usr/include/pgsql/libpq-fe.h" > main() > { > char *pghost, *pgport, *pgoptions,*pgtty; > char *dbName; > PGconn *conn; > pghost = NULL; /* host name of the backend server */ > pgport = NULL; /* port of the backend server */ > pgoptions = NULL; /* special options to start up the backend > * server */ > pgtty = NULL; /* debugging tty for the backend server */ > dbName = "template1"; > > /* make a connection to the database */ > conn = PQsetdb(pghost, pgport, pgoptions, pgtty, dbName); >} > >The compiling is ok, but linking have error. >$ gcc conn2.c -c -o conn2 >No error > >The program compile and linking result :- >* >$ gcc conn2.c -o conn2 >/tmp/cchKU26L.o: In function `main': >/tmp/cchKU26L.o(.text+0x47): undefined reference to `PQsetdbLogin' >collect2: ld returned 1 exit status >* > >How to remove this linking error, or how to make link between PostgreSQL and >'C'? >Thanks in advance >Anuj
Re: [GENERAL] Interface Question
All of the different interfaces have their own way of specifying the host to connect to (eg, with psql it's "psql -h hostname databasename") You also have to set up the pg_hba.conf file in your PGDATA directory (/usr/local/pgsql/data on my machine) to allow access from the remote machine. Usually adding this line will do the trick: host all REMOTE.IP.ADDR.ESS 255.255.255.255 crypt You will have to specify a username/password to connect from that machine, however (use the -u option of psql, ie: psql -u -h remote.ip.addr.ess dbname.) If you don't care quite so much about security, you can use "trust" or "ident" instead of crypt. The pg_hba.conf man page and the file itself provide good documentation. At 03:18 PM 6/28/00, kurt miller wrote: >How would I connect a Unix client (perl,psql,etc.) on one machine to a >Unix postgres backend on another machine? > >Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com >
Re: [GENERAL] Compiling Error
You aren't including the crypt library, or your system doesn't have it. Try adding -lcrypt to the end of your compile command and try again. At 12:44 PM 6/4/00, Jesus Aneiros wrote: >Hi, > >Could somebody help me with this error. It appears when I try to compile >an ecpg program. It seems that it is something with the linking phase and >the ecpg library. Any ideas? > >/usr/lib/libecpg.so: undefined reference to `crypt' >collect2: ld returned 1 exit status > >Best regards, Jesus.
Re: [GENERAL] Trouble-free vacuum w/concurrent writes? (was "PostgreSQL capabilities")
No, that's not what he said. You can back up the database while it's still being used (the pg_dump runs in a transaction) but you still can't vacuum a database while it's in use. Vacuuming is more along the lines of a defrag: it reclaims space, updates the indexes, and maintains stats. At 12:16 PM 5/31/00, Ed Loehr wrote: >Bruce Momjian wrote: > > > > > Alex Pilosov wrote: > > > > > > > > > http://networkdna.com/database/index.html mentions that PostgreSQL is > > > > > capable of "Online backup". What does that exactly mean? > > > > > > > > It means Postgres can do a reliable backup (a consistent snapshot) of a > > > > database without shutting down the database. > > > > > > Hmmm. My backup procedure, based on earlier discussions in this group, > > > involves blocking all write-access during a pg_dump. That is effectively > > > shutting down the database from my perspective. Is there a quicker way > > > to take a consistent snapshot while still allowing writes? > > > > With 6.5 and 7.0, pg_dump grabs a consistent snapshot of the database at > > the time it starts, and dumps that. No reason to shut out users. > >Can other folks confirm that this is no longer a problem? Are people >successfully vacuuming while allowing full read/write access to the db? > >Regards, >Ed Loehr
Re: [GENERAL] createdb -- alternate locations
Try createdb -D $PGDATA2 testdb At 11:26 PM 5/25/00, J.R. wrote: > Hello: > >Novice, just installed PGSQL 7.0 on Redhat 6.1. > >Everything went smoothly with initial installation and db setup. >However, I've run into some problems subsequently. > >To keep it simple, here is a specific problem. Attempting to create a >database in an alternate location, I went through the following steps as >the superuser (postgres): > >mkdir /home/postgres/testdata >export PGDATA2=/home/postgres/testdata >initlocation PGDATA2 >createdb testdb -D 'PGDATA2' > >the last command failed with this error: >"The database path 'PGDATA2' in invalid. This may be due to a character >that is not allowed or because the chosen path isn't permitted for >databases" > >I also tried createdb -D 'PGDATA2' testdb, same result. > > >Any ideas what I'm doing wrong? > >Thanks, >J.R. Belding >[EMAIL PROTECTED] > > >
Re: [GENERAL] Performance
Have you done a VACUUM ANALYZE on your database after recreating the index? At 04:56 PM 5/15/00, Diego Schvartzman wrote: >I have an application via PHP. For example, a SELECT query that must return >one and only one row, with a where clause with and index (I droped it and >created again) that took about 3 seconds (v6.5.3), now (v7.0.0) takes about >15 seconds. Really I don't lnow what is happening. Same hardware, same php >version, etc etc. > >Thanks again! > >Diego Schvartzman >Email: [EMAIL PROTECTED] >ICQ# 1779434 >- Original Message - >From: Alfred Perlstein <[EMAIL PROTECTED]> >To: Diego Schvartzman <[EMAIL PROTECTED]> >Cc: Lista PGSQL <[EMAIL PROTECTED]> >Sent: Monday, May 15, 2000 5:18 PM >Subject: Re: [GENERAL] Performance > > > > * Diego Schvartzman <[EMAIL PROTECTED]> [000515 12:25] wrote: > > > Hi all! > > > I've upgraded to PGSQL 7.0.0 beta 2 from 6.5.3. Unloaded and loaded my >db, > > > vacuumed it, everything ok. I'm not using foreign keys. But now I see >that > > > performance is so much slower !. Is this supposed to be? Do I have to do > > > something else? > > > > > > English is my second language. Thanks in advance ! > > > > A lot of people have said that performance has increased, if you want > > any help you'll need to be more specific, give examples of what's > > worse now than before. > > > > -- > > -Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]] > > "I have the heart of a child; I keep it in a jar on my desk." > >
Re: [GENERAL] Arrow keys not working
The problem is that your psql wasn't compiled with readline support, which is the library that allows you to use the arrow up/down command line history. There is a known problem (discussed last week on this list) where the configure script for Postgres won't detect the readline library under certain conditions (I think it was that the readline.h had to be in /usr/include/ as well as /usr/include/readline/) The first thing you will want to do is make sure you have the readline library (running "find /usr/include -name readline.h -print" should show you if you do) and if not, go install it. If you do, try this while logged in as root (I haven't tested it, so it's just a guess): cd /usr/include echo "#include <readline/readline.h>" > readline.h cd /usr/src/pgsql/src (or wherever your Postgres source is installed) make distclean ./configure make cd bin/psql make install If you installed from a precompiled package (ie, an RPM) then I'm not sure what your options are. Probably go look for a new RPM. :) BTW: Is this in the FAQ? It gets asked a lot. At 06:43 AM 5/12/00, Differentiated Software Solutions Pvt. Ltd. wrote: >Hi, > >In shell script arrow keys retrieve the previous command, but within psql >these arrow keys are not working ?? Can anybody tell me how to set this up. > >Murali > >Differentiated Software Solutions Pvt. Ltd., >176, Gr. Floor, 6th Main >2nd Block RT Nagar >Bangalore - 560 032 >India >Ph: 91 80 3431470 >email : diffs@vsnl.com >http://www.diffs-india.com
Re: [GENERAL] Querying the field types of a table
There has been discussion on this topic before, so if you check the archives you will find more detail. Generally, the best way to find what you are looking for is to start psql with the -E option (echo queries sent to the backend) and then issue a \dt or \d tablename command to see the query it uses. At 03:30 PM 5/10/00, Dale Anderson wrote: >Is there a way to query the field names and types of a table in either 6.5 >or 7.0?? I am writing an application, and in one piece, I need to get the >field names and field types of any table. Can this be done??? I have >searched both the web and documentation, and I can't seem to find a C >function or system table that has this information. > >Thanks in advance, >Dale. >
Re: [GENERAL] too many clients - Error message
By default, the maximum number of backends that the Postgres postmaster will start is 32. If you want more, you have to specify the maximum on the command line with the -N option. IE, you can start postgres like this to allow internet connections, have it fork into the background, and allow 100 simultaneous connections: /usr/local/pgsql/bin/postmaster -i -B 256 -S -N 100 At 07:16 AM 4/22/00, Joseph wrote: >How do you solve this error message? > >Connection to database 'template1' failed. >Sorry, too many clients already > > >Joseph
Re: [GENERAL] well, shoot. Error loading Pg.so
Well, a couple of suggestions: Restore the /usr/lib/perl5/site_perl directory from a working backup. Try doing a ./configure --with-perl in the src directory, then go into interfaces/Perl5 and type make;make install. Maybe the currently installed src dir wasn't correct. Might also want to check that /usr/local/pgsql/lib is in your /etc/ld.so.conf file and re-run ldconfig. (Probably should do this before trying the rebuild of the perl interface) At 03:04 PM 4/19/00, Steve Wolfe wrote: >Today, our newest employee thought he'd upgrade the Perl interface to >Postgres. So, he went into the source directory, and did "gmake install" >in the /src/interfaces/Perl5 directory. Now, everything's broken. Trying >to use it, we get: > >perl: error in loading shared libraries: >/usr/lib/perl5/site_perl/i386-linux/auto/Pg/Pg.so: undefined symbol: >PQconnectdb > > Now, here's where it gets rely nice. The last system admin, who is >now gone, has about 50 copies of the postgress source lying around on the >disk (really bright guy). They're all 6.5.3, except for the rpm's for >6.5.3-2 and 6.5.3-2nl. > > So, at this point, I have two choices: Dump all data, reinstall, and >restore (a few days of dumping/restoring), or some nice soul can suggest >how I can fix this > > Any suggestions? > >steve
Re: [GENERAL] PHP-Postgres link
Try replacing $connection = pg_connect("", "", "", "users"); with $connection = pg_connect("dbname=users") or you might want to try $connection = pg_connect("dbname=users user=nobody") At 11:53 AM 4/9/00, Ramses v. Pinxteren wrote: >Hi, > >I am running PHP under APache with a link to a postgres database. It gives >me a headache because it is not working. > >I am running Apache as nobody, and I have made a Postgres user also named >nobody. > >I made a postgres database users (as nobody: createdb users) and with psql a >database login (field 1: userindex, field 2 username, field 3 password, >field 4 security level) > >with psql i can do everything I want with this table. adding tuples, >deleting them etc. etc. > >Now what do i need: I need a script that adds a new user to this database. >so I created this: > $connection = pg_connect("", "", "", "users"); > echo ("$connection"); > $query = "INSERT INTO users.login VALUES ( '$userindex', '$uname', >'$passwrd1', '1')"; > echo ("$query"); > $result = pg_exec ($connection, "select *"); > print (pg_cmdtuples($result)); > pg_close($connection); >?> > >This generates an error at the HTML output: > >1INSERT INTO login VALUES ( '1', 'test', 'test', '1'); >Warning: PostgresSQL query failed: ERROR: login: Table does not exist. in >/usr/local/apache/htdocs/isis.cx/database_update.php3 on line 6 > >Warning: 0 is not a PostgresSQL result index in >/usr/local/apache/htdocs/isis.cx/database_update.php3 on line 7 > > >in the logffiles is specifies: >FindExec: found "/usr/local/postgres/bin/postgres" using argv[0] >started: host=localhost user=nobody database=users >InitPostgres >StartTransactionCommand >ERROR: login: Table does not exist. >AbortCurrentTransaction >proc_exit(0) [#0] >shmem_exit(0) [#0] >exit(0) >/usr/local/postgres/bin/postmaster: reaping dead processes... >/usr/local/postgres/bin/postmaster: CleanupProc: pid 19113 exited with >status 0 > > >I am a complete newby, so please go easy on me! > >Tnx! >Ramses > > >
Re: [GENERAL] A Haunted Database
Vacuuming is sort of necessary at the moment, because if you don't vacuum, postgres won't use your indexes. :( This is supposedly going to be fixed in the 7.x series (7.5 I think I heard) but I've never heard of a vacuum corrupting a normally working database in the 4 or 5 months I've been reading the GENERAL list (or at least I don't remember it...) Can you post your vacuum script? Maybe it's doing something besides the vacuum and that's what's corrupting your database. Other than that, the only thing I can think of is that the vacuum is scanning the fields of your table and is changing ones that have a specific pattern. That would be a VERY bad bug, so you would think it would have cropped up before. BTW: What version are you using? We use 6.5.3 here, and haven't had any problems. At 11:56 AM 4/9/00, Robert Cleveland wrote: >Thanks! Turning off the nightly vacuum script did the trick. Now . . . any >idea why vacuum would be so damaging? It certainly appears, at least for me, >that the routine is more trouble than it is worth. Is it a malfunction that >can be overwritten or a bug or something else? > >Again many thanks. I can sleep without fear now > >Rob > > > > >Do you have any automated program accessing the database overnight? IE a > >malfunctioning backup or vacuum script? You might also want to do a diff > >-C1 first_dump second_dump to see what is actually being changed. > > > >At 11:40 AM 4/8/00, Robert Cleveland wrote: > >>Here's a mystery I hope someone can solve for me. > >> > >>We are entering blocks of HTML into a table called bodyparts. We use PHP3 >to > >>break up these blocks into several chunks to keep the length below the > >>maximum. When the end user calls up the section, the "bodyparts" are > >>extracted and re-assembled. > >> > >>The output pages work fine . . . for a while. We set up the output pages > >>during the day, check them for accuracy and go to bed thinking we have >done > >>a great job.
Then, in the middle of the night, something happens and when > >>we awake, we find the HTML has been scrambled like so many breakfast eggs. > >>Not all sections are scrambled. In fact it is the same sections every >single > >>time. So we re-enter the data, check it, assume we are done, and then the > >>same thing happens the next day. > >> > >>To gather some empirical evidence, I ran pg_dump at 7pm on the offending > >>table. I check the output pages at midnight the same evening, and they all > >>were good. When I got back in front of the computer at 9am, the pages were > >>scrambled again. I ran pg_dump a second time to a separate file. The file > >>sizes were different (insert scary music here). No one had touched the > >>database or the pages. > >> > >>I reloaded the data and everything is back to normal. But I suspect it >will > >>happen again tonight and I am afraid. Does anyone know what inhuman entity > >>might be causing this to occur? > >>
Re: [GENERAL] Permission denied while importing data from a file?
Try moving the file into /tmp and seeing if that works. Sometimes you run into problems with having permissions on the file, but not on all of the directories before it. You must have execute permission on all parent directories in order to access a file in one of those dirs. The file itself only needs read access (444); extra permission bits won't stop it from working (they will just make it less secure if you are on a multi-user system.) Also, I believe the server-side COPY SQL command (as opposed to psql's \copy) can only be done as the database super user (postgres or pgsql, depending on how you installed Postgres) Maybe you are using the wrong one? At 01:16 PM 4/9/00, you wrote: >LS, > >i'm trying to copy data into a table using a copy command using: > >COPY tabelName FROM '/somewhere/somebody/datafile.txt'; > >I've given everybody the right to use the datafile.txt (execute, read, >write) with an incredible chmod . Still i get the message: > >ERROR: COPY command, running in backend with effective uid nr 666, >could not open file '/somewhere/somebody/datafile.txt' for reading. >Errno = Permission denied (13). > >Why? I mean how does this happen? >Please help... > >Felix
Re: [GENERAL] Selecting field names?
Yes, if you start psql with the -E switch (ie, psql -E -h dbserver database) then do a \d tablename it will show you the SQL query that's used to display the table definition. You can then use this to do your selects. Here is what I get when I do the above: QUERY: SELECT a.attnum, a.attname, t.typname, a.attlen, a.atttypmod, a.attnotnull, a.atthasdef FROM pg_class c, pg_attribute a, pg_type t WHERE c.relname = 'tablename' and a.attnum > 0 and a.attrelid = c.oid and a.atttypid = t.oid ORDER BY attnum QUERY: SELECT viewname, definition FROM pg_views WHERE viewname like 'tablename' At 12:57 PM 4/9/00, Michael Hall wrote: >Is there a way to 'select' the names of the fields and the field types >of a table? > >-- >Michael J. Hall, CCA Intelligence Technologies Int'l >[EMAIL PROTECTED] http://www.inteltec.com >secure: [EMAIL PROTECTED]
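The first of those catalog queries, reformatted for readability (substitute your own table name for 'tablename'):

```sql
SELECT a.attnum, a.attname, t.typname, a.attlen, a.atttypmod,
       a.attnotnull, a.atthasdef
FROM pg_class c, pg_attribute a, pg_type t
WHERE c.relname = 'tablename'
  AND a.attnum > 0        -- skip system columns
  AND a.attrelid = c.oid
  AND a.atttypid = t.oid
ORDER BY a.attnum;
```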
[GENERAL] SQL Special Characters
Can anyone point me to a list of the special characters for SQL strings? Ie, when I do a WHERE textstring LIKE '12%' the % is a special character, and has to be escaped with a backslash. I'm looking for a list of all such characters. The ones I know of are ', \, and %, but I'm sure that there are more.
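For what it's worth, the set seems to be small: % and _ are special only in LIKE patterns, a single quote inside a literal is escaped by doubling it, and backslash starts an escape sequence in any literal. A sketch (table and column names hypothetical):

```sql
SELECT * FROM items WHERE code LIKE '12\%';     -- matches the literal text "12%"
SELECT * FROM items WHERE code LIKE '12_';      -- matches "12" plus any one character
SELECT * FROM items WHERE note = 'it''s here';  -- embedded quote written doubled
```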
Re: [GENERAL] Libpq problems
You got the include directories right, but you didn't tell it to link in the library by adding -lpq Try this: cc -s -I/usr/local/pgsql/include -L/usr/local/pgsql/lib program.c -lm -lnsl -lpq -o program.exe At 03:38 PM 3/29/00, Teruel Tony wrote: >Hi, > >I'm interacting with postgeSQL using C languaje, >I'm having compilation problems. >I set the compilation definition as: >PGSQL_INCLUDE=/usr/local/pgsql/include/libpq > >(that's where I got the path to libpq) >and I'm using something like this (I'm using >sockets) to compile: > >cc -s -I/usr/local/pgsql/include -L/usr/local/pgsql/lib program.c -lm -lnsl >-o program.exe > >By the way I'm using Linux. I got the following error: > >/tmp/ccgtGrLd.o: In function `test': >/tmp/ccgtGrLd.o(.text+0xa): undefined reference to `PQerrorMessage' >/tmp/ccgtGrLd.o(.text+0x2e): undefined reference to `PQfinish' >and so on > >Can someone help? What am I doing wrong? > >Thanks > > > > >Have an excellent day >[EMAIL PROTECTED] >Mexico > > > > > >___ >Get 100% FREE Internet Access powered by Excite >Visit http://freelane.excite.com/freeisp
Re: [GENERAL] DB replication
I'd recommend changing the structure of your tables, adding a record_added field, and then doing a SELECT * FROM table WHERE record_added > CURRENT_TIMESTAMP - '15min' ::datetime and having a Perl program pipe the results of the select into the other database using DBI with two connections. BTW: The SELECT statement above probably doesn't work, as I can never remember the time functions/macros. It should give you the general idea though. At 04:19 AM 3/24/00, Differentiated Software Solutions Pvt. Ltd. wrote: >Hi, > >We have an application which has databases in 2 different machines. >The databases are small (25 MB). > >Every 15 minutes we want DB of one machine to be synced with another >machine. Are there ready made utilities in postgres to do this. >One way is to take a pgdump, tar it, ftp it and then restore it into the >other machine. >This has a problem if I want to sync selected tables. Also this means the >entire data gets reloaded, whereas only the incremental changes should get >dumped. > >Any help is appreciated. > >Murali > >Differentiated Software Solutions Pvt. Ltd., >176, Gr. Floor, 6th Main >2nd Block RT Nagar >Bangalore - 560 032 >India >Ph: 91 80 3431470 >email : diffs@vsnl.com >http://www.diffs-india.com
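A version of that SELECT with working interval arithmetic might look like this (the table name and the record_added column are the hypothetical names from above):

```sql
SELECT * FROM mytable
WHERE record_added > now() - '15 minutes'::interval;
```

A Perl/DBI script can then read these rows over one connection and insert them into the second database over another.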
Re: [GENERAL] Regular expressions syntax: is \ the escape character ?
You have to use two backslashes, as the preprocessor will take the first one out (I may be wrong about this, but I remember reading something similar a few months ago on this list.) Also, remember that you have to send two slashes to Postgres, which may mean that you need to use 4 slashes in your code. IE, for a Perl program: $dbi->prepare("SELECT field FROM table WHERE field ~* '\\+'"); is actually only sending: SELECT field FROM table WHERE field ~* '\+' to the backend, so you should use: $dbi->prepare("SELECT field FROM table WHERE field ~* '\\\\+'"); At 03:04 PM 3/3/00, Gabriel Fernandez wrote: >Hi, > >I'm using regular expressions in 6.5.3 version but i cannot find the >character to escape special caracters as ?, $, ^. > >I've tried with \ but it doesn't works. > >Example: i want to find the row with the value 'fisica?' and i do: > >select title from libros where title ~* 'fisica\?' ; > >but i also obtain rows with values: 'fisicamente' or 'fisicas' > >What am i doing wrong ? > >Thanks in advance. > >Gabi :-) > > >
Re: [GENERAL] Illegal use of aggregates or non-group column in target list
Try : select target, avg(reaction_time) from data_table group by target; Haven't tested it, but it should be close to what you're looking for. At 02:14 PM 3/3/00, G. Anthony Reina wrote: >I'd like to return an average for one item in my database grouped by >another item. For example, > >The table looks like this: >= >CREATE TABLE data_table ( > subject text, > arm char, > targetint4, -- end of primary >key > reaction_timefloat4 >); >CREATE UNIQUE INDEX pkdata_table ON data_table (subject, arm, target); > > >The command I'd like to use is something like: > >select target, avg(reaction_time) from data_table; > >and I'd like this command to return: > >target| avg >= >1| 301.3 >2| 311.45 >3| 299.1 >4| 300.8 > > >However, this statement just gives me the error: Illegal use of >aggregates or non-group column in target list. > >So apparently I can only return either one or the other but not the two >together. Is there a way to get the output I want in SQL? > >Thanks. >-Tony > > > >
Re: [GENERAL] System requirements
The disks you use also play a big part in the performance you get. If you are going to have multiple simultaneous connections accessing the server, I'd go with a good SCSI disk array, maybe even RAID if it's going to be under a high load.

At 07:49 PM 1/24/00, Peter Eisentraut wrote:
>This depends entirely(?) on what kinds of tables you use, especially their
>size, and what kind of queries you want to do with it. 128MB RAM should be
>fine for average applications, the hard disk size pretty much just limits
>how much data you can store (and sort), and the processor factor is hard
>to gauge. Having two processors might help more than anything else. But in
>general it's for you to find out.
>
>On 2000-01-24, J. Roeleveld mentioned:
>
> > Hi,
> >
> > I've been checking the archives and documentation for what the
> > system requirements are for a PostgreSQL database
> >
> > It's to be used for a database with 8 users, and has MS-Access front-end
> > which causes app. 2 connections per user.
> >
> > My idea was:
> >
> > 128Meg Ram
> > 20 Gig HD
> > 500 Mhz. Celeron
> >
> > will this be sufficient, or will it have be something else?
> >
> > Also what is required more, memory or CPU-power?
> >
> > hoping for an answer,
> >
> > Joost Roeleveld
>
>--
>Peter Eisentraut              Sernanders väg 10:115
>[EMAIL PROTECTED]             75262 Uppsala
>http://yi.org/peter-e/        Sweden
Re: [GENERAL] Error connecting database
You might want to check that you have run initdb. Log in as user postgres and type: /usr/local/pgsql/bin/initdb Then check to make sure that postmaster is actually running, as someone else suggested. At 11:21 PM 1/15/00, C.D. Gan wrote: >I'm just trying to run PGSQL and running the test for it. However I get >this message, > >Connection to database 'template1' failed. >connectDB() -- connect() failed: No such file or directory >Is the postmaster running at 'localhost' and accepting connections on Unix >socket '5432'? > >createdb: database creation failed on i-city. > >What does that mean? Is it mean that the connect() file is not found under >my /usr/local/pgsql/bin? > >Gan >__ >Get Your Private, Free Email at http://www.hotmail.com > > > >
Re: [GENERAL] Future of PostgreSQL
At 10:27 PM 12/25/99, Bruce Momjian wrote: >[snip] > > Plug-in Oracle 7 compatibility. > >I believe we are adding Oracle compatibility as possible. We are >working on write-ahead log, long tuples, foreign keys, and outer joins. >Anything else? Replication would be nice, or some other form of clustering so the load can be easily split among multiple PostGres servers. My personal pet peeves are the difficulty of the backup/restore process (well, it's not really difficult, it just takes a while) and the 8k query limit. Also, the ability to restrict CREATE [ TABLE | INDEX | SEQUENCE ...] would be nice.
Re: [GENERAL] Postgres install problem
I think you're missing the glibc crypt library; look for libcrypt.so.* in your /lib dir. If it's not there, you'll have to grab it from a Linux site like ftp://sunsite.unc.edu/pub/Linux or from Caldera's website. If you do have the file, try adding -lcrypt to the LIBS line of Makefile.global in src/

At 08:35 PM 12/17/99, Peter Ai wrote:
>Hi, Postgres guru,
>
> I am new to Postgres and just downloaded the source to compile on my
>Caldera 2.2.5. I got a trouble and here is what i did (follwoing the
>INTALL file):
>
>Hardware : Pentium 200 with 32M ram, 2.1G HD(Linux partition)
>
>1) Add a new account postgres
>2) make dir /usr/scr/pgsql ; /usr/local/pgsql and changed
>ownership/group to postgres through root
>3) cd /usr/src/pgsql and unzipped the source achive
>4) cd /usr/src/pgsql/postgres-6.5.3/src and run
>   ./configure --with-template=linux_i386 --without-CXX
>   make all >make.log 2>&1 &
>
>the make failed (see make.log) !!
>
>make[3]: Leaving directory
>`/usr/src/pgsql/postgresql-6.5.3/src/backend/utils/time'
>gcc -I../../include -I../../backend -O2 -mpentium -Wall -Wmissing-prototyp
>es -I.. -c fmgrtab.c -o fmgrtab.o
>ld -r -o SUBSYS.o fmgrtab.o adt/SUBSYS.o cache/SUBSYS.o error/SUBSYS.o
>fmgr/SUBSYS.o hash/SUBSYS.o init/SUBSYS.o misc/SUBSYS.o mmgr/SUBSYS.o
>sort/SUBSYS.o time/SUBSYS.o
>make[2]: Leaving directory
>`/usr/src/pgsql/postgresql-6.5.3/src/backend/utils'
>gcc -o postgres access/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o
>commands/SUBSYS.o executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o
>main/SUBSYS.o parser/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o
>port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o
>storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o
>../utils/version.o -lnsl -ldl -lm -export-dynamic
>libpq/SUBSYS.o: In function `crypt_verify':
>libpq/SUBSYS.o(.text+0x3ca2): undefined reference to `crypt'
>libpq/SUBSYS.o: In function `verify_password':
>libpq/SUBSYS.o(.text+0x3fba): undefined reference to `crypt'
>make[1]: *** [postgres] Error 1
>make[1]: Leaving directory `/usr/src/pgsql/postgresql-6.5.3/src/backend' >make: *** [all] Error 2 > >How can i solve this ? > >NB: >I downloaded bison-1.28 and compiled /installed successfully. > >I changed Makefile.global to have: >YFLAGS= -y -d >YACC= /usr/local/bin/bison > >I also changed Makefile.custom to have >CFLAGS = -mpentium > >Any help is much appreciatted . > >Peter > > > > > >
Re: [GENERAL] Performance
Try turning off DBI's AutoCommit; that way the driver holds all the inserts inside one transaction and writes them to the database in a single commit. You do this by changing your DBI connect command from:

$pg_con=DBI->connect("DBI:Pg:...

to

$pg_con=DBI->connect("DBI:Pg(AutoCommit=>0):...

Don't forget to call $pg_con->commit; before disconnecting, or else you will lose your added data.

At 04:47 AM 10/29/99, Jason C. Leach wrote:
>hi,
>
>I've been playing with pgsql for a few days now and am getting the hang
>of it. I just did a loop that inserts a few thousand records into a
>table. I did a statement, prepare, execute; it worked fine although pg
>seemed to access the hd for every insert. Is there a way to cache
>inserts and then write them all at once later. I'm using Perl with
>DBD::Pg/DBI and see with DBI there is a prepare_cached, and a commit.
>Not much in the way of docs for the modules though.
>
>Perhaps I should be doing statement, prepare, statement, prepare,
>commit?
>
>I'd also be curious where you start postgres with the -F option? I did
>not see it as a postmaster option.
>
>This is just a curiosity, but some of the MS Certified people are being
>taught that MS SQL server requires something like a PIII with 512MB
>RAM. That's pretty crazy, if you ask me, but I can see it as being
>dependent on how many requests you server will need to deal with. On
>equal machines, could pg handle more or less requests that MS SQL?
>
>Thanks,
> Jason
>
>--
>. Jason C. Leach
>.. University College of the Cariboo
>... [EMAIL PROTECTED]
>.. http://www.ocis.net/~jcl
>
>The Search for Extraterrestrial Intelligence from Home:
>http://setiathome.ssl.berkeley.edu
>
> LINUX!
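The same batching idea, sketched with Python's built-in sqlite3 module instead of Perl/DBI (sqlite3, like DBI with AutoCommit off, opens a transaction implicitly and writes nothing until you commit); the table name is made up for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INTEGER)")

# All 1000 inserts accumulate inside one implicit transaction ...
con.executemany("INSERT INTO t (n) VALUES (?)", [(i,) for i in range(1000)])

# ... and a single commit makes them durable in one write.
con.commit()

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

As with DBI, forgetting the final commit (or disconnecting first) would discard the whole batch.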
Re: [GENERAL] current_timestamp and default now()
I believe it works if you put now() in single quotes. EG:

create table test (
    proposetime datetime not null default 'now()'
);

At 10:07 PM 10/25/99, Lincoln Yeoh wrote:
>At 01:05 PM 25-10-1999 PDT, amy cheng wrote:
>>hi, there,
>>
>>I'm using:
>>
>>create table test (
>>proposetime datetime not null default now(),
>>)
>>
>>then, I find that I need to make the time uniform through the
>>transaction. Is there a way to use current_timestamp('now')
>>in the create table statement? Seems impossible?
>
>How about reading off the current time (select 'now', I
>can't remember how to do it - not very obvious at first sight) and then
>using that time throughout the transaction?
>
>Cheerio,
>
>Link.
Re: [GENERAL] Re: What's WAL
That's why you don't hear anything about them anymore, they are stuck in the past... :) At 09:20 PM 10/22/99, The Hermit Hacker wrote: >On Fri, 22 Oct 1999, Christian Rudow wrote: > > > So - get down from envying the "Illuminati" - build up a working > > linux configuration - step by step - slowly. And ... if you are one of > > the less brighter guy's like me - don't ask for too much at one > > time. > >Actually, 3 out of 4 Illuminati use *BSD ... > >Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy >Systems Administrator @ hub.org >primary: [EMAIL PROTECTED] secondary: >scrappy@{freebsd|postgresql}.org > > >
Re: [GENERAL] Postgres INSERTs much slower than MySQL?
Try turning off AutoCommit: MySQL doesn't support transactions, so avoiding a commit per insert might be what's causing the speed difference. Just change the connect line from:

$pg_con=DBI->connect("DBI:Pg:...

to

$pg_con=DBI->connect("DBI:Pg(AutoCommit=>0):...

and add $pg_con->commit; before you disconnect. I may have the syntax wrong, so double check the docs for the DBI and Pg modules (perldoc DBD::Pg and perldoc DBI)

At 01:25 AM 10/20/99, Lincoln Yeoh wrote:
>Hi everyone,
>
>Should inserts be so slow?
>
>I've written a perl script to insert 10 million records for testing
>purposes and it looks like it's going to take a LONG time with postgres.
>MySQL is about 150 times faster! I don't have any indexes on either. I am
>using the DBI and relevant DBD for both.
>
>For Postgres 6.5.2 it's slow with either of the following table structures.
>create table central ( counter serial, number varchar (12), name text,
>address text );
>create table central ( counter serial, number varchar (12), name
>varchar(80), address varchar(80));
>
>For MySQL I used:
>create table central (counter int not null auto_increment primary key,
>number varchar(12), name varchar(80), address varchar(80));
>
>The relevant perl portion is (same for both):
>        $SQL=<<"EOT";
>insert into central (number,name,address) values (?,?,?)
>EOT
>        $cursor=$dbh->prepare($SQL);
>
>        while ($c<1000) {
>                $number=$c;
>                $name="John Doe the number ".$c;
>                $address="$c, Jalan SS$c/$c, Petaling Jaya";
>                $rv=$cursor->execute($number,$name,$address) or die("Error
>executing insert!",$DBI::errstr);
>                if ($rv==0) {
>                        die("Error inserting a record with database!",$DBI::errstr);
>                };
>                $c++;
>                $d++;
>                if ($d>1000) {
>                        print "$c\n";
>                        $d=1;
>                }
>        }
Re: [GENERAL] Connect PostgreSQL 6.0 Server with php4b
Your problem is probably in the /usr/local/pgsql/data/pg_hba.conf file. That file lists what machines are allowed to connect to your Postgres server and what sort of authentication they have to provide. If the web server and the Postgres server are on the same machine, you should have these two lines in that file:

local    all                                        trust
host     all    127.0.0.1    255.255.255.255        trust

If they are on separate machines, you will want to set up something like:

host     all    web.server.ip    255.255.255.255    crypt

and set up accounts/passwords for your PHP scripts, then use this sort of thing to connect to the DB:

$dbCon = pg_PConnect("host=postgres.server.address user=username password=password dbname=database.to.connect.to");

At 06:45 AM 10/9/99, Matthias Teege wrote:
>Moin,
>
>i have an PostgreSQL 6.0 Server wich I would query with php4b. I have
>problems to make the connection because off php gives me the following
>error message:
>
>Warning: Unable to connect to PostgresSQL server: Failed to
>authenticate client as Postgres user 'nobody' using authentication
>scheme 131072. in /usr/local/share/apache/htdocs/matthias/hellodb.php
>on line 2
>An error occured.
>
>Were is the Problem?
>
>Many thanks
>Matthias
Re: [GENERAL] copying from one table to another
Try something like:

INSERT INTO section (fieldlist) SELECT fieldlist FROM temp_section;

SELECT INTO won't work, as it requires you to be creating the destination table (well, you could drop section first and recreate it that way, but that could be annoying.)

At 11:51 PM 10/5/99, Omega Weapon wrote:
>I have two identical tables -- section and
>temp_section. I use the section table for keeping
>original values and temp_section for editing.
>
>How do I copy the values from temp_section to section?
>
>=
>Weapon
>__
>Do You Yahoo!?
>Bid and sell for free at http://auctions.yahoo.com
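A runnable version of that INSERT ... SELECT, mirroring the section/temp_section setup from the post. Python's sqlite3 stands in for PostgreSQL here (the statement is the same in both); the id/title columns are invented for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE section      (id INTEGER, title TEXT);
    CREATE TABLE temp_section (id INTEGER, title TEXT);
    INSERT INTO temp_section VALUES (1, 'edited title');
""")

# Copy every row from the editing table into the keeper table:
con.execute("INSERT INTO section (id, title)"
            " SELECT id, title FROM temp_section")

copied = con.execute("SELECT id, title FROM section").fetchall()
```

Unlike SELECT INTO, this works when the destination table already exists, which is exactly the situation in the question.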
Re: [GENERAL] data type "serial"
If you have this table:

create table testing (
    id    serial,
    data  text,
    md    int
);

You would use:

insert into testing (data, md) values ('my_data', 12);
insert into testing (data, md) values ('more data', 15);

The key part is that you don't specify the serial field in your insert command. Anyway, hope this helps.

At 09:45 PM 9/22/99, Michael wrote:
>> >
>> > So how do you store something in data type serial?
>> > I've created the column but everytime I attempt to do an update I get
>> > back the error message that I can't store a duplicate key.
>> >
>> > table_col_seq.last_value = 1
>> > increment = 1
>> > maxvalue = 2billion something
>> >
>> > I've tried storing 1,2,some other number in "serial" in the main
>> > table to no avail. What is the magic to get this to work. There
>> > appears to be no documentation anywhere on the necessary SQL commands
>> > to cycle the generator. I've tried the method used for Oracle SQL but
>> > that does not work. I'm a bit of a newbie at this.
>> >
>> > Michael
>> > [EMAIL PROTECTED]
>>
>> You cant insert into a serial as far as I know.
>>
>> The serial is simply incrimented automatically whenever a new INSERT
>> is done on the table.
>
>Hmmm that's a real problem when adding it to an existing table.
>The values I get back on a select indicate all blanks. I've got
>several hundred records in the table at the moment. There must be a
>way to update the darn things.
>
>Michael
>[EMAIL PROTECTED]
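The "leave the serial column out of the INSERT" idea can be demonstrated with Python's sqlite3, where an INTEGER PRIMARY KEY column plays the role of Postgres's serial (a stand-in, not Postgres itself); the table follows the example above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# INTEGER PRIMARY KEY auto-assigns 1, 2, 3, ... like a serial/sequence
con.execute("CREATE TABLE testing (id INTEGER PRIMARY KEY,"
            " data TEXT, md INTEGER)")

# Note: id is never mentioned in the INSERT statements.
con.execute("INSERT INTO testing (data, md) VALUES ('my_data', 12)")
con.execute("INSERT INTO testing (data, md) VALUES ('more data', 15)")

ids = [row[0] for row in con.execute("SELECT id FROM testing ORDER BY id")]
```

Because the column is omitted, the database supplies the next counter value itself, which is why explicitly storing 1, 2, ... into a serial column triggers duplicate-key errors instead.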
Re: [GENERAL] how to emulate a simple case of a view on unions
The cleanest way is UNION, which runs the same query against each table and pools the rows into one result:

select field from tab1
union all
select field from tab2
union all
select field from tab3
union all
select field from tab4;

(Use UNION instead of UNION ALL if you want duplicate rows removed from the pooled result.)

At 10:59 PM 9/21/99, Martin Weinberg wrote:
>
>I have four tables with identical fields and I would like
>to automate the same query on each table and pool the
>results.
>
>Any suggestions on streamlining this?
>
>Thanks!
>
>--Martin
>
>===
>
>Martin Weinberg                  Phone: (413) 545-3821
>Dept. of Physics and Astronomy   FAX: (413) 545-2117/0648
>530 Graduate Research Tower      [EMAIL PROTECTED]
>University of Massachusetts      http://www.astro.umass.edu/~weinberg/
>Amherst, MA 01003-4525
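Pooling identical tables with UNION ALL, runnable via Python's sqlite3 as a stand-in for PostgreSQL (the SQL is the same); only two tables and a single made-up column are used for brevity:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tab1 (field TEXT);
    CREATE TABLE tab2 (field TEXT);
    INSERT INTO tab1 VALUES ('from tab1');
    INSERT INTO tab2 VALUES ('from tab2');
""")

# One statement, one pooled result set spanning both tables:
pooled = con.execute(
    "SELECT field FROM tab1 UNION ALL SELECT field FROM tab2"
).fetchall()
```

A plain comma-separated FROM list would instead compute a cartesian product of the tables, which is not what "pool the results" means here; UNION stacks the result sets row-wise.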
Re: [GENERAL] User not in pg_shadow
As the postgres user, run the program "createuser". It will ask for a username (dckinder) and some other things, including whether the user can create a database. You can do the same thing by connecting to template1 with psql and using the CREATE USER command. (do \h CREATE USER from within psql)

At 04:00 PM 8/15/99, Duncan Kinder wrote:
>I have installed postgresql 6.5.1 on a Redhat Linux 5.2 box.
>
>As "postgres" I am able to create a database.
>
>However, when I type the command "createdb whatever" as dckinder, I receive
>the following response:
>
>"Connection to database 'templete1' failed
>FATAL 1: SetUserId: user 'dckinder' is not in 'pg_shadow'.
>
>How do I get dckinder to be in 'pg_shadow'?
>
>Thanks in advance.
>
>Duncan C. Kinder
>[EMAIL PROTECTED]
Re: [GENERAL] Autoincremental
Yes, it's called "serial". IE:

create table mytable (
    ndx   serial,
    name  text
);

"serial" is just a shortcut that creates a "sequence" (a type that is basically a counter) and sets the ndx field to be an int default nextval('sequence_name'). Read the FAQ for a better explanation.

At 01:44 PM 8/15/99, Matteo Colombo wrote:
>Pgsql support autoincremental field like mysql ?
>
>Thanks
Re: [GENERAL] libpq.so open failed
I've never used Solaris, but I know you get a similar error in Linux if the directory containing libpq.so isn't listed in /etc/ld.so.conf. If you have an /etc/ld.so.conf file, add the line:

/usr/local/pgsql/lib

to it and run ldconfig. Also, look closely at the -R flag you compiled with: it says /usr/local/pqsql/lib ("pqsql" rather than "pgsql"), and that typo alone would keep the runtime linker from finding the library. Hope it's that simple. :)

At 05:04 PM 8/4/99, Ximo Domenech wrote:
>Hi to everyone !!
>I'm sorry, my last message have arrived to the mailing list with bad
>subject. I dont know why. :-|
>Here my problem:
>
>I'm tring to migrate Postgres from Linux to Solaris. (2.7)
>The C source code (cgi's) is running ok in linux, but when execute
>from solaris I obtain the next error in execution:
>
>ld.so.1: cgi_name: fatal: libpq.so: open failed: No such file or
>directory
>
>I've compiled the cgi with:
>
>-L/usr/local/pgsql/lib -R/usr/local/pqsql/lib -lpq
>
>The paths of libraries is Ok.
>I've been reading something about Solaris in Postgres web site.
>Someone, recommend to use
>
>export LD_RUN_PATH=/usr/local/pgsql/lib
>
>to find libraries in execution time. Thats no work in my computer. I
>dont know how to continue.
>Any idea ?
>Thank you in advanced.
>
>__
>
>Ximo Domenech [EMAIL PROTECTED]
[GENERAL] int2/int4 Failure with 6.5.1 and SlackWare 4.0
Hi there,

When running the regression tests for Postgres 6.5.1 on my Slackware 4.0 box, I received an error on the int2 and int4 types. Here's what was in the regress.out file:

text .. ok
strings .. ok
int2 .. failed
int4 .. failed
int8 .. ok
oid .. ok
float4 .. ok
float8 .. ok

Here is the error from int2.out:

QUERY: CREATE TABLE INT2_TBL(f1 int2);
QUERY: INSERT INTO INT2_TBL(f1) VALUES ('0');
QUERY: INSERT INTO INT2_TBL(f1) VALUES ('1234');
QUERY: INSERT INTO INT2_TBL(f1) VALUES ('-1234');
QUERY: INSERT INTO INT2_TBL(f1) VALUES ('34.5');
ERROR: pg_atoi: error in "34.5": can't parse ".5"
QUERY: INSERT INTO INT2_TBL(f1) VALUES ('32767');
QUERY: INSERT INTO INT2_TBL(f1) VALUES ('-32767');
QUERY: INSERT INTO INT2_TBL(f1) VALUES ('10');
ERROR: pg_atoi: error reading "10": Math result not representable
QUERY: INSERT INTO INT2_TBL(f1) VALUES ('asdf');
ERROR: pg_atoi: error in "asdf": can't parse "asdf"
QUERY: SELECT '' AS five, INT2_TBL.*;

Any ideas what is causing this? It seems to be a problem with the pg_atoi function giving a fatal error on any data that is not formatted exactly right, or too large for its return type.
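For context on the "can't parse" errors above: pg_atoi's behavior is the usual strict-integer-parser behavior, rejecting anything that is not a plain integer string. A rough analogue (Python's int(), standing in for pg_atoi; this is an illustration, not Postgres code):

```python
def parse_like_pg_atoi(s):
    """Accept plain integer strings; report anything else as an error,
    roughly mirroring pg_atoi's all-or-nothing parsing."""
    try:
        return ("ok", int(s))
    except ValueError:
        return ("error", s)

# The same inputs the int2 test feeds to pg_atoi:
results = [parse_like_pg_atoi(s) for s in ("1234", "-1234", "34.5", "asdf")]
```

So the ERROR lines for '34.5' and 'asdf' are the parser working as designed; a regression "failure" usually means the produced output differed from the expected-output file, not that the errors themselves are wrong.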