Re: [GENERAL] To_char statement problems
Clark Endrizzi said:
> I just sent this twice before with the wrong email address, so it didn't
> go through; that is why I am sending it again. I hope this isn't causing
> issues.
>
> Hi guys. I certainly hope this is the correct place to email this. I'm
> having an issue that I have really tried to solve but can't.
>
> I'm using RHEL's PostgreSQL 7.3.4 and everything works great except my
> to_char() statements. The most common conversion string I use is
> 'MM/DD/ HH:MM AM', which works except the minutes are never correct;
> they always end with ":04". So if I submitted it at 10:23 AM it will
> say 10:04 after conversion.

You are telling it HH:MM, where MM is the month, so it is doing exactly what you asked for. The minutes field is MI.

Best,
Jim

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faqs/FAQ.html
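In to_char() format strings, MM always means month and MI means minutes, so the minutes slot was being filled with the month (April = 04). A minimal sketch of the corrected pattern:

```sql
-- MM = month, MI = minutes; the original pattern used MM in both slots.
SELECT to_char(timestamp '2004-04-08 10:23:00', 'MM/DD HH:MI AM');
-- With MI, the minutes come out as 23 instead of the month (04).
```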
[GENERAL] To_char statement problems
I just sent this twice before with the wrong email address, so it didn't go through; that is why I am sending it again. I hope this isn't causing issues.

Hi guys. I certainly hope this is the correct place to email this. I'm having an issue that I have really tried to solve but can't.

I'm using RHEL's PostgreSQL 7.3.4 and everything works great except my to_char() statements. The most common conversion string I use is 'MM/DD/ HH:MM AM', which works except the minutes are never correct; they always end with ":04". So if I submitted it at 10:23 AM it will say 10:04 after conversion.

I know Red Hat doesn't have the newest version, but I can't believe that they would ship such a big problem. Is this just a problem in this version? Is my syntax incorrect?

Thanks!
--
Clark Endrizzi
NAI Utah Commercial Real Estate
801-578-5522
Re: [GENERAL] Can we have time based triggers in Postgresql??
Greg Stark wrote:
> Mike Nolan <[EMAIL PROTECTED]> writes:
>>> There's no reason you couldn't write a cron daemon that retrieved its
>>> list of jobs from an SQL database. In fact we discussed doing exactly
>>> this at my last job. I expect others have thought of the same idea and
>>> wouldn't be surprised if it existed by now.
>>
>> Is there (or would we need) a way to specify access privileges for such
>> a daemon in pg_hba.conf, and would that allow controlling the access
>> privileges for each job or the user who sets it? (ISPs would want
>> tight controls here.)
>
> Uh, that's not at all what I was picturing when I wrote the description
> above. I was picturing a cron daemon just like any other cron daemon,
> except it happens to be backed by the database instead of a flat file.
> The privileges would be specified in tables, just like they're currently
> specified in flat files (well, currently in the directory structure
> above the flat files, actually).
>
> The main advantage of such a database-backed cron daemon would be the
> possible integration with other database-backed systems. Job status
> could be viewed on database-backed web pages, jobs could be created or
> deleted from web pages, failures could automatically create trouble
> tickets...

And all that can be implemented just fine with the functionality that is there right now. Create a daemon and have triggers on the job table that NOTIFY the daemon, so that it rereads and reschedules when the next job is due. This does not need the slightest little bit of backend changes.

A wonderful idea for a gborg/pgfoundry project, isn't it?

Jan

--
"It's easier to get forgiveness for being wrong than for being right.
Let's break this rule - forgive me." -- [EMAIL PROTECTED]
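Jan's trigger-plus-NOTIFY approach can be sketched as follows. The table, trigger, and channel names here are hypothetical, in 7.x-era syntax; the external daemon would LISTEN on the channel and reschedule whenever it is notified:

```sql
-- Hypothetical job table that a database-backed cron daemon reads.
CREATE TABLE cron_job (
    jobid    serial PRIMARY KEY,
    schedule text NOT NULL,   -- crontab-style schedule spec
    command  text NOT NULL
);

-- Wake the daemon whenever the job table changes.
CREATE FUNCTION notify_job_change() RETURNS trigger AS '
BEGIN
    NOTIFY cron_job_changed;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER job_change AFTER INSERT OR UPDATE OR DELETE ON cron_job
    FOR EACH ROW EXECUTE PROCEDURE notify_job_change();
```

The daemon itself simply issues LISTEN cron_job_changed and rereads the table whenever a notification arrives.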
[GENERAL] extract epoch and index scanning
Can someone explain why the latter query uses an index scan while the former uses a sequential scan? I have tried all sorts of casting in the first case, but I cannot get the index scan.

Cheers, Randall

==

data=> EXPLAIN ANALYZE SELECT acqtimestamp FROM aux_datarecord
       WHERE acqtimestamp BETWEEN
         (extract(epoch from timestamp '2004-02-21 22:39:57+00') - 50716800)::double precision
         AND
         (extract(epoch from timestamp '2004-02-21 22:43:52+00'::timestamp) - 50716800.0)::double precision;
NOTICE:  QUERY PLAN:

Seq Scan on aux_datarecord  (cost=1.00..100548707.00 rows=477476 width=8)
  (actual time=114670.09..114670.09 rows=0 loops=1)
Total runtime: 114670.23 msec

EXPLAIN

data=> EXPLAIN ANALYZE SELECT acqtimestamp FROM aux_datarecord
       WHERE acqtimestamp BETWEEN '1026686397'::double precision
         AND '1026686632'::double precision;
NOTICE:  QUERY PLAN:

Index Scan using aux_datarecord_acqtimestamp on aux_datarecord
  (cost=0.00..3.54 rows=1 width=8) (actual time=0.66..0.66 rows=0 loops=1)
Total runtime: 0.83 msec

EXPLAIN

==
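One likely explanation on a 7.2-era server (the "NOTICE: QUERY PLAN" output format predates 7.3): extract()/date_part() was not marked cachable, so the planner could not fold the expression down to a constant at plan time and therefore could not use it as an index bound. A possible workaround, under that assumption, is a helper function declared iscachable so the value is computed once at plan time (the function name here is hypothetical):

```sql
-- Only safe because the result really is fixed for given inputs.
CREATE FUNCTION epoch_offset(timestamp) RETURNS double precision AS '
    SELECT extract(epoch FROM $1) - 50716800.0;
' LANGUAGE 'sql' WITH (iscachable);

SELECT acqtimestamp FROM aux_datarecord
WHERE acqtimestamp BETWEEN epoch_offset('2004-02-21 22:39:57')
                       AND epoch_offset('2004-02-21 22:43:52');
```

Alternatively, computing the two epoch values in the application and passing them as literals (as in the second query) sidesteps the issue entirely.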
[GENERAL] Orphaned trigger -- bug?
I just ran across a situation where I dropped a table that had a trigger created on it. The system let me drop the table without complaining, and then I realized/remembered that the trigger also existed. I tried dropping the trigger, but of course that couldn't be done because the target relation no longer existed. Of course I couldn't remake the relation either, because that would have a different OID. The only way I found to drop the trigger was to drop the entire database.

Shouldn't the system complain about triggers being dependent on relations, the same way that dropping a table that is used in a view isn't allowed?

Thanks!
Mark
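Short of dropping the whole database, one possible (unsupported) cleanup is deleting the orphaned row from pg_trigger directly as a superuser. This is catalog surgery, so take a backup first; the sketch below assumes the orphans are exactly the triggers whose table no longer exists:

```sql
-- Orphaned triggers reference a tgrelid with no matching pg_class row:
SELECT tgname, tgrelid FROM pg_trigger
WHERE tgrelid NOT IN (SELECT oid FROM pg_class);

-- If that query returns only the orphan(s), remove them:
DELETE FROM pg_trigger
WHERE tgrelid NOT IN (SELECT oid FROM pg_class);
```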
Re: [GENERAL] Oid problem
Richard Huxton wrote:
> On Thursday 08 April 2004 15:21, Sky wrote:
>> Problem:
>> pg_loopen() unable to open PostgreSQL large object in...
>>
>> Solution from another mailing list:
>> My blob oid was very big.
>> I had to cast to (float) the oid parameter of pg_loopen.
>
> Sorry, bigger than what? Why float?

Watch these links:
http://archives.postgresql.org/pgsql-php/2003-09/msg00077.php
http://archives.postgresql.org/pgsql-php/2003-09/msg00079.php
I found them via Google. This error message is my problem:
pg_loopen() unable to open large object / PHP4 /

>> I tried to use the oid in a function:
>>
>> CREATE FUNCTION addfoto(character varying, oid)
>> RETURNS INTEGER
>> AS
>> '
>> BEGIN;
>> INSERT INTO foto(filename, image) VALUES ( $1, $2 );
>> COMMIT;
>> SELECT 1 AS RESULT;
>> '
>> LANGUAGE 'sql';
>
> Assuming the definition of "foto" is OK, looks fine from here.
>
>> So, what can I do, and how, how can I "CAST"?
>
> Sorry - don't understand, but you can cast by:

Watch the links above.

>   SELECT foo::int8 ...
>   SELECT CAST(foo as int8) ...
>
> Is the problem that you want to handle oids (0 .. 4 billion) and you
> are using int4 (-2 billion .. +2 billion)? In which case int8 might be
> a better choice than int4.

Sorry, but I don't understand: where do I use int4? Here is my table definition too:

CREATE TABLE foto(
    filename CHARACTER VARYING,
    image OID
);

> HTH

Many thanks!
--
Sky
sky AT sylvio .hu
Debian Group - Association of Hungarian Linux Users
Accessibility Project leader
gpg --keyserver hkp://pgp.mit.edu --recv-keys 314D1B0C
fingerprint = B211 E8C7 C6C2 18B2 9F30 ADF6 EDD9 A7FA 314D 1B0C
[GENERAL] pg_ctl problem
(sorry for the double post if there is one; I sent the mail to the list from the wrong address)

Hi,

I'm passing this on the command line to start up the PostgreSQL server:

    ./pg_ctl start -w -D /Volumes/GROUCH\ 2/Database3

but it always gives me this error:

    ./postmaster: invalid argument: "2/Database3"

So I tried passing this (added quotes around the problematic database path):

    ./pg_ctl start -w -D "/Volumes/GROUCH\ 2/Database3"

and I still get the same error. Can I not pass a path with spaces in it? How should I pass this command on the command line in order to get the server up and running?

thanks!
--
Alexander Cohen
http://www.toomuchspace.com
(819) 348-9237
(819) 432-3443
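Two things are going on here. First, combining quotes with a backslash passes a literal backslash through to pg_ctl; use one or the other, not both. Second, the error coming from ./postmaster suggests that pg_ctl (a shell script in this era) re-splits the path internally, in which case no quoting from the caller helps; exporting PGDATA may work around it, though that is an assumption about this particular pg_ctl version:

```shell
# Correct quoting: either quotes OR a backslash, not both.
./pg_ctl start -w -D "/Volumes/GROUCH 2/Database3"

# If pg_ctl still splits the path internally, try the environment
# variable instead of -D (may or may not help, depending on version):
PGDATA="/Volumes/GROUCH 2/Database3"
export PGDATA
./pg_ctl start -w
```

If neither works, moving the data directory to a path without spaces is the reliable fallback.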
[GENERAL] Big string using Zeos from Delphi
Hi!!

I am trying to store a picture in a pg database. I am using Delphi and ZeosDBO. I convert the picture into base64 text to send to the database, but when I do the INSERT I get this error:

    SQL:Error pqreadData() Read()--failed: errno=0 No error

I tested using PgExplorer, which uses libpq too, and everything is fine there doing the insert and loading from a select. When I try to load with a SELECT from Delphi I get the same error as above. At first I thought the problem was the escape character, but when I did the same from pgAdmin it ran fine. I also replaced the character / with * for testing, and nothing changed; I got the same error.

Does anybody have an idea? I know the answer may not be obvious from this, but any help is welcome! :(

*---*
*-Edwin Quijada
*-Developer DataBase
*-JQ Microsistemas
*-809-747-2787
* "If you want to achieve exceptional things, you must do things out of the ordinary"
*---*
Re: [GENERAL] Can we have time based triggers in Postgresql??
Mike Nolan <[EMAIL PROTECTED]> writes:
>> There's no reason you couldn't write a cron daemon that retrieved its
>> list of jobs from an SQL database. In fact we discussed doing exactly
>> this at my last job. I expect others have thought of the same idea and
>> wouldn't be surprised if it existed by now.
>
> Is there (or would we need) a way to specify access privileges for such
> a daemon in pg_hba.conf, and would that allow controlling the access
> privileges for each job or the user who sets it? (ISPs would want
> tight controls here.)

Uh, that's not at all what I was picturing when I wrote the description above. I was picturing a cron daemon just like any other cron daemon, except it happens to be backed by the database instead of a flat file. The privileges would be specified in tables, just like they're currently specified in flat files (well, currently in the directory structure above the flat files, actually).

The main advantage of such a database-backed cron daemon would be the possible integration with other database-backed systems. Job status could be viewed on database-backed web pages, jobs could be created or deleted from web pages, failures could automatically create trouble tickets...

--
greg
Re: [GENERAL] Storing jpgs
Marek Lewczuk <[EMAIL PROTECTED]> writes:
> On Apr 5, 2004, at 2:03 PM, Gavin M. Roy wrote:
>> I'm one for using base64 encoded text in a text field. It's easy to
>> deal with queries, it's easy to deal with spitting out images, and
>> it's easy to back up. Others do it differently; there are many ways
>> to skin a cat, and each person who skins cats most likely thinks
>> their way is best. Gavin
>
> Does anyone know about the performance of that kind of solution? Right
> now I'm storing all multimedia files on disc, but it would be great if
> all files could be stored in the DB (automatic removal, portability,
> etc.). But the problem is the total size of the files; today it is
> about 5GB, so I believe this could affect overall DB performance. Any
> experiences?

Well, just having them in the database won't affect performance directly. But there are indirect effects:

1) If you're accessing them regularly, then the disk data will be cached by the kernel just like other tables, and will contend with the much denser data from your data tables.

2) Backups and restores will have to slog through that data as well and take correspondingly longer.

> PS. My DB has 3 schemas where about 1 million different records are
> stored. I thought it would be best to create a new schema for storing
> the files? Right? How about performance in this case?

Schemas don't affect performance at all. If you're debating between storing in the same table versus new identical tables, there could be pros or cons, but if it complicates your SQL it's probably not worth it in any case.

--
greg
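The base64-in-a-text-column approach can be sketched like this. The table and column names are hypothetical, and it assumes a server version where encode()/decode() support base64:

```sql
-- Store images as base64 text; decode() recovers the raw bytes.
CREATE TABLE picture (
    id       serial PRIMARY KEY,
    filename text NOT NULL,
    data     text NOT NULL      -- base64-encoded image bytes
);

-- The application base64-encodes the file and inserts plain text:
INSERT INTO picture (filename, data)
VALUES ('logo.jpg', 'ZmFrZSBieXRlcw==');

-- Recover the raw bytes server-side when needed:
SELECT filename, decode(data, 'base64') FROM picture;
```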
Re: [GENERAL] Oid problem
On Thursday 08 April 2004 15:21, Sky wrote:
> Problem:
> pg_loopen() unable to open PostgreSQL large object in...
>
> Solution from another mailing list:
> My blob oid was very big.
> I had to cast to (float) the oid parameter of pg_loopen.

Sorry, bigger than what? Why float?

> Question:
> The oid is a type, isn't it?

Yes

> I tried to use the oid in a function:
>
> CREATE FUNCTION addfoto(character varying, oid)
> RETURNS INTEGER
> AS
> '
> BEGIN;
> INSERT INTO foto(filename, image) VALUES ( $1, $2 );
> COMMIT;
> SELECT 1 AS RESULT;
> '
> LANGUAGE 'sql';

Assuming the definition of "foto" is OK, looks fine from here.

> So, what can I do, and how, how can I "CAST"?

Sorry - don't understand, but you can cast by:

    SELECT foo::int8 ...
    SELECT CAST(foo as int8) ...

Is the problem that you want to handle oids (0 .. 4 billion) and you are using int4 (-2 billion .. +2 billion)? In which case int8 might be a better choice than int4.

HTH
--
Richard Huxton
Archonet Ltd
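One detail worth noting about the quoted function: an SQL function already runs inside the caller's transaction, so the BEGIN;/COMMIT; statements should not be there at all. A cleaned-up sketch, together with the cast forms for oversized oid values:

```sql
-- Each function call already executes within a transaction, so no
-- explicit BEGIN/COMMIT is needed here:
CREATE FUNCTION addfoto(character varying, oid)
RETURNS integer AS '
    INSERT INTO foto (filename, image) VALUES ($1, $2);
    SELECT 1;
' LANGUAGE 'sql';

-- Casting, for oid values above the int4 range (> 2^31 - 1):
SELECT 3000000000::int8;
SELECT CAST(3000000000 AS int8);
```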
Re: [GENERAL] eval in plpgsl
Bruce Momjian wrote:
> Mage wrote:
>> Hello,
>>
>> is there any variable-variable syntax or function in plpgsql? I
>> didn't find it. I mean something like:
>>
>>     magic(''new.'' || TG_ARGV[0]) = 5;
>
> Uh, we have EXECUTE, but I don't know if it will allow you to set a
> local variable. Maybe you can do:
>
>     EXECUTE SELECT ''new.'' || TG_ARGV[0] INTO newvar

It does not. This really isn't a full-blown scripting language. If one wants that, there is PL/Tcl.

Jan

--
"It's easier to get forgiveness for being wrong than for being right.
Let's break this rule - forgive me." -- [EMAIL PROTECTED]
[GENERAL] Oid problem
Problem:
pg_loopen() unable to open PostgreSQL large object in...

Solution from another mailing list:
My blob oid was very big.
I had to cast to (float) the oid parameter of pg_loopen.

Question:
The oid is a type, isn't it? I tried to use the oid in a function:

CREATE FUNCTION addfoto(character varying, oid)
RETURNS INTEGER
AS
'
BEGIN;
INSERT INTO foto(filename, image) VALUES ( $1, $2 );
COMMIT;
SELECT 1 AS RESULT;
'
LANGUAGE 'sql';

So, what can I do, and how, how can I "CAST"?
--
Sky
sky AT sylvio .hu
Debian Group - Association of Hungarian Linux Users
Accessibility Project leader
gpg --keyserver hkp://pgp.mit.edu --recv-keys 314D1B0C
fingerprint = B211 E8C7 C6C2 18B2 9F30 ADF6 EDD9 A7FA 314D 1B0C
Re: [GENERAL] eval in plpgsl
--- Mage <[EMAIL PROTECTED]> wrote:
> Bruce Momjian wrote:
>> Uh, we have EXECUTE, but I don't know if it will allow you to set a
>> local variable. Maybe you can do:
>
> execute doesn't support select into

It does, if you use the "FOR variablename IN EXECUTE ..." syntax. Which may seem like overkill for setting just one value, but at least it will work for you.

Or you could investigate other procedural languages, which may offer the capability you want. I believe PL/Tcl does, and perhaps PL/Python also.
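The FOR ... IN EXECUTE workaround can be sketched as follows (7.x plpgsql, with doubled quotes because the function body is a single-quoted string; the table name and trigger argument are hypothetical):

```sql
-- Reads a dynamically named column into a local variable by looping
-- over a one-row dynamic query:
CREATE FUNCTION read_dynamic() RETURNS trigger AS '
DECLARE
    rec record;
    val integer;
BEGIN
    FOR rec IN EXECUTE ''SELECT '' || quote_ident(TG_ARGV[0])
                    || '' AS f FROM settings LIMIT 1'' LOOP
        val := rec.f;
    END LOOP;
    -- val now holds the value of the column named by TG_ARGV[0]
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';
```

Note that this reads from a table; plpgsql still offers no way to address a NEW field by a computed name.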
Re: [GENERAL] Problem with pg_dump - reg.
Shanmugasundaram Doraisamy <[EMAIL PROTECTED]> writes:
> Thanks for all your support to my previous postings. I am currently
> using PostgreSQL 7.3.4-2PGDG. The problem I am facing is that when I
> take a dump of the DB using pg_dump and restore it using
> "cat pg_dump_file | psql dbname", I find that some views and functions
> are missing.

What errors get reported during the psql run?

regards, tom lane
Re: [GENERAL] Can we have time based triggers in Postgresql??
Gregory Wood wrote:
> Jan Wieck wrote:
>> Gregory Wood wrote:
>>> The only real advantage I see is for those people in hosting
>>> environments that don't really have access to cron. I suppose it's
>>> also valuable in making your scripts more portable, as you wouldn't
>>> need to make further alterations to the system beyond restoring a
>>> database backup or whatnot.
>>
>> and as soon as one would implement such stuff the users will be
>> asking "and how do I execute shell commands from inside the DB?".
>> Now that would be a good thing. In preparation for that we should
>> make the postmaster run under root, so that the pgcron functionality
>> can start those external commands as any user!
>
> I think you stretched well beyond the scope of my comments, but since

Sure did I, and on purpose too, because when you offer "them" the small finger, you find yourself easily dismembered entirely!

Jan

--
"It's easier to get forgiveness for being wrong than for being right.
Let's break this rule - forgive me." -- [EMAIL PROTECTED]
Re: [GENERAL] eval in plpgsl
Bruce Momjian wrote:
> Uh, we have EXECUTE, but I don't know if it will allow you to set a
> local variable. Maybe you can do:

execute doesn't support select into

Mage
Re: [GENERAL] Can we have time based triggers in Postgresql??
My thoughts: in a perfect world, a very loud yes to having Oracle-style jobs in PostgreSQL. In the world called reality, I say: why waste precious time on something that can already be accomplished by other means? This feature at this time would just be icing on an already tasty cake. This is not putting down anyone's ideas; it is just my personal thought at this point in time. If it would be easy for the C gurus to do, great, put it in there. But if it will pull away from more critical areas, why do it?

"Guy Rouillier" <[EMAIL PROTECTED]> wrote on 04/07/2004 05:00 PM to the PostgreSQL General list:

> Andrew Sullivan wrote:
>> On Tue, Mar 30, 2004 at 12:16:50PM -0500, Steve Manes wrote:
>>> I think he probably means like an Oracle job. Although cron works,
>>> that would be handy so you wouldn't need to write wrapper scripts
>>> just to run a proc.
>>
>> I hate to sound like an oldbie crank (although I'll admit to being a
>> crank), but what exactly is the advantage supposed to be here? One
>> invents a new special bit of database code which exists just so
>> people don't have to write shell scripts? I guess the idea gets under
>> my skin just because I have enough time-based problems without
>> inventing a new interface to make it more complicated. Hmm. Looks
>> like the sigmonster has gacked up a piece of wisdom.
>>
>> A
>
> As one who was about to ask the same question, I can provide one
> reason: ease of use/administration. I can create everything I need to
> manage/process my database *in* my database, rather than in numerous
> shell scripts scattered about.
>
> I had a weird issue with cron recently. I needed to do some db
> maintenance and wanted to make sure no one was going to change it, so
> I removed all cron jobs (crontab -r). Halfway through, suddenly the
> database started getting updated!! I finally figured out that my
> predecessors, for some reason, had stuck a couple of cron jobs in the
> root crontab, which I corrected. And of course, if I copy a database
> someplace, the jobs go with it; not so cron jobs.
>
> Now having said this, I realize that even if a scheduler were added to
> PostgreSQL, there would be nothing to stop someone from additionally
> using cron jobs anyway. But there would be less temptation to do so.
>
> --
> Guy Rouillier
Re: [GENERAL] How estimated rows is running ?
Scott,

On Wednesday 7 April 2004 22:13, scott.marlowe wrote:
> On Wed, 7 Apr 2004, Hervé Piedvache wrote:
>> I have a database with one table with about 22 million records. I
>> have a script every day which inserts about 200,000 records...
>> Sometimes my script takes 4 hours to insert the data, sometimes 1
>> hour.
>
> Are you wrapping all your inserts into one or several large
> transactions? This can make inserting data MUCH faster in PostgreSQL.

It's one transaction, because it's an INSERT from SELECT...

> Do you have fk/pk relationships? If so, are they of the same types?

No, no fk/pk...

> I wouldn't worry about the vacuum / analyze numbers being a little
> off. A small variance of a few percent is no biggie; it's when things
> are off by factors that the planner gets confused and makes bad
> decisions.

Yes, but it gets confused by about 150,000 rows... that's few compared to the 22 million... but it happens just after an ANALYZE... so why doesn't the ANALYZE pick up that information?

Regards,
--
Hervé Piedvache

Elma Ingénierie Informatique
6 rue du Faubourg Saint-Honoré
F-75008 - Paris - France
Pho. 33-144949901
Fax. 33-144949902
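Two points worth making explicit here. Statistics only change when ANALYZE (or VACUUM ANALYZE) actually runs, so a daily bulk load is normally followed by a fresh ANALYZE of the affected table; and because ANALYZE samples rows rather than counting them all, reltuples is an estimate, so a drift of roughly 150,000 rows out of 22 million (under 1%) is within normal sampling error. A sketch of the load pattern, with hypothetical table names:

```sql
-- Bulk load in a single transaction, then refresh planner statistics:
BEGIN;
INSERT INTO history SELECT * FROM staging;
COMMIT;
ANALYZE history;
```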