[GENERAL] Finding time of last pg_stat_reset
I'm trying to figure out if there's any way to find when statistics were last reset. Previously, when we were using 8.2, we had stats_reset_on_server_start set to on, and then assumed pg_stat_get_backend_start gave the start time for collected stats. Is there any way to do this in 8.3 without, for example, adding a call to pg_stat_reset() to our startup scripts?

-- Tommy Gildseth

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Very slow searching in a table with more than 10 millions recovered records from a backup file...
Hi there,

I have an application with a database (pgsql) which has a big table (> 10 million records) on Windows 2003. Sometimes I need to install a new version of the application. Here is what I did:

1. back up the big table via pgAdmin III,
2. stop pgsql in the old version of the application,
3. install the new version of the application (pgsql is included and all tables stay the same as before), and
4. restore the data (> 10 million records) into the table from the backup file.

After I restart the application, searching the table becomes very, very slow (much slower than searching in the old version). I don't know what is wrong with it. Does pgsql need time to reindex those 10 million records for the search?

Thanks for your suggestions in advance.

ouyang
Re: [GENERAL] Very slow searching in a table with more than 10 millions recovered records from a backup file...
On Fri, Jun 12, 2009 at 9:56 AM, zxo102 ouyangzxo...@gmail.com wrote:

> Hi there, I have an application with a database (pgsql) which has a big
> table (> 10 million records) on Windows 2003. Sometimes I need to install
> a new version of the application. Here is what I did: 1. back up the big
> table via pgAdmin III, 2. stop pgsql in the old version of the
> application, 3. install the new version of the application (pgsql is
> included and all tables stay the same as before), and 4. restore the data
> (> 10 million records) into the table from the backup file. After I
> restart the application, searching the table becomes very very slow (much
> slower than searching in the old version). I don't know what is wrong
> with it. Does pgsql need time to reindex those 10 million records for the
> search?

This is because you missed VACUUM ANALYZE in those steps; it should be done right after the restore.

-- GJ
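To make the missing step concrete, here is a sketch (the table name `big_table` is a stand-in for the OP's actual table):

```sql
-- Rebuild the planner's statistics after a large restore; without this,
-- the planner has no row estimates and may choose poor plans over all
-- 10 million rows.
VACUUM ANALYZE big_table;

-- If only statistics are needed (no dead-row cleanup), a plain ANALYZE
-- is cheaper:
ANALYZE big_table;
```

A freshly restored table has no dead rows, so ANALYZE alone is usually enough right after a restore; VACUUM ANALYZE is the safe habit either way.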
Re: [GENERAL] How to store text files in the postgresql?
DimitryASuplatov wrote:

> My task is to store a lot (10^5) of small (< 10 MB) text files in the
> database with the ability to restore them back to the hard drive on
> demand.

I cannot but ask the community a related question here: can such a design, that is, storing quite large objects of varying size in a PostgreSQL database, be a good idea in the first place? I used to believe that what RDBMSs were really good at was storing a huge number of relations, each of a small and mostly uniform size if expressed in bytes; but today people tend to put big things, e.g. email or files, in relational databases because it's convenient for them. That's absolutely normal, as the typical data objects we have to deal with keep growing in size, but how well can databases stand the pressure? And mightn't it still be better to store large things as plain files and put just their names in the database? File systems were designed for this kind of job, after all, unlike RDBMSs.

Thanks!
Yar
Re: [GENERAL] Libpq on windows
On 2009-06-11, Phil Longstaff plongst...@rogers.com wrote:

> I want to develop an app which uses libpq, built with MinGW. Is there a
> download package which contains just the include files/DLLs? If not, what
> package do I download? I don't need the server, just the client
> libraries.
>
> Phil

Are you on dialup, or pay-per-kilobyte, or some other restrictive net connection? At work we download the MSI installer, run it to get the dev files, and save the installer for distribution with the finished app.
Re: [GENERAL] help with data recovery from injected UPDATE
>>> It's a classic story. I'm volunteering about one day per month for this
>>> project, learning SQL as I go. Priority was always given to the "get it
>>> working" tasks and never the "make it safe" tasks. I had/have grandiose
>>> plans to rewrite the whole system properly after I graduate.
>>> Unfortunately, the inevitable corruption didn't wait that long.
>>
>> As you're learning, it sounds like parameterized queries might have
>> saved you from the SQL injection that caused this.
>
> Very true, and always a good idea. However, the OP's true failure here is
> on the backup front. Recent, reliable backups on another machine / media
> / datacenter etc. are the only way your data can be truly safe.

[Spotts, Christopher] Oh, absolutely. Regardless of anything you do on the functional side, you'd still need backups. I was just saying that if you're eventually going to redesign (as mentioned), a nudge towards parameterized queries doesn't hurt.
Re: [GENERAL] Libpq on windows
On 2009-06-11, Phil Longstaff plongst...@rogers.com wrote:

> On June 11, 2009 01:21:09 am Albe Laurenz wrote:
>> Phil wrote:
>>> I want to develop an app which uses libpq, built with MinGW. Is there a
>>> download package which contains just the include files/DLLs? If not,
>>> what package do I download? I don't need the server, just the client
>>> libraries.
>>
>> You can use the regular binary installer for Windows; it gives you the
>> option to deselect certain components, so you can install just the
>> client and libraries without the server.
>>
>> Yours, Laurenz Albe
>
> I should have been more specific. I was hoping for a compressed file to
> download and uncompress, because this is part of an automated build
> process. Downloading the larger installer and manually selecting
> components doesn't help me.

The MSI installer can be run unattended, and the no-installer zip will have the files themselves somewhere.
Re: [GENERAL] Libpq on windows
On 2009-06-11, Phil Longstaff plongst...@rogers.com wrote:

> On June 10, 2009 10:00:48 pm Andy Colson wrote:
>> Phil Longstaff wrote:
>>> I want to develop an app which uses libpq, built with MinGW. Is there a
>>> download package which contains just the include files/DLLs? If not,
>>> what package do I download? I don't need the server, just the client
>>> libraries.
>>
>> If you don't mind all the SSL stuff (5 or 6 DLLs' worth) you can borrow
>> the client from pgAdmin III. I, however, just build my own without the
>> SSL stuff. There is no client-only package, but it's really not hard:
>> just download the full source, unpack, compile, and then copy
>> libpq.dll out. If you want mine I could copy it out so you can download
>> it.
>
> Thanks. How strict is pgsql about version matches (client to server)? If
> I get your client, will it work with all 8.x versions of the server? Is
> there a place on the postgresql.org website where I could request that
> they package the library (built, or just source) by itself?

There's a source repository, CVS I think.
Re: [GENERAL] Finding time of last pg_stat_reset
Tommy Gildseth wrote:

> I'm trying to figure out if there's any way to find when statistics were
> last reset. Previously, when we were using 8.2, we had
> stats_reset_on_server_start set to on, and then assumed
> pg_stat_get_backend_start gave the start time for collected stats. Is
> there any way to do this in 8.3 without, for example, adding a call to
> pg_stat_reset() to our startup scripts?

No, I don't think the reset time is recorded anywhere.

-- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
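Since 8.3 itself doesn't record the reset time, one possible workaround (a sketch only; the table and function names here are invented for illustration, not a built-in facility) is to reset statistics only through a wrapper that remembers when it ran:

```sql
-- Hypothetical one-row bookkeeping table.
CREATE TABLE stats_reset_log (reset_at timestamptz NOT NULL);

-- Wrapper that records the time, then delegates to pg_stat_reset().
CREATE OR REPLACE FUNCTION logged_stat_reset() RETURNS void AS $$
    DELETE FROM stats_reset_log;
    INSERT INTO stats_reset_log VALUES (now());
    SELECT pg_stat_reset();
$$ LANGUAGE sql;

-- Later, to see when stats were last reset:
-- SELECT reset_at FROM stats_reset_log;
```

This only works if every reset goes through the wrapper, which is the same discipline the OP was hoping to avoid, but it keeps the bookkeeping in one place instead of in startup scripts.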
Re: [GENERAL] How to store text files in the postgresql?
Yaroslav Tykhiy wrote:

> I cannot but ask the community a related question here: can such a
> design, that is, storing quite large objects of varying size in a
> PostgreSQL database, be a good idea in the first place? [...] And
> mightn't it still be better to store large things as plain files and put
> just their names in the database?

I'd have to vote yes, it's a good idea. It offers consistency that you can't get with the file system.

We store assessor information in a database along with a photo of the house. If the photos were on the file system, what's to prevent someone from blowing away the folder? (Our network admin loves to find big folders taking up lots of disk space and blow them away.) Sure, you say, they can DELETE FROM photos, but that's different (and because our network admin does not know SQL). If they delete a file, I'll still have a photo record and know they used to have a photo, and I know the path to the photo, but there is no photo there. But with the photo in the database, if there is no photo, there is no record either.

If you are looking for speed, yeah, I'd say store it on the file system. But we are looking for convenience and consistency.

-Andy
Re: [GENERAL] How to store text files in the postgresql?
2009/6/6 DimitryASuplatov gene...@gmail.com:

> Hello, I am very new to the PostgreSQL database; I've used a little MySQL
> previously. My task is to store a lot (10^5) of small (< 10 MB) text
> files in the database with the ability to restore them back to the hard
> drive on demand. That means that I need two functions. First: grab a file
> from the directory, store it in the database, and delete it from the
> disk; second: recreate it back on the disk. 1/ Is it possible? 2/ Could
> you give me some quick tips on how to manage it from the start so that I
> know what to look for in the manual? Thank you for your time. Dimitry

You can use the function pg_read_file(text, bigint, bigint), like:

insert into foo values (1, 2, 3, ..., pg_read_file('filename', 0, [?]));

The problem is writing it back to disk; there is no function for that, AFAIK.

-- Emanuel Calvo Franco ArPUG [www.arpug.com.ar] / AOSUG Member www.emanuelcalvofranco.com.ar
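A slightly fuller sketch of the reading side (the table layout here is invented for illustration; note that pg_read_file takes a 0-based byte offset, is superuser-only, and reads paths relative to the server's data directory):

```sql
-- Hypothetical table for the OP's files:
CREATE TABLE stored_files (
    filename text PRIMARY KEY,
    body     text
);

-- Read up to 10 MB of the file starting at offset 0:
INSERT INTO stored_files
VALUES ('file1.txt', pg_read_file('file1.txt', 0, 10485760));
```

For getting a file back onto the server's disk, `COPY (SELECT body FROM stored_files WHERE filename = 'file1.txt') TO '/tmp/file1.txt'` is sometimes suggested, but COPY applies text escaping, so for anything beyond simple text it is safer to fetch the value client-side and write it out from the application.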
Re: [GENERAL] search for partial dates
On Thu, June 11, 2009 17:37, Andy Colson wrote:

> That's a little vague, so how about:
>
> select * from something
> where (extract(year from idate) = $1)
>    or (extract(year from idate) = $2 and extract(month from idate) = $3)
>    or (extract(year from idate) = $4 and extract(month from idate) = $5
>        and extract(day from idate) = $6)

Actually, I am thinking that perhaps this is better accomplished by parsing the data in the application and generating a date range that I then pass as parameters to a PG BETWEEN condition. For example:

given 2008     then SD = 20080101000000 and ED = 20081231235959
given 200805   then SD = 20080501000000 and ED = 20080531235959
given 20080709 then SD = 20080709000000 and ED = 20080709235959

I believe that this construction should work and also make use of the index:

SELECT * WHERE effective_from BETWEEN SD AND ED

Is my approach correct?

-- *** E-Mail is NOT a SECURE channel *** James B. Byrne mailto:byrn...@harte-lyne.ca Harte & Lyne Limited http://www.harte-lyne.ca 9 Brockley Drive vox: +1 905 561 1241 Hamilton, Ontario fax: +1 905 561 0757 Canada L8E 3C3
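The same idea can be written against the timestamp column directly using a half-open range, so the awkward 235959 end-points disappear; a sketch (`sometable` is a placeholder, `effective_from` follows the name used above):

```sql
-- "given 200805": expand to the half-open range [2008-05-01, 2008-06-01)
SELECT *
FROM   sometable
WHERE  effective_from >= DATE '2008-05-01'
AND    effective_from <  DATE '2008-05-01' + INTERVAL '1 month';
```

An ordinary b-tree index on effective_from can satisfy both comparisons, and the exclusive upper bound avoids missing rows with sub-second precision just before midnight.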
Re: [GENERAL] How to store text files in the postgresql?
This is a recurring debate, and there are pros and cons on both sides. It usually comes down to whether you need transactional guarantees for these large objects.

There are also practical concerns. Transferring these large objects over a single database TCP connection limits the application performance a lot. And the database is often more heavyweight than you really want to keep tied up to serve images.

Also, it makes backups a pain, since it's a lot easier to back up a file system than a database. But that gets back to whether you need transactional guarantees: the reason it's a pain to back up a database is precisely because it needs to make those guarantees.
Re: [GENERAL] How to store text files in the postgresql?
If I had an admin roaming through my document server deleting document files out from under my database, that's a problem I would solve very quickly--with a completely non-technical solution. After all, what's to prevent such a person from deleting pgsql data files??? -- Scott Ribe scott_r...@killerbytes.com http://www.killerbytes.com/ (303) 722-0567 voice
Re: [GENERAL] How to store text files in the postgresql?
Scott Ribe wrote:

> If I had an admin roaming through my document server deleting document
> files out from under my database, that's a problem I would solve very
> quickly--with a completely non-technical solution. After all, what's to
> prevent such a person from deleting pgsql data files???

Yeah, true. I don't actually have a rogue admin cleaning up too much. My point was that it's less accessible and feels safer when the photos are in the database.

-Andy
Re: [GENERAL] [HACKERS] PostgreSQL installation
> I am from Cambodia. I want to use PostgreSQL, but I don't know much about
> installing it. Could you please give me some guidance on installing
> PostgreSQL on Solaris 10?

(+ to pgsql-general)

Try to use the general list for these cases; you could obtain better results ;) The documentation for Solaris is at the site below [1]; you will find everything from installation to complex tracing routines. Enjoy it!

[1] http://www.sun.com/software/products/postgresql/index.jsp

Regards,

-- Emanuel Calvo Franco ArPUG [www.arpug.com.ar] / AOSUG Member www.emanuelcalvofranco.com.ar
[GENERAL] Having trouble restoring our backups
Hey guys,

I'm having difficulty restoring some of our backups. Luckily, I'm only trying to do this to bring up a copy of our database for testing purposes, but this still has me freaked out, because it means we currently have no valid backups and are only running with a single warm spare.

Our primary database is on a RAID-10 that can't take snapshots and is very overworked, so we ship our WAL files to a warm standby server. Every day or two I log in to the warm standby and run the following commands:

1. xfs_freeze -f /srv (this is where the entire postgres tree is mounted; no funny business with symlinks)
2. * take file system snapshot, wait about 30 seconds for the snapshot to start running *
3. xfs_freeze -u /srv

I don't exactly know how the snapshotting works (it's an Amazon EBS volume), so I don't know if I should wait until the snapshotting is 100% complete before I unfreeze the volume. This whole process can easily take 30 minutes to an hour, so I am also concerned that if I wait that long to unfreeze the volume I may cause an excessive backlog of WAL files that are not getting applied to the warm spare.

Now, when I try to restore one of these snapshots, I do the following:

1. create a new share from the snapshot
2. mount the new share in a new Linux instance
3. start postgres, verify that it's running and is still in recovery mode
4. touch my "go live" file and bring the database up

I've done this successfully in the past. Today, however, I'm running into this problem when I try to run some queries:

ERROR: could not access status of transaction 237546265
DETAIL: Could not open file "pg_clog/00E2": No such file or directory.

I tried creating the missing files last night using dd, and I was able to get the database to a point where I could run queries against it; however, it was missing data that should have been there. I tried again this morning with a different snapshot and I've run into the same problem again. What am I doing wrong? FYI, we're running 8.3.7.

Thanks,
Bryan
Re: [GENERAL] How to store text files in the postgresql?
On Friday 12 June 2009, Greg Stark gsst...@mit.edu wrote:

> Also, it makes backups a pain, since it's a lot easier to back up a file
> system than a database. But that gets back to whether you need
> transactional guarantees: the reason it's a pain to back up a database is
> precisely because it needs to make those guarantees.

It's far easier to back up and restore a database than millions of small files. Small files = random disk I/O. The real downside is the CPU time involved in storing and retrieving the files. If that isn't a show-stopper, then putting them in the database makes all kinds of sense.

-- WARNING: Do not look into laser with remaining eye.
Re: [GENERAL] Having trouble restoring our backups
On Friday 12 June 2009, Bryan Murphy bmurphy1...@gmail.com wrote:

> What am I doing wrong? FYI, we're running 8.3.7.

See the documentation on PITR backups for how to do this correctly.

-- WARNING: Do not look into laser with remaining eye.
Re: [GENERAL] How to store text files in the postgresql?
> It's far easier to back up and restore a database than millions of small
> files. Small files = random disk I/O. The real downside is the CPU time
> involved in storing and retrieving the files. If it isn't a show-stopper,
> then putting them in the database makes all kinds of sense.

On the contrary, I think backup is one of the primary reasons to move files *out* of the database. Decent incremental backup software greatly reduces the I/O time needed for backup of files as compared to a pg_dump. (Of course this assumes the managed files are long-lived.)

-- Scott Ribe scott_r...@killerbytes.com http://www.killerbytes.com/ (303) 722-0567 voice
[GENERAL] WITH RECURSIVE clause -- all full and partial paths
Dear list,

In reference to the message below, posted on the 'pgsql-hackers' list regarding 'iterative' queries, could anyone help me write the queries that return all full and all partial paths from the root?

Sincerely,
Aryé.

-- http://archives.postgresql.org/pgsql-hackers/2008-02/msg00642.php

CREATE TABLE department (
    id INT PRIMARY KEY,
    parent_department INT REFERENCES department,
    name TEXT
);

INSERT INTO department VALUES (0, NULL, 'ROOT');
INSERT INTO department VALUES (1, 0, 'A');
INSERT INTO department VALUES (2, 1, 'B');
INSERT INTO department VALUES (3, 2, 'C');
INSERT INTO department VALUES (4, 2, 'D');
INSERT INTO department VALUES (5, 0, 'E');
INSERT INTO department VALUES (6, 3, 'F');
INSERT INTO department VALUES (7, 5, 'G');

--select * from department
--delete from department

This will represent a tree structure of an organization:

ROOT -+- A --- B -+- C --- F
      |           |
      |           +- D
      |
      +- E --- G

-- If you want to extract all departments under A, you could use a
-- recursive query:

WITH RECURSIVE subdepartment AS (
    -- non-recursive term
    SELECT * FROM department WHERE name = 'A'
  UNION ALL
    -- recursive term referring to subdepartment
    SELECT d.*
    FROM department AS d, subdepartment AS sd
    --WHERE d.id = sd.parent_department
    WHERE d.parent_department = sd.id
)
SELECT * FROM subdepartment;

This will return A, B, C, D and F.
Re: [GENERAL] How to store text files in the postgresql?
On Friday 12 June 2009, Scott Ribe scott_r...@killerbytes.com wrote:

>> It's far easier to back up and restore a database than millions of small
>> files. [...]
>
> On the contrary, I think backup is one of the primary reasons to move
> files *out* of the database. Decent incremental backup software greatly
> reduces the I/O time needed for backup of files as compared to a pg_dump.
> (Of course this assumes the managed files are long-lived.)

We'll have to just disagree on that. You still have to do level-0 backups occasionally. Scanning a directory tree of millions of files to decide what to back up for an incremental can take forever, and restoring millions of small files can take days.

But I concede there are good arguments for the filesystem approach; it's certainly not a one-size-fits-all problem. If your files are mostly bigger than a few MB each, then the filesystem approach is probably better. And of course big database tables get unwieldy too, for indexing and vacuuming - I wouldn't necessarily put most files into the large object interface, just the ones too big to want to fetch all in one piece.

-- WARNING: Do not look into laser with remaining eye.
Re: [GENERAL] Having trouble restoring our backups
On Fri, Jun 12, 2009 at 10:48 AM, Alan Hodgson ahodg...@simkin.ca wrote:

> On Friday 12 June 2009, Bryan Murphy bmurphy1...@gmail.com wrote:
>> What am I doing wrong? FYI, we're running 8.3.7.
>
> See the documentation on PITR backups for how to do this correctly.

I've read through the PITR documentation many times. I do not see anything that sheds light on what I'm doing wrong, and I've restored older backups successfully many times in the past few months using this technique. I have no explanation for why all of a sudden my last few backups are not restoring properly; we've not changed anything in our database setup recently.

I'm currently creating a full backup of our primary database and will build a second warm spare from that, but the additional pressure this puts on our system is not acceptable as a long-term backup solution.

Bryan
Re: [GENERAL] Having trouble restoring our backups
On Fri, Jun 12, 2009 at 11:08 AM, Bryan Murphy bmurphy1...@gmail.com wrote:

> I've read through the PITR documentation many times. I do not see
> anything that sheds light on what I'm doing wrong, and I've restored
> older backups successfully many times in the past few months using this
> technique. [...] I'm currently creating a full backup of our primary
> database and will build a second warm spare from that, but the additional
> pressure this puts on our system is not acceptable as a long-term backup
> solution.

FYI, for future reference for anybody else who runs into this problem: it appears we somehow lost the pg_clog files the last time we took a full snapshot of our primary database. Our PITR spare was happily recovering WAL files, but when I tried to bring it up it was missing the pg_clog files, and it's literally been weeks since I last tried to do this (stupid on my part).

We appear to have repaired our PITR-based backup by copying the missing pg_clog files from our production database, which thankfully still had them. I do not know how they got dropped from the last snapshot we took, but we'll be looking into our hot-spare building process to see what we can do to prevent this from happening again.

Thanks,
Bryan
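For anyone repeating this kind of setup, the usual way to make a filesystem snapshot usable as a base backup is to bracket it with the online-backup functions; a sketch for 8.3 (the label string is arbitrary):

```sql
-- On the primary, before taking the filesystem snapshot:
SELECT pg_start_backup('ebs-snapshot');

-- ... take the snapshot; it must cover the WHOLE data directory,
--     including pg_clog/ and pg_subtrans/, not just the table files ...

-- Afterwards:
SELECT pg_stop_backup();
```

This doesn't by itself prevent a snapshot tool from silently dropping directories, but it does ensure the restored copy knows it came from an online backup and replays WAL from the right starting point.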
[GENERAL] Behavior of NpgsqlDataReader in NpgSql 1 and NpgSql 2
Hello everyone,

I have a VB 2005 application with Postgres 8.2 as the DB. I was trying to move from PG 8.2 to 8.3.7; the installation went well. Once I tried running my application I got some errors. After doing some research I realized that:

- NpgsqlDataReader in Npgsql 1 was capable of retaining the records even after the connection was closed, but that is not happening with Npgsql 2.

After a bit of googling I was not able to find any info on it. Can anyone please tell me whether that is the case, or do I have to make some modifications to the code? If that is the case, is there any other object which can be used the way I was using NpgsqlDataReader in Npgsql 1?

Regards,
Nishkarsh

-- View this message in context: http://www.nabble.com/Behavior-of-NpgsqlDataReader-in-NpgSql-1-and-NpgSql-2-tp23995183p23995183.html Sent from the PostgreSQL - general mailing list archive at Nabble.com.
Re: [GENERAL] Behavior of NpgsqlDataReader in NpgSql 1 and NpgSql 2
On Fri, Jun 12, 2009 at 07:35, Nishkarsh nishkars...@rediffmail.com wrote:

> Hello every one,

Hello, Nishkarsh!

> I have a VB 2005 application with Postgres 8.2 as the DB. I was trying to
> move from PG 8.2 to 8.3.7; the installation went well. Once I tried
> running my application I got some errors. After doing some research I
> realized that NpgsqlDataReader in Npgsql 1 was capable of retaining the
> records even after the connection was closed, but that is not happening
> with Npgsql 2. [...]

Thanks to Google Alerts, I received a mail about your post here with the answer: http://forums.devshed.com/postgresql-help-21/behavior-of-npgsqldatareader-in-npgsql-1-and-npgsql-2t-617614.html

I'm quoting it here for historical purposes. Post from user Pabloj:

> Not an expert on this but, as per its site:
>
> "Improved performance for large resultsets: different from Npgsql1,
> Npgsql2 doesn't read all table contents before returning control to user
> code. This makes Npgsql much more memory efficient when dealing with
> large tables. Thanks Jon Hanna for this. Check the User Manual for more
> info about that, and about how to revert to the old behavior of loading
> all data if you need it."
>
> Which might be related to the problem you're experiencing. Check the user
> manual under "Preload Reader" (the manual is here).
>
> Note that, from the same site: "... Npgsql2 had a lot of improvements
> which represent a much more stable product, more bugs fixed and many
> implemented methods. It is strongly advised that current users update
> from Npgsql1 to Npgsql2."

I hope it helps.

-- Regards, Francisco Figueiredo Jr.
Npgsql Lead Developer http://www.npgsql.org http://fxjr.blogspot.com http://twitter.com/franciscojunior Sent from Brasilia, DF, Brazil
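For reference, the old buffer-everything behaviour can reportedly be restored through a connection-string option; this is a sketch from memory (check the Npgsql 2 user manual for the exact spelling, and treat the server, database, and credential values as placeholders):

```
Server=127.0.0.1;Port=5432;Database=mydb;User Id=myuser;Password=mypass;Preload Reader=true;
```

With the option enabled, the reader loads the whole result set up front, as Npgsql 1 did, at the cost of the memory savings the quoted release note describes.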
Re: [GENERAL] search for partial dates
On Fri, Jun 12, 2009 at 12:47:26AM +0200, Leif B. Kristensen wrote:

> CREATE OR REPLACE FUNCTION date2text(DATE) RETURNS TEXT AS $$
> -- removes hyphens from a regular date
> SELECT SUBSTR(TEXT($1),1,4) || SUBSTR(TEXT($1),6,2) || SUBSTR(TEXT($1),9,2)
> $$ LANGUAGE sql STABLE;

Why not use the to_char function [1]:

SELECT to_char($1, 'YYYYMMDD');

This is better because TEXT(dateval) doesn't have to give a string back in the form YYYY-MM-DD; it just does by default. Readability also seems to improve when using to_char.

-- Sam http://samason.me.uk/

[1] http://www.postgresql.org/docs/current/static/functions-formatting.html
Re: [GENERAL] WITH RECURSIVE clause -- all full and partial paths
In article aaf543e90906120856r5219cf9cv7f13ba0d37494...@mail.gmail.com, aryoo howar...@gmail.com writes:

> In reference to the message below posted on the 'pgsql-hackers' list
> regarding 'iterative' queries, could anyone help me write the queries
> that return all full and all partial paths from the root?

Probably you want to use the following query:

WITH RECURSIVE subdepartment AS (
    SELECT id, parent_department, name AS path
    FROM department
    WHERE name = 'A'
  UNION ALL
    SELECT d.id, d.parent_department, sd.path || '.' || d.name
    FROM department d
    JOIN subdepartment sd ON sd.id = d.parent_department
)
SELECT id, path FROM subdepartment;

This returns:

 id |  path
----+---------
  1 | A
  2 | A.B
  3 | A.B.C
  4 | A.B.D
  6 | A.B.C.F
(5 rows)
Re: [GENERAL] search for partial dates
On Fri, Jun 12, 2009 at 7:58 AM, James B. Byrne byrn...@harte-lyne.ca wrote:

> Actually, I am thinking that perhaps this is better accomplished by
> parsing the data in the application and generating a date range that I
> then pass as parameters to a PG BETWEEN condition [...] I believe that
> this construction should work and also make use of the index:
>
> SELECT * WHERE effective_from BETWEEN SD AND ED
>
> Is my approach correct?

Yeah, if you're just looking at a where clause, BETWEEN is fine, and

where tsfield >= '2008-07-09 00:00:00' and tsfield < '2008-07-10 00:00:00'

is even easier to code up; with the half-open form you won't miss the rare time with timestamp precision of '2008-07-09 23:59:59.456204' or whatnot. The date_trunc and custom trunc functions come in handy when you want to group by time increments like 5 minutes etc.
[GENERAL] String Manipulation
Hi, I want to calculate a new field I added to a table but I'm not sure how to do it. This will be a copy of another field with any non-numeric characters stripped off the end and padded with spaces. This is what I was trying to do:

Update Buildings SET B_LOT_SORT=lpad(substr(lot,1,??),7)

Instead of ?? I need to know the position of the last numeric character. Any suggestions would be appreciated. Christine Penner Ingenious Software 250-352-9495 christ...@ingenioussoftware.com
Re: [GENERAL] String Manipulation
On Fri, Jun 12, 2009 at 12:36:27PM -0700, Christine Penner wrote: I want to calculate a new field I added to a table but I'm not sure how to do it. This will be a copy of another field with any non numeric characters stripped off the end and padded with spaces. This is what I was trying to do Update Buildings SET B_LOT_SORT=lpad(substr(lot,1,??),7) instead of ?? I need to know the position of the last numeric character.

I'd personally use a regular expression, much easier:

UPDATE buildings SET b_lot_sort = lpad(substring(lot,'^[0-9]+'),7);

If this is so that you can sort things based on their numeric order, why not just convert it to an integer rather than doing any padding?

-- Sam http://samason.me.uk/
Re: [GENERAL] String Manipulation
Sam, The problem with making it a numeric field is that I have seen things like A123, #123a or 23-233. This is only here to make most sorting work better, not perfect. It all depends on how they enter the data. Won't the different formats make it harder to convert to a number? I tried your suggestion and haven't had any luck. For a quick test I did this:

select b_lot_or_st_no, substring('1a','^[0-9]+') as TEST from F_BUILDINGS

With this I tried using b_lot_or_st_no instead of '1a'; I also replaced the ',' with 'for' like they do in the manual. I looked through the manual but I'm still stuck. Christine

At 03:05 PM 12/06/2009, you wrote: On Fri, Jun 12, 2009 at 12:36:27PM -0700, Christine Penner wrote: I want to calculate a new field I added to a table but I'm not sure how to do it. This will be a copy of another field with any non numeric characters stripped off the end and padded with spaces. This is what I was trying to do Update Buildings SET B_LOT_SORT=lpad(substr(lot,1,??),7) instead of ?? I need to know the position of the last numeric character. I'd personally use a regular expression, much easier: UPDATE buildings SET b_lot_sort = lpad(substring(lot,'^[0-9]+'),7); If this is so that you can sort things based on their numeric order, why not just convert it to an integer rather than doing any padding? -- Sam http://samason.me.uk/
Re: [GENERAL] WITH RECURSIVE clause -- all full and partial paths
On Fri, Jun 12, 2009 at 10:14:21PM +0200, Harald Fuchs wrote: In article aaf543e90906120856r5219cf9cv7f13ba0d37494...@mail.gmail.com, aryoo howar...@gmail.com writes: Dear list, In reference to the message below posted on the 'pgsql-hackers' list regarding 'iterative' queries, could anyone help me write the queries that return all full and all partial paths from the root? Probably you want to use the following query: [query elided]

This is much easier as:

WITH RECURSIVE subdepartment AS (
  SELECT id, parent_department, ARRAY[name] AS path
  FROM department
  WHERE name = 'A'
  UNION ALL
  SELECT d.id, d.parent_department, sd.path || d.name
  FROM department d
  JOIN subdepartment sd ON (
    sd.id = d.parent_department
    AND d.name <> ALL (sd.path) /* Make sure there are no cycles */
  )
)
SELECT id, path FROM subdepartment;

Cheers, David. -- David Fetter da...@fetter.org http://fetter.org/ Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: david.fet...@gmail.com Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate
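Both recursive queries in this thread return every partial path from the root; to get only the full paths (those ending at a leaf), one sketch, assuming the same department(id, parent_department, name) table, is to keep just the rows that no other row names as a parent:

```sql
WITH RECURSIVE subdepartment AS (
  SELECT id, parent_department, ARRAY[name] AS path
  FROM department
  WHERE name = 'A'
  UNION ALL
  SELECT d.id, d.parent_department, sd.path || d.name
  FROM department d
  JOIN subdepartment sd ON sd.id = d.parent_department
)
SELECT id, path
FROM subdepartment sd
WHERE NOT EXISTS (  -- keep only leaves: no department has this row as parent
  SELECT 1 FROM department c WHERE c.parent_department = sd.id
);
```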
Re: [GENERAL] String Manipulation
On Fri, Jun 12, 2009 at 03:35:44PM -0700, Christine Penner wrote: The problem with making it a numeric field is that I have seen things like A123, #123a or 23-233. This is only here to make most sorting work better, not perfect. It all depends on how they enter the data. Won't the different formats make it harder to convert to a number?

The first thing is to define what you want it to do; pick some values and define what the output should be and go from there. If you've got, say, A123, #125a and 12-7 and you want them in that order then I'd strip out any non-numeric characters, convert it to a number and then sort on that. regexp_replace is your friend here.

I tried your suggestion and haven't had any luck. For a quick test I did this: select b_lot_or_st_no, substring('1a','^[0-9]+') as TEST from F_BUILDINGS

What do you get back for simple things like:

SELECT substring('1a','^[0-9]+');

I'd expect you to get '1' back out. If you're not getting this then you'll need to say which version of PG you're using, as functions like this get added with each major version. Most useful docs for you are in: http://www.postgresql.org/docs/current/static/functions-string.html You can get to older releases quickly by replacing current with things like 8.2 and 7.4.

-- Sam http://samason.me.uk/
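The regexp_replace idea mentioned above might look like this; the exact expression is an assumption, not something posted in the thread:

```sql
-- Order mixed lot numbers such as 'A123', '#125a' and '12-7' by their digits:
SELECT lot
FROM buildings
ORDER BY NULLIF(regexp_replace(lot, '[^0-9]', '', 'g'), '')::integer;
-- regexp_replace with the 'g' flag strips every non-digit, NULLIF guards
-- against values containing no digits at all (NULLs sort last by default),
-- and the cast yields a numeric rather than textual ordering.
```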
[GENERAL] accessing anyarray elements
I'd like to be able to access individual elements of anyarray, treating them as type anyelement to take advantage of the polymorphism. Using pg_stats.histogram_bounds as a convenient example of an anyelement array, here's an example of the issue I'm running into.

test_anyarray=# select version();
 version
---------
 PostgreSQL 8.4beta2 on i386-apple-darwin9.6.0, compiled by GCC i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5490), 32-bit
(1 row)

The histogram_bounds for pg_catalog.pg_type.typlen:

test_anyarray=# SELECT histogram_bounds FROM pg_stats WHERE (schemaname,tablename,attname) = ('pg_catalog','pg_type','typlen');
 histogram_bounds
------------------
 {-2,-2,1,2,6,64}
(1 row)

Checking to see if functions treat the histogram_bounds anyarray as an array:

test_anyarray=# SELECT histogram_bounds, array_upper(histogram_bounds, 1) FROM pg_stats WHERE (schemaname,tablename,attname) = ('pg_catalog','pg_type','typlen');
 histogram_bounds | array_upper
------------------+-------------
 {-2,-2,1,2,6,64} |           6
(1 row)

Trying to access the first element of histogram_bounds:

test_anyarray=# SELECT histogram_bounds, histogram_bounds[1] FROM pg_stats WHERE (schemaname,tablename,attname) = ('pg_catalog','pg_type','typlen');
ERROR:  cannot subscript type anyarray because it is not an array

Now, by casting through text to a specific array type, it works:

test_anyarray=# SELECT histogram_bounds, (histogram_bounds::text::int[])[1] FROM pg_stats WHERE (schemaname,tablename,attname) = ('pg_catalog','pg_type','typlen');
 histogram_bounds | histogram_bounds
------------------+------------------
 {-2,-2,1,2,6,64} |               -2
(1 row)

However, this casting defeats the purpose of using a polymorphic type.
It appears this issue has come up before: http://archives.postgresql.org/message-id/20070801020230.gl15...@alvh.no-ip.org and tangentially here: http://archives.postgresql.org/message-id/14653.1229215...@sss.pgh.pa.us In this particular case, I'm hoping to get at the histogram array to look at the data distribution in a general way:

CREATE SCHEMA utility;

CREATE OR REPLACE FUNCTION utility.histogram_bounds(
    in_histogram_bounds anyarray,
    OUT bucket_index integer,
    OUT lower_bound anyelement,
    OUT upper_bound anyelement,
    OUT width anyelement,
    OUT cumulative_width anyelement)
RETURNS SETOF RECORD
STRICT
LANGUAGE plpgsql AS $body$
DECLARE
    v_idx INTEGER;
BEGIN
    cumulative_width := 0;
    bucket_index := 0;
    FOR v_idx IN 1..array_upper(in_histogram_bounds,1) LOOP
        lower_bound := upper_bound;
        upper_bound := in_histogram_bounds[v_idx];
        CONTINUE WHEN v_idx = 1;
        bucket_index := bucket_index + 1;
        width := upper_bound - lower_bound;
        cumulative_width := cumulative_width + width;
        RETURN NEXT;
    END LOOP;
    RETURN;
END
$body$;

Any ideas on how I might implement this? Would it require a change in the backend? Cheers, Michael Glaesemann grzm seespotcode net
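One non-polymorphic workaround (a sketch only, not a resolution of the anyarray limitation) is to accept the array rendered as text, cast it to a concrete array type inside the function, and compute the widths there; the helper below is hypothetical and assumes the histogram is over a numeric-compatible type:

```sql
CREATE OR REPLACE FUNCTION utility.numeric_bucket_widths(text)
RETURNS SETOF numeric LANGUAGE sql STRICT AS $$
  -- $1 is the anyarray cast to text by the caller, e.g. '{-2,-2,1,2,6,64}'.
  SELECT ($1::numeric[])[i] - ($1::numeric[])[i - 1]
  FROM generate_series(2, array_upper($1::numeric[], 1)) AS i
$$;

-- Usage: pass histogram_bounds through text, as in the cast shown earlier:
SELECT utility.numeric_bucket_widths(histogram_bounds::text)
FROM pg_stats
WHERE (schemaname, tablename, attname) = ('pg_catalog', 'pg_type', 'typlen');
```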
Re: [GENERAL] String Manipulation
Sam, I get nothing. I just updated recently but the only version number I can find is 8.3. I know it's at least 8.3.4 but should be more. Christine

At 03:58 PM 12/06/2009, you wrote: On Fri, Jun 12, 2009 at 03:35:44PM -0700, Christine Penner wrote: The problem with making it a numeric field is that I have seen things like A123, #123a or 23-233. This is only here to make most sorting work better, not perfect. It all depends on how they enter the data. Wont the different formats make it harder to convert to a number? The first thing is to define what you want it to do; pick some values and define what the output should be and go from there. If you've got say, A123, #125a and 12-7 and you want them in that order then I'd strip out any non-numeric digits, convert it to a number and then sort on that. regexp_replace is your friend here. I tried your suggestion and haven't had any luck. For a quick test I did this: select b_lot_or_st_no, substring('1a','^[0-9]+') as TEST from F_BUILDINGS What do you get back for simple things like: SELECT substring('1a','^[0-9]+'); I'd expect you to get '1' back out. If you're not getting this out then you'll need to say which version of PG you're using as functions like this get added with each major version. Most useful docs for you are in: http://www.postgresql.org/docs/current/static/functions-string.html You can get to older releases quickly by replacing current with things like 8.2 and 7.4. -- Sam http://samason.me.uk/
Re: [GENERAL] String Manipulation
On Fri, Jun 12, 2009 at 04:07:11PM -0700, Christine Penner wrote: I get nothing. I just updated recently but the only version number I can find is 8.3. I know it's at least 8.3.4 but should be more.

OK, the main thing is that you're running a copy of PG from the 8.3 series. I've just tried it on a reasonably recent 8.3.7 and an old 8.3.0 I have and I get what I'd expect back (i.e. a string containing the number '1'). I'd start to question things like: are you talking to the database you think you are, which client are you using, and other details like that. If you can connect through psql it should tell you the server version, and if you could paste a complete session that would help. For example, I get:

s...@willow:~$ psql
Welcome to psql 8.3.7, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

sam=> SELECT substring('1a','^[0-9]+');
 substring
-----------
 1
(1 row)

sam=>

-- Sam http://samason.me.uk/
[GENERAL] Maintenance database SQL_ASCII
After a new pgsql installation the postgres maintenance database has an encoding of SQL_ASCII. pgAdmin III gave me a warning about that, and I may want to create users or databases that are not restricted to 7-bit ASCII. I was going to back up and recreate this database, but it can't be dropped. I guess I could back up all of the other databases and start over from initdb, but that would be a hassle. Do I have any other options? The strange thing is that I have two very similarly configured Ubuntu machines and the other created the postgres database with UTF8 encoding. The locale on both machines is the same:

$ locale
LANG=en_US.UTF-8
LC_CTYPE=en_US.UTF-8
LC_NUMERIC=en_US.UTF-8
LC_TIME=en_US.UTF-8
LC_COLLATE=en_US.UTF-8
LC_MONETARY=en_US.UTF-8
LC_MESSAGES=en_US.UTF-8
LC_PAPER=en_US.UTF-8
LC_NAME=en_US.UTF-8
LC_ADDRESS=en_US.UTF-8
LC_TELEPHONE=en_US.UTF-8
LC_MEASUREMENT=en_US.UTF-8
LC_IDENTIFICATION=en_US.UTF-8
LC_ALL=

-Brad
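To check which databases got which encoding, the catalogs can be queried directly (a standard catalog query, not from the thread):

```sql
-- template0/template1 and the postgres database get whatever encoding
-- initdb derived from the environment at cluster creation time.
SELECT datname, pg_encoding_to_char(encoding) AS encoding
FROM pg_database
ORDER BY datname;
```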
Re: [GENERAL] Very slow searching in a table with more than 10 millions recovered records from a backup file...
Grzegorz, Thank you very much. I will do that. I have another question: if I do the following steps, does it hurt pgsql?

step 1. stop the pgsql in the old version of the application; the whole application is installed in c:/xbop and pgsql is located in c:/xbop/pgsql;
step 2. rename c:/xbop to c:/xbop_old;
step 3. install the new version in c:/xbop;
step 4. copy the pgsql in c:/xbop_old/pgsql into c:/xbop.

Since pgsql's backup and restore will take hours for the big table, if the above steps will not hurt the performance of pgsql, that might be a good way for me. Any suggestions? ouyang

2009/6/12 Grzegorz Jaśkiewicz gryz...@gmail.com: This is because you missed vacuum analyze in those steps, that should be done right after restore. -- GJ
Re: [GENERAL] Very slow searching in a table with more than 10 millions recovered records from a backup file...
On Sat, 2009-06-13 at 09:31 +0800, zxo102 ouyang wrote: I have another question: if I do the following steps, does it hurt pgsql? step 1. stop the pgsql in the old version of the application; step 2. rename c:/xbop to c:/xbop_old; step 3. install the new version in c:/xbop; step 4. copy the pgsql in c:/xbop_old/pgsql into c:/xbop.

You should be able to use the new install's PostgreSQL binaries with the old data directory so long as the old and new versions of PostgreSQL are the same major version, and have been built with compatible options. If you're sure the old and new versions of PostgreSQL are compatible, stop the old version of PostgreSQL, move the data directory aside, install the program upgrade, then *COPY* (not move, just in case) the data directory back.

You *MUST* dump and reload if you are going to upgrade from, say, 8.2 to 8.3, or 8.3 to 8.4. It is OK to upgrade without a dump and reload from, say, 8.3.1 to 8.3.3.

In addition to making sure the versions are compatible, you must also make sure the new copy of PostgreSQL was built with compatible options. Both must have the same setting for integer datetimes, both must be built with the same block size, etc. They should also use the same versions of any procedural languages like PL/Perl or PL/Python.

If you want to be safe, dump and reload. If you're shipping updates to users/customers who may not upgrade reliably with every version and don't know how to make these decisions for themselves, dump and reload. Remember, if the major version changes (e.g. upgrading from 8.2 to 8.3) or the PostgreSQL build settings have changed, you MUST dump and reload.
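The compatibility checks described above can mostly be made from SQL before copying a data directory; a sketch, to be run against both the old and new installs and compared:

```sql
SELECT version();        -- full version string; major versions must match
SHOW server_version;     -- e.g. 8.3.7
SHOW integer_datetimes;  -- build-time setting; must agree between builds
SHOW block_size;         -- likewise must agree
```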
-- Craig Ringer
Re: [GENERAL] How to store text files in the postgresql?
On Fri, 2009-06-12 at 19:53 +1000, Yaroslav Tykhiy wrote: DimitryASuplatov wrote: My task is to store a lot (10^5) of small (< 10 MB) text files in the database with the ability to restore them back to the hard drive on demand. I cannot but ask the community a related question here: Can such design, that is, storing quite large objects of varying size in a PostgreSQL database, be a good idea in the first place? I used to believe that what RDBMS were really good at was storing a huge number of relations, each of a small and mostly uniform size if expressed in bytes; but today people tend to put big things, e.g., email or files, in relational databases because it's convenient to them. That's absolutely normal as typical data objects we have to deal with keep growing in size, but how well can databases stand the pressure? And can't it still be better to store large things as plain files and put just their names in the database? File systems were designed for such kind of job after all, unlike RDBMS.

It depends a great deal on what you need. Using the DB allows you to access that data using the same tools, methods, connections, and security credentials you use for your other data. It also allows you to manage it in the same transactional environment, and verify its consistency. Using the file system can be faster and offers a wider variety of methods for accessing and manipulating the data. It can be easier to back up efficiently (differential/incremental backups etc), may take up less space, and more.

To me, a nearly ideal option would be a file system that supported transactional operations and two-phase commit. You could work with your data objects normally in the FS (at least for read access), but you could _ALSO_ modify them in tandem with the DB:

- Begin DB trans
- Begin FS trans
... do your work ...
- prepare db trans for commit
- prepare fs trans for commit
- commit db trans
- commit fs trans

A transaction manager could hide those from you, or the DB could take care of the FS 2PC as part of its own LOB and transaction management. In fact, I think that's how Microsoft have done it with transactional NTFS integration in MS SQL Server, which I must say sounds awfully nice.

-- Craig Ringer
Re: [GENERAL] How to store text files in the postgresql?
On Fri, 2009-06-12 at 09:07 -0700, Alan Hodgson wrote: On Friday 12 June 2009, Scott Ribe scott_r...@killerbytes.com wrote: It's far easier to backup and restore a database than millions of small files. Small files = random disk I/O.

That depends on how you're backing up. If you want to back a file system up database style, use the filesystem dump utility. fs dump tools have gone somewhat out of fashion lately, because of space use concerns, inconvenience of partial restores, cross-hardware/version compat issues, etc, but the result is actually rather similar to what you get when backing up a DB like Pg with a fs-level snapshot. If your dump tool supports incrementals, you also get results rather akin to PITR WAL logging. Personally, there's no way I'd back up a filesystem with dump utilities. I don't trust even dumpe2fs enough, the space requirements are prohibitive, and restores are nightmarish. I have similar problems (minus the trust issue) with backups of databases full of LOBs, though.

On the contrary, I think backup is one of the primary reasons to move files *out* of the database. Decent incremental backup software greatly reduces the I/O time needed for backup of files as compared to a pg dump. (Of course this assumes the managed files are long-lived.)

We'll have to just disagree on that. You still have to do level 0 backups occasionally. Scanning a directory tree of millions of files to decide what to back up for an incremental can take forever. In particular, if your incremental backup tool is smart enough to track deleted files the resource requirements can be astonishing. In addition to looking for new/altered files, the tool needs to test whether any previously backed-up file has since vanished, and it can't really even rely on directory modification times to exclude very static data from checking.
I use Bacula at work, and backing up my Cyrus mail spools is a very I/O-intensive and somewhat CPU-intensive operation even for incrementals, since the backup server and file daemon are exchanging file lists all the time, scanning the whole huge directory tree, etc. The advantage, though, is that the resulting daily backups are only about 1GB instead of 60-70GB. When you have to keep three full monthly backups plus daily incrementals for audit/history purposes, that matters. I can't see any way I could reasonably achieve a similar effect if I stored my mail in an RDBMS. Not, at least, and still have acceptable performance in the RDBMS.

-- Craig Ringer