Re: [GENERAL] [HACKERS] Avoiding io penalty when updating large objects
Tom Lane wrote:
> There is the ability to fetch chunks of a toasted value (if it was stored
> out-of-line but not compressed). There is no ability at the moment to update
> it by chunks. If Mark needs the latter then large objects are probably the
> best bet.
>
> I'm not sure what it'd take to support chunkwise update of toasted fields.
> Jan, any thoughts?

Ok. If there appears to be a sane path to implementing this, I may be able to contribute engineering effort to it. (I manage a group of engineers and could spare perhaps half a man-year towards this.) But I would like direction as to how you all think this should be done, or whether it is just a bad idea.

I can also go with the large object approach. I'll look into that.

Mark Dilger

---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] [HACKERS] Avoiding io penalty when updating large objects
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> On Tue, Jun 28, 2005 at 07:38:43PM -0700, Mark Dilger wrote:
>> If, for a given row, the value of c is, say, approximately 2^30 bytes
>> large, then I would expect it to be divided up into 8K chunks in an
>> external table, and I should be able to fetch individual chunks of that
>> object (by offset) rather than having to detoast the whole thing.

> I don't think you can do this with the TOAST mechanism. The problem is
> that there's no API which allows you to operate on only certain chunks
> of data.

There is the ability to fetch chunks of a toasted value (if it was stored out-of-line but not compressed). There is no ability at the moment to update it by chunks. If Mark needs the latter then large objects are probably the best bet.

I'm not sure what it'd take to support chunkwise update of toasted fields. Jan, any thoughts?

regards, tom lane
Re: [GENERAL] [HACKERS] Avoiding io penalty when updating large objects
On Tue, Jun 28, 2005 at 07:38:43PM -0700, Mark Dilger wrote:
> I would like to write a postgres extension type which represents a btree of
> data and allows me to access and modify elements within that logical btree.
> Assume the type is named btree_extension, and I have the table:
>
> CREATE TABLE example (
>     a TEXT,
>     b TEXT,
>     c BTREE_EXTENSION,
>     UNIQUE(a,b)
> );
>
> If, for a given row, the value of c is, say, approximately 2^30 bytes
> large, then I would expect it to be divided up into 8K chunks in an
> external table, and I should be able to fetch individual chunks of that
> object (by offset) rather than having to detoast the whole thing.

I don't think you can do this with the TOAST mechanism. The problem is that there's no API which allows you to operate on only certain chunks of data.

You can do it with large objects though -- those you create with lo_creat(). You can do lo_lseek(), lo_read() and lo_write() as you see fit. Of course, this allows you to change the LO by chunks.

-- Alvaro Herrera () "There is no man who does not aspire to plenitude, that is, the sum of experiences of which a man is capable"
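For reference, the client-side libpq calls Alvaro mentions are also exposed as server-side functions callable from plain SQL, which makes a quick psql experiment possible. A rough sketch of a chunkwise update follows -- the OID 16385 and descriptor 0 are made-up illustrations of what the calls might return, and large-object operations must run inside a transaction:

```sql
BEGIN;
-- Create a new large object; the server picks and returns its OID.
SELECT lo_creat(-1);

-- Suppose it returned OID 16385.  Open it read/write:
-- 393216 = INV_READ (0x40000) | INV_WRITE (0x20000).
SELECT lo_open(16385, 393216);       -- returns a descriptor, e.g. 0

-- Seek to the second 8K chunk (whence 0 = SEEK_SET) and overwrite it:
SELECT lo_lseek(0, 8192, 0);
SELECT lowrite(0, 'replacement chunk contents');

SELECT lo_close(0);
COMMIT;
```

Only the touched pages of the large-object storage are written, which is what makes this cheaper than rewriting a 2^30-byte TOASTed value.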
[GENERAL] Avoiding io penalty when updating large objects
I would like to write a postgres extension type which represents a btree of data and allows me to access and modify elements within that logical btree. Assume the type is named btree_extension, and I have the table:

CREATE TABLE example (
    a TEXT,
    b TEXT,
    c BTREE_EXTENSION,
    UNIQUE(a,b)
);

If, for a given row, the value of c is, say, approximately 2^30 bytes large, then I would expect it to be divided up into 8K chunks in an external table, and I should be able to fetch individual chunks of that object (by offset) rather than having to detoast the whole thing.

But what if I want to update a single chunk, or only a couple chunks? How can I go about loading chunks, modifying them, and writing them back to disk, without incurring the overhead of writing 2^30 bytes back out to disk? And if I can do this in a hand coded c function, what does the corresponding SQL statement look like to call the function? Is it an update statement? Also, is it possible that only the rows in the *external* table get marked as updated during my transaction, or will the row in the "example" table be marked as updated?

I expect this is not possible, but it would be really great if it were, and I haven't found a definitive "No, you can't do this" in the documentation yet.

The idea is to store the first and second level entries of a tree directly in columns "a" and "b", but then to store arbitrarily deep children in a btree type stored in column "c". It doesn't make sense to have a really wide table to represent the tree for multiple reasons, mostly involving data duplication in the leftward columns but also because you can't know ahead of time how wide to make the table.

I look forward to any useful responses.

Thanks, Mark Dilger
[GENERAL] Connection local variables?
Is there a way to create a variable specific to the current working connection? Like a connection context or some such? I'm trying to take a variable in a query, and allow it to be used by a rule. Thanks, Steve
Re: [GENERAL] automating backup ?
Zlatko Matic schrieb:
> Now I have pgpass.conf file in D:\Documents and Settings\Zlatko\Application Data\postgresql
> content of pgpass.conf is:
> localhost:*:MONITORINGZ:postgres:tralalala
> content of backup_script.bat is:
> cd D:\Program Files\PostgreSQL\8.0\bin
> pg_dumpall >D:\MONITORINGZ_DUMPALL -U postgres
> still prompts for password... What is wrong?

Perhaps it's like this: pg_dumpall wants to dump the whole database cluster (every database in your server), which is not only "MONITORINGZ" but the two templates, too. So pg_dumpall doesn't ask you for the password to your own database, but twice for the passwords of the two template DBs. In pgpass.conf write * instead of MONITORINGZ, or copy the line for template0 and template1. Or don't use pg_dumpall and use pg_dump instead, just for MONITORINGZ.

Maybe it's something else ... one never knows with those computers ... ;)
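Concretely, a pgpass.conf entry covering every database on that server would be the single line below (the password shown is just the one already posted in the thread). The fields are host:port:database:user:password, so the * in the database field makes the entry match template0 and template1 as well:

```
localhost:*:*:postgres:tralalala
```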
[GENERAL] Advice on merging two primary keys...
I've come into a situation where I will often need to merge two primary keys, with numerous foreign keys hanging off of them. For instance:

CREATE TABLE people (
    peopleid SERIAL PRIMARY KEY,
    firstname TEXT NOT NULL,
    lastname TEXT NOT NULL
);

CREATE TABLE users (
    username TEXT PRIMARY KEY,
    peopleid INT NOT NULL REFERENCES people ON UPDATE CASCADE ON DELETE RESTRICT,
    ...
);

CREATE TABLE results (
    peopleid INT NOT NULL REFERENCES people ON UPDATE CASCADE ON DELETE CASCADE,
    eventid INT ...
    score INT...
);

There are some other tables keyed by peopleid that are normally only populated by user-related peopleids. The site in question is a sports ranking site. Typically speaking, most "people" are not "users" and have their information populated from placement sheets. Some people will later create an account, and after in-real-life authentication the records need to be merged -- i.e. there will be records under both peopleids that need to be adjusted to a single value. While any update of either primary key will cascade to all relevant tables, such an update is disallowed for uniqueness reasons. Is there a good SQL-based method to accomplish this type of merging, or does this need application logic?

Eric
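One common SQL-level pattern is to repoint the child rows and then drop the duplicate parent. This is only a sketch: the ids 42 (duplicate) and 7 (keeper) are hypothetical, and it assumes results is unique on (peopleid, eventid) -- adjust the collision handling to the real constraints:

```sql
BEGIN;
-- Repoint child rows from the duplicate (42) to the keeper (7),
-- skipping rows that would collide with an existing (7, eventid) row:
UPDATE results SET peopleid = 7
 WHERE peopleid = 42
   AND eventid NOT IN (SELECT eventid FROM results WHERE peopleid = 7);

-- Whatever is left under 42 duplicates an event the keeper already has:
DELETE FROM results WHERE peopleid = 42;

UPDATE users SET peopleid = 7 WHERE peopleid = 42;

-- Finally drop the duplicate person; its child rows are gone or repointed:
DELETE FROM people WHERE peopleid = 42;
COMMIT;
```

Deciding which of two colliding score rows to keep is a policy question, so that part usually does end up in application logic.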
Re: [GENERAL] Building Latest (8.1)
On Tue, 2005-06-28 at 18:35 -0400, Tom Lane wrote: > Matt Miller <[EMAIL PROTECTED]> writes: > > I'm trying to test a feature I see in the 8.1devel documentation. I > > figured I'd checkout a cvs working copy. Following the doc I: > > > cvs -d :pserver:[EMAIL PROTECTED]:/projects/cvsroot login > > > but that just hangs and eventually times out. > > We were having some connectivity problems with that server last night, > I think. Does it work if you try now? Yes, it works. Thanks. I also had some firewall issues on my side.
Re: [GENERAL] Building Latest (8.1)
Matt Miller <[EMAIL PROTECTED]> writes: > I'm trying to test a feature I see in the 8.1devel documentation. I > figured I'd checkout a cvs working copy. Following the doc I: > cvs -d :pserver:[EMAIL PROTECTED]:/projects/cvsroot login > but that just hangs and eventually times out. We were having some connectivity problems with that server last night, I think. Does it work if you try now? regards, tom lane
Re: [GENERAL] Building Latest (8.1)
On Tue, 2005-06-28 at 17:57 +, Matt Miller wrote: > Following the doc I: > > cvs -d :pserver:[EMAIL PROTECTED]:/projects/cvsroot login > > but that just hangs and eventually times out. >... > What am I doing wrong? I had a problem on my end. CVS checkout is now working.
Re: [GENERAL] Populating huge tables each day
On Tue, Jun 28, 2005 at 10:36:58AM -0700, Dann Corbit wrote: > > Nope, truncate is undoubtedly faster. But it also means you would have > > downtime as you mentioned. If it were me, I'd probably make the > > trade-off of using a delete inside a transaction. > > For every record in a bulk loaded table? Sure. If the data's only being loaded once a day, it probably doesn't matter if that delete takes 10 minutes. > If it were that important that both servers be available all the time, I > would bulk load into a second table with the same shape and then rename > when completed. Interesting idea, though the problem is that AFAIK everything will block on the rename. If everything didn't block though, this might be a better way to do it, although it potentially complicates the code greatly (think about needing to add indexes, rebuild RI, etc.) -- Jim C. Nasby, Database Consultant [EMAIL PROTECTED] Give your computer some brain candy! www.distributed.net Team #1828 Windows: "Where do you want to go today?" Linux: "Where do you want to go tomorrow?" FreeBSD: "Are you guys coming, or what?"
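The load-into-a-second-table idea can be sketched as follows. Table and file names here are made up, indexes/constraints/permissions have to be recreated on the clone by hand, and (as noted above) the rename step will block concurrent queries while it runs:

```sql
BEGIN;
-- Empty clone of the live table (structure only, no data):
CREATE TABLE main_new AS SELECT * FROM main LIMIT 0;

-- Bulk load into the clone while "main" stays fully available:
COPY main_new FROM '/path/to/todays_load.dat';
-- ...recreate indexes and constraints on main_new here...

-- Swap the tables; only this short section takes blocking locks:
ALTER TABLE main RENAME TO main_old;
ALTER TABLE main_new RENAME TO main;
DROP TABLE main_old;
COMMIT;
```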
Re: [GENERAL] Generate a list of (days/hours) between two dates
Thanks for the replies! I've adopted the generate_series method, it's absolutely perfect. I didn't have the dates in a table yet, I needed a method to generate them from scratch, and this will do nicely. Thanks again, and hopefully I'll be able to contribute back someday!
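For hours rather than days, the same generate_series trick works with an interval; the start timestamp below is just an example value:

```sql
SELECT TIMESTAMP '2005-06-27 00:00' + (s.t || ' hour')::interval AS ts
  FROM generate_series(0, 23) AS s(t);
```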
Re: [GENERAL] automating backup ?
Now I have pgpass.conf file in D:\Documents and Settings\Zlatko\Application Data\postgresql

content of pgpass.conf is:
localhost:*:MONITORINGZ:postgres:tralalala

content of backup_script.bat is:
cd D:\Program Files\PostgreSQL\8.0\bin
pg_dumpall >D:\MONITORINGZ_DUMPALL -U postgres

still prompts for password... What is wrong?
[GENERAL] 8.1 Out parameter question
Will it be possible to use the out params to return more than one row? will the params act as a composite type so they can be used in a set returning function? Thanks, Tony
Re: [GENERAL] Populating huge tables each day
> From: Jim C. Nasby
> Sent: Monday, June 27, 2005 6:55 PM
>
> On Mon, Jun 27, 2005 at 01:05:42PM -0700, Dann Corbit wrote:
> > > On Mon, Jun 27, 2005 at 12:43:57PM -0700, Dann Corbit wrote:
> > > > I see a lot of problems with this idea.
> > > >
> > > > You mention that the database is supposed to be available 24x7.
> > > > While you are loading, the database table receiving data will not be
> > > > available. Therefore, you will have to have one server online (with
> > >
> > > Why do you think that's the case?
> >
> > He's doing a bulk load. I assume he will have to truncate the table and
> > load it with the copy command.
>
> Don't ass-u-me; he said he'd be deleting from the main table, not
> truncating.
>
> > Is there an alternative I do not know of that is equally fast?
>
> Nope, truncate is undoubtedly faster. But it also means you would have
> downtime as you mentioned. If it were me, I'd probably make the
> trade-off of using a delete inside a transaction.

For every record in a bulk loaded table?

If it were that important that both servers be available all the time, I would bulk load into a second table with the same shape and then rename when completed.

Be that as it may, I don't think that there is enough information yet to give good advice.
Re: [GENERAL] automating backup ?
1) Create the directory %APPDATA%\postgresql
   In my case it's C:\Documents and Settings\Administrator\Application Data\postgresql
2) Create the file %APPDATA%\postgresql\pgpass.conf
   I created it with Notepad
3) Put the necessary information into %APPDATA%\postgresql\pgpass.conf
   I put one line in mine - localhost:*:myDBname:myUserName:myPassword
4) Create the batch file to run your backup command
   In my case, it reads:
   "C:\Program Files\PostgreSQL\8.0\bin\psql" -h localhost -d myDBname -U myUserName -f Name-Of-File-With-Maintenance-Commands
   "C:\Program Files\PostgreSQL\8.0\bin\pg_dump" -f Name-Of-My-Dump-File -Fc -Z 9 -h localhost -U myUserName myDBname
5) Use the task scheduler to run your newly created batch file whenever you'd like it to run
   I actually run my batch file every night. My DB has no activity during the night, so I run my maintenance then. Name-Of-File-With-Maintenance-Commands contains SQL to refresh a materialized view and do a vacuum full analyze
[GENERAL] Building Latest (8.1)
I'm trying to test a feature I see in the 8.1devel documentation. I figured I'd checkout a cvs working copy. Following the doc I: cvs -d :pserver:[EMAIL PROTECTED]:/projects/cvsroot login but that just hangs and eventually times out. I looked at CVSup, but I found no binaries for CVSup at ftp.postgresql.org, and building CVSup seems to be a bit of a pain on Linux. I pulled a tarball from the "stable_snapshot" area of the ftp site, but that was just 8.0.3. What am I doing wrong? My main interest at this point is OUT parameters in PL/pgSQL. I read about this in the 8.1devel doc, so I thought I'd be able to test this feature in the "latest" source, wherever that is.
Re: [GENERAL] User authorization
* Wayne Johnson ([EMAIL PROTECTED]) wrote: > Is there a way to do this automatically? Say, to make all new objects > accessible (or even owned) by a group? Something like the sticky bit in > a directory on UNIX. 8.1 is expected to have Roles support in it, which merges users and groups into one space. Roles can log in, can have passwords, and can have members. Members of a role have the permissions (including owner-level permissions for objects owned by that role) of the role. Personally I'd really like to see a way to set the 'default owner' for a schema to help with exactly these issues. That wasn't included in the Roles support but I think is a natural follow-on to it since the schema could be owned by a Role which has members. Thanks, Stephen
Re: [GENERAL] automating backup ?
I would appreciate some example. Thanks.
Re: [GENERAL] Finding points within 50 miles
On Jun 27, 2005, at 8:42 PM, Bruno Wolff III wrote:
> Google is your friend. There are places that sell very well kept zipcode databases for under $50.
> The US government gives it away for free. Look for "tiger".

That is stale data.

Vivek Khera, Ph.D. +1-301-869-4449 x806
Re: [GENERAL] ERROR: "TZ"/"tz" not supported
Sergey Levchenko <[EMAIL PROTECTED]> writes: > How can I convert '00:00:05.601 SAMST Tue Jun 28 2005' (varchar type) > to timestamp with time zone? Just casting it would work, except that SAMST is not one of the time zone abbreviations known to Postgres. If you're desperate you could add an entry to the table in datetime.c. (Someday we really need to make that list configurable instead of hard-wired.) regards, tom lane
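Short of patching datetime.c, a workaround along the lines Tom describes is to substitute a numeric UTC offset for the unknown abbreviation before casting. The +05 below is only an assumption about what SAMST denotes; verify the real offset before relying on it:

```sql
SELECT replace('00:00:05.601 SAMST Tue Jun 28 2005',
               'SAMST', '+05')::timestamp with time zone;
```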
[GENERAL] User authorization
I'm using postgresql with Drupal. I have one small problem. I would like to keep the Drupal tables available only to a small group of users (apache, root and myself). I've set these users up in a group. The problem is that every time I want to add a new Drupal module to the database, I need to run the supplied script that creates the needed objects. I then have to manually scan the script to find which objects are created, and then grant access to them to my group. Is there a way to do this automatically? Say, to make all new objects accessible (or even owned) by a group? Something like the sticky bit in a directory on UNIX. Thanks for a great product. --- Wayne Johnson, | There are two kinds of people: Those 3943 Penn Ave. N. | who say to God, "Thy will be done," Minneapolis, MN 55412-1908 | and those to whom God says, "All right, (612) 522-7003 | then, have it your way." --C.S. Lewis
Re: [GENERAL] automating backup ?
"Zlatko Matic" <[EMAIL PROTECTED]> writes: > Hi. > I can't find pgpass.conf file. It should be in Application Data > subdirectory, but there is no PostgreSQL subdirectory in Application > Data directory (!?). I couldn't find pgpass.conf even by searching the > hard disk.. I'm pretty sure it's not created by the default install (it certainly isn't on Unix)--you need to create it yourself if you're going to use it. -Doug ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] automating backup ?
Zlatko Matic schrieb: I can't find pgpass.conf file. It should be in Application Data subdirectory, but there is no PostgreSQL subdirectory in Application Data directory (!?). I couldn't find pgpass.conf even by searching the hard disk.. you have to create the sub-dir and the file yourself
Re: [GENERAL] automating backup ?
That's because they don't exist. You need to create them. I did it on WinXP and it works fine.
Re: [GENERAL] performance for insert / update
this is my "schema" for the table with the "issue" ! # \d url_importance Table "public.url_importance" Column | Type | Modifiers ---+--+- url_id| bigint | default nextval('url_id_seq'::text) links_in | integer | default 0 links_out | integer | default 0 rank | double precision | default 0 Indexes: "ak_url_id_key_url_impo" unique, btree (url_id) Foreign-key constraints: "fk_url_impo_reference_url" FOREIGN KEY (url_id) REFERENCES url(url_id) ON UPDATE CASCADE ON DELETE CASCADE based on this table i calculate the "rank" for each page and the reupdate the table. on update it takes QUITE a lot time to make the update. basically i am updating just a number which sould not be that resource consuming ! is upgrading to 8.X a "solution" in this case ? p.s.: i am not updating anything except the rank column ! Tuesday, June 28, 2005, 4:01:02 PM, Bruno Wolff III wrote: > Do you have indexes on the foreign key fields in the referencing tables? > These are created by default and if you are updating the referenced tupples > a sequential search will be needed if there isn't an index. > Also of note is that there is recent a change to only do this if the > referenced > fields in the record are changed, but I think this is new for 8.1. That > will make things go a lot faster if you aren't updating the referenced > fields in your main table. -- Catalin Constantin Bounce Software http://www.bounce-software.com http://www.cabanova.ro ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] automating backup ?
Hi. I can't find pgpass.conf file. It should be in Application Data subdirectory, but there is no PostgreSQL subdirectory in Application Data directory (!?). I couldn't find pgpass.conf even by searching the hard disk.. Regards, Zlatko - Original Message - From: "Magnus Hagander" <[EMAIL PROTECTED]> To: "Zlatko Matic" <[EMAIL PROTECTED]>; "Andreas" <[EMAIL PROTECTED]>; Sent: Tuesday, June 28, 2005 10:16 AM Subject: Re: [GENERAL] automating backup ? Hello. I created a Windows XP schedule for backup, following your instruction. Now I have a .bat file with this script: cd D:\Program Files\PostgreSQL\8.0\bin pg_dumpall >D:\MYDATABASE_DUMPALL -U postgres pg_dumpall >D:\MYDATABASE_SHEMA -U postgres -s pg_dumpall >D:\MYDATABASE_GLOBALS -U postgres -g Well, it works OK, but prompts for password every time. Is there any way that I pass the superuser password (off course, in safe way) so that it works automatically without prompting for password ? Use a pgpass.conf file: http://www.postgresql.org/docs/8.0/static/libpq-pgpass.html (remember the file has to be in the profile of the account that executes the step. And be sure to protect it with file system ACLs so other users can't read it) //Magnus ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] performance for insert / update
On Mon, Jun 27, 2005 at 18:46:58 +0300, Catalin Constantin <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I have a pretty big database with about 200 000 rows.
> This is the main table. Also some other tables with FKs to this main
> table.
>
> I have to calculate some numbers for each entry at a certain amount of
> time and update the DB.
>
> I've noticed the update TAKES a very long time.

Do you have indexes on the foreign key fields in the referencing tables? These are not created by default, and if you are updating the referenced tuples a sequential search will be needed if there isn't an index.

Also of note is that there was recently a change to only perform this check if the referenced fields in the record are changed, but I think this is new for 8.1. That will make things go a lot faster if you aren't updating the referenced fields in your main table.
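The fix Bruno describes can be sketched like this, assuming a hypothetical referencing table named url_link whose url_id column references the main table (the names here are placeholders, not from the thread):

```sql
-- Index the referencing side of the foreign key so that updates and
-- deletes on the referenced table don't force sequential scans here:
CREATE INDEX url_link_url_id_idx ON url_link (url_id);
```

PostgreSQL creates an index automatically only on the referenced (unique) side of a foreign key; the referencing side must be indexed by hand.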
[GENERAL] ERROR: "TZ"/"tz" not supported
When I execute this query, I get an error message:

test=> SELECT to_timestamp('00:00:05.601 SAMST Tue Jun 28 2005', 'HH24:MI:SS.MS TZ Dy Mon DD ');
ERROR: "TZ"/"tz" not supported

How can I convert '00:00:05.601 SAMST Tue Jun 28 2005' (varchar type) to timestamp with time zone?
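One possible workaround, sketched here rather than taken from the thread: since to_timestamp() cannot parse time-zone abbreviations, strip the abbreviation from the string and attach the zone's offset explicitly. This assumes SAMST (Samara summer time) is UTC+05:00 and that the trailing year needs a YYYY pattern:

```sql
SELECT to_timestamp(replace('00:00:05.601 SAMST Tue Jun 28 2005', ' SAMST', ''),
                    'HH24:MI:SS.MS Dy Mon DD YYYY')::timestamp
       AT TIME ZONE INTERVAL '+05:00';
```

The cast to plain timestamp recovers the parsed wall-clock fields, and AT TIME ZONE with an interval then interprets them at the given UTC offset, yielding a timestamp with time zone.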
Re: [GENERAL] Generate a list of (days/hours) between two dates
Hi,

here is the same for minutes (for hours, just change the interval to 'hour' and the series count to 24):

select current_date || ' ' || mytimequery.mytime as dates
from (select (TIME '00:00:00' + myintervalquery.myinterval)::time as mytime
      from (select (s.t || ' minute')::interval as myinterval
            from generate_series(0,1439) as s(t)
           ) as myintervalquery
     ) as mytimequery;

Best regards

Hakan Kocaman
Software-Developer

digame.de GmbH
Richard-Byrd-Str. 4-8
50829 Köln
Tel.: +49 (0) 221 59 68 88 31
Fax: +49 (0) 221 59 68 88 98
Email: [EMAIL PROTECTED]

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Gnanavel Shanmugam
> Sent: Tuesday, June 28, 2005 7:45 AM
> To: [EMAIL PROTECTED]; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Generate a list of (days/hours) between two dates
>
> This might be helpful,
>
> select current_date + s.t as dates from generate_series(0,5) as s(t);
>    dates
> ------------
>  2005-06-28
>  2005-06-29
>  2005-06-30
>  2005-07-01
>  2005-07-02
>  2005-07-03
> (6 rows)
>
> with regards,
> S.Gnanavel
>
> > -----Original Message-----
> > From: [EMAIL PROTECTED]
> > Sent: 27 Jun 2005 10:30:38 -0700
> > To: pgsql-general@postgresql.org
> > Subject: [GENERAL] Generate a list of (days/hours) between two dates
> >
> > Hi guys,
> >
> > I've scoured the date/time functions in the docs as well as
> > google-grouped as many different combinations as I could think of to
> > figure this out without asking, but I'm having no luck.
> >
> > I'd like to make a query that would return a list of every trunc'd
> > TIMESTAMPs between two dates. For example, I'd want to get a list of
> > every date_trunc('hour',whatever) between 6-1-2005 and 6-10-2005 and
> > get a list that looks like:
> >
> > 6-1-2005 00:00:00
> > 6-1-2005 01:00:00
> > 6-1-2005 02:00:00
> > etc
> >
> > Conversely, I want to generate a list of every day between two dates,
> > like:
> >
> > 6-1-2005 00:00:00
> > 6-2-2005 00:00:00
> > 6-3-2005 00:00:00
> >
> > I know there's gotta be some way to do this in a SELECT function, but
> > I'm running into a brickwall. I'm trying to take some of my date
> > handling logic out of code and use the db engine so I can spend less
> > time developing/maintaining code when mature date handling already
> > exists in a resource I've already got loaded.
> >
> > Any thoughts?
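Following Hakan's note about switching the interval, the hour variant of his query would look like this (same pattern, just a different interval unit and series count):

```sql
select current_date || ' ' || mytimequery.mytime as dates
from (select (TIME '00:00:00' + myintervalquery.myinterval)::time as mytime
      from (select (s.t || ' hour')::interval as myinterval
            from generate_series(0,23) as s(t)
           ) as myintervalquery
     ) as mytimequery;
```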
Re: [GENERAL] Generate a list of (days/hours) between two dates
On Mon, Jun 27, 2005 at 10:30:38AM -0700, [EMAIL PROTECTED] wrote:
> I'd like to make a query that would return a list of every trunc'd
> TIMESTAMPs between two dates. For example, I'd want to get a list of
> every date_trunc('hour',whatever) between 6-1-2005 and 6-10-2005 and
> get a list that looks like:
>
> 6-1-2005 00:00:00
> 6-1-2005 01:00:00
> 6-1-2005 02:00:00

Something like this?

SELECT '2005-06-01 00:00:00'::timestamp + x * interval '1 hour'
FROM generate_series(0, 9 * 24) AS g(x);

Another possibility would be to write your own set-returning function that takes the start and end timestamps and a step value.

> Conversely, I want to generate a list of every day between two dates,
> like:
>
> 6-1-2005 00:00:00
> 6-2-2005 00:00:00
> 6-3-2005 00:00:00

SELECT '2005-06-01 00:00:00'::timestamp + x * interval '1 day'
FROM generate_series(0, 9) AS g(x);

generate_series() is a function in PostgreSQL 8.0 and later, but it's trivial to write in earlier versions using PL/pgSQL.

http://www.postgresql.org/docs/8.0/static/functions-srf.html

--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
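The set-returning function Michael mentions could be sketched like this in PL/pgSQL (the function name generate_timestamps is hypothetical; positional parameters are used so it also works on pre-8.0 releases, which lack named parameters and dollar quoting):

```sql
-- Returns every timestamp from $1 to $2 inclusive, stepping by $3.
CREATE FUNCTION generate_timestamps(timestamp, timestamp, interval)
RETURNS SETOF timestamp AS '
DECLARE
    ts timestamp := $1;
BEGIN
    WHILE ts <= $2 LOOP
        RETURN NEXT ts;
        ts := ts + $3;
    END LOOP;
    RETURN;
END;
' LANGUAGE plpgsql;
```

Usage would then mirror the generate_series() examples above, e.g. SELECT * FROM generate_timestamps('2005-06-01', '2005-06-10', '1 hour');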
[GENERAL] Performance Tuning Best Practices for 8
I've been using postgres off and on since about 1997/98. While I have my personal theories about tuning, I like to make sure I stay current. I am about to start a rather thorough, application-specific evaluation of PostgreSQL 8, running on a Linux server (most likely the newly released Debian stable). While I have been running 7.4.x for a while, I have not had much of an opportunity to really look at 8. Are there any significant differences or gotchas tuning 8 as compared to 7.4 or older versions? I was hoping those that have 8 in production environments might share some of their tuning experiences with me, or point me to useful web sites or books. Most of the websites and books I've seen on this topic are from 2003 and 2004, so I was hoping to find something a little more current.

Thanks, Greg
Re: [GENERAL] automating backup ?
> Hello.
> I created a Windows XP schedule for backup, following your
> instruction. Now I have a .bat file with this script:
>
> cd D:\Program Files\PostgreSQL\8.0\bin
> pg_dumpall >D:\MYDATABASE_DUMPALL -U postgres
> pg_dumpall >D:\MYDATABASE_SHEMA -U postgres -s
> pg_dumpall >D:\MYDATABASE_GLOBALS -U postgres -g
>
> Well, it works OK, but it prompts for a password every time. Is
> there any way that I can pass the superuser password (of course,
> in a safe way) so that it works automatically without prompting
> for a password?

Use a pgpass.conf file: http://www.postgresql.org/docs/8.0/static/libpq-pgpass.html

(Remember the file has to be in the profile of the account that executes the step. And be sure to protect it with file system ACLs so other users can't read it.)

//Magnus
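For reference, each line of pgpass.conf has the form hostname:port:database:username:password, with * usable as a wildcard in the first four fields; on Windows it lives in the account's profile at %APPDATA%\postgresql\pgpass.conf. A minimal entry matching the backup script above might look like this (the password is a placeholder):

```
localhost:5432:*:postgres:secretpassword
```

With this file in place for the account running the scheduled task, the pg_dumpall invocations no longer prompt for a password.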
Re: [GENERAL] Postmaster Out of Memory
Jeff Gold <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> TRUNCATE and CLUSTER both rebuild indexes, so they'd also trigger the
>> leak.
> Sorry to bug you again, but I have two quick followup questions: (1) is
> the leak you discovered fixed on the 8.0 branch? and (2) would closing
> the database connection once per day be a reasonable way to work around
> the problem in the absence of the patch you forwarded?

It is fixed in CVS-tip of all branches back to 7.3, but there is not any release yet incorporating the fix. Yes, closing your session and starting a fresh one is a usable workaround.

regards, tom lane
Re: [GENERAL] DANGER Windows version might hurt you
"Magnus Hagander" <[EMAIL PROTECTED]> writes:
>> My editor strips empty lines at the end of a text file and
>> sets EOF right after the last visible character, so it dumps
>> the CRLF of the last line, and even though it was a comment
>> Postmaster complains about a syntax error and goes on strike.
> Hmm. It isn't. And it probably should, yes - or the parser should be
> fixed to deal with it.

It sounds like a simple oversight in the flex rules for postgresql.conf. I'll look into it after feature freeze if no one beats me to it.

regards, tom lane
Re: [GENERAL] Generate a list of (days/hours) between two dates
This might be helpful,

select current_date + s.t as dates from generate_series(0,5) as s(t);
   dates
------------
 2005-06-28
 2005-06-29
 2005-06-30
 2005-07-01
 2005-07-02
 2005-07-03
(6 rows)

with regards,
S.Gnanavel

> -----Original Message-----
> From: [EMAIL PROTECTED]
> Sent: 27 Jun 2005 10:30:38 -0700
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Generate a list of (days/hours) between two dates
>
> Hi guys,
>
> I've scoured the date/time functions in the docs as well as
> google-grouped as many different combinations as I could think of to
> figure this out without asking, but I'm having no luck.
>
> I'd like to make a query that would return a list of every trunc'd
> TIMESTAMPs between two dates. For example, I'd want to get a list of
> every date_trunc('hour',whatever) between 6-1-2005 and 6-10-2005 and
> get a list that looks like:
>
> 6-1-2005 00:00:00
> 6-1-2005 01:00:00
> 6-1-2005 02:00:00
> etc
>
> Conversely, I want to generate a list of every day between two dates,
> like:
>
> 6-1-2005 00:00:00
> 6-2-2005 00:00:00
> 6-3-2005 00:00:00
>
> I know there's gotta be some way to do this in a SELECT function, but
> I'm running into a brickwall. I'm trying to take some of my date
> handling logic out of code and use the db engine so I can spend less
> time developing/maintaining code when mature date handling already
> exists in a resource I've already got loaded.
>
> Any thoughts?
Re: [GENERAL] problems with slow insert/delete queries/lot of disk write i/o in postgresql 7.2.4
Alexander Korobov <[EMAIL PROTECTED]> writes:
> We are having strange problem on production system with very slow
> insert/delete commands and huge cpu and disk write activity spikes in
> postgresql 7.2.4.

The first thing you should consider, if you are concerned about performance, is adopting a less obsolete version of Postgres. As a stopgap, increasing the checkpoint interval parameters might help some.

regards, tom lane
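As a concrete illustration of Tom's stopgap, the checkpoint parameters in a 7.2-era postgresql.conf could be raised along these lines (the values are illustrative assumptions, not tuned recommendations):

```conf
# Spread checkpoints further apart to reduce write spikes:
checkpoint_segments = 16    # default is 3; each WAL segment is 16MB
checkpoint_timeout = 600    # seconds between forced checkpoints; default 300
```

Fewer, larger checkpoints trade longer crash recovery for less repeated page flushing during heavy insert/delete activity.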