Re: [HACKERS] [PATCHES] log_line_info
Bruce Momjian wrote:
> Andrew Dunstan wrote:
>>> Nice. Only two comments --- does this mean we should remove log_pid?
>>> Seems it is now duplicate functionality. Is that the only duplication?
>>> Also, I don't see any documentation changes in the patch, but I assume
>>> you will work on that before final.
>>
>> I will do docs. We could sensibly get rid of log_pid and log_timestamp
>> with my latest patch. I would also suggest getting rid of
>> log_source_port, since there really isn't any reason *not* to log the
>> source port. Do you want me to make those changes in my patch? Or I can
>> leave them for now and we can get rid of them when everyone is happy.
>
> I agree, but let's make it a separate patch. Oh, I think we still need
> log_timestamp for postmaster-generated lines, no? What does log_line_info
> output for postmaster-generated logs?

I have fixed the postmaster lines issue. There is a new escape %X that says
"postmaster and friends stop here". Example, with log_timestamp = false,
log_pid = false and log_line_info = '%T [%P] %X [EMAIL PROTECTED](%C:%S) %R %I line:%L ':

2004-02-28 20:24:49 [11154] LOG: database system was shut down at 2004-02-28 20:24:20 EST
2004-02-28 20:24:49 [11154] LOG: checkpoint record is at 0/9D1874
2004-02-28 20:24:49 [11154] LOG: redo record is at 0/9D1874; undo record is at 0/0; shutdown TRUE
2004-02-28 20:24:49 [11154] LOG: next transaction ID: 467; next OID: 17145
2004-02-28 20:24:49 [11154] LOG: database system is ready
2004-02-28 20:25:30 [11158] [EMAIL PROTECTED](40413f8a.2b96:2004-02-28 20:25:30)  line:1 LOG: connection received: host=alphonso port=45621
2004-02-28 20:25:30 [11158] [EMAIL PROTECTED](40413f8a.2b96:2004-02-28 20:25:30) alphonso:45621 authentication line:2 LOG: connection authorized: user=andrew database=blurflx
2004-02-28 20:25:34 [11158] [EMAIL PROTECTED](40413f8a.2b96:2004-02-28 20:25:30) alphonso:45621 idle line:3 LOG: statement: SELECT n.nspname as "Schema", c.relname as "Name",
    CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN 'index'
         WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",
    u.usename as "Owner", c2.relname as "Table"
    FROM pg_catalog.pg_class c
         JOIN pg_catalog.pg_index i ON i.indexrelid = c.oid
         JOIN pg_catalog.pg_class c2 ON i.indrelid = c2.oid
         LEFT JOIN pg_catalog.pg_user u ON u.usesysid = c.relowner
         LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind IN ('i','') AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
          AND pg_catalog.pg_table_is_visible(c.oid)
    ORDER BY 1,2;
2004-02-28 20:25:38 [11158] [EMAIL PROTECTED](40413f8a.2b96:2004-02-28 20:25:30) alphonso:45621 idle line:4 LOG: disconnection: session time: 0:00:08.50 user=andrew database=blurflx host=alphonso:45621
2004-02-28 20:25:44 [11149] LOG: received smart shutdown request
2004-02-28 20:25:44 [11170] LOG: shutting down
2004-02-28 20:25:46 [11170] LOG: database system is shut down

> Also, should we call the option just log_line? Is that clearer, or
> log_line_prefix?

I floated the name log_line_info a while back and no one objected. I don't
think "log_line" is any clearer - quite the contrary IMNSHO.

cheers

andrew

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
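As a side note for anyone scraping logs produced with a prefix like the one above: the timestamp and PID are the only fields common to postmaster and backend lines (everything after %X is backend-only), so a minimal parser can key on those. The sketch below is hypothetical — it is not part of the patch, and the regex only covers the '%T [%P] ...' layout shown in the example.

```python
import re

# Matches lines produced with a prefix beginning '%T [%P] ', i.e.
# 'YYYY-MM-DD HH:MM:SS [pid] <everything else>'.  Backend-only fields
# (user, host, session, line counter) are left inside 'rest'.
LINE_RE = re.compile(
    r'^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) '
    r'\[(?P<pid>\d+)\] '
    r'(?P<rest>.*)$'
)

def parse_log_line(line):
    """Return (timestamp, pid, rest) for a matching line, else None."""
    m = LINE_RE.match(line)
    if m is None:
        return None
    return m.group('ts'), int(m.group('pid')), m.group('rest')
```

Grouping by the PID field then reassembles each backend's activity even when several sessions interleave in the log.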
Re: [HACKERS] [PATCHES] log_line_info
Andrew Dunstan wrote:
>> Nice. Only two comments --- does this mean we should remove log_pid?
>> Seems it is now duplicate functionality. Is that the only duplication?
>> Also, I don't see any documentation changes in the patch, but I assume
>> you will work on that before final.
>
> I will do docs. We could sensibly get rid of log_pid and log_timestamp
> with my latest patch. I would also suggest getting rid of
> log_source_port, since there really isn't any reason *not* to log the
> source port. Do you want me to make those changes in my patch? Or I can
> leave them for now and we can get rid of them when everyone is happy.

I agree, but let's make it a separate patch.

Oh, I think we still need log_timestamp for postmaster-generated lines, no?
What does log_line_info output for postmaster-generated logs?

Also, should we call the option just log_line? Is that clearer, or
log_line_prefix?

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                 |  (610) 359-1001
  +  If your life is a hard drive,  |  13 Roberts Road
  +  Christ can be your backup.     |  Newtown Square, Pennsylvania 19073
Re: [HACKERS] [ADMIN] Schema comparisons
On Sat, Feb 28, 2004 at 09:23:48PM -0500, Alex J. Avriette wrote:
> On Sat, Feb 28, 2004 at 10:39:40AM -0500, Tom Lane wrote:
>>>> Have you tried diffing pg_dump output? It's not the greatest tool but
>>>> it's helpful.
>>>
>>> Yes, I did. It was quite cumbersome. Especially since the OIDs and
>>> TOC entry numbers didn't match up; and, since those didn't always match,
>>> the order of objects wasn't quite the same either. So, diff was
>>> throwing a lot of false positives at me.
>>
>> Yeah. CVS-tip pg_dump doesn't show OIDs by default, to make it easier
>> to use for purposes like this. The ordering issue is the bigger problem
>> though. I presume that the object creation history is different in the
>> two databases and so pg_dump's habit of sorting by OID isn't helpful.
>
> I recently had to figure out what was different between the "live" schema
> and the schema in cvs at work. This was a really painful process, and it
> occurred to me that it wouldn't be terribly hard to write a perl program
> to do it (I wound up using vim and diff). Is there interest in such a
> tool? I could probably have one written within a day or two.

I sometimes supplement vim/diff with xxdiff, meld, and winmerge. Hope this
helps someone,

--Tim Larson
Re: [HACKERS] Collaboration Tool Proposal
On Sun, Feb 29, 2004 at 02:35:59AM +0100, Kaare Rasmussen wrote:
>> Why GForge?
>
> GForge seems to be technically OK. But what about the future outlook. The
> home page lists 5 projects, whereof the 4 are tests. Are you sure they
> will not fold in a month or two, will they be reliable, responsive and
> real nice (the three r's) ?

http://gforge.org/ is not a hosting site, that is why you only found 4 test
projects and the GForge project itself hosted on the site. The idea is that
you download the software and host it on your own hardware.

--Tim Larson
Re: [HACKERS] Enterprice support in PostgreSQL
George A.J wrote:
> Hi all,
>
> We are providing database solutions in PostgreSQL... Now using PostgreSQL
> 7.3. It is performing well. But now we have some enterprise-level
> requirements. One of our requirements is to provide a distributed
> solution in PostgreSQL. The questions are...
>
> 1. Is it possible to provide a distributed solution in PostgreSQL?
> 2. Does PostgreSQL support distributed transactions?
> 3. If not, is it included in the next release? Or when will a
>    distributed version of PostgreSQL be available?
> 4. Is there a replication solution available for PostgreSQL?
> 5. Does PostgreSQL support two-phase commit and distributed transaction
>    standards?
> 6. Is there a transaction manager (or coordinator) available for
>    PostgreSQL?
> 7. Can we use MTS (Microsoft Transaction Server) with PostgreSQL?
> 8. Does PostgreSQL support load balancing and all other enterprise
>    features? Can we expect these features in the next version?

We are working on all these fronts. We have replication solutions on
gborg.postgresql.org, and some are working on two-phase commit, perhaps for
7.5, due in maybe 6 months.

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                 |  (610) 359-1001
  +  If your life is a hard drive,  |  13 Roberts Road
  +  Christ can be your backup.     |  Newtown Square, Pennsylvania 19073
Re: [HACKERS] 7.3.6 for Monday ... still a go?
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Marc G. Fournier wrote:
>> I'm looking at doing the bundle up on Sunday night, and announce on
>> Monday ... any outstanding back-patches that need to get in? Tom/Bruce,
>> on schedule for doing the docs changes?
>
> I can package up this week, but not this weekend, and we should have the
> release item descriptions that we had in 7.4, and I don't see that
> happening in 24 hours.

AFAIK we are good to go in terms of the code --- there are no open issues
that I'd want to back-patch to 7.3. We just need release notes. But I don't
have time this weekend to do the release notes either...

regards, tom lane
Re: [HACKERS] [ADMIN] Schema comparisons
On Sat, Feb 28, 2004 at 10:39:40AM -0500, Tom Lane wrote:
>>> Have you tried diffing pg_dump output? It's not the greatest tool but
>>> it's helpful.
>>
>> Yes, I did. It was quite cumbersome. Especially since the OIDs and
>> TOC entry numbers didn't match up; and, since those didn't always match,
>> the order of objects wasn't quite the same either. So, diff was
>> throwing a lot of false positives at me.
>
> Yeah. CVS-tip pg_dump doesn't show OIDs by default, to make it easier
> to use for purposes like this. The ordering issue is the bigger problem
> though. I presume that the object creation history is different in the
> two databases and so pg_dump's habit of sorting by OID isn't helpful.

I recently had to figure out what was different between the "live" schema
and the schema in cvs at work. This was a really painful process, and it
occurred to me that it wouldn't be terribly hard to write a perl program to
do it (I wound up using vim and diff). Is there interest in such a tool? I
could probably have one written within a day or two.

Alex

--
[EMAIL PROTECTED]
Alex J. Avriette, Solaris Systems Masseur
http://envy.posixnap.net/~alex/articles/nro-wahhabi.html
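The core of the tool Alex describes is a normalization step: sort each dump's statements by name before diffing, so that pg_dump's OID-based ordering stops producing false positives. A rough sketch (in Python rather than perl, and assuming plain-text `pg_dump -s` output with statements separated by blank lines — a real tool would need a proper statement parser):

```python
import difflib

def split_statements(dump_text):
    """Very roughly split a plain-text schema dump into statements.

    Assumes statements are separated by blank lines and skips comment
    chunks; this is a simplification that holds for simple schema-only
    dumps.
    """
    chunks = [c.strip() for c in dump_text.split('\n\n')]
    return [c for c in chunks if c and not c.startswith('--')]

def schema_diff(dump_a, dump_b, label_a='a', label_b='b'):
    """Diff two dumps after sorting statements, hiding creation-order noise."""
    a = sorted(split_statements(dump_a))
    b = sorted(split_statements(dump_b))
    return list(difflib.unified_diff(a, b, label_a, label_b, lineterm=''))
```

With this, two databases whose objects were merely created in a different order diff clean, and only real structural differences show up.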
Re: [HACKERS] [ADMIN] Schema comparisons
Richard Huxton <[EMAIL PROTECTED]> writes:
> I've been looking at storing $REVISION$ in comments for each object, so
> my install scripts can halt if there is a problem. Not wanting to use my
> only comment slot for this I was thinking about an extension to the
> COMMENT ON statement:
>
>   COMMENT ON TABLE foo IS 'This is where I stroe my foos.';
>   COMMENT ON TABLE foo SECTION 'default' IS 'I meant store my foos.';
>   COMMENT ON TABLE foo SECTION 'revision' IS '1.19';
>   COMMENT ON TABLE foo SECTION 'bar' IS 'baz';

This seems a little, um, specialized. Why don't you just keep the info in a
user-defined table?

regards, tom lane
Re: [HACKERS] Collaboration Tool Proposal
> Why GForge?

GForge seems to be technically OK. But what about the future outlook. The
home page lists 5 projects, whereof the 4 are tests. Are you sure they will
not fold in a month or two, will they be reliable, responsive and real nice
(the three r's) ?

--
Kaare Rasmussen      --Linux, spil,--       Tlf: 3816 2582
Kaki Data            tshirts, merchandize   Fax: 3816 2501
Howitzvej 75         Åben 12.00-18.00       Email: [EMAIL PROTECTED]
2000 Frederiksberg   Lørdag 12.00-16.00     Web: www.suse.dk
Re: [HACKERS] [ADMIN] Schema comparisons
Ordering the pg_dump output by name within classes instead of OID sounds
good to me, too. Also, something that might be easier for comparing schemata
between databases: rather than dumping the database, have you tried using
PostgreSQL Autodoc (http://www.rbt.ca/autodoc/) which just outputs the
schema in a variety of formats (including XML for dia and DocBook). It just
seems that if you're only concerned with structure, and not content of user
tables, this could at least shorten the amount of data to be compared, if
not get you into a space where there's already a tool to do all the work (I
don't know if there's an XML-smart diff.)
Re: [HACKERS] Collaboration Tool Proposal
Janos,

So far, all of the solutions that are being seriously considered seem to be
free, open-source software. I can't find any indication on your site that
this is software the PostgreSQL community can hack to bits as needed over
the years. Even if it's free now, there's the possibility that it will later
turn out to be a free straitjacket.

Regards,
Paul

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Friday, February 27, 2004 1:19 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [HACKERS] Collaboration Tool Proposal
>
> Hi,
>
> please look at CodeBeamer (www.intland.com), it has all features you
> described and for selected open source projects is free now.
> It is a web based collaborative software development platform with
> - project tracking (dashboard)
> - tracker
> - document manager (sharing + versioning)
> - forum
> - cvs, Subversion and other SCM integration, GUI
> - code browsing, xref for C/C++ and Java
> - automated build
>
> Thanks,
> Janos
Re: [HACKERS] [ADMIN] Any Gentoo users interested in a slotted PostgreSQL
I don't see a real need for slots, but I do question why the 7.4 branch
isn't marked as stable yet. :)

Gavin

Jyry Kuukkanen wrote:
> Hello
>
>> Are there any Gentoo users here?
>
> Yes, there are some :)
>
>> Do you wish the PostgreSQL ebuild made use of SLOTS?
>> - to allow installing of major versions together (eg. 7.3 and 7.4 on
>>   the same host)
>> - to ease migration of databases to new major versions (where a
>>   dump/reload is required)
>>
>> I've started a thread at: http://bugs.gentoo.org/show_bug.cgi?id=42894
>>
>> With an initial plan for relocation of files to support slotting.
>> Comments, suggestions, corrections, or messages to say it's a silly
>> idea (and why) would be very much appreciated before I attempt this
>> next week.
>
> I only have Gentoo installed on my old laptop that is not in very heavy
> and active use. However, PostgreSQL is installed on it and used every now
> and then. PostgreSQL migration has always worked fine for me, so it has
> been adequate to install a fresher version on some sundry hardware and
> test the migration with it. This, quite obviously, is only my humble
> opinion and based on my limited experience :)
>
> Cheers,
> --Jyry
> C|:-(C|:-/C|8-OC|8-/C|:-(
Re: [HACKERS] Pl/Java - next step?
Tom Lane wrote:
> "Thomas Hallgren" <[EMAIL PROTECTED]> writes:
>> ** 4. Make the postmaster spawn threads rather than processes **
>> I know this is very controversial and perhaps I should not bring it up
>> at all. But then again, why not? Most readers are open-minded right?
>
> It's been considered and rejected before, and pljava isn't going to tilt
> the scales. In fact, the main thing that bothers me about your
> description of JNI is "Java uses multithreading whether you like it or
> not". I am very afraid of what impact a JVM will have on the stability
> of the surrounding backend.
>
> Other than that fear, though, the JNI approach seems to have pretty
> considerable advantages. You listed startup time as the main
> disadvantage, but perhaps that could be worked around. Suppose the
> postmaster started a JVM --- would that state inherit correctly into
> subsequently forked backends?
>
> Also, regarding your option #3 (do both), do you really think something
> different is going to happen in practice? The developers of the other
> implementation aren't likely to give it up just because yours exists.

As I understand it, the JNI approach has one JVM per backend using Java,
while the Java/remote approach uses a single JVM for all backends and
isolates them via classes. JNI says function execution will be faster and
cleaner, while Java/remote feels system resource usage and startup time
will be less.

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                 |  (610) 359-1001
  +  If your life is a hard drive,  |  13 Roberts Road
  +  Christ can be your backup.     |  Newtown Square, Pennsylvania 19073
Re: [HACKERS] pg_xlog
qmis wrote:
> Hi all
>
> How can I read transactions from the "write ahead log" pg_xlog?
> Is it possible?

No, it is all binary and read only on startup after a crash. If you want to
interpret it, you have to read the backend code that reads it during
recovery.

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                 |  (610) 359-1001
  +  If your life is a hard drive,  |  13 Roberts Road
  +  Christ can be your backup.     |  Newtown Square, Pennsylvania 19073
Re: [pgsql-www] [HACKERS] Collaboration Tool Proposal
On Feb 26, 2004, at 6:53 PM, Joseph Tate wrote:
> Josh Berkus wrote:
>> Folks, Discuss:
>
> Has anyone talked to the people at collabnet (http://www.collab.net)? I
> wonder if they'd be willing to put something together for the PostgreSQL
> team? They run the tigris.org site, which is one of the nicest OSS
> collaboration sites I've worked with. GForge is nice, but seems more
> kludgey than Tigris. What does the Apache project run?
>
> Another option is something like Drupal (http://www.drupal.org). Drupal
> is a CMS system with tons of plugins. I'm not sure that it could handle a
> project as large as PostgreSQL, but Drupal's own development work is self
> hosted. It may merit some investigation.

Drupal? I would not recommend it. With every plug-and-play CMS you get what
you pay for, aka when you need to change something you are in trouble, and
you end up searching their classes and grasping to understand the way they
code in PHP.

Is this as an alternative to gborg or the current website? As far as I know
Drupal has nothing like bug tracking etc.; for sure GForge (to me) is way
better than Drupal :D

Thanks
David Costa
[HACKERS] API Layers within Postgres
Hi

Thinking of producing a modified version of Postgres to allow clients to
bypass the SQL-type interfaces. How easy is it to get cursor access to the
indexes and fine-grained control of the transaction system? Are there
fairly clean internal APIs I can leverage?

Cheers
Chad
Re: [HACKERS] [ADMIN] Any Gentoo users interested in a slotted PostgreSQL
Hello

> Are there any Gentoo users here?

Yes, there are some :)

> Do you wish the PostgreSQL ebuild made use of SLOTS?
> - to allow installing of major versions together (eg. 7.3 and 7.4 on the
>   same host)
> - to ease migration of databases to new major versions (where a
>   dump/reload is required)
>
> I've started a thread at: http://bugs.gentoo.org/show_bug.cgi?id=42894
>
> With an initial plan for relocation of files to support slotting.
> Comments, suggestions, corrections, or messages to say it's a silly idea
> (and why) would be very much appreciated before I attempt this next week.

I only have Gentoo installed on my old laptop that is not in very heavy and
active use. However, PostgreSQL is installed on it and used every now and
then. PostgreSQL migration has always worked fine for me, so it has been
adequate to install a fresher version on some sundry hardware and test the
migration with it. This, quite obviously, is only my humble opinion and
based on my limited experience :)

Cheers,
--Jyry
C|:-(C|:-/C|8-OC|8-/C|:-(
[HACKERS] How to get Relation name from Oid ??
Hello,

Can somebody please tell me how to get the name of a relation (attribute
also) from its Oid, and the other way round (Oid from name)?

thanks
-Ramu

= " Karyathuranaam na Sukham na Nidhra " www.it.iitb.ac.in/~ramu
Re: [HACKERS] [pgsql-www] Collaboration Tool Proposal
Folks,

Tim Perdue sent me this.

--
-Josh Berkus
Aglio Database Solutions
San Francisco

--- Begin Message ---

Josh Berkus wrote:
> Tim,
>
> To follow up on the previous question, there are a few features that I'm
> not seeing in the GForge demo online. These are all features that our
> developers will ask about:
>
> Project home pages: is there an easy way to create these?
> Mailing list management: where?
> Web forums: search feature?
> Bug Tracking: is there the ability to add more characteristics to bugs?
> Cvsweb: I notice ties from gforge.org to cvsweb. Is this something we can
> easily set up?

Josh, someone pointed out the list discussion going on today, and I wanted
to comment on this, but am not subscribed:

> - How will mailing list subscribers be affected?

If you already use mailman, it would be seamless, as GForge does too. It
would be a matter of entering all the mailman lists into the GForge database
(probably scriptable, assuming GBORG had any rational naming convention).

> - How will CVS users be affected?

Same again most likely. GForge uses unix groups & users to control CVS, so
they would have to be backfilled into the GF database.

> - Can the mailing list archives be moved over?

GF uses mailman to archive lists for now (although GF4 has a snazzier
system)

> - Where will my old bug reports and corresponding discussions go?

I may be able to help you migrate your database structure over, assuming it
is even remotely understandable.

> - Can FAQ entries be copied over automatically?
> - Is there a way of migrating these services one by one?

--- End Message ---
[HACKERS] BTrees with record numbers
Is it possible for Postgres btrees to support access by logical row number?
If not available, is it a huge job to support for somebody willing to have
a go?
Re: [HACKERS] Collaboration Tool Proposal
Hi,

please look at CodeBeamer (www.intland.com), it has all features you
described and for selected open source projects is free now.
It is a web based collaborative software development platform with
- project tracking (dashboard)
- tracker
- document manager (sharing + versioning)
- forum
- cvs, Subversion and other SCM integration, GUI
- code browsing, xref for C/C++ and Java
- automated build

Thanks,
Janos
Re: [HACKERS] Thread safe connection-name mapping in ECPG. Is it
On Friday 27 February 2004 22:24, Lee Kindness wrote:
> Sort of related, I was thinking about adding some more thread-related
> code such that if a connection wasn't explicitly specified then the
> last connection SET or CONNECTed to for the current thread is used,
> rather than just the "last connection".
>
> But yeah, specifying the connection by variable (be it string or
> connection ptr) would be a definite step forward. Currently you cannot
> write a generic function like:
>
>   int getit(char *using_connection)
>   {
>       EXEC SQL BEGIN DECLARE SECTION;
>       char *s_connection = using_connection;
>       int s_it;
>       EXEC SQL END DECLARE SECTION;
>
>       EXEC SQL AT :s_connection SELECT it INTO :s_it FROM some_table;
>       return( s_it );
>   }
>
> which could be run concurrently by multiple threads.

Consider another scenario. In a C++ class you want to contain a database
connection. The class needs to make n resources thread safe, including the
database connection. Now each instance of the class would be referring to a
different database connection with the same code. Doing the same with char
strings is just clean enough IMHO..

Shridhar
Re: [HACKERS] [pgsql-www] Collaboration Tool Proposal
There is a roundup version for PostgreSQL. I have not tried it. For python
people, this is the ultimate solution. It is customizable to death. I have
the mailing list archives for the last couple of months.

I like roundup and use it. It has a great email interface and a "nosy" list
feature which enables people to track the status of their issues. But as it
is now, a resident python person would be extremely helpful to wrap up any
customization (yes, we'll want customization).

--elein
[EMAIL PROTECTED]

On Fri, Feb 27, 2004 at 03:31:14PM -0800, Josh Berkus wrote:
> Mikhail,
>
>> For a standalone bug/issue tracking tool take a look on
>> http://roundup.sourceforge.net
>
> I don't see PostgreSQL support listed -- just SQLite and MySQL.
>
> --
> -Josh Berkus
> Aglio Database Solutions
> San Francisco
Re: [HACKERS] [GENERAL] Why does app fail?
Shridhar Daithankar <[EMAIL PROTECTED]> 25/02/2004 10:31:16:
> On Wednesday 25 February 2004 15:38, Paul Simpson wrote:
>> Thank you for the advice, unfortunately, that isn't an option, you see
>> I didn't write the application and so cannot control the calls it
>> makes. What I need is for PG to do as it's told! I can only think of
>> three solutions which I'd appreciate comments / advice on:
>> 1) Hack PG source and re-compile to make upper-case the default table
>>    name.
>> 2) Hack the ODBC driver source to get it to always add quotes to table
>>    names.
>> 3) Given that the statement seems to be looking for the table name in
>>    one of the system tables, put a trigger on that table to
>>    automatically convert any new additions to upper case.
>
> I vote for option 2. It might take care of any other complaints such as
> mismatch in locale etc. as well. I don't know really.
>
> Do we need to submit a TODO to hackers or on the ODBC list? I am not too
> sure..
>
> Shridhar

Having spent a lot of time searching on the 'net, I suspect this would be a
very popular option for a lot of people. What are the chances of getting
this done fairly quickly, do you think?

Paul
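For context, option 2 amounts to the driver wrapping every identifier in double quotes before sending it: PostgreSQL folds unquoted identifiers to lower case, while a quoted identifier keeps its case exactly. A minimal sketch of the quoting rule (this is an illustration of what the driver would have to do, not the actual psqlODBC code):

```python
def quote_ident(name):
    """Quote a SQL identifier so the server preserves its case.

    PostgreSQL folds unquoted identifiers to lower case; quoting keeps
    them verbatim.  Embedded double quotes must be doubled.
    """
    return '"' + name.replace('"', '""') + '"'
```

So a statement built against table `MyTable` would be rewritten to reference `"MyTable"` instead of letting the server fold it to `mytable`.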
Re: [HACKERS] RFC: Very large scale postgres support
Alex,

In our quest to see if we can get better performance out of PostgreSQL by
throwing more HW at it, I would have recommended a V880 also. I'm curious
to find out why you would have: "(in the past, I would have suggested a Sun
V880 for this database, but we are using Linux on x86)" too.

Cheers,
Eddy

[EMAIL PROTECTED] ("Keith Bottner") wrote in message
news:<[EMAIL PROTECTED]>...
> Alex,
>
> I agree that this is something that is worth spending time on. This
> resembles the Oracle RAC (Real Application Cluster). While other people
> may feel that the amount of data is unreasonable I have a similar problem
> that will only be solved using such a solution.
>
> In regards to how your database is designed? Who cares? This is an RFC
> for a general discussion on how to design this level of functionality
> into Postgres. Ultimately any solution would work without regard to the
> inserts, updates, or deletes being executed. Alex, I think as a first
> step we should start coming up with a feature list of what would be
> necessary to support this level of functionality. From that point we
> could then identify efforts that are currently ongoing in Postgres
> development that we could help out on as well as those items that would
> need to be handled directly.
>
> I am very interested in going forth with this discussion and believe that
> I would be able to have the company I work for put forward resources
> (i.e. people or money) on developing the solution if we can come up with
> a workable plan.
>
> Josh, thanks for the heads up on Clusgres, I will take a look and see how
> that fits.
>
> Thanks,
>
> Keith
>
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Alex J. Avriette
> Sent: Saturday, February 07, 2004 12:29 PM
> To: [EMAIL PROTECTED]
> Subject: [HACKERS] RFC: Very large scale postgres support
>
> Recently I was tasked with creating a "distribution system" for postgres
> nodes here at work. This would allow us to simply bring up a new box,
> push postgres to it, and have a new database.
>
> At the same time, we have started to approach the limits of what we can
> do with postgres on one machine. Our platform presently is the HP DL380.
> It is a reasonably fast machine, but in order to eke more performance out
> of postgres, we are going to have to upgrade the hardware substantially.
>
> So the subject came up, wouldn't it be nice if, with replication and
> proxies, we could create postgres clusters? When we need more throughput,
> to just put a new box in the cluster, dist a postgres instance to it, and
> tell the proxy about it. This is a very attractive idea for us, from a
> scalability standpoint. It means that we don't have to buy $300,000
> servers when we max out our 2- or 4-cpu machines (in the past, I would
> have suggested a Sun V880 for this database, but we are using Linux on
> x86).
>
> We are left with one last option, and that is re-engineering our
> application to distribute load across several instances of postgres which
> are operating without any real knowledge of each other. I worry, though,
> that as our needs increase further, these application redesigns will
> become asymptotic.
>
> I find myself wondering what other people are doing with postgres that
> this doesn't seem to have come up. When one searches for postgres
> clustering on google, they will find lots of HA products. However, nobody
> seems to be attempting to create very high throughput clusters.
>
> I feel that it would be a very good thing if some thinking on this
> subject was done. In the future, people will hopefully begin using
> postgres for more intense applications. We are looking at perhaps many
> tens of billions of transactions per day within the next year or two. To
> simply buy a "bigger box" each time we outgrow the one we're on is not
> effective nor efficient. I simply don't believe we're the only ones
> pushing postgres this hard.
>
> I understand there are many applications out there trying to achieve
> replication. Some of them seem fairly promising. However, it seems to me
> that if we want to see a true clustered database environment, there would
> have to be actual native support in the postmaster (inter-postmaster
> communication if you will) for replication and cross-instance locking.
>
> This is obviously a complicated problem, and probably not very many of us
> are doing anything near as large-scale as this. However, I am sure most
> of us can see the benefit of being able to provide support for these
> sorts of applications.
>
> I've just submitted this RFC in the hopes that we can discuss both the
> best way to support very large scale databases, as well as how to handle
> them presently.
>
> Thanks again for your time.
> alex
>
> --
> [EMAIL PROTECTED]
> Alex J. Avriette, Solaris Systems Masseur
> "I ... remain against the death penalty because I feel that eternal
> boredom with no hope of parole is a much worse punishment than ju
[HACKERS] Arbitrary collation support for PostgreSQL
Hello all, PostgreSQL, at least until version 7.4, has rather weak support for various collating sequences. What you get when you do select * from table order by column is hardcoded in the database cluster at initdb time. Yet, it is a reasonable request to want one select to order using English rules, another one to run with German rules and yet another with Czech ones, without having to dump, initdb, restore. The distribution http://www.fi.muni.cz/~adelton/l10n/postgresql-nls-string-0.50.tar.gz defines a function nls_string which allows collation to be set at runtime:

select * from table order by nls_string(name, 'en_US.UTF-8')
select * from table order by nls_string(name, 'cs_CZ.UTF-8')
select * from table order by nls_string(name, 'C')

The README and INSTALL files with more details are included in the distribution. I'd appreciate comments about the viability of the goal, name, and/or solution presented. Yours, -- Honza Pazdziora | [EMAIL PROTECTED] | http://www.fi.muni.cz/~adelton/ .project: Perl, mod_perl, DBI, Oracle, large Web systems, XML/XSL, ... Only self-confident people can be simple. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [HACKERS] Pl/Java - next step?
Hi all! Sorry for my late replies. An IRC chat is ok for me, anytime. On Sun, Feb 22, 2004 at 08:08:00PM +0100, Thomas Hallgren wrote: > I'm in Sweden. Some time tuesday evening (european time) perhaps? > > Why is your work not made public somewhere? The project on sourceforge is > inactive it seems. Do you have a CVS setup privately? Yes, actually, the sf.net cvs was used very rarely, so I simply dropped it after a while. Now we use the CVS on Dave's server, but it will move to a new server. > > > Not to minimize your work, as I think it is great, but this particular > > use-case I consider to be overkill for pl/java. It is probably easier to > > use pl/pgsql if all you want to do is calculations. > > > Not to minimize your work, but if the only thing you want to do is to send a > request to a servlet, that is very easy to do with Pl/Perl ;-) > > Seriously, when I say calculations, I mean any computed value that doesn't > involve database accesses. It could for instance be an implementation of a > soundex algorithm comparing two values or something similar like graphic > image matching. Regardless if such things can be implemented in pgsql or > not, the fact that there's a bunch of downloadable Java code out there that > can be used, with little or no effort, is enough to motivate my statement. > Java in the database has quite a lot of advantages, and most people would prefer using java instead of learning one more language for stored procedures. If we can show that it can be stable and portable, people will love it. It is their problem what they use it for :)) I think one could use it for sending data into message queues, call validation with EJB methods, do complex analysis on it, check if a key exists in another database (db platform independent distributed RDBMS), or whatever; it would make a DB really intelligent, and would help a lot keeping 2-tier systems out of trouble. 
/s/would/will Laszlo Hornyak > Regards, > > Thomas Hallgren
Re: [HACKERS] [PATCHES] update i386 spinlock for hyperthreading
On Fri, Feb 20, 2004 at 05:26:46AM -0500, Neil Conway wrote: > "Simon Riggs" <[EMAIL PROTECTED]> writes: > >>Kenneth Marshall would like me to post this: > >> I agree that in order to manage today's large memory machines, we > >> need to have less contention in our buffer management strategies. > >> The two main choke points are in the buffer hash table routines > >> and in the buffer management linked lists. Unfortunately most of the > >> code depends on holding the bufmgr lock on entry which eliminates > >> many chances for parallelism. > > Are you familiar with the work I've been doing recently to try to > reduce the contention for the BufMgrLock? For example: > > http://www.mail-archive.com/pgsql-hackers%40postgresql.org/msg40289.html > > The approach I've taken is to remove the usage of the BufMgrLock for > operations that do not affect the global state of the buffer pool. > That means that operations like incrementing a buffer's refcount > require only holding the per-buffer meta data lock. That's only one > part of the puzzle, however: other ways to reduce BufMgrLock > contention will probably be necessary. > > Unfortunately this code is not in CVS yet: I've been too busy with > school to wrap up the remaining issues it has. However, I hope to get > it into the tree reasonably soon, and certainly in time for 7.5. > > >> The number of buffer pools should at the very minimum be equal to > >> the number of processors in the system. [...] > > Not sure I understand exactly what you're suggesting here. Can you > elaborate? > > >> The next item to address is the buf_table concurrency. It appears > >> that the same code that was used in the hash index update by Tom > >> Lane could be used to split the buf_table accesses into a > >> per-bucket access using a per-bucket lock and not a global > >> lock. Modifying the current dyn_hash search and update code would > >> make it look effectively like Mr. Lane's new hash index code. > > Interesting. 
This would be complementary, of course, to my work on > changing the buffer locking scheme: perhaps once that is done, we can > reassess the remaining lock contention issues in the bufmgr, and > implement this if necessary? > > Another related idea that Jan Wieck and I had discussed was avoiding > acquiring the BufMgrLock at all in BufferAlloc() where possible. For > instance, we could enhance the existing PrivateRefCount mechanism, or > invent some new mechanism, which would essentially keep an LRU list of > buffer tag -> buffer id mappings in each backend's local memory. Then, > we would walk this list in BufferAlloc(): if the buffer tag we're > looking for is already there, we can immediately acquire the buffer's > per-buffer meta data lock (without ever acquiring the BufMgrLock). > We'd need to then check that the buffer hasn't changed under our feet > (compare the locked buffer's tag with what we think its tag should be, > and start over if it's different). > > -Neil Neil, I have been following the discussion in the pgsql-hackers list. I tried to apply the patch you mentioned above, but I did not have the same version of postgres and had a lot of rejects. I also wanted to see your approach to adding a finer-grained lock structure to the buffer manager, since some of my ideas would depend on the implementation used. My comment on the number of buffer pools: >> The number of buffer pools should at the very minimum be equal to >> the number of processors in the system. [...] refers to the fact that if you could provide a per-CPU buffer pool you would be able to minimize the intra-CPU cache sync. The code would need to be able to find out what CPU it was running on to make that work. Otherwise, simply splitting the buffer pool into several pools with a per-pool lock would increase the concurrency proportional to the number of pools. 
The buffer header would have a pool id to allow you to grab the appropriate per-pool lock. You would also preferentially take a new buffer from the pool you are already using. I am waiting for your commit to CVS to look at it further. If you think that will be a while still, could you let me know which version of postgres I can use to get a clean patch installation from http://www.mail-archive.com/pgsql-hackers%40postgresql.org/msg40289.html ? --Ken
[HACKERS] dbt-2 tests & profiling on ia64
Mark, > On Mon, 23 Feb 2004 09:42:34 -0800 (PST), [EMAIL PROTECTED] said: Mark> http://developer.osdl.org/markw/ia64/dbt2/ Mark> I have a summary of initial results from our DBT-2 workload with Mark> PostgreSQL 7.4.1 on a 4-way Itanium2 system with 16GB of memory and 56 Mark> drives using LVM2 and linux-2.6.3. There's readprofile Mark> and oprofile data, but oprofile is seg faulting when it's trying to Mark> generate the annotated assembly source. You could try q-tools, see the announcement here: http://marc.theaimsgroup.com/?l=linux-ia64&m=107075994721581 Besides the flat profile, it will also give you call-counts. (It would be nice if this feature could be added to oprofile some day.) --david
[HACKERS] Enterprise support in PostgreSQL
Hi all, We are providing database solutions in PostgreSQL, now using PostgreSQL 7.3. It is performing well, but now we have some enterprise-level requirements. One of our requirements is to provide a distributed solution in PostgreSQL. The questions are:
1. Is it possible to provide a distributed solution in PostgreSQL?
2. Does PostgreSQL support distributed transactions?
3. If not, is it included in the next release? Or when will a distributed version of PostgreSQL be available?
4. Is there a replication solution available for PostgreSQL?
5. Does PostgreSQL support 2-phase commit and distributed transaction standards?
6. Is there a transaction manager (or co-ordinator) available for PostgreSQL?
7. Can we use MTS (Microsoft Transaction Server) with PostgreSQL?
8. Does PostgreSQL support load balancing and all other enterprise features? Can we expect these features in the next version?
regards, jinujose
Re: [HACKERS] Two-phase commit
On Sat, 7 Feb 2004, Bruce Momjian wrote: > > Please have a look and comment, the patches can be found here: > > http://www.iki.fi/hlinnaka/pgsql/ > > > > What is the schedule for 7.5? Any chance of getting this in? > > 7.5 is certainly possible. We are months away from beta on 7.5 and I > would like to see two-phase commit included. One item that has come up > in past discussions is a way of recording two-phase commit failures to > the administrator in cases where you precommit, get a reply, commit, > then the remote machine disappears. You would resolve this by opening a new session, and checking if the gid you specified in PREPARE TRANSACTION is still present in the pg_prepared_xacts view. It could be done manually by the administrator, or it could be done automatically by an external transaction manager if there is one. The XA interface specifies a function called "recover", that gives you a list of pending transactions. If we some day have an XA implementation, the recover call would map directly to "SELECT gid FROM pg_prepared_xacts". The JDBC XA implementation that I'm working on does that already. I have updated my patches, see the URL above. I renamed the commands to PREPARE TRANSACTION, COMMIT PREPARED and ROLLBACK PREPARED. I think it's more coherent that way. I also added documentation entries for the commands, and a basic regression test. I went through all the AtCommit_* and AtEOXact* hooks in xact.c to find any possible problem areas. The following items have not yet been implemented and throw an error if you try to do 2PC in the same transaction.
* Notifications (NOTIFY/LISTEN). All pending notifications should be stored in persistent storage in the prepare phase, and sent in the commit phase.
* Creation/deletion of relations. I couldn't figure out how the relation cache invalidation stuff should work with 2PC.
* Modifying GUC variables. I need to study the GUC code more thoroughly before I can tell what needs to be done. 
* Updates to shadow/group files, that is, CREATE USER and friends. Needs some tricks to delay the writing of pg_pwd/pg_group.
* Large objects. AFAICS, no particular problem here, but I'd like to deal with them later when the more important stuff is ok.

Plus a couple of minor details:
* Temporary tables. They seem to work somehow, but I haven't tested them much. I have a feeling that nasty things might happen if you commit the prepared transaction from another backend etc.
* initdb gives a warning about a missing file. It's harmless, but I don't see how to detect that you're running under initdb.

Also, if you try to prepare a transaction with a global transaction identifier that's already in use, you first get a warning and then an error. I'm going to tackle the above problems later, but I would like to get this applied to the cvs trunk with the current functionality first, after discussion of course. The rest are nice to have for the sake of completeness but probably not necessary for most users. - Heikki
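Putting the recovery flow described above together, the command sequence (with an invented table and gid) would look like:

```sql
-- Session 1: phase one
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
PREPARE TRANSACTION 'txn-42';  -- the transaction now survives a crash

-- Any later session (administrator or external transaction manager):
SELECT gid FROM pg_prepared_xacts;  -- 'txn-42' shows up while pending

-- Phase two, resolved by gid from any session:
COMMIT PREPARED 'txn-42';
-- or: ROLLBACK PREPARED 'txn-42';
```

The accounts table and the 'txn-42' gid are invented for illustration; the commands and the pg_prepared_xacts view are the ones named in the message.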
Re: [HACKERS] 7.3.6 for Monday ... still a go?
Marc G. Fournier wrote: > > I'm looking at doing the bundle up on Sunday night, and announce on Monday > ... any outstanding back-patches that need to get in? Tom/Bruce, on > schedule for doing the docs changes? I can package up this week, but not this weekend, and we should have the release item descriptions that we had in 7.4, and I don't see that happening in 24 hours. -- Bruce Momjian | http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [HACKERS] Avoid MVCC using exclusive lock possible?
On Tue, 24 Feb 2004, Stephen wrote: > It came to my mind that what if there could be a mechanism in place to allow > overwriting portions of the same row *whenever possible* instead of creating > a new row as MVCC would require. How would you do a rollback if an error occurred mid-query? How would you keep your table from being a big pile of junk if a power failure happened during the query? Jon
Re: [HACKERS] Too-many-files errors on OS X
[EMAIL PROTECTED] (Tom Lane) wrote in message (in part) > ... > Hmm. This may be OS-specific. The shlibs certainly show up in the > output of lsof in every variant I've checked, but do they count against > your open-file limit? From the lsof FAQ: > 5.2 Why doesn't Apple Darwin lsof report text file information? > > At the first port of lsof to Apple Darwin, revision 4.53, > insufficient information was available -- logic and header > files -- to permit the installation of VM space scanning > for text files. As of lsof 4.70 it is still not available. > Text file support will be added to Apple Darwin lsof after > the necessary information becomes available. Lsof calls the executable and shared libraries "text files." The lsof FAQ may be found at: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/FAQ I have developed a hack which will be released at lsof revision 4.71. A pre-release source distribution of 4.71 only for Darwin is available at: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/NEW/lsof_4.71C.darwin.tar.bz2 Note that you must build the lsof executable from that distribution, and building lsof requires that you download the XNU headers from www.opensource.apple.com/darwinsource/. Downloading the XNU headers requires an Apple ID and password. Vic Abell, lsof author
[HACKERS] Avoid MVCC using exclusive lock possible?
Hi, Recently, I ran a huge update on an Integer column affecting 100 million rows in my database. What happened was my disk usage increased and my IO load was very high. It appears that MVCC wants to rewrite each row (each row was about 5kB due to a bytea column). In addition, VACUUM needs to run to recover space, eating up even more IO bandwidth. It came to my mind that what if there could be a mechanism in place to allow overwriting portions of the same row *whenever possible* instead of creating a new row as MVCC would require. This would work well for timestamp, char, integer, float, boolean columns etc. A user must explicitly call: EXCLUSIVE LOCK ON TABLE, UPDATE ROWs, RELEASE LOCK ON TABLE. It basically imitates the behavior of MySQL. Surely, this would be faster than recreating the new row and marking the old one as invalid, at the expense of locking the table. MySQL users can then use Postgres and get similar performance simply by locking the table first. It probably works well when the transaction volume is low, when you need a quick counter, when your IO bandwidth is saturated or when you want to avoid VACUUMing after a massive update. Any thoughts?
Re: [HACKERS] [SQL] Materialized View Summary
On Tuesday 24 February 2004 01:48 pm, Robert Treat wrote: > On Tue, 2004-02-24 at 12:11, Richard Huxton wrote: > > On Tuesday 24 February 2004 16:11, Jonathan M. Gardner wrote: > > > I've written a summary of my findings on implementing and using > > > materialized views in PostgreSQL. I've already deployed eagerly > > > updating materialized views on several views in a production > > > environment for a company called RedWeek: http://redweek.com/. As a > > > result, some queries that were taking longer than 30 seconds to run > > > now run in a fraction of a millisecond. > > > > > > You can view my summary at > > > http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html > > have you done much concurrency testing on your snapshot views? I > implemented a similar scheme in one of my databases but found problems > when I had concurrent "refresh attempts". I ended up serializing the > calls via LOCKing, which was ok for my needs, but I thought potentially > problematic in other cases. > I don't actually use snapshot views in production. I would imagine that if you had two separate processes trying to update the views simultaneously, that would be a problem. All I can say is "don't do that". I think you'd want to lock the table before we go and start messing with it on that scale. We are running into some deadlock issues and some other problems with eager mvs, but they are very rare and hard to reproduce. I think we are going to start locking the row before updating it and see if that solves it. We also just discovered the "debug_deadlock" feature. I'll post my findings and summaries of the information I am getting here soon. I'm interested in whatever you've been working on WRT materialized views. What cases do you think will be problematic? Do you have ideas on how to work around them? Are there issues that I'm not addressing but should be? > > Interesting (and well written) summary. 
Even if not a "built in" > > feature, I'm sure that plenty of people will find this useful. Make > > sure it gets linked to from techdocs. > > Done. :-) > *blush* > > If you could identify candidate keys on a view, you could conceivably > > automate the process even more. That's got to be possible in some > > cases, but I'm not sure how difficult it is to do in all cases. > > it seems somewhere between Joe Conway's work with arrays and polymorphic > functions in 7.4 this should be feasible. > I'll have to look at what he is doing in more detail. -- Jonathan M. Gardner Web Developer, Amazon.com [EMAIL PROTECTED] - (206) 266-2906
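The serialize-via-LOCK approach Robert describes for snapshot refreshes can be sketched in SQL. The table name and refresh query here are invented for illustration:

```sql
BEGIN;
-- Only one refresher at a time; a second concurrent refresh blocks
-- here until this transaction commits, instead of deadlocking or
-- producing a half-rebuilt view.
LOCK TABLE matview_sales IN ACCESS EXCLUSIVE MODE;
DELETE FROM matview_sales;
INSERT INTO matview_sales
    SELECT region, sum(amount) FROM sales GROUP BY region;
COMMIT;
```

The cost of this simplicity is that readers are also blocked for the duration of the rebuild, which is why eagerly-maintained views (updated incrementally by triggers) are attractive despite their deadlock hazards.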
[HACKERS] postgres Mac OS X startup script
Hello, I wanted to submit some changes to the bundled postgres startup script for Mac OS X. I added calls to utilize the bundled apache rotatelogs script in the DB startup for log rotation. Also modified the startup parameters file to allow using the "SystemStarter" utility to start/stop/restart postgres with a rotating log file. The script credits David Wheeler, 2002. I wrote him a message about the changes and he suggested I post them here. I explain some of the changes below. Not sure how to submit the changes. I have 3 files: the "PostgreSQL" script, the "StartupParameters.plist" file, and the "pg_startupscript.diff" file. The diff file was run against the original "PostgreSQL" script file. I'll try attaching them to this message. If they get filtered I can resend if needed. Thanks. Ray A. 1) Changed the "Provides" parameter in StartupParameters.plist to "PostgreSQL" from "postgres database" simply for ease of typing. It seems that the SystemStarter utility uses the "Provides" value if you want to control the script. This way I did not have to enclose it in quotes on the command line. The modified StartupParameters.plist is now an XML document also. 2) For the startup script I added 2 user-modifiable variables: # do you want to rotate the log files, 1=true 0=false ROTATELOGS=1 # logfile rotate in seconds ROTATESEC="604800" I also added a non-modifiable variable: # The apache log rotation utility LOGUTIL="/usr/sbin/rotatelogs" I modified the StartService and RestartService functions to execute the new commands if the user wants log rotation.
[HACKERS] 7.3.6 for Monday ... still a go?
I'm looking at doing the bundle up on Sunday night, and announce on Monday ... any outstanding back-patches that need to get in? Tom/Bruce, on schedule for doing the docs changes? Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664
Re: [HACKERS] How to get Relation name from Oid ??
On Saturday 28 February 2004 18:47, Ramanujam H S Iyengar wrote: > Hello, > > Can somebody please tell me how to get the name of the relation (Attribute > also) from the Oid and the other way back (Oid from name)?? The pg_class table is what you want. See the system-tables chapter of the manuals for details. -- Richard Huxton Archonet Ltd
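Concretely, the lookups could look like this (mytab is a hypothetical table name; the regclass casts are shorthand for the pg_class lookups):

```sql
-- relation name from Oid, and the other way back
SELECT relname FROM pg_class WHERE oid = 16384;
SELECT oid FROM pg_class WHERE relname = 'mytab';

-- the same via the regclass type
SELECT 16384::regclass;
SELECT 'mytab'::regclass::oid;

-- attribute names live in pg_attribute, keyed by the table's Oid
SELECT attname FROM pg_attribute
WHERE attrelid = 'mytab'::regclass AND attnum > 0;
```

Note that relname alone is not unique across schemas; for an unambiguous lookup you also need to match relnamespace (or rely on the regclass cast, which respects the search path).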
[HACKERS] How to get Relation name from Oid ??
Hello, Can somebody please tell me how to get the name of the relation (Attribute also) from the Oid and the other way back (Oid from name)?? thanks -Ramu
Re: [HACKERS] [ADMIN] Schema comparisons
Interestingly, I tried to address the same problem a few days ago. I used pg_dump, grep, etc - in the end I got what I needed, but it was a cumbersome ordeal. I think ideally it would be great to have a utility that would give me a clean diff between the schemas. Perhaps pg_dump could have a new arg to produce the output most suitable for this utility. Mike. > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of Tom Lane > Sent: Saturday, February 28, 2004 10:40 AM > To: Mark Lubratt > Cc: [EMAIL PROTECTED] > Subject: Re: [HACKERS] [ADMIN] Schema comparisons > > > Mark Lubratt <[EMAIL PROTECTED]> writes: > > On Feb 27, 2004, at 10:28 PM, Tom Lane wrote: > >> Mark Lubratt <[EMAIL PROTECTED]> writes: > >>> I've been trying to be careful, but I've gotten out of synch with > >>> whether or not I've applied the changes I've made to the development > >>> system to the production system. Is there a utility that will compare > >>> the tables, functions, trigger, views, etc. between two systems and > >>> flag the schema elements that aren't in synch between the two? > >> > >> Have you tried diffing pg_dump output? It's not the greatest tool but > >> it's helpful. > > > Yes, I did. It was quite cumbersome. Especially since the OIDs and > > TOC entry numbers didn't matchup; and, since those didn't always match, > > the order of objects wasn't quite the same either. So, diff was > > throwing a lot of false positives at me. > > Yeah. CVS-tip pg_dump doesn't show OIDs by default, to make it easier > to use for purposes like this. The ordering issue is the bigger problem > though. I presume that the object creation history is different in the > two databases and so pg_dump's habit of sorting by OID isn't helpful. > > It occurs to me that this could be solved now that we have > dependency-driven ordering in pg_dump. The ordering algorithm is > presently > * Order by object type, and by OID within types; > * Move objects as needed to honor dependencies. 
> Ordering by OID should no longer be needed for correctness, because > the second phase will take care of any dependency problems. We > could instead make the initial sort be by object name (within types). > This should ensure that the schema output is identical for logically > equivalent databases, even if their history is different. > > (When dumping from a pre-7.3 database, we'd have to stick to the OID > algorithm for lack of dependency info, but of course that case is > getting less interesting as time wears on.) > > Comments? Anyone see a reason not to do this? > > regards, tom lane
Re: [HACKERS] [ADMIN] Schema comparisons
On Saturday 28 February 2004 15:39, Tom Lane wrote: > Mark Lubratt <[EMAIL PROTECTED]> writes: > > On Feb 27, 2004, at 10:28 PM, Tom Lane wrote: > >> Mark Lubratt <[EMAIL PROTECTED]> writes: > >>> I've been trying to be careful, but I've gotten out of synch with > >>> whether or not I've applied the changes I've made to the development > >>> system to the production system. Is there a utility that will compare > >>> the tables, functions, trigger, views, etc. between two systems and > >>> flag the schema elements that aren't in synch between the two? > >> > >> Have you tried diffing pg_dump output? It's not the greatest tool but > >> it's helpful. > Comments? Anyone see a reason not to do this? It would help me out too - I have similar problems to Mark with keeping various copies in sync. I've been looking at storing $REVISION$ in comments for each object, so my install scripts can halt if there is a problem. Not wanting to use my only comment slot for this, I was thinking about an extension to the COMMENT ON statement:

COMMENT ON TABLE foo IS 'This is where I stroe my foos.';
COMMENT ON TABLE foo SECTION 'default' IS 'I meant store my foos.';
COMMENT ON TABLE foo SECTION 'revision' IS '1.19';
COMMENT ON TABLE foo SECTION 'bar' IS 'baz';

From first inspection, it seems to be a matter of adding a column to a base table and changing some queries/using a view+base table. I thought it might be of use to the pgadmin crew etc, but haven't got to the point of writing up my notes and seeing if there is interest. Is there any point in thinking this through further, or is it me not thinking clearly? -- Richard Huxton Archonet Ltd
[HACKERS] cvs: mislinked plpgsql.so ?
Working with current CVS: $ createlang plpgsql createlang: language installation failed: ERROR: could not load library "/pgroot-cvs/lib/postgresql/plpgsql.so": /pgroot-cvs/lib/postgresql/plpgsql.so: undefined symbol: downcase_truncate_identifier --strk;
Re: [HACKERS] cvs: mislinked plpgsql.so ?
strk <[EMAIL PROTECTED]> writes: > Working with current CVS: > $ createlang plpgsql > createlang: language installation failed: > ERROR: could not load library "/pgroot-cvs/lib/postgresql/plpgsql.so": > /pgroot-cvs/lib/postgresql/plpgsql.so: > undefined symbol: downcase_truncate_identifier Your plpgsql.so may be CVS-tip, but your backend isn't... that function was just added a few days ago. regards, tom lane
Re: [HACKERS] [ADMIN] Schema comparisons
Mark Lubratt <[EMAIL PROTECTED]> writes: > On Feb 27, 2004, at 10:28 PM, Tom Lane wrote: >> Mark Lubratt <[EMAIL PROTECTED]> writes: >>> I've been trying to be careful, but I've gotten out of synch with >>> whether or not I've applied the changes I've made to the development >>> system to the production system. Is there a utility that will compare >>> the tables, functions, trigger, views, etc. between two systems and >>> flag the schema elements that aren't in synch between the two? >> >> Have you tried diffing pg_dump output? It's not the greatest tool but >> it's helpful. > Yes, I did. It was quite cumbersome. Especially since the OIDs and > TOC entry numbers didn't matchup; and, since those didn't always match, > the order of objects wasn't quite the same either. So, diff was > throwing a lot of false positives at me. Yeah. CVS-tip pg_dump doesn't show OIDs by default, to make it easier to use for purposes like this. The ordering issue is the bigger problem though. I presume that the object creation history is different in the two databases and so pg_dump's habit of sorting by OID isn't helpful. It occurs to me that this could be solved now that we have dependency-driven ordering in pg_dump. The ordering algorithm is presently * Order by object type, and by OID within types; * Move objects as needed to honor dependencies. Ordering by OID should no longer be needed for correctness, because the second phase will take care of any dependency problems. We could instead make the initial sort be by object name (within types). This should ensure that the schema output is identical for logically equivalent databases, even if their history is different. (When dumping from a pre-7.3 database, we'd have to stick to the OID algorithm for lack of dependency info, but of course that case is getting less interesting as time wears on.) Comments? Anyone see a reason not to do this? regards, tom lane
[HACKERS] cvs lock
When I try to commit to cvs it gets stuck and outputs this message every 30 seconds: cvs server: [11:11:28] waiting for ishii's lock in /cvsroot/pgsql-server/src/bin/pg_controldata/po -- /Dennis Björklund