Re: [GENERAL] non-integer constant in ORDER BY: why exactly, and documentation?
On Oct 11, 2012, at 4:48 PM, Ken Tanzer wrote:

> Hi. I recently ran a query that generated the same error as this:
>
> SELECT * FROM generate_series(1,10) ORDER BY 'foo';
> ERROR: non-integer constant in ORDER BY
> LINE 1: SELECT * FROM generate_series(1,10) ORDER BY 'foo';
>
> The query was generated by an app (and the result somewhat inadvertent), so it was easy enough to change and I'm not asking here about a practical problem.
>
> I am curious though about why this "limitation" exists. I get that integer constants are reserved for sorting by column numbers. But if Postgres already knows that it's a non-integer constant, why not let it go through with the (admittedly pointless) ordering?
>
> Also, I couldn't see that this was explicitly mentioned in the documentation. The relevant pieces seemed to be:
>
> Each expression can be the name or ordinal number of an output column (SELECT list item), or it can be an arbitrary expression formed from input-column values.
>
> followed closely by:
>
> It is also possible to use arbitrary expressions in the ORDER BY clause, including columns that do not appear in the SELECT output list.
> (http://www.postgresql.org/docs/9.1/static/sql-select.html#SQL-ORDERBY)
>
> And looking at the expressions page (http://www.postgresql.org/docs/8.4/static/sql-expressions.html), the first type of value expression is a "constant or literal expression." So nothing seems to explicitly rule out a literal ORDER BY.
>
> I'm not sure if it would do violence to something I'm missing, but would the following combined statement work for the documentation?
>
> "Each expression can be the name or ordinal number of an output column (SELECT list item), or it can be an arbitrary expression. The expression can include column values--whether they appear in the SELECT output list or not. An expression may not, however, consist solely of a non-integer constant. And an integer constant will be interpreted as the ordinal number of an output column."

Apparently, the parser tries to pull a column index out of any constant appearing in that position. It can be trivially worked around:

select * from generate_series(1,10) order by coalesce('foo');

but that doesn't help if your query is automatically generated.

Cheers,
M

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] crosstab
On Sep 4, 2012, at 4:30 PM, Aram Fingal wrote: > > On Sep 4, 2012, at 4:18 PM, Misa Simic wrote: > >> Inside PL/R you can take the same table as it is (unpivoted) as your >> data.frame and then pivot it inside R using reshape package,,, And then >> inside PL/R function do whatever you would like to do with data i.e export >> to whatever... - but you cant return it pivoted as result of Postgres >> function.. > > Okay, you just answered my question in reply to Joe Conway (crossed in the > mail). So I could just use write.table() from R to save to disk or I might > even have the PL/R procedure write a heatmap or PCA plot to disk, again > without returning any rows to PostgreSQL. > > -Aram Or you could return the heatmap/plot as BYTEA data or use arrays as necessary. Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] restart server on Lion
On Jan 30, 2012, at 2:40 PM, Scott Frankel wrote: > > Hi all, > > What's the best/correct way to cause the PostgreSQL server to startup > automatically when rebooting on OSX 10.7 Lion? > > I'm using a macports install of postgres 8.4 and went through a couple > grueling days, sudo'd up to my eyeballs, to restore the postgres user and > have a working installation. > > To start the service, I'm currently invoking this on the cmd-line: > > sudo su postgres -c "/opt/local/lib/postgresql84/bin/pg_ctl -D > /opt/local/var/postgresql84/defaultdb -l > /opt/local/var/postgresql84/defaultdb/data/logfile.txt start" > > That's pretty cumbersome for each reboot. I've also seen references to > manually invoking this on the cmd-line: > > sudo serveradmin start postgres > > But that yields "postgres:error = "CANNOT_LOAD_BUNDLE_ERR" > > Is there an /etc or OSX-specific solution people are using for restarts? My > PG 8.3 server restarted automagically on OSX 10.5. While I don't recall > setting up anything specifically to make that happen, memory fades... MacPorts includes a launchd plist to handle this. (Perhaps "launchd" is the keyword you need to search.) /Library/LaunchDaemons/org.macports.postgresql90-server.plist (for PostgreSQL 9.0, of course) http://od-eon.com/blogs/calvin/os-x-lion-postgresql/ You can adjust the script to your liking. Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Transaction ID wraparound, Oracle style
On Jan 18, 2012, at 2:15 PM, Scott Marlowe wrote: > On Wed, Jan 18, 2012 at 11:21 AM, Igor Polishchuk > wrote: >> Here is an article on a recently discovered Oracle flaw, which allows SCN to >> reach its limit. >> http://www.computerworld.com/s/article/9223506/Fundamental_Oracle_flaw_revea >> led?taxonomyId=18&pageNumber=1 >> >> Please don't beat me for posting a link for an Oracle related article. >> If you despise a very notion of mentioning Oracle, please just don't read >> the post. >> This article may be interesting to any RDBMS professional, no mater what db >> flavor he/she is working with. >> Also, this story may be a lesson for the Postgresql community on how not do >> things. I'm not a developer, but it seems that having synchronized >> transaction id between let say streaming-replicated databases would give >> some advantages if done properly. > > Wow, interesting difference between postgresql which occasionally > resets its smaller transaction id to prevent wrap whereas oracle just > uses a bigger number. If my calcs are right, Oracle has about 500 > years to figure out the wrap around limit at 16ktps etc. > > Thanks for the link, it was a fascinating read. By the way, this is called a Lamport clock. http://en.wikipedia.org/wiki/Lamport_timestamps?banner=none "On receiving a message, the receiver process sets its counter to be greater than the maximum of its own value and the received value before it considers the message received." Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
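As a rough illustration of the quoted rule (a hypothetical sketch, not from the thread or from either database's actual implementation), a minimal Lamport clock looks like this:

```python
# Minimal Lamport clock sketch: each process keeps a counter; on receipt
# of a message, it jumps past the maximum of its own value and the
# received value, exactly as the Wikipedia quote above describes.

class LamportClock:
    def __init__(self):
        self.counter = 0

    def tick(self):
        # Local event: increment the counter.
        self.counter += 1
        return self.counter

    def send(self):
        # Sending a message carries the freshly incremented counter.
        return self.tick()

    def receive(self, received):
        # Receiver sets its counter greater than the max of its own
        # value and the received value.
        self.counter = max(self.counter, received) + 1
        return self.counter

a, b = LamportClock(), LamportClock()
t = a.send()   # a.counter is now 1
b.receive(t)   # b.counter is now 2
```

The same shape is what lets replicated systems keep transaction identifiers ordered without a single shared counter.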
Re: [GENERAL] Table permissions
On Jan 18, 2012, at 12:25 PM, salah jubeh wrote: > Hello, > > Thanks for the info, I have already solved this by writing the following > function. Also, i think it is better than changing the schema tables directly > > Regards > It doesn't look like the procedure handles grant options (WITH GRANT OPTION), so the output ACL will not be the same as the input ACL. Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Table permissions
On Jan 18, 2012, at 8:48 AM, salah jubeh wrote: > Hello, > > I have create a table from another table such as > > CREATE TABLE tmp_XXX AS SELECT * FROM XXX; > > > The tmp_XXX tables has no permissions assigned to it and I want to assign > it with the same owner and access privileges of XXX table. I had a look on > pg catalog tables http://www.postgresql.org/docs/8.4/static/catalogs.html to > create a stored procedure to do this job for me. I have found some tables > which might be useful to get the original table iformation. For example, I > can use pg_table to get the owner of the original table. Also, I have found > pg_roles. However, I was not able to find where the access privileges are > stored. > > Is there a better way to do this task than extracting the access privileges > from pg catalog ? If not, where I can find the access privileges information > ? > You are looking for pg_catalog.pg_class.relacl. Just copy that from the original table to the duplicate (and perhaps relowner, depending on your situation), and you will have duplicate permissions. Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How to create "auto-increment" field WITHOUT a sequence object?
On Jun 30, 2011, at 2:40 PM, Dmitry Koterov wrote: > Hello. > > I need to create an auto-increment field on a table WITHOUT using sequences: This problem pops up a lot for invoice sequence numbers for the tax office and related cases. (Search for "gapless sequence".) Since the numbers are really only needed for an external system (as you mention), then it may make sense to generate the gapless IDs when necessary and map the generated IDs to the rows later. The drawback is that some rows in the table will not have the gapless ID until the batch job is run, but all rows will still be addressable by the real sequence ID. Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
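The batch-mapping idea above can be sketched as follows (hypothetical code, with made-up row data; the real job would run against the table via a query):

```python
# Hypothetical batch job: rows keep their real (gappy) sequence id, and a
# gapless invoice number is assigned later to any row that lacks one.
rows = [
    {"id": 11, "invoice_no": 1},
    {"id": 14, "invoice_no": 2},
    {"id": 17, "invoice_no": None},   # inserted since the last run
    {"id": 23, "invoice_no": None},
]

# Continue numbering from the highest invoice number handed out so far.
next_no = max((r["invoice_no"] for r in rows if r["invoice_no"]), default=0) + 1

for r in sorted(rows, key=lambda r: r["id"]):
    if r["invoice_no"] is None:
        r["invoice_no"] = next_no
        next_no += 1

# Invoice numbers are now 1,2,3,4 with no gaps, regardless of gaps in id.
```

In the database this would be a single UPDATE inside one transaction, so two batch runs cannot hand out the same number.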
Re: [GENERAL] not like perl..
On Mar 29, 2011, at 10:18 AM, hook wrote:

> I have a simple table with a varchar(32) field that I am trying to extract data from using regular expressions.
>
> select * from spam where inetaddr like '100.%'
>  row  |   inetaddr   |           tdate
> ------+--------------+---------------------------
>  3245 | 100.81.98.51 | 03/08/2011 07:21:19.29209
> -- works fine
>
> select * from spam where inetaddr like E'\d\d\d.%'
>  row | inetaddr | tdate
> -----+----------+-------
> (0 rows)
> --- zip ???
>
> select * from spam where inetaddr like E'\d.%'
>  row   |            inetaddr            |           tdate
> -------+--------------------------------+----------------------------
> 49424  | d.russell...@jdmarketing.co.uk | 03/27/2011 15:46:41.110566
> ??? thought \d was a digit match
>
> select * from spam where inetaddr like E'\\d.%'
>  row   |            inetaddr            |           tdate
> -------+--------------------------------+----------------------------
> 49424  | d.russell...@jdmarketing.co.uk | 03/27/2011 15:46:41.110566
> ???
>
> What am I doing wrong???

You are not using the regular expression operator.

test=# create table test(a text);
CREATE TABLE
test=# insert into test(a) values ('100.81.98.51');
INSERT 0 1
test=# select * from test where a ~ $$^\d+\.$$;
      a
--------------
 100.81.98.51
(1 row)

This is just like perl.

Cheers,
M
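The confusion in the thread is that LIKE has only two wildcards (% and _) and treats \d as the literal characters "d" preceded by an escape; only the regex operator ~ understands character classes. A quick sketch of the same distinction in Python (hypothetical sample data; the second address is made up, since the original is redacted):

```python
import re

addrs = ["100.81.98.51", "d.russell@example.invalid"]

# The equivalent of PostgreSQL's ~ operator: \d is a digit class,
# and the dot must be escaped to mean a literal period.
pattern = re.compile(r"^\d+\.")

matches = [a for a in addrs if pattern.search(a)]
# Only the numeric address matches; "d.russell@..." starts with a
# letter, which is why the LIKE query in the thread matched it (LIKE
# saw a plain 'd', not a digit class).
```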
Re: [GENERAL] PostgreSQL ODBC Driver Help
On Feb 15, 2011, at 12:25 PM, Carlos Mennens wrote: > I'm trying to figure out how I can have users in the office connect > their Microsoft Office 2007 clients to our company database server > running PostgreSQL 8.4.7. I've configured PostgreSQL to accept > incoming connections and allow users to login however I read that I > need to have each client install a MSI pack that allows ODBC drivers > to talk to the database server. I've installed the ODBC driver from > the following URL: > > The Drivers can be found at: > PostgreSQL: http://www.postgresql.org/ftp/odbc/versions/msi/ > > > I installed it on the machine running Office 2007 but beyond that I > have no idea how to initialize a connection from Excel or Access to > the database server. Can anyone please help me figure out how to > connect? > > I have my 'pg_hba.conf' & 'postgresql.conf' files configured perfectly > to accept incoming SSL connections from my internal network on the > default port for PostgreSQL. I googled "connect excel postgresql" and found this: http://port25.technet.com/videos/research/excelopendbprimer.pdf which seems to take one through all the steps. Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] (Hopefully stupid) select question.
On Jan 24, 2011, at 10:50 AM, Fredric Fredricson wrote:

> I have been fighting with a select and can find no satisfactory solution.
>
> Simplified version of the problem: a table that, in reality, logs state changes to an object (represented as a row in another table):
>
> CREATE TABLE t (
>    id SERIAL UNIQUE,
>    ref INTEGER, -- Reference to a row in another table
>    someData TEXT,
>    inserted DATE DEFAULT CURRENT_TIMESTAMP
> );
>
> Then we insert multiple rows for each "ref" with different "someData".
>
> Now I want the latest "someData" for each "ref" like:
>
> ref | someData (only latest inserted)
> ----+---------------------------------
>  1  | 'data1'
>  2  | 'data2'
> etc...
>
> The best solution I could find depended on the fact that serial is higher for higher dates. I do not like that because if that is true, it is an indirect way to get the data and could possibly, in the future, yield the wrong result if unrelated changes were made or id's reused.
>
> Here is my solution (that depends on the SERIAL):
>
> SELECT x.ref,x.someData
> FROM t as x
> NATURAL JOIN (SELECT ref,max(id) AS id FROM t GROUP BY ref ORDER BY ref) AS y;
>
> Can somebody come up with a better solution? (without resorting to stored procedures and other performance killers).

I would argue that relying on the id is safer than relying on the current timestamp because CURRENT_TIMESTAMP refers to the time the transaction started, not when the transaction was committed (or the row was "actually" inserted). In addition, it is technically possible for two transactions to get the same CURRENT_TIMESTAMP. SERIAL values are never reused.

You could also create a security view which exposes the historical data but without the primary key in the actual table.

I recommend http://pgfoundry.org/projects/tablelog which uses "performance killers" like stored procedures to handle things properly - at least take a look to see how things are handled.

Cheers,
M
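The query's logic (per ref, keep the row with the greatest id) can be sketched outside SQL; this is a hypothetical illustration with made-up rows, not code from the thread:

```python
# Rows as (id, ref, some_data). The goal, like the NATURAL JOIN against
# max(id) per ref, is the newest row for each ref.
rows = [
    (1, 1, "old1"),
    (2, 2, "old2"),
    (3, 1, "data1"),
    (4, 2, "data2"),
]

latest = {}
for row_id, ref, data in rows:
    # Keep the row with the highest id seen so far for each ref.
    if ref not in latest or row_id > latest[ref][0]:
        latest[ref] = (row_id, data)

result = {ref: data for ref, (_, data) in latest.items()}
# result == {1: "data1", 2: "data2"}
```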
Re: [GENERAL] How to fire triggers just on "top" level DML
On Jan 19, 2011, at 5:36 PM, Kevin Grittner wrote: > "A.M." wrote: > >> Most PLs include some session-specific storage. In PL/Perl, it is >> %_SHARED. Setting a flag there should do the trick. If you are >> using a PL which does not have such a notion (like plpgsql), you >> can add a call in your triggers to a function written in a PL >> which does support this. Alternatively, a C function which >> sets/checks a global flag would work as well. > > I thought it might come to that. I'm comfortable writing C > functions, and we're not using any languages so far besides C, SQL, > and plpgsql, so I'd probably use C. If I'm going that far, though, > I'd be rather inclined to implement a TG_DEPTH variable (as being > easier for us to use) and offer it to the community in case there's > anyone else who would find this useful. If that turns out to be > harder than I think, I'll fall back to what you outlined here. If you do implement TG_DEPTH, I am curious as to what the difference between TG_DEPTH==34 and TG_DEPTH==35 could mean. I think it might cause poor coding practice in making decisions based off assumed trigger order execution. Since you only care to distinguish between depth 1 and depth 2 (and not beyond), could you elaborate on a use case where further trigger "depth" information may be useful? Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How to fire triggers just on "top" level DML
On Jan 19, 2011, at 4:59 PM, Kevin Grittner wrote: > We've been running for about ten years on a framework which fires > triggers similar to database triggers in a Java tier close to the > database, and we're now trying to convert these to actual PostgreSQL > database triggers. Our biggest hitch at the moment is that we > defined a class of triggers we called "top" triggers, which only > fire from DML submitted by the application, not from DML issued by > other triggers. > > One significant use of this is to block direct modification of > summary data (either selected columns or entire tables) which are > supposed to be trigger maintained. It's not immediately obvious how > to accomplish this within PostgreSQL, although I'm probably missing > something. We're not tied to any particular methodology -- a > TG_DEPTH variable, if it existed, would do fine, for example. > > Any suggestions? Most PLs include some session-specific storage. In PL/Perl, it is %_SHARED. Setting a flag there should do the trick. If you are using a PL which does not have such a notion (like plpgsql), you can add a call in your triggers to a function written in a PL which does support this. Alternatively, a C function which sets/checks a global flag would work as well. Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] excessive escaping in regular expression functions
Hello, The following statement replaces an asterisk in a string with a double-escaped asterisk: SELECT regexp_replace('*',E'\\*',E'\*'); I got this result through experimentation and I am at a loss to explain why so much escaping is necessary for the third argument. Is there a better way? Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
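The layered escaping in the question comes from two parsers running in sequence: the E'' string-literal parser consumes one level of backslashes before the regex engine (and the replacement-string processor) ever sees the text. Python shows the same layering when raw strings are not used; this is an analogy I am adding for illustration, not PostgreSQL behavior verbatim:

```python
import re

# Layer 1: source-level string escapes. "\\*" in source is the two
# characters \* -- which the regex engine (layer 2) reads as a literal
# asterisk.
pattern = "\\*"

# The replacement string has its own backslash rules: re.sub turns \\
# into a single literal backslash. "\\\\*" in source is the three
# characters \\*, which re.sub emits as backslash-asterisk.
replacement = "\\\\*"

out = re.sub(pattern, replacement, "*")
# out is the two characters \* -- the asterisk is now "escaped".
```

Raw strings (r"\*") remove one of the layers, which is the usual way to keep this readable.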
[GENERAL] Using aclitem[] at application layer
Hello, In an attempt to implement ACLs at the application layer (for resources stored outside of the database), I am evaluating using aclitem[] as a column type. All the functions I would need seem to be in place: aclcontains, aclexplode, aclinsert, aclitemeq, aclitemin, aclitemout, aclremove, but they are conspicuously missing from the documentation (http://www.mail-archive.com/pgsql-patches@postgresql.org/msg03400.html), so I wonder if there are any caveats or hurdles which would make me consider writing my own type. From a cursory examination, it looks like the limitations would be: 1) roles must refer to postgresql roles (that's fine for my case) 2) permission options are hardcoded to "arwdDxtXUCTc" (not so great) Are there any other problems I would encounter? Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] UUID column as pimrary key?
On Jan 6, 2011, at 3:52 AM, Stuart Bishop wrote: > Maybe I should start a business in providing UUID collision insurance? Your ideas are intriguing to me and I wish to subscribe to your newsletter. -M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] schemaverse!
On Dec 21, 2010, at 5:06 PM, Merlin Moncure wrote: > A postgresql based game, that you can play from psql! Written by Abstrct > (Josh) > > http://www.schemaverse.com/ Finally, a game which makes it look like I am doing work! -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Re: Survey on backing up unlogged tables: help us with PostgreSQL development!
On Nov 17, 2010, at 11:32 AM, Ivan Voras wrote: > On 11/17/10 02:55, Josh Berkus wrote: >> >>> If you do wish to have the data tossed out for no good reason every so >>> often, then there ought to be a separate attribute to control that. I'm >>> really having trouble seeing how such behavior would be desirable enough >>> to ever have the server do it for you, on its terms rather than yours. >> >> I don't quite follow you. The purpose of unlogged tables is for data >> which is disposable in the event of downtime; the classic example is the >> a user_session_status table. In the event of a restart, all user >> sessions are going to be invalid anyway. > > Depends on what you mean by "session". > > Typical web application session data, e.g. for PHP applications which are > deployed in *huge* numbers resides directly on file systems, and are not > guarded by anything (not even fsyncs). On operating system crash (and I do > mean when the whole machine and the OS go down), the most that can happen is > that some of those session files get garbled or missing - all the others work > perfectly fine when the server is brought back again and the users can > continue to work within their sessions. -- *That* is useful session behaviour > and it is also useful for logs. > > The definition of unlogged tables which are deliberately being emptied for no > good reason does not seem very useful to me. I'd rather support a (optional) > mode (if it can be implemented) in which PostgreSQL scans through these > unlogged tables on startup and discards any pages whose checkums don't match, > but accepts all others as "good enough". Even better: maybe not all pages > need to be scanned, only the last few, if there is a chance for any kind of > mechanism which can act as checkpoints for data validity. This is not really a fair feature comparison. With the file-based sessions, the webserver will continue to deal with potentially corrupted sessions, which is worse than dealing with no sessions. 
This new PostgreSQL feature will ensure that such a thing cannot happen while also offering the performance of file-based session storage and the ability to run queries against the session data. In my backups (using whatever flag or dump default), I will be ensuring that the sessions are *not* in the backup. I also plan on using this feature for materialized views to replace memcached. Considering that I have been waiting on this feature for years, I, for one, welcome our unlogged table overlords.

Cheers,
M
Re: [GENERAL] Should PQconsumeInput/PQisBusy be expensive to use?
On Oct 28, 2010, at 12:04 PM, Daniel Verite wrote: > A.M. wrote: > >> In PostgreSQL, query canceling is implemented by opening a >> second connection and passing specific data which is received >> from the first connection > > With libpq's PQCancel(), a second connection is not necessary. To clarify, PQcancel() opens a new socket to the backend and sends the cancel message. (The server's socket address is passed as part of the cancel structure to PQcancel.) http://git.postgresql.org/gitweb?p=postgresql.git;a=blob;f=src/interfaces/libpq/fe-connect.c;h=8f318a1a8cc5bf2d49b2605dd76581609cf9be32;hb=HEAD#l2964 The point is that a query can be cancelled from anywhere really and cancellation will not use the original connection socket. Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Should PQconsumeInput/PQisBusy be expensive to use?
On Oct 28, 2010, at 11:08 AM, Michael Clark wrote: > Hello all. > > Thanks a lot for the responses, they are appreciated. > > I think I now understand the folly of my loop, and how that was negatively > impacting my "test". > > I tried the suggestion Alex and Tom made to change my loop with a select() > and my results are now very close to the non-async version. > > The main reason for looking at this API is not to support async in our > applications, that is being achieved architecturally in a PG agnostic way. > It is to give our PG agnostic layer the ability to cancel queries. > (Admittedly the queries I mention in these emails are not candidates for > cancelling...). Hm- I'm not sure how the async API will allow you to cancel queries. In PostgreSQL, query canceling is implemented by opening a second connection and passing specific data which is received from the first connection (effectively sending a cancel signal to the connection instead of a specific query). This implementation is necessitated by the fact that the PostgreSQL backend isn't asynchronous. Even if you cancel the query, you still need to consume the socket input. Query cancellation is available for libpq both in sync and async modes. Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Custom cache implemented in a postgresql C function
On Oct 20, 2010, at 7:44 PM, Gabi Julien wrote:

> Hi,
>
> Here is my problem: I have a postgresql C function that looks like this:
>
> Datum filter(PG_FUNCTION_ARGS);
>
> It takes identifiers and queries a bunch of tables and ends up returning true or false. So far nothing difficult except that we want better performance. The function was already optimized to the best of my abilities and changing the structure of the database would not help. However, having a cache would be the perfect solution. I could implement this cache outside of postgresql if need be but nothing could beat implementing this directly in a postgresql C function.
>
> So this is what I want, a custom cache built into a postgresql C function. Since postgresql uses different processes, it would be best to use the shared memory. Can this be done safely? At its core, the cache could be considered as simple as a map protected by a mutex. With postgresql, I first need to initialize some shared memory. This is explained at the end of this link:
>
> http://www.postgresql.org/docs/8.2/static/xfunc-c.html
>
> However, it sounds like I need to reserve the shared memory in advance using:
>
> void RequestAddinShmemSpace(int size)
>
> In my case, I do not know how big my cache will be. I would preferably allocate the memory dynamically. Is this possible? In any case, am I trying to reinvent the wheel here? Is there already a shared map or a shared hash structure available in postgresql?
>
> If shared memory turns out too difficult to use, I could create separate caches for each postgresql process. This would be a waste of space but it might be better than nothing. In this case, do I need to make my code thread safe? In other words, is postgresql using more than one thread per process?

Apart from the other suggestions made, another option could be to use your own shared memory which you allocate and manage yourself (without postgresql managing it). You could implement a simple least-recently-used cache to purge old entries as the cache grows.

Cheers,
M
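The least-recently-used purge policy suggested above can be sketched in a few lines; this is a hypothetical illustration of the eviction strategy, not the C/shared-memory implementation the thread discusses:

```python
from collections import OrderedDict

# Minimal LRU cache: capacity-bounded map that evicts the entry that
# was touched least recently once the cache grows past its limit.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        # Mark the entry as most recently used.
        self.entries.move_to_end(key)
        return self.entries[key]

    def put(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        # Evict the least recently used entry once over capacity.
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)

cache = LRUCache(2)
cache.put("a", True)
cache.put("b", False)
cache.get("a")        # "a" is now most recently used
cache.put("c", True)  # evicts "b", the least recently used entry
```

A per-backend C version would wrap the same logic around a hash table plus a doubly linked list (or, if truly shared, a lock around both).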
Re: [GENERAL] queriing the version of libpq
On Oct 7, 2010, at 5:34 AM, Devrim GÜNDÜZ wrote: > On Thu, 2010-10-07 at 12:23 +0300, Peter Eisentraut wrote: >> This is really something that psycopg2 should work out for you. I >> suggest you take up a discussion on this on their mailing list. > > ...which is down over the last 3 weeks or so: > > http://www.initd.org/ In addition, I posted a patch for 9.0 support which was supposed to be rolled in to psycopg2 weeks ago. Now I am stuck pushing my own psycopg2 egg around. The psycopg2 project is too reliant on one person (who has trouble managing his servers)- I wish he would move the project to a public project management service. I also remember a discussion on the poor state of postgresql drivers for python and which driver the PostgreSQL project should endorse- it looks like the situation has not improved. Here's the thread: http://archives.postgresql.org/pgsql-hackers/2010-02/msg00351.php Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] exclude constraints with same name?
I found some surprising behavior with the new EXCLUDE constraint in 9.0.0: it seems that EXCLUDE constraint names have to be unique across tables:

test=# BEGIN;
BEGIN
test=# CREATE TABLE a(a INTEGER);
CREATE TABLE
test=# CREATE TABLE b(b INTEGER);
CREATE TABLE
test=# ALTER TABLE a ADD CONSTRAINT testo1 CHECK(a=1);
ALTER TABLE
test=# ALTER TABLE b ADD CONSTRAINT testo1 CHECK(b=1);
ALTER TABLE
test=# ALTER TABLE a ADD CONSTRAINT testo2 EXCLUDE (a WITH =);
NOTICE:  ALTER TABLE / ADD EXCLUDE will create implicit index "testo2" for table "a"
ALTER TABLE
test=# ALTER TABLE b ADD CONSTRAINT testo2 EXCLUDE (b WITH =);
NOTICE:  ALTER TABLE / ADD EXCLUDE will create implicit index "testo2" for table "b"
ERROR:  relation "testo2" already exists
test=#

Also, the error message is odd and could be improved. The workaround is to use unique constraint names, but I would like to better understand why they need to be unique in the first place when other constraint names need not be.

Cheers,
M
Re: [GENERAL] sql DO in rule 9.0rc1
On Sep 9, 2010, at 8:31 PM, Jeff Davis wrote:

> On Thu, 2010-09-09 at 17:07 -0400, A.M. wrote:
>> Is there a technical limitation which prevents DO from being used in rules or am I missing something with this?
>>
>> CREATE RULE test_update AS ON UPDATE TO test DO INSTEAD DO $$
>> BEGIN;
>> RAISE NOTICE 'hello';
>> END;
>> $$;
>>
>> Cheers,
>> M
>
> From the docs here:
>
> http://www.postgresql.org/docs/9.0/static/sql-createrule.html
>
> I see:
>
> "Valid commands are SELECT, INSERT, UPDATE, DELETE, or NOTIFY."
>
> And I assume that DO is a separate command that is not valid for a rule such as this.
>
> As a workaround, you can make a named function and do "SELECT myfunction()" as the INSTEAD clause.

Perhaps I should have posted to -hackers instead, but I was really wondering if there were some real technical limitation to having this implemented. Does "DO" have any sort of context which can be applied? It seems that NEW and OLD would have to be pushed into that context.

Cheers,
M
[GENERAL] sql DO in rule 9.0rc1
Is there a technical limitation which prevents DO from being used in rules or am I missing something with this? CREATE RULE test_update AS ON UPDATE TO test DO INSTEAD DO $$ BEGIN; RAISE NOTICE 'hello'; END; $$; Cheers, M -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] exclusion constraint with overlapping timestamps
I am experimenting with exclusion constraints via Depesz's excellent introduction here: http://www.depesz.com/index.php/2010/01/03/waiting-for-8-5-exclusion-constraints/

In the example, he uses non-overlapping (day) dates for hotel booking. In my case, I would like to use the same datatype but allow for timestamps to overlap on the boundaries, so that I can store a continuous timeline of state.

CREATE TABLE test.x (
    validfrom TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT now(),
    validto TIMESTAMP WITH TIME ZONE,
    CHECK(validfrom < COALESCE(validto,'infinity'::timestamptz)),
    CONSTRAINT overlapping_validity EXCLUDE USING GIST(
        box(point(extract(epoch FROM validfrom AT TIME ZONE 'UTC'),0),
            point(extract(epoch FROM validto AT TIME ZONE 'UTC'),1))
        WITH &&
    )
);

INSERT INTO test.x(validfrom,validto) VALUES ('2010-08-08 10:00:00 UTC','2010-08-08 11:00:00 UTC'); -- success
INSERT INTO test.x(validfrom,validto) VALUES ('2010-08-08 11:00:00 UTC','2010-08-08 12:00:00 UTC'); -- failure, but should succeed in my design
INSERT INTO test.x(validfrom,validto) VALUES ('2010-08-08 10:30:00 UTC','2010-08-08 11:00:00 UTC'); -- proper failure

I considered adding a fudge factor to the box values, but that feels prone to failure in edge cases (why can't I have a value that is valid for one second?). Do I need to write a new box operator which ignores overlap at the edges, or is there a better way to accomplish this? Thanks.

Cheers,
M
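The "boundaries may touch" requirement is exactly half-open interval semantics: treat each period as [validfrom, validto), so two periods conflict only if each starts strictly before the other ends. A sketch of that predicate (hypothetical illustration of the semantics, not the box-operator solution itself):

```python
from datetime import datetime

# Half-open intervals [start, end): overlap requires each interval to
# start strictly before the other ends, so a period ending exactly when
# the next begins does NOT conflict -- a continuous timeline is allowed.
def conflicts(a_from, a_to, b_from, b_to):
    return a_from < b_to and b_from < a_to

ten    = datetime(2010, 8, 8, 10, 0)
ten30  = datetime(2010, 8, 8, 10, 30)
eleven = datetime(2010, 8, 8, 11, 0)
twelve = datetime(2010, 8, 8, 12, 0)

conflicts(ten, eleven, eleven, twelve)   # False: boundaries merely touch
conflicts(ten30, eleven, ten, eleven)    # True: genuine overlap
```

For what it's worth, later PostgreSQL releases added range types whose default bounds are half-open, with an && overlap operator that behaves this way; on 9.0 the box trick above (or a custom operator) is what's available.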
Re: [GENERAL] initdb fails to allocate shared memory
On Aug 25, 2010, at 11:31 AM, Joshua D. Drake wrote: > On Wed, 2010-08-25 at 11:15 -0400, A.M. wrote: >> I am using pgsql9.0b4 (but pgsql8.4 exhibits the same behavior) on MacOS >> 10.6.4 and initdb fails: >> initdb: removing data directory "/Volumes/Data/pgsql90b" >> >> I would like to create the database space and then reduce the shared memory >> requirements in postgresql.conf, but this situation seems to create a >> chicken-and-egg problem. How can I reduce shared_buffers or max_connections >> prior to running initdb? > > If you don't have enough shared memory to initdb, you don't have enough > to run postgresql. You need to increase your shared memory for MacOS > per: > > http://www.postgresql.org/docs/8.4/static/kernel-resources.html > > And then initdb. Then it seems that the error reporting could be improved to not mention "shared_buffers" and "max_connections", neither of which I can touch during initdb. "creating template1 database in /Volumes/Data/pgsql90b/base/1 ... FATAL: could not create shared memory segment: Cannot allocate memory DETAIL: Failed system call was shmget(key=1, size=1703936, 03600). HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory or swap space. To reduce the request size (currently 1703936 bytes), reduce PostgreSQL's shared_buffers parameter (currently 50) and/or its max_connections parameter (currently 14). The PostgreSQL documentation contains more information about shared memory configuration." Cheers, M
[GENERAL] initdb fails to allocate shared memory
I am using pgsql9.0b4 (but pgsql8.4 exhibits the same behavior) on MacOS 10.6.4 and initdb fails: /usr/local/pgsql90beta/bin/initdb -D /Volumes/Data/pgsql90b/ -E UTF8 The files belonging to this database system will be owned by user "agentm". This user must also own the server process. The database cluster will be initialized with locale en_US.UTF-8. The default text search configuration will be set to "english". creating directory /Volumes/Data/pgsql90b ... ok creating subdirectories ... ok selecting default max_connections ... 10 selecting default shared_buffers ... 400kB creating configuration files ... ok creating template1 database in /Volumes/Data/pgsql90b/base/1 ... FATAL: could not create shared memory segment: Cannot allocate memory DETAIL: Failed system call was shmget(key=1, size=1703936, 03600). HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory or swap space. To reduce the request size (currently 1703936 bytes), reduce PostgreSQL's shared_buffers parameter (currently 50) and/or its max_connections parameter (currently 14). The PostgreSQL documentation contains more information about shared memory configuration. child process exited with exit code 1 initdb: removing data directory "/Volumes/Data/pgsql90b" I would like to create the database space and then reduce the shared memory requirements in postgresql.conf, but this situation seems to create a chicken-and-egg problem. How can I reduce shared_buffers or max_connections prior to running initdb? Cheers, M
Re: [GENERAL] pg_notify but no pg_listen?
On Aug 23, 2010, at 10:18 PM, Craig Ringer wrote: > On 08/24/2010 06:43 AM, Bruce Momjian wrote: >> A.M. wrote: >>> There is a new pg_notify function in pgsql 9.0 but no pg_listen >>> equivalent? Why? It sure would be handy to pass quoted strings... >> >> Notify sends the notify; there is no place to send a 'listen' payload. > > I assume what they want is the ability to filter notifications, so they only > get notifications with a certain payload. > > Seems to me that in that case you should just be using different notify > values (possibly using the two-argument form of pg_notify) so you can listen > on different things depending on what you are interested in. Actually, my use case was for parameterized queries and pl functions where it's much easier to use quoted strings for the notification name as well as the payload- it would just be a convenience, really. Cheers, M
[GENERAL] pg_notify but no pg_listen?
There is a new pg_notify function in pgsql 9.0 but no pg_listen equivalent? Why? It sure would be handy to pass quoted strings... Cheers, M
[GENERAL] Half-applied UPDATE rule on view
Hello, I have encountered an odd behavior involving rules which the following script demonstrates (in postgresql 8.4.3). Note that at the end of the run, the "dud" table contains one row "spam1" when the update rule clearly contains two inserts to the "dud" table. It seems that the update rule on "test.job" cuts off execution after the first update executes (and succeeds)- why? -- DROP SCHEMA IF EXISTS test CASCADE; CREATE SCHEMA test; SET search_path TO test; CREATE TABLE job_history ( id SERIAL PRIMARY KEY, logicalid INTEGER NOT NULL, --ID shown through views created TIMESTAMP NOT NULL, createdby TEXT NOT NULL, deprecated TIMESTAMP, --if NULL, then it is still valid (and must be the latest "version" for this historical ID) deprecatedby TEXT, name TEXT ); CREATE TABLE dud ( name TEXT ); CREATE SEQUENCE job_logicalid OWNED BY job_history.logicalid; CREATE INDEX job_history_live_idx ON job_history(deprecated) WHERE deprecated IS NULL; CREATE VIEW job AS SELECT jh.logicalid AS id, jh.name AS name FROM job_history AS jh WHERE deprecated IS NULL; CREATE RULE job_insert AS ON INSERT TO job DO INSTEAD INSERT INTO job_history(id,logicalid,created,createdby,deprecated,deprecatedby,name) VALUES (DEFAULT,nextval('job_logicalid'),now(),current_role,NULL,NULL,NEW.name) RETURNING job_history.logicalid,name; CREATE RULE job_update AS ON UPDATE TO job DO INSTEAD ( INSERT INTO dud(name) VALUES ('spam1'); UPDATE job_history SET deprecated=now(),deprecatedby=current_role WHERE id=(SELECT MAX(jh.id) FROM job_history AS jh WHERE jh.logicalid=NEW.id); INSERT INTO dud(name) VALUES ('spam2'); INSERT INTO job_history(logicalid,created,createdby,deprecated,deprecatedby,name) VALUES (NEW.id,now(),current_role,NULL,NULL,NEW.name); ); CREATE RULE job_delete AS ON DELETE TO job DO INSTEAD UPDATE job_history SET deprecated=now(),deprecatedby=current_role WHERE id=(SELECT MAX(jh.id) FROM job_history AS jh WHERE jh.logicalid=OLD.id); INSERT INTO test.job(name) VALUES ('jobA'),('jobB'); SELECT * 
FROM job_history; UPDATE test.job SET name='jobC' WHERE id=2; SELECT * FROM job_history; SELECT * FROM dud; -- Here is a run of it in postgresql 8.4.3: psql -a -1 -f job_history test -- DROP SCHEMA IF EXISTS test CASCADE; psql:/Users/agentm/Desktop/job_history:1: NOTICE: drop cascades to 3 other objects DETAIL: drop cascades to table test.job_history drop cascades to table test.dud drop cascades to view test.job DROP SCHEMA CREATE SCHEMA test; CREATE SCHEMA SET search_path TO test; SET CREATE TABLE job_history ( id SERIAL PRIMARY KEY, logicalid INTEGER NOT NULL, --ID shown through views created TIMESTAMP NOT NULL, createdby TEXT NOT NULL, deprecated TIMESTAMP, --if NULL, then it is still valid (and must be the latest "version" for this historical ID) deprecatedby TEXT, name TEXT ); psql:/Users/agentm/Desktop/job_history:14: NOTICE: CREATE TABLE will create implicit sequence "job_history_id_seq" for serial column "job_history.id" psql:/Users/agentm/Desktop/job_history:14: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "job_history_pkey" for table "job_history" CREATE TABLE CREATE TABLE dud ( name TEXT ); CREATE TABLE CREATE SEQUENCE job_logicalid OWNED BY job_history.logicalid; CREATE SEQUENCE CREATE INDEX job_history_live_idx ON job_history(deprecated) WHERE deprecated IS NULL; CREATE INDEX CREATE VIEW job AS SELECT jh.logicalid AS id, jh.name AS name FROM job_history AS jh WHERE deprecated IS NULL; CREATE VIEW CREATE RULE job_insert AS ON INSERT TO job DO INSTEAD INSERT INTO job_history(id,logicalid,created,createdby,deprecated,deprecatedby,name) VALUES (DEFAULT,nextval('job_logicalid'),now(),current_role,NULL,NULL,NEW.name) RETURNING job_history.logicalid,name; CREATE RULE CREATE RULE job_update AS ON UPDATE TO job DO INSTEAD ( INSERT INTO dud(name) VALUES ('spam1'); UPDATE job_history SET deprecated=now(),deprecatedby=current_role WHERE id=(SELECT MAX(jh.id) FROM job_history AS jh WHERE jh.logicalid=NEW.id); INSERT INTO dud(name) VALUES ('spam2'); 
INSERT INTO job_history(logicalid,created,createdby,deprecated,deprecatedby,name) VALUES (NEW.id,now(),current_role,NULL,NULL,NEW.name); ); CREATE RULE CREATE RULE job_delete AS ON DELETE TO job DO INSTEAD UPDATE job_history SET deprecated=now(),deprecatedby=current_role WHERE id=(SELECT MAX(jh.id) FROM job_history AS jh WHERE jh.logicalid=OLD.id); CREATE RULE INSERT INTO test.job(name) VALUES ('j
Re: [GENERAL] flatten pg_auth_members
On Jun 23, 2010, at 6:01 PM, A.M. wrote: > Hello, > > I am trying to make a query which will flatten pg_auth_members into a table > with two columns "user" and "group" which will recurse inherited roles so > that each login role is associated once with any inherited roles (assuming > all associated roles are inherited). > > This query does not do what I want, but I can't quite wrap my head around the > recursion part: > > WITH RECURSIVE usergroups(user_id,group_id) AS ( > SELECT am.member AS user_id,am.roleid AS group_id FROM pg_auth_members > AS am > UNION > SELECT am.member AS user_id,am.roleid AS group_id FROM usergroups AS > u,pg_auth_members AS am WHERE am.roleid=u.group_id > ) > SELECT r.user_id,r.group_id FROM usergroups AS r; > > For a role inheritance tree of "bob (1)"->"manager(2)"->"employee(3)", I > would like to see: > > user | group > 1 | 2 > 1 | 3 Hm- I wasn't able to figure out the WITH RECURSIVE construct, so I used a cartesian product instead: SELECT DISTINCT am1.member, (SELECT a2.rolname FROM pg_authid AS a2 WHERE a2.oid=am1.member), am2.roleid, (SELECT a3.rolname FROM pg_authid AS a3 WHERE a3.oid=am2.roleid) FROM pg_auth_members AS am1,pg_auth_members AS am2 WHERE pg_has_role(am1.member,am2.roleid,'MEMBER') UNION SELECT am1.member, (SELECT a2.rolname FROM pg_authid AS a2 WHERE a2.oid=am1.member), am1.member, (SELECT a2.rolname FROM pg_authid AS a2 WHERE a2.oid=am1.member) FROM pg_auth_members AS am1; Cheers, M
[GENERAL] flatten pg_auth_members
Hello, I am trying to make a query which will flatten pg_auth_members into a table with two columns "user" and "group" which will recurse inherited roles so that each login role is associated once with any inherited roles (assuming all associated roles are inherited). This query does not do what I want, but I can't quite wrap my head around the recursion part: WITH RECURSIVE usergroups(user_id,group_id) AS ( SELECT am.member AS user_id,am.roleid AS group_id FROM pg_auth_members AS am UNION SELECT am.member AS user_id,am.roleid AS group_id FROM usergroups AS u,pg_auth_members AS am WHERE am.roleid=u.group_id ) SELECT r.user_id,r.group_id FROM usergroups AS r; For a role inheritance tree of "bob (1)"->"manager(2)"->"employee(3)", I would like to see: user | group 1 | 2 1 | 3 Thanks for any assistance, M
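[Editor's note: the recursive attempt above is close. The recursive term needs to carry the accumulated user_id forward and join on am.member = u.group_id, rather than re-selecting am.member. A sketch:]

```sql
WITH RECURSIVE usergroups(user_id, group_id) AS (
    SELECT am.member, am.roleid FROM pg_auth_members AS am
  UNION
    SELECT u.user_id, am.roleid              -- keep the original member
    FROM usergroups AS u
    JOIN pg_auth_members AS am ON am.member = u.group_id  -- step up one level
)
SELECT user_id, group_id FROM usergroups;
-- For bob(1) -> manager(2) -> employee(3) this yields (1,2), (2,3), and (1,3);
-- filter on pg_authid.rolcanlogin to keep only login roles if desired.
```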
Re: [GENERAL] Querying a Large Partitioned DB
On Apr 10, 2009, at 10:15 AM, Justin Funk wrote: Team Amazing, I am building a massive database for storing the syslogs of a room of servers. The database gets about 25 million entries a day, and need to keep them for 180 days. So the total size of the database will be about 4.5 billion records. I need to be able to do full text searches on the message field, and of course, it needs to be reasonably fast. You could use pg-pool II or your own middleware to execute the search query in parallel across all the partitions (maybe not all 180 at once, though). Cheers, M
Re: [GENERAL] nulls
On Mar 12, 2009, at 5:50 PM, James B. Byrne wrote: ... and c.date_effective_from >= and c.date_superseded_after <= Have I understood things aright? The one problem I foresee is that changes to the commodity_tax_rates table may not reflect in transaction dates that have passed. What happens if a tax is retroactively ended or applied outside these barriers? Is this tax calculation frozen at the date of the "best information we have"? If so, you might consider an insert-only table and linking the tax decision to the row that happened to be in effect at any time. This would likely require adding a column indicating when the row was inserted. Cheers, M
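[Editor's note: the insert-only suggestion might look like the following sketch; all names beyond those quoted above are hypothetical.]

```sql
CREATE TABLE commodity_tax_rates (
    rate_id serial PRIMARY KEY,
    rate numeric NOT NULL,
    date_effective_from date NOT NULL,
    date_superseded_after date,
    inserted_at timestamptz NOT NULL DEFAULT now()  -- when we learned of this rate
);
-- Each transaction then stores a rate_id, freezing the tax decision at the
-- rate row that was current (by inserted_at) when the transaction was entered;
-- retroactive corrections are new rows, never updates.
```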
Re: [GENERAL] Pet Peeves?
On Feb 5, 2009, at 6:08 AM, Greg Stark wrote: On Wed, Feb 4, 2009 at 6:42 PM, Simon Riggs wrote: As A.M. says elsewhere, it would be good to have a trigger that fired a NOTIFY that was picked up by a scheduled job that LISTENs every 10 minutes for certain events. We need a place for code that is *not* directly initiated by a user's actions, yet works as part of a closed loop system. Can't you do this today in about three lines of your favourite scripting language? I used to do this in perl on Oracle and that was without anything like LISTEN/NOTIFY to optimize it. Web pages just inserted a record and went on about their business while a worker daemon scanned for new records and generated notification mails. The problem with trying to push everything into the database is that it ends up sucking your entire application into the database. That limits your choice of languages and tools, and also creates a huge bottleneck. In addition to the other response, one should also take security scenarios into consideration. If role X installs an event handler (timed or via notify), I would expect the callback to be run as that role X. This is currently impossible to safely do from outside the database because SET SESSION AUTHORIZATION can be trivially revoked with RESET SESSION AUTHORIZATION. Also, LISTEN/NOTIFY really need to allow for additional user info to be attached (it's on the TODO list) and they have further security problems because they are global broadcasts. I would expect an integrated event dispatch mechanism to handle the complexity of security as well as what to do in case of rollback. So, no, this is not something that can be slapped together from outside the db. Cheers, M
Re: [GENERAL] Pet Peeves?
On Feb 3, 2009, at 11:55 PM, Guy Rouillier wrote: Craig Ringer wrote: An internal job scheduler with the ability to fire jobs on certain events as well as on a fixed schedule could be particularly handy in conjunction with true stored procedures that could explicitly manage transactions. Craig, what kind of "events" are you thinking about? Triggers are already pieces of code that run upon "certain events", namely insert, update or delete events. What others do you have in mind? What about LISTEN/NOTIFY events? That would be one way to create autonomous transactions. Cheers, M
Re: [GENERAL] postgresql and Mac OS X
On Nov 4, 2008, at 4:14 PM, Tom Lane wrote: Tom Allison <[EMAIL PROTECTED]> writes: I tried getting a source install on my mac book yesterday and today. It's not a normal *nix installation. The location of the files are all non-standard. 'make' is prefixed by /Developer/usr/bin/. The question is *why* the location is nonstandard. Other people's Macs are not set up that way (mine seems to have these files in the expected place, for example). I added /Developer/usr/bin to PATH and tried ./configure. That would help configure find the stuff in /Developer/usr/bin, but it does nothing for files that ought to be in /usr/lib, /usr/include, etc. I am not sure whether adding these to the configure command would be sufficient: --with-includes=/Developer/usr/include --with-libraries=/Developer/usr/lib /Developer/usr/ shouldn't be linked against directly- this is the location for OS X SDKs, so that binaries can be built and linked which work on older versions of OS X than one is currently using. Cheers, M
Re: [GENERAL] mac install question
On Jul 23, 2008, at 2:19 PM, [EMAIL PROTECTED] wrote: > Date: Wed, 23 Jul 2008 14:12:45 -0400 > From: [EMAIL PROTECTED] > To: [EMAIL PROTECTED] > Subject: Re: [GENERAL] mac install question > CC: pgsql-general@postgresql.org > > On Wed, Jul 23, 2008 at 1:58 PM, [EMAIL PROTECTED] > <[EMAIL PROTECTED]> wrote: > > > > Please excuse my lack of mac knowledge. I installed postgresql 8.3 using the mac os x 1 click installer onto my brand new powerbook. The install appeared to go very smooth. If I go to Postgresql under Applications it appears as if I can start and stop postgres and open pgadmin. I even created a test database under pgadmin with no errors. So everything appears cool but... > > > > I can't seem to run any of the postgresql commands from a shell. If I open a terminal and try to run psql or createdb or any of the commands I get this error: > > > > /Library/PostgreSQL/8.3/bin/createdb test > > dyld: Library not loaded: /Users/buildfarm/pginstaller/server/ staging/osx/lib/libpq.5.dylib > > That would seem to indicate that the installer doesn't set things up > properly for command-line access. You should talk to whoever created > it, since it isn't (to my knowledge) part of the official Postgres > distribution. > > -Doug Well I got it from a link on postgresql.org. Of course it does say that it is a beta installer. http://www.postgresql.org/download/macosx The package is improperly linked so the command-line tools are useless on any machine other than the buildfarm setup. Dave Page (CC'd) is apparently the maintainer. Cheers, M
Re: [GENERAL] changing the endianness of a database
On May 12, 2008, at 4:02 PM, Chris Saldanha wrote: Hi, We'd like to ship PostgreSQL as part of a product that runs on both PPC and Intel Macs, but the database files are tied to the build settings and endianness of the computer that the database was initialized on. Is there any way to cause the server to modify the database files in-place for endianness issues? I know that a backup-then-restore process would fix the data, but on Macs, many users use Apple's computer migration tools to copy all their programs/data/users/etc.. to new Macs. If the user moves from a PPC to an Intel Mac, for instance, the database would be copied over, but the data would be for the old computer, and the database won't start. The backup/restore process is hard for end users, since they don't understand it -- and they won't contact us until after the migration is done, and often not until they've discarded the old computer. It would be nice if there was a way to recover the data from the existing database files. I found this old thread on a related topic, and it seems that this cannot be done... http://archives.postgresql.org/pgsql-general/2008-01/msg00635.php You know that you don't have to compile postgresql as "Universal", right? If you have separate PPC and Intel versions (not lipo'd together), then, presumably, you should be able to figure out which one needs to run. The PPC postgresql would then run on the Macintel under Rosetta and you would then have control to proceed with an automatic dump/restore. However, this would not work for someone moving the database from an Intel machine to a PPC machine. Postgresql is simply not well-suited for such uncontrolled environments. What happens when you upgrade postgresql? Do you then ship with 4 versions of the db (Intel/PPC * 8.2/8.3)? Perhaps you should dump all the non-transient data whenever the application is shut down (in anticipation of an upgrade)?
Cheers, M
Re: [GENERAL] Mac ordering with locales
On Feb 22, 2008, at 10:16 AM, Martijn van Oosterhout wrote: On Thu, Feb 21, 2008 at 11:14:58AM -0800, Jeff Davis wrote: I have looked for a standard related to the locale behavior and I was surprised that I couldn't find one. Is a given locale, e.g. en_US, supposed to have identical behavior on any platform for which it's available? If there is a standard of some kind, is apple violating it? Nope. If there were we could complain about it. All we have now is many different implementations. The most commonly used ones are Java, ICU, glibc and Windows. AIUI all except Windows understand the xx_XX format. Java and ICU are essentially the same. I found a note that both Perl6 and PHP6 may use ICU. That would be an interesting change. Darwin also uses ICU extensively. Is it that time of year again to discuss using/linking against it? Cheers, M ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Mac ordering with locales
On Feb 21, 2008, at 12:01 PM, Tom Lane wrote: Pascal Cohen <[EMAIL PROTECTED]> writes: The fact is that works on Linux and win but under Mac I always get the ordering with 'default' C locale (I displayed all the lc_* and all are right set) Yeah, this has been complained of before, eg here http://archives.postgresql.org/pgsql-general/2005-11/msg00047.php and here http://archives.postgresql.org/pgsql-general/2004-04/msg00564.php It seems to be a deficiency in Apple's locale support. The second message is interesting since it indicates that "smart" sorting is available somewhere/somehow under OS X, but nobody here knows how to get at it :-( The function is "CFStringCompareWithOptionsAndLocale()". http://developer.apple.com/documentation/CoreFoundation/Reference/CFStringRef/Reference/reference.html#//apple_ref/c/func/CFStringCompareWithOptionsAndLocale It is obviously not a portable function (beyond the Core Foundation sources being open and available), so there may not be any interest in having this in PostgreSQL. Also, which MacOS X version is under discussion here? Could the strcoll() bug have been fixed in Leopard? Cheers, M
Re: [GENERAL] Disable Triggers
On Feb 21, 2008, at 10:20 AM, Terry Lee Tucker wrote: Greetings: We have been working diligently toward integrating Slony into our production databases. We've been having trouble with various tables, although being replicated perfectly in the initial replication stage, afterwards, getting out of sync. I have finally figured out what the problem is. We have a Perl process that continually updates certain columns across all databases. That Perl process calls a function we have written called disable_triggers which updates pg_class, setting reltriggers to 0 for the given table, and then later, after the work is complete, resetting reltriggers to the original value. Unfortunately, during this process, the Slony trigger is disabled as well which is causing our problem. My questions is this: how would I go about changing my function so that all the triggers EXCEPT the Slony trigger would be disabled? Any ideas? Version: PostgreSQL 7.4.19 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-9) Couldn't your triggers check some flag to determine if they should continue?
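[Editor's note: a sketch of a more selective disable. The 8.1+ syntax is shown for reference; the 7.4-era alternative flips pg_trigger.tgenabled (a boolean in that catalog layout) per trigger instead of zeroing pg_class.reltriggers. Both sketches assume a hypothetical table mytable and that the Slony trigger's name starts with an underscore (Slony names them _clustername_...).]

```sql
-- 8.1 and later: disable named triggers individually, leaving Slony's alone.
ALTER TABLE mytable DISABLE TRIGGER my_audit_trigger;

-- 7.4-era catalog hack: disable only the non-Slony triggers on the table.
UPDATE pg_trigger
   SET tgenabled = false
 WHERE tgrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable')
   AND tgname NOT LIKE '\\_%';  -- skip triggers whose names begin with '_'
```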
Re: [GENERAL] OT - pg perl DBI question
On Jan 29, 2008, at 2:36 PM, Andrew Sullivan wrote: On Tue, Jan 29, 2008 at 01:56:35PM -0500, A.M. wrote: The postgresql from eight years ago is also quite rusty. No, it's not, which is my point. If you don't need any of the features you mention, and are aware of the limitations, there's nothing wrong with using it. The v2 protocol works, for instance, and for some applications there's nothing wrong with it. I wouldn't start a large project using Pg.pm right now, for sure, but I think dismissing code you don't use on the basis that it's old is just silly. The reason we say "upgrade your postgresql" is not because it's old, but because there are _known_ bugs in it, and those bugs eat data. ...and Pg.pm includes a serious security hole in the form of non-existent query escaping which will never be fixed. Are we really discussing the semantics of "rust"? -M
Re: [GENERAL] OT - pg perl DBI question
On Jan 29, 2008, at 1:07 PM, Andrew Sullivan wrote: On Tue, Jan 29, 2008 at 08:14:28AM -0800, David Fetter wrote: I wouldn't trust that library or anything that depends on it if I were you. It's been unmaintained for a *very* long time. Because code rusts when it's sitting around on a hard drive? Pg.pm doesn't get much attention, I agree, but I've actually never run into a (n undocumented) bug with it. Also, for simple Perl access for Postgres-dedicated use, DBI can be a little heavyweight. You mean other than the fact that it doesn't support the V3 protocol, doesn't support escaping parameters, is a one-for-one wrapper for the libpq from eight years ago (and has never been updated since), there is a timing bug from 4 years ago still open (http://rt.cpan.org/Public/Bug/Display.html?id=3177), and the docs include zingers like "Starting with postgresql-6.5 it is required to use large objects only inside a transaction"? The postgresql from eight years ago is also quite rusty. Cheers, M
Re: [GENERAL] Continual Postgres headaches...
On Dec 6, 2007, at 2:22 PM, Weber, Geoffrey M. wrote: I've been trying for quite a while to get Postgresql tuned for use as an OLTP system. I have several PL/pgSQL functions that handle inserts and updates to the main table and several near-real-time daemons written that access the data and can take automated actions on it (email/page concerned people, get complimentary information from a different system, etc.). I started with Postgres 8.1 and am now using 8.2.4 (and have been since its release). I'll try to provide enough information for a decent response, but as I can't obviously put my entire schema and database out there, I'm hoping that I can get some decent guidelines beyond that what I've found though Google, etc. to get this thing tuned better. Most of the data centers in on a central table and has 23 columns, 1 constraint, and 9 indexes. 4 of the indexes are partial. The table usually contains about 3-4 million rows, but I've cut it down to 1.2 million (cut out 2/3 of the data) in an effort to migrate the database to a 2nd sever for more testing. The two partial indexes used the most: 242MB accessed nearly constantly, and 15MB accessed every 5 seconds - but also updated constantly via inserts using the 242MB index. Other than one other 25MB index, the others seem to average around 300MB each, but these aren't used quite as often (usually about every minute or so). My problems really are with performance consistency. I have tweaked the execution so that everything should run with sub-second execution times, but even after everything is running well, I can get at most a week or two of steady running before things start to degrade. Without some examples of reproducible problematic behavior, you are likely to get only hazy responses. With your rate of database changes, you may need to be vacuuming more often (or certain tables more and other tables less). From your description above, it sounds like you are persistently polling the database for changes. 
Have you considered using asynchronous notifications? http://www.postgresql.org/docs/8.2/interactive/sql-listen.html Cheers, M
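[Editor's note: a minimal LISTEN/NOTIFY sketch, so the daemons wake on events instead of polling every few seconds. The channel name is hypothetical.]

```sql
-- In each daemon's session: subscribe once, then block waiting for notifications.
LISTEN new_event;

-- In the PL/pgSQL insert/update functions, after the change is made:
NOTIFY new_event;
-- Note: 8.2 has no payload; NOTIFY with a payload string arrived in 9.0.
-- Notifications are delivered only on commit, so rolled-back work never fires one.
```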
Re: [GENERAL] Disconnects hanging server
On Dec 3, 2007, at 4:16 PM, Brian Wipf wrote: We have a dual 3.0 GHz Intel Dual-core Xserve, running Mac OS X 10.5.1 Leopard Server and PostgreSQL 8.2.5. When we disconnect several clients at a time (30+) in production, the CPU goes through the roof and the server will hang for many seconds where it is completely non-responsive. It seems the busier the server is, the longer the machine will hang. You should run Shark or Instruments to determine where the system is getting hung up. You will likely need to install developer tools. If you need help reading the profilers' output, please join up on an Apple list. In my profiling of PostgreSQL under 10.4 with PostgreSQL 8.1, I found disappointing results with bottlenecks in the mutex-locked stdio. I suspect that the results in 10.5 may be drastically different. Cheers, M
Re: [GENERAL] stripping HTML, SQL injections ...
On Nov 14, 2007, at 4:23 PM, Scott Marlowe wrote: On Nov 14, 2007 2:40 PM, madhtr <[EMAIL PROTECTED]> wrote: Quick question, are there any native functions in PostGreSQL 8.1.4 that will strip HTML tags, escape chars, etc? I can't think of a lot of native functions, but it's sure easy enough to roll your own with things like the regex functionality built in. Please don't do that- there are corner cases where a naive regex can fail, leaving the programmer thinking he is covered when he is not. Various web languages include filtering modules (such as Perl's HTML::Scrubber); in the case of Perl or PHP, the filtering can even be run server-side in a procedural-language function. Furthermore, one shouldn't use an API which allows for SQL injections. Cheers, M
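[Editor's note: on the injection point, server-side parameters keep user input out of the SQL text entirely; client libraries expose the same mechanism through placeholder APIs. A sketch with a hypothetical users table:]

```sql
-- The parameter value is never spliced into the SQL string,
-- so quotes and semicolons inside it are treated as plain data.
PREPARE find_user(text) AS
    SELECT * FROM users WHERE name = $1;
EXECUTE find_user($$o'malley; DROP TABLE users; --$$);
DEALLOCATE find_user;
```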
Re: [GENERAL] Call for Speakers PostgreSQL Conference Fall 2007
On Sep 6, 2007, at 21:10 , Joshua D. Drake wrote: Hello, The PostgreSQL Conference Fall 2007 is shaping up nicely. We are now seeking more speakers. Here is the current lineup: What's the difference between the conference groups at http://www.postgresqlconference.org/ and http://www.pgcon.org/2008/? I am subscribed to general and hackers and this is the first time I've seen this particular conference mentioned. Could the announcements be made on general as well? Do I need to subscribe to advocacy too? Cheers, M
Re: [GENERAL] pg_dump doesn't dump everything?
On Sep 5, 2007, at 18:57 , Liam Slusser wrote: I've been trying to replicate a database but each time I replicate it the performance of the copy is about 100 times slower (~100ms to ~8 seconds for the same query). The only way I have found to replicate it and keep the same performance is doing a hotcopy of the database. Is the hardware identical? Is your postgresql.conf tuned identically? Cheers, M
Re: [GENERAL] server closed the connection unexpectedly
This is pg_log : 2007-08-19 03:00:50 LOG: database system was shut down at 2007-08-19 02:58:26 Malay Peninsula Standard Time 2007-08-19 03:00:50 LOG: checkpoint record is at 0/75A808 2007-08-19 03:00:50 LOG: redo record is at 0/75A808; undo record is at 0/0; shutdown TRUE 2007-08-19 03:00:50 LOG: next transaction ID: 0/1931; next OID: 16737 2007-08-19 03:00:50 LOG: next MultiXactId: 1; next MultiXactOffset: 0 2007-08-19 03:00:51 LOG: database system is ready 2007-08-19 03:01:43 LOG: could not receive data from client: An operation was attempted on something that is not a socket. 2007-08-19 03:01:43 LOG: incomplete startup packet 2007-08-19 03:20:15 LOG: could not receive data from client: An operation was attempted on something that is not a socket. 2007-08-19 03:20:15 LOG: incomplete startup packet 2007-08-19 03:25:30 LOG: could not receive data from client: An operation was attempted on something that is not a socket. 2007-08-19 03:25:30 LOG: incomplete startup packet 2007-08-19 03:27:05 LOG: could not receive data from client: An operation was attempted on something that is not a socket. 2007-08-19 03:27:05 LOG: incomplete startup packet 2007-08-19 03:33:18 WARNING: there is no transaction in progress 2007-08-19 03:33:29 NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "auth_users_pkey" for table "auth_users" 2007-08-19 03:33:30 WARNING: there is no transaction in progress 2007-08-19 03:33:34 LOG: could not receive data from client: An operation was attempted on something that is not a socket. 2007-08-19 03:33:34 LOG: incomplete startup packet Thanks - Original Message - From: "Raymond O'Donnell" <[EMAIL PROTECTED]> To: "Muhyiddin A.M Hayat" <[EMAIL PROTECTED]> Cc: Sent: Sunday, August 19, 2007 3:13 AM Subject: Re: [GENERAL] server closed the connection unexpectedly On 18/08/2007 19:30, Muhyiddin A.M Hayat wrote: somebody help me please You'll need to post a lot more information before anyone can help. Is there anything in the server log? 
- or the Windows event log? Ray. --- Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland [EMAIL PROTECTED] --- ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] server closed the connection unexpectedly
Dear all, i'm unable to connect postgres server with error : C:\Program Files\PostgreSQL\8.2\bin>psql -U postgres siakad Password for user postgres: psql: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. pg_log : 2007-08-19 03:00:50 LOG: database system was shut down at 2007-08-19 02:58:26 Malay Peninsula Standard Time 2007-08-19 03:00:50 LOG: checkpoint record is at 0/75A808 2007-08-19 03:00:50 LOG: redo record is at 0/75A808; undo record is at 0/0; shutdown TRUE 2007-08-19 03:00:50 LOG: next transaction ID: 0/1931; next OID: 16737 2007-08-19 03:00:50 LOG: next MultiXactId: 1; next MultiXactOffset: 0 2007-08-19 03:00:51 LOG: database system is ready 2007-08-19 03:01:43 LOG: could not receive data from client: An operation was attempted on something that is not a socket. 2007-08-19 03:01:43 LOG: incomplete startup packet somebody help me please
[GENERAL] server closed the connection unexpectedly
Dear all, i'm unable to connect postgres server with error : C:\Program Files\PostgreSQL\8.2\bin>psql -U postgres siakad Password for user postgres: psql: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. somebody help me please
Re: [GENERAL] User-Friendly TimeZone List
On Aug 15, 2007, at 13:27 , Naz Gassiep wrote: Hi all, I am still, after quite some time, wrangling over the time zone system in my app. I have sorted out all the internal handling, however I am still uncertain as to what the best way to get the user to select their time zone is. I was thinking of having users just select their timezones from a list which was populated with the contents of the query: select * from pg_timezone_names ; which would look something like this: http://mrnaz.com/tztest.html This, however, is problematic for a number of reasons: 1. The timezone list there isn't exactly user-friendly; there are many Etc/* timezones there, as well as others that would be potentially confusing for users who are trying to select the timezone they are in. 2. If a timezone is removed from the list for whatever reason, then the system will be left with users who have selected a timezone that is no longer a valid choice in the list. The ideal situation would be to maintain my own persistent list of timezones (the way Microsoft maintains their own user-friendly list that they use for Windows) that maps to the timezones embedded in the PG back end, but I haven't the resources to pull this off. Has anyone else worked on a mechanism to allow users to supply the timezone they are in, and if so, do you have any comments on how best to handle the apparently mutually exclusive problems of simplicity for users and accuracy in the back end? At the simple end of the scale, I can't just have users select from a list going from GMT-12 to GMT+12. At the complex end of the scale, I'd just give them the output from the list, and any that are deprecated will result in the user reverting to UTC until they set a new timezone. Don't forget that not all timezones are offset by integer hours from GMT. I, too, am battling timezone handling; I have been using various Perl modules to fill in the missing components, such as better/multilingual timestamp parsing.
Cheers, M
[GENERAL] timezone + DST handling
Hi, I have a CMS where I display the timezone for international meeting dates. I store the dates as follows: startdate | timestamp with time zone timezonename | text This works fine until I hit daylight-savings time, when the name of the timezone changes. So, given a "timestamp with time zone" and a base timezone (non-DST), how can I determine if the date is currently in DST? Is it possible to get the name of the new DST timezone? Thanks! -M ---(end of broadcast)--- TIP 6: explain analyze is your friend
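For the current moment, at least, the pg_timezone_names view already exposes this: its is_dst column says whether the zone is presently observing daylight savings, and abbrev gives the matching abbreviation. A sketch, using 'America/New_York' as a stand-in for the stored timezonename:

```sql
-- Is the named zone currently in DST, and what is it called right now?
SELECT abbrev, utc_offset, is_dst
FROM pg_timezone_names
WHERE name = 'America/New_York';
-- in northern summer this reports EDT with is_dst true; in winter, EST/false
```

Note the caveat: this answers the question for "now", not for an arbitrary stored timestamp.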
Re: [GENERAL] [pgsql-general] In memory tables/databases
On Jul 5, 2007, at 13:20 , Andrew Sullivan wrote: On Sun, Jul 01, 2007 at 11:11:30PM +0200, Alexander Todorov wrote: The question was is there something else that exists in PostgreSQL and will do the same job. Why re-invent the wheel, and make it square? But also, if you don't care whether you keep your data, why on earth are you putting it in an RDBMS? Is it because all your pre-built tools already speak SQL? If you're really after performance, I'm not convinced a SQL-speaking RDBMS (delivered by MySQL or Postgres or anyone else) is what you actually need. Look: there are plenty of scenarios where data one does not care about is linked (in a relational fashion) to data one does care about. One common example is a web session. If your database fails, then the sessions are really irrelevant in the future. Another example is a live queue or snapshot of other data (materialized views?). As long as the database is running, then the data is useful. Such a table can contain foreign keys but no triggers and may not have references to it from "non-temp" tables. Why not have a table type that writes no WAL and is truncated whenever postgres starts? Such a table could then be put in a ramdisk tablespace and there would be no transaction atomicity repercussions. Is there something I'm missing? Claiming that postgresql is simply the wrong tool is silly, especially since it is so close to having the desired behavior. Cheers, M
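The closest existing approximation is a temporary table, which skips WAL but is per-session rather than shared. A sketch of the web-session case (table and column names invented for illustration):

```sql
-- Per-session scratch storage: no WAL, dropped automatically at disconnect.
-- What the post asks for is effectively this, but shared across backends
-- and surviving (truncated) until the next server restart.
CREATE TEMP TABLE web_session (
    session_id text PRIMARY KEY,
    userdata   text
);
```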
Re: [GENERAL] why postgresql over other RDBMS
On May 24, 2007, at 20:39 , Andrew Sullivan wrote: On Thu, May 24, 2007 at 03:25:52PM -0400, A.M. wrote: Wouldn't it be a cool feature to persist transaction states across connections so that a new connection could get access to a subtransaction state? You could do this using an incredibly evil, carefully implemented hack in a connection pool. I'm shuddering at the thought of it, to be honest, so details are left as an exercise for the reader. Actually, a sample implementation could be done using stored procedures and some IPC. It would however require that the receiver poll for requests; the API would probably look very similar to dblink. -M ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] why postgresql over other RDBMS
On May 24, 2007, at 18:21 , Chris Browne wrote: Jan Wieck had a proposal to a similar effect, namely to give some way to get one connection to duplicate the state of another one. This would permit doing a neat parallel decomposition of pg_dump: you could do a 4-way parallelization of it that would function something like the following: - connection 1 opens, establishes the usual serialized mode transaction - connection 1 dumps the table metadata into one or more files in a specified directory - then it forks 3 more connections, and seeds them with the same serialized mode state - it then goes thru and can dump 4 tables concurrently at a time, one apiece to a file in the directory. This could considerably improve speed of dumps, possibly of restores, too. Note that this isn't related to subtransactions... Well, I was thinking that since transactions are now serializable, it should be possible to move the state between existing open transactions. -M ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] why postgresql over other RDBMS
On May 24, 2007, at 18:12 , PFC wrote: Indeed. Wouldn't it be a cool feature to persist transaction states across connections so that a new connection could get access to a subtransaction state? That way, you could make your schema changes and test them with any number of test clients (which designate the state to connect with) and then you would commit when everything works. Actually you can hack this by starting your webserver with only 1 thread, use persistent connections, and disable all commits in the application. But I'd call that "a very hackish hack". Not really; then I can't use any transactions, in which case, I might as well use MySQL. I would like to be able to pass transaction state between connections. -M
Re: [GENERAL] why postgresql over other RDBMS
On May 24, 2007, at 15:57 , Alvaro Herrera wrote: A.M. wrote: Indeed. Wouldn't it be a cool feature to persist transaction states across connections so that a new connection could get access to a subtransaction state? That way, you could make your schema changes and test them with any number of test clients (which designate the state to connect with) and then you would commit when everything works. Unfortunately, the postgresql architecture wouldn't lend itself well to this. Still, it seems like a basic extension of the notion of subtransactions. Hmm, doesn't this Just Work with two-phase commit? 2PC requires that the modifications already be set in concrete. What I suggest is a method for a new connection to insert itself into an existing (sub-)transaction SQL stream, make changes, and commit to the root or parent transaction. In the scenario where changes are pending, only one connection can test the changes: it must be the connection that opened the transaction. Concurrency issues cannot be tested before committing, for example. The implementation could be as simple as decoupling connections from transactions; then a connection could make serialized requests to other backends. A proof-of-concept could certainly be cobbled together with pipes and pl/perl, but the real beef would be the ability to "pass" responsibility for transactions from one connection to another. Cheers, M
Re: [GENERAL] why postgresql over other RDBMS
On May 24, 2007, at 14:29 , Wiebe Cazemier wrote: On Thursday 24 May 2007 17:30, Alexander Staubo wrote: [2] Nobody else has this, I believe, except possibly Ingres and NonStop SQL. This means you can do a "begin transaction", then issue "create table", "alter table", etc. ad nauseam, and in the meantime concurrent transactions will just work. Beautiful for atomically upgrading a production server. Oracle, of course, commits after each DDL statement. If this is such a rare feature, I'm very glad we chose postgresql. I use it all the time, and wouldn't know what to do without it. We circumvented Ruby on Rails' migrations, and just implemented them in SQL. Writing migrations is a breeze this way, and you don't have to hassle with atomicity, or the pain when you discover the migration doesn't work on the production server. Indeed. Wouldn't it be a cool feature to persist transaction states across connections so that a new connection could get access to a subtransaction state? That way, you could make your schema changes and test them with any number of test clients (which designate the state to connect with) and then you would commit when everything works. Unfortunately, the postgresql architecture wouldn't lend itself well to this. Still, it seems like a basic extension of the notion of subtransactions. Cheers, M
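The transactional-DDL point being praised here can be sketched in a few lines (the table name is invented for the example):

```sql
-- All-or-nothing schema migration: a failure anywhere rolls back every step.
BEGIN;
CREATE TABLE widgets (id serial PRIMARY KEY, name text NOT NULL);
ALTER TABLE widgets ADD COLUMN price numeric(10,2);
CREATE INDEX widgets_name_idx ON widgets (name);
COMMIT;  -- concurrent transactions never see a half-applied migration
```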
Re: [GENERAL] cursors in postgres
On Mar 29, 2007, at 10:47 , Jasbinder Singh Bali wrote: Hi, I've written a function using cursors as follows: can anyone please comment on the text in red. -- CREATE OR REPLACE FUNCTION sp_insert_tbl_email_address(int4, text, text, text) RETURNS void AS $BODY$ DECLARE uid int4 ; src text; local text; domain text; cur_dup_check CURSOR FOR SELECT * FROM tbl_email_address WHERE unmask_id=$1 and source=$2 and email_local=$3 and email_domain=$4; BEGIN OPEN cur_dup_check ; FETCH cur_dup_check INTO uid,src,local,domain; --need to check the fetch status of the cursor whether any rows were returned or not and keep moving to the next record till fetch status is not zero Huh? Why don't you simply declare a unique constraint across the columns you require? -M
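Concretely, the duplicate check the cursor does by hand can be handed to the database. The column names below are taken from the posted function; the constraint name is made up:

```sql
-- With this constraint in place, a duplicate insert simply raises a
-- unique_violation error; no cursor-based pre-check is needed.
ALTER TABLE tbl_email_address
    ADD CONSTRAINT tbl_email_address_no_dups
    UNIQUE (unmask_id, source, email_local, email_domain);
```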
Re: [GENERAL] exception handling in plperlu
On Mar 16, 2007, at 10:54 , Jasbinder Singh Bali wrote: just wondering why it doesn't let me put my $dbh=DBI->connect("dbi:Pg:dbname=dbunmask; host=192.168.0.120; port=5432;", "", ""); in eval; it says Global symbol "$dbh" requires explicit package name at <every line where $dbh is used> There is a mailing list for DBD::Pg: http://gborg.postgresql.org/mailman/listinfo/dbdpg-general/ But your problem is not related to the driver, rather it's a Perl issue. Your variable is defined within the block, so its scope is the eval block ("my" implies lexical scoping). If you define it outside the block and assign inside the block, then it will work. Cheers, M
Re: [GENERAL] PostgreSQL and embedded PC with Compact Flash?
On Jan 17, 2007, at 9:52 , k.novo wrote: Hello, I have strange question and idea. Use PostgreSQL in embedded PC (with Linux) as data storage for collection measure data. Problem is in limited Write cycle in Compact Flash HDD (about 100.000) My idea is collect data to temporary table in RAM (RAM Disk) and once day rewrite all collect data to main table to Compact Flash. Question is. How set PostgreSQL and according as it is possible with PostgreSQL? Is possible minimize soever Write entry to PostgreSQL files on CF? Possibly, recommended another SQL server. What I do is create a RAM file system and run the postgresql db in it. Every 30 seconds, the ramfs is synced to the CF. I can only get away with this because the data is mostly transient in nature, i.e. if the box fails, the data is useless (network appliance). I sync only to maintain some customer configuration in the database. In my testing, the db manages to recover on failure. If it doesn't, the contingency plan is to load in a default configuration; customers can make backups. It's not beautiful, but it works- also it's obviously an order of magnitude faster than fsyncing with CF. Unfortunately, a lot of data ends up 3 times in RAM: 1) fs cache of db on CF 2) ram fs 3) embedded application If someone has a better way to avoid CF writes, I'm all ears. Note that any recent CF will do wear-leveling and give you more than 100,000 (more like a million) writes per sector and they are constantly improving, so this may all be moot soon- or so I hope. For this type of application, I wish postgresql offered global temp tables. čau. -M ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Trying to connect to an Oracle instance...
On Thu, July 13, 2006 11:03 am, Tony Caduto wrote: > Spendius wrote: > >> Hi, >> I've been trying to perform a connection to an Oracle DB for a while, >> to no avail. Here is what I get at my psql prompt: postdb=# Select >> dblink_connect('login','hostaddr= port=1521 \ >> > If you are trying to connect to a Oracle DB, don't you need to be using > DBI Link instead of DBlink? > I thought DBLink was just for Postgresql databases? That's true. But why should dblink crash? That's a bug. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Null and Void() - Or,
On Wed, June 28, 2006 5:31 am, [EMAIL PROTECTED] wrote: > Date and Pascal hate nulls. One even goes so far as to say that if you > permit NULLs in a database, then the results from *every* query are suspect. > So they perform backflips suggesting ways to avoid nulls. > None, so far, seem appealing. This has been discussed to death on this list and on every other SQL forum, but since you asked... To understand NULL, there is a little history that needs to be brought up. The original relational model proposal by Codd had no provisions for non-existent data. Mathematical purity is a strong argument against NULL. Another one is just as simple: "NULL represents the absence of data, so it is the antithesis of what should be stored in a _data_base." In Codd's later papers, he comes up with several distinct NULLs representing different states of unknowledge. Date is vehemently opposed to NULL for the aforementioned reasons. NULL is nothing more than a shortcut. SQL logic has to do backflips to accommodate it: notice how NULL != NULL. Indeed, one NULL can mean a variety of things even in the same context! ("Bob doesn't know","HR doesn't know","No one cares","Not applicable", etc.) In this paper: http://web.onetel.com/~hughdarwen/TheThirdManifesto/Missing-info-without-nulls.pdf Darwen discusses relational design without NULLs (his solution requires support for distributed keys, which PostgreSQL admittedly doesn't support) but the premise is simply that data can be partitioned so that the lack of knowledge is implicit in its absence (which is part of the relational model: it should be a closed system of truths). Obviously, for practical purposes, NULL isn't going anywhere fast for SQL databases, but it is really good to know the background and rationale for your own and others' design decisions. I hope this has helped. -M
Re: [GENERAL] cpan perl module - plperlu danger?
On Thu, June 22, 2006 7:41 am, Philippe Lang wrote: > Hi, > > > I would like to access a cpan perl module (FSA::Rules) in a perl > procedure, inside Postgresql 8.1.4. FSA::Rules is a library that allows you > to define a finite state machine in perl. No disk access. > > In order to do that, I need to use plperlu instead of plperl. And it > works just fine. > > What exactly is the danger of using a cpan library under plperlu? How can I > make sure it won't crash my system, even when run concurrently by a few > people? You can't be sure; that is what plperl is for. [But even Safe.pm has had dozens of bugs revealed over the years; caveat emptor.] A workaround is to create a set of plperlu functions which can be called by other functions (using security definer, if necessary). This means that you have to wrap the major functional components in perl. Another option is to use plperl as a template to create your own procedural language which includes all the modules you need while still locking down everything else. Good luck. -M
Re: [GENERAL] [INTERFACES] help with error message from perl Pg
On Tue, June 20, 2006 10:44 am, Tom Lane wrote: > 78 would be ASCII 'N', but that's not really significant AFAICS. The > problem here is that the frontend and backend have lost sync: the server is > expecting to find a message beginning at a place in the frontend data > stream that evidently isn't the start of a message. In short, the > frontend has sent corrupted data of some sort. > > Martijn's theory of inadequately locked threaded access is certainly one > likely way this can happen, but it's not the only one. It might be useful > for you to capture the data stream (with something like tcpdump) and try > to get more information about the nature of the corruption. Frequently, if > you can identify "ah-hah, THIS data is being inserted into the middle of > THAT" or whatever, the cause becomes obvious. > > > Also, before you spend too much time on this, make sure your DBI and > DBD::Pg modules are up-to-date. If it's a bug in that level, it'd be > foolish to waste much of your own time chasing it. If you look at his example code, he's not even using DBI; he's using Pg.pm, which is for all practical purposes deprecated and supplanted by DBD::Pg.
Re: [GENERAL] help with error message from perl Pg
On Tue, June 20, 2006 10:20 am, Geoffrey wrote: > > We considered that and have verified that we are not closing it. But, > the question came up, should we be passing it by reference or value? We > are doing the following: > > my $conn = Pg::connectdb ("dbname=$db port=$port"); . > . > my $retVal = &$prog($conn, @args); > > Question is, should we be doing: > > > my $retVal = &$prog(\$conn, @args); > > > FYI, there's no expectation of changing $conn in any way within the > subroutines. You are using a completely outdated interface to postgres. Looking on CPAN, Pg.pm was last updated 04 Apr 2000. Which version of postgresql are you using? It is likely that the old interface blows up when connecting to any postgresql released after 2000. -M
Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully
> Now, there's another thing that makes it amazingly hard to displace: > imagining what would be better *enough* to justify the many millions of > people-years and even more billions of dollars needed to move away from > it. Despite Date's many whines over the decades, his still-vaporware > Relational Model doesn't even vaguely approximate that > criterion. 1) Please understand that Date is not a programmer; he is a lecturer. Therefore he is not undertaking, nor does he wish to undertake, any implementation. Ideally, he wouldn't endorse any particular implementation (but he does; that's his option). 2) Re: "still-vaporware Relational Model": the relational model is a mathematical model for data representation. Your comment makes as much sense as claiming that "Newtonian physics" is vaporware. 3) From your comments, it is clear that you wish to only consider existing software as proof of usefulness and you are not interested in considering alternative ideas. This is precisely the difference between a researcher and a rote programmer. I would rather be someone in between. Regardless of what you think of the relational model, I would urge you to be more open-minded, even about "vaporware". Much of the world's most interesting software has not yet been written. I'm done with this thread. Good luck. -M
Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully
>> Yes, and all SQL products worth their salt include some languages to >> provide iteration and other processing that SQL can't do or doesn't do >> well. Why must the rules be different for a truly relational db. (see >> http://dbappbuilder.sourceforge.net/Rel.html) > I may get interested if some actual software which implements Date's > Relational Model ever comes out. Or I may not, as I am getting lots > of useful work done using SQL and friends. We empiricists are like that. You mean like the Java software I pointed out in the link above? It's an implementation of Tutorial D. >>> What say we just stop right there and call Date's Relational Model >>> what it is: a silly edifice built atop wrong premises. >> >> Using that logic, we should kick SQL to the curb too. >> > > Um, no. You haven't actually used the logic. You're just saying you > did, which is different. I've got to say you're reminding me of just about > every Libertarian, Communist, or other kind of doctrinaire moonbat I've > run across. Having a theory is nice, but when reality bumps up against > it, that means the theory, not reality, is wrong. What's with the insults? Cool off or something... -M ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully
On Fri, June 9, 2006 11:45 am, David Fetter wrote: > On Fri, Jun 09, 2006 at 05:20:46PM +0200, Martijn van Oosterhout wrote: > >> On Fri, Jun 09, 2006 at 07:09:12AM -0400, Agent M wrote: >> >>> Well, the Date argument against NULLs (and he never endorsed them, >>> or so he claims) is that they are not data: they represent the absence >>> of data, so why put non-data in a _data_base. >> >> At this point you could start a whole philosophical discussion about >> whether knowing you don't know something is a fact worth storing. > > And to me, the answer is an unqualified "yes." A state of ignorance > is an important piece of information by itself. > > For example, that I don't know someone's birthdate is important. When > I'm trying to figure out when to send a birthday card, knowing that I > don't know this piece of information means that I take a different action > "decide whether to try to find out what the birthdate is." from > the action I would take if I didn't know that I don't know the birthdate, > which is "rummage through all my records trying to find the birthdate." So you should normalize and add relations to represent the state adequately. NULL doesn't give you enough information anyway: does NULL in a birthday column mean "no birthday", "n/a" (a business doesn't have a birthday), "not born yet", etc... Using real data, you can represent any of these states.
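The normalization being suggested can be sketched like this (table and column names are invented for the example, following the decomposition in Darwen's paper):

```sql
-- A person row always exists; a birthdate row exists only when the date is known.
CREATE TABLE person (
    person_id integer PRIMARY KEY,
    name      text NOT NULL
);
CREATE TABLE person_birthdate (
    person_id integer PRIMARY KEY REFERENCES person,
    born      date NOT NULL
);
-- "We don't know the birthdate" is now the absence of a row, not a NULL,
-- and further tables (person_not_yet_born, ...) can distinguish the other cases.
```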
Re: [GENERAL] not valid character for Unicode
On Fri, June 9, 2006 11:17 am, Adam Witney wrote: > > > Martijn van Oosterhout wrote: > >> On Fri, Jun 09, 2006 at 03:59:52PM +0100, Adam Witney wrote: >> >>> Hi, >>> >>> >>> I'm trying to upgrade from 7.4 -> 8.1 but it is failing with Unicode >>> errors. The offending character is the Greek character mu (often used >>> for micro). Here is an offending string "[EMAIL PROTECTED]" (in case it >>> doesn't >>> appear in the email, the mu is between the B and the G) >>> >>> Any ideas why this character is not valid in Unicode? >>> >> >> It's a valid unicode character, it's just that you haven't encoded it in >> unicode. It's probably in Latin-1. In that case, you need to specify it >> in the client encoding... > > Hi Martijn, > > > thanks for your quick response. > > OK, I am a bit confused by all this encoding stuff... I don't really know > how to encode it in unicode? this is a text string that is extracted from a > text file, I just put it in an INSERT statement. > > I have to replace fields with this in it with a valid string that will > load into 8.1, do you know how I would do the conversion? For migration, you should use pg_dump; it's not clear from your email whether you are doing that. If you typed up some SQL in Windows which you want to load into postgres, you might try: set client_encoding to 'LATIN1'; at the top of your script. -M
Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully
Also, Date mentions the notion that tables don't have to be mapped to individual files. For example, if the types of queries are known in advance, it could be possible to rearrange the data to be optimal for those queries. Currently, tables are just big serialized arrays. On Fri, June 9, 2006 9:55 am, Aaron Bingham wrote: > [EMAIL PROTECTED] wrote: > >> I'm reading, and enjoying immensely, Fabian Pascal's book "Practical >> Issues in Database Management." >> >> >> > I also found this book very useful when I first started doing serious > database work. For a more thorough treatment of many of these issues, see > An Introduction to Database Systems by Chris Date. The latter book > is so full of detail that it is sometimes hard to follow, but it's worth > the effort. >> Though I've just gotten started with the book, he seems to be saying >> that modern RDBMSs aren't as faithful to relational theory as they ought >> to be, and that this has many *practical* consequences, e.g. lack of >> functionality. >> >> Given that PostgreSQL is open source, it seems a more likely candidate >> for addressing Pascal's concerns. At least the potential is there. >> >> > Although some DBMSs have invented new ways to break the relational > model, the fundamental problems are in SQL. No DBMS based on SQL is going > to be able to support RM correctly. >> Some questions: >> >> >> 1) Is PostgreSQL more faithful to relational theory? If so, do you find >> yourself using the additional functionality afforded by this? e.g. >> does it really matter to what you do in your daily work. >> > Within the limitations imposed by the SQL standard, PostgreSQL seems to > do about as well as could be expected, but falls short as all SQL DBMSs > must. For example, PostgreSQL allows NULLs and duplicate rows (there are > preventive measures against both of these but you have to be careful to > avoid them, and sometimes you can't). 
One feature of the RM that PostgreSQL lacks > is nested relations (a bad idea for base table design but useful in query > results). > > Regards, > > -- > Aaron Bingham > Senior Software Engineer > Cenix BioScience GmbH
[GENERAL] convert row() to array
Is there a function to convert a row record into an array (discarding column info)? -M
[GENERAL] SELECT table_type FROM table;
By accident, a colleague came across something unexpected. Here is a simple example: create table testo(gonk integer,spoodle text); CREATE TABLE agentm=# insert into testo values(1,'hello'); INSERT 0 1 agentm=# insert into testo values(2,'text'); INSERT 0 1 agentm=# select testo from testo; testo --- (1,hello) (2,text) (2 rows) Obviously, this is intentional behavior but where is it documented? I am aware that testo is also a type and that a set is returned for each row but this must be a special case, no? Alternate types don't seem to apply. agentm=# create type nice as (gonk integer,spoodle text); CREATE TYPE agentm=# select nice from testo; ERROR: column "nice" does not exist agentm=# select *::nice from testo; ERROR: syntax error at or near "::" at character 9 LINE 1: select *::nice from testo; ^ agentm=# select cast(* as nice) from testo; ERROR: syntax error at or near "*" at character 13 LINE 1: select cast(* as nice) from testo; ^ Also, how can I turn each set row into an array? -M ---(end of broadcast)--- TIP 6: explain analyze is your friend
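On the failed casts above, the whole-row value can be converted to a compatible composite type by rebuilding it with a row constructor. A hedged sketch against the tables from the example (this syntax works on recent versions; older releases may reject the cast):

```sql
-- Re-wrap the columns in a ROW() constructor and cast to the composite type;
-- this works because "nice" has the same field list as testo.
SELECT ROW(gonk, spoodle)::nice FROM testo;
```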
Re: [GENERAL] GUI Interface
It would be great if by default postgres used NOTIFY after any schema changes. Then, listening UIs could be aware of changes behind the scenes without polling or manual refreshing. -M
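This feature did not exist when the message above was written, but later PostgreSQL releases (9.3 and up) added event triggers, which allow roughly this behavior to be built at the user level. A sketch, with invented names:

```sql
-- Fire a NOTIFY at the end of every DDL command (PostgreSQL 9.3+).
CREATE FUNCTION notify_schema_change() RETURNS event_trigger AS $$
BEGIN
    NOTIFY schema_changed;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER ddl_notify ON ddl_command_end
    EXECUTE PROCEDURE notify_schema_change();

-- A listening UI issues:
LISTEN schema_changed;
```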
Re: [GENERAL] tablespace and backup
You will need to provide more information about the data requirement, such as column types, what you need to search for, and the actual queries and execution plans. Purely as a guess, it seems like you haven't tried partial indexes: http://www.postgresql.org/docs/8.1/interactive/sql-createindex.html

Mit freundlichen Gruessen aus den USA,
M

On Sat, May 6, 2006 2:58 pm, Michelle Konzack wrote:
> Hello,
>
> For 4 weeks I have imported (text) data from 50 DVD's from a customer
> into my PostgreSQL, and now I have a very big problem... The main table
> (called 'timeline') is around 350 GByte in size...
> ...and searching is hell!
>
> Since I have been trying to redesign my database for some time, I would
> like to use ONE table per year using tablespaces, and would like to
> know what happens with a backup and with restoring it.
>
> How does the restore know where to place things, etc.?
>
> And is it possible to restore a single table if, for example, I have
> only the one disk where the tablespace was created?
>
> Or would it be better to run several postmasters, using one SCSI or
> SATA (WD Raptor) HDD of 150 GByte for each database?
>
> Currently I am using a 3Ware 3w8500-12S with 2 x WD1500GD (OS, Raid-1)
> and 8 HDD's using Raid-5, but I can switch to 4 x Raid-1 plus two "new"
> HDD's. I do not like LVM because of too much negative experience.
>
> Or would it generally be better to use 1U server racks, each with one
> Raid-1 of 150 GByte? (The prices for 1U servers are falling in
> Germany.)
>
> My current server eats 2 x 4U for the database, 4U for the webserver
> and 3 x 4U for the binaries (1.8 TB of original documents of any kind).
>
> Those three servers plus a very big Sun machine are connected via a
> CISCO to a SONET Dual STM-4 (since end of March). Maybe it is relevant.
>
> Thanks
> Michelle Konzack
>
> --
> Linux-User #280138 with the Linux Counter, http://counter.li.org/
> # Debian GNU/Linux Consultant #
> Michelle Konzack, Apt. 917, 50, rue de Soultz, 67100 Strasbourg/France
> Tel. 0033/3/88452356
> ICQ #328449886, MSM LinuxMichi, IRC #Debian (irc.icq.com)
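A hedged sketch of the partial-index suggestion above; the column name and date predicate are invented, since the actual schema of 'timeline' isn't given in the thread:

```sql
-- If most searches hit only a known slice of the 350 GB table,
-- index just that slice (hypothetical column trx_date):
CREATE INDEX timeline_2006_idx
    ON timeline (trx_date)
    WHERE trx_date >= DATE '2006-01-01';

-- A query whose WHERE clause implies the index predicate can use
-- the much smaller partial index:
SELECT * FROM timeline WHERE trx_date >= DATE '2006-03-01';
```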
Re: [GENERAL] Is PostgreSQL an easy choice for a large CMS?
Just to round out the suggestions: if I remember correctly, the OP mentioned something about chat. For entirely dynamic, disposable data, perhaps a reliable database isn't what is called for at all. If the power shuts off, it may not matter that some chat log is lost. I suggest using the right tool for the job, and PostgreSQL doesn't claim to cover all the bases. -M
Re: [GENERAL] sudo-like behavior
On Thu, April 20, 2006 4:21 pm, Tom Lane wrote:
> "A.M." <[EMAIL PROTECTED]> writes:
>> It seems I am stuck so please allow me to propose an extension:
>> SET SESSION AUTHORIZATION user [WITH PASSWORD 'password'];
>
> This idea is extremely unlikely to be accepted, as the password would
> be at risk of exposure in places like the pg_stat_activity view.
>
> I think the correct way to do what you want is via a SECURITY DEFINER
> function.

Perhaps I can't wrap my head around it: I have the SQL as a string in a table. I take it you propose that I accept only function names and allow users to create SECURITY DEFINER functions, which I then call as the superuser (carefully checking for the SECURITY DEFINER flag). What about commands that can't be run inside transactions? I guess there is no way to stream arbitrary SQL in a permissions sandbox if the original login user isn't the one I want. The SECURITY DEFINER method is a good enough workaround. Thanks. -M
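For readers of the archive, a sketch of the workaround Tom describes; the function, table, and role names are invented. A SECURITY DEFINER function runs with its owner's privileges, so a function owned by the unprivileged role can be called from the superuser daemon without the daemon's rights leaking into the job:

```sql
-- Created by (or reassigned to) the unprivileged role:
CREATE FUNCTION run_my_job() RETURNS void AS $$
BEGIN
    -- The job's SQL, executing with the owner's rights, e.g.:
    DELETE FROM app_log WHERE logged_at < now() - interval '30 days';
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

ALTER FUNCTION run_my_job() OWNER TO job_owner;

-- The superuser daemon then simply calls:
SELECT run_my_job();
```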
[GENERAL] sudo-like behavior
Hello, I have written a crontab-like daemon which accepts jobs from users through a table and executes SQL statements after certain events or intervals. This daemon maintains a persistent connection to the database as a superuser. The problem is that I wish to run arbitrary SQL as an unprivileged user, but SET SESSION AUTHORIZATION is easily reversed via RESET SESSION AUTHORIZATION. Since I don't have the role's password, I cannot connect as that user through a secondary connection.

It seems I am stuck, so please allow me to propose an extension:

SET SESSION AUTHORIZATION user [WITH PASSWORD 'password'];

If a password is specified, then any call to RESET SESSION AUTHORIZATION would also need to include the WITH PASSWORD clause (and the correct password) to be successful. This would allow blocks of foreign code to be executed as an arbitrary user. I am not sure this would work for SET ROLE because of role inheritance. Does anyone have a better idea?

Thanks, -M
[GENERAL] How to Create View
Dear all, I have these tables:

Table Billing:
 id | trx_date   | trx_time | depart | payment_method | billing_amount | amount_paid | balance  | creator
  1 | 10/09/2003 | 21:55:02 | Resto  | Visa           | 13.800,00      | 10.000,00   | 3.800,00 | middink

Table Payment:
 id | trx_date   | trx_time | descriptions | payment_method | amount   | creator
  1 | 10/08/2003 | 18:17:40 | Payment      | Cash           | 2.000,00 | middink

I would like to create a "View" from the above tables with a result that looks like:

 trx_date   | trx_time | descriptions                            | payment_method | debet    | credit   | balance  | creator
 10/09/2003 | 21:55:02 | Resto Billing 13.800,00 Paid: 10.000,00 | Visa           | 3.800,00 |          | 3.800,00 | middink
 10/08/2003 | 18:17:40 | Payment                                 | Cash           |          | 2.000,00 | 1.800,00 | middink

How can I create a View like the above?
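No reply is archived here, but one hedged sketch of a starting point (column names taken from the tables above; the running balance is omitted because it requires ordering-dependent arithmetic) is a UNION ALL of the two tables into one ledger-style view:

```sql
CREATE VIEW ledger AS
SELECT trx_date, trx_time,
       depart          AS descriptions,
       payment_method,
       billing_amount  AS debet,
       NULL::numeric   AS credit,
       creator
  FROM billing
UNION ALL
SELECT trx_date, trx_time,
       descriptions,
       payment_method,
       NULL::numeric   AS debet,
       amount          AS credit,
       creator
  FROM payment;
```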
[GENERAL] The NT services Cygwin PostgreSQL installation
How do I install Cygwin PostgreSQL as an NT service on Windows 2000? I have done all the procedures in the postgresql-7.3.4.README file, but I get this error:

$ net start postmaster
The postmaster service is starting.
The postmaster service could not be started.

The service did not report an error.

More help is available by typing NET HELPMSG 3534.

What can I do?