Re: [GENERAL] Seek within Large Object, within PL/* function?
Adam Seering wrote:
> Thanks for the link; still trying to figure out how to use these
> functions, though. The link that you sent suggests that these commands
> are really only useful for writing out data to server-side files, which
> isn't so much what I want. What I really want is to lo_open a file,
> then lo_lseek to a particular address and loread a constant number of
> bytes (to be returned as the query result), then lo_close the file.

Out of curiosity, I've just quickly tried this:

  CREATE OR REPLACE FUNCTION lo_part(oid, integer, integer) RETURNS bytea AS $$
  declare
    fd integer;
    c bytea;
    i integer;
  begin
    fd = lo_open($1, 262144);  -- INV_READ
    i = lo_lseek(fd, $2, 0);   -- arg 2 is offset
    c = loread(fd, $3);        -- arg 3 is size
    i = lo_close(fd);
    return c;
  end;
  $$ LANGUAGE plpgsql;

...and it appears to work; isn't that what you say you need?

> I'm currently stuck on the easy part: lo_open always returns 0, which
> gives me "ERROR: invalid large-object descriptor: 0" when I try to use
> it with loread.

Generally that's because you've forgotten to start a transaction. lo_* functions need to be executed inside a transaction, otherwise you get that error (the message is not exactly helpful in this particular case, admittedly!)

Best regards,
-- Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
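As Daniel notes, the lo_* calls only work inside an explicit transaction, because the descriptor returned by lo_open is only valid for the life of that transaction. A minimal sketch of calling the function above (the OID 16385 is a placeholder for a real large-object OID, not a value from the thread):

```sql
-- Hypothetical session: 16385 stands in for the OID of an existing large object.
BEGIN;
SELECT lo_part(16385, 0, 1024);  -- read the first 1024 bytes
COMMIT;
```

Running the same SELECT in autocommit mode, outside BEGIN/COMMIT, reproduces the "invalid large-object descriptor" error Adam describes.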
Re: [GENERAL] 8.3 libpq.dll not working on some versions of windows
--- On Sat, 15/11/08, Tony Caduto <[EMAIL PROTECTED]> wrote:
> Hi,
> We have been running into issues where the 8.3.x versions of libpq.dll
> will not load in certain versions of Windows and WINE (it does not load
> at all on WINE).
>
> It seems to be hit and miss on Windows XP; it mostly seems to affect
> SP3 and some SP2 installs of XP.
>
> I have only been able to get around this by installing a much older
> version of libpq.dll. And I did have all the dependencies installed
> along with the DLL; it just plain refuses to load.

Pretty sure I've used most 8.3.x versions here on both SP2 and SP3. How have you checked that you have all the dependencies? (I like depends.exe.)

http://www.dependencywalker.com/
Re: [GENERAL] unsigned int type
On Sat, 2008-11-15 at 23:27 +0100, Enrico Pirozzi wrote:
> Hi all,
> Is there any implementation of an unsigned int type in PostgreSQL 8.3.x?

No. Just use bigint with a check constraint or a domain. There also might be (I don't recall) a project over at http://www.pgfoundry.org/ that provides the functionality.

Joshua D. Drake
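A sketch of the domain approach Joshua suggests; `uint4` is just a name chosen here for illustration, not a built-in type:

```sql
-- Emulate a 32-bit unsigned integer: store it in a bigint (which can hold
-- the full 0..4294967295 range) and enforce the bounds with a CHECK.
CREATE DOMAIN uint4 AS bigint
    CHECK (VALUE >= 0 AND VALUE <= 4294967295);

-- The domain can then be used anywhere a type can.
CREATE TABLE counters (
    id   serial PRIMARY KEY,
    hits uint4 NOT NULL DEFAULT 0
);
```

Inserting a negative value or anything above 4294967295 into `hits` then fails with a domain check violation.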
[GENERAL] 8.3 libpq.dll not working on some versions of windows
Hi,

We have been running into issues where the 8.3.x versions of libpq.dll will not load in certain versions of Windows and WINE (it does not load at all on WINE). It seems to be hit and miss on Windows XP; it mostly seems to affect SP3 and some SP2 installs of XP.

I have only been able to get around this by installing a much older version of libpq.dll. And I did have all the dependencies installed along with the DLL; it just plain refuses to load. I also checked that there were no rogue copies of the files in system32.

Thanks,
Tony
[GENERAL] unsigned int type
Hi all,
Is there any implementation of an unsigned int type in PostgreSQL 8.3.x?

Regards,
Enrico

--
That's one small step for man; one giant leap for mankind
www.enricopirozzi.info [EMAIL PROTECTED] Skype sscotty71
Re: [GENERAL] Vacuum Full is *hideously* slow!
Thanks Scott. Could be my I/O, as I am on 15k Raptor SATA drives with RAID 1 only. Anyway, reindexing happened fast (12 minutes) and things are now humming along.

How long should CLUSTER take on a db that's about 5.5GB in size? Is it worth doing on a production db? The db is running fast now anyway, so the only reason I am even thinking about this is because of the results of a VACUUM ANALYZE, which gives me this:

  INFO:  "links": found 5427 removable, 8876599 nonremovable row versions in 483102 pages
  DETAIL:  96 dead row versions cannot be removed yet.
  There were 2135991 unused item pointers.
  235 pages contain useful free space.

My semi-intelligent question: in the output above, should that figure of 483,102 pages in any way affect my setting for the max_fsm_pages variable in postgresql.conf?
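For context on that question: on 8.x, max_fsm_pages must be large enough to cover the pages with reclaimable free space across the whole cluster, or VACUUM silently stops tracking some of it (the setting disappeared in 8.4, when the free-space map became self-managing). A rough sanity check, sketched here as an assumption-laden comparison rather than an exact rule:

```sql
-- Rough check on an 8.x cluster: compare the configured FSM capacity
-- against the cluster's total page count. If total_pages regularly
-- dwarfs max_fsm_pages, free space goes untracked and bloat accumulates
-- between vacuums.
SHOW max_fsm_pages;
SELECT sum(relpages) AS total_pages FROM pg_class;
-- Changing max_fsm_pages in postgresql.conf requires a server restart.
```

VACUUM VERBOSE on these versions also prints a cluster-wide FSM summary at the end, which is the more direct way to see whether the map is overflowing.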
Re: [GENERAL] PostgreSQL Windows x64 Build
On Sat, Nov 15, 2008 at 2:45 PM, Serge Fonville <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have been using PostgreSQL for quite a while now.
> And (pure out of interest) been trying to build a Windows x64 version of it.

There was a discussion of this very subject here a few months ago; you should look that up. It explained a lot of the reasons there isn't currently a 64-bit Windows build. One of the reasons I remember is that PostgreSQL's "shared memory" design is slow when you allocate large amounts of it in Windows, and that's one of the few reasons for needing a 64-bit version of pgsql. I remember a few more reasons popping up. I'm guessing it's a subject for the -hackers list though.
Re: [GENERAL] Vacuum Full is *hideously* slow!
On Sat, Nov 15, 2008 at 2:36 PM, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> Hi.
>
> Per this thread:
> http://archives.postgresql.org/pgsql-general/2008-11/msg00608.php -- I
> think I understood that the time had come for my db to have a VACUUM
> FULL. (Regular autovacuum etc is working)
>
> I know a full vacuum is slow. A simple Google search had suggested so.
>
> But I had no idea it would take HOURS! I started the process against a
> 5GB database (about 8.5 million rows in all) and it went on for more
> than an hour. I had to kill the process.

Your machine must have a pretty slow I/O subsystem. Recently our main production database, which is in the 25GB range, had a few rogue queries (updates without WHERE clauses, gotta love 'em!) that bloated a few tables from 300,000 rows to 300,000 live and about 10,000,000 dead rows. We took our system down late at night for maintenance, vacuum fulled and then reindexed the whole thing, and it took approximately 20 minutes to run.

Note that you really should take your app offline if you're vacuuming full any tables, since it locks them anyway, and there's no reason for users to sit and stare at an hourglass waiting for a response that isn't coming for quite some time, as well as producing more load on a db that's already working pretty hard.

> I am now reindexing just to be sure.

Always a good idea after a vacuum full, as a vacuum full can bloat indexes.

> Is this normal? If a vacuum full takes hours or even days then what's
> the point?

Define Normal (sorry, that's a favorite book of mine, I really couldn't resist). Normal depends a lot on your server and the size of the tables being vacuumed. Since my main db servers have 12 spindles under the DB and 8 cores to run on, I can run vacuum fulls on several tables at once and get the time down to about 5 to 8 minutes for the whole db. If I'm in a hurry. I'm usually not.

If you're running on a single 7200RPM SATA drive, don't expect it to complete a vacuum on a good-sized dataset as fast as a lot of drives under a fast RAID controller. Vacuum is VERY I/O dependent.

> I read here -
> http://archives.postgresql.org/pgsql-performance/2005-07/msg00375.php
> - that it's better to drop the indices, then vacuum, and then recreate
> the indices. Is this true? This is also a bad decision for production
> servers, but would this be better?

Yep, it is. Since the indexes are likely to get bloated anyway, you might as well just drop them and recreate them. OR cluster your tables on your favorite index, which will free all the dead space plus make the table faster for things ordered by that index.

The real question is what led to you needing vacuum full instead of regular vacuum, and what are you doing to prevent it in the future.
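The CLUSTER alternative Scott mentions can be sketched like this; the table and index names are illustrative, borrowed from the VACUUM output quoted earlier in the thread, and the `USING` syntax assumes 8.3 or later:

```sql
-- CLUSTER rewrites the table in index order, discarding all dead rows and
-- rebuilding its indexes as a side effect, so no separate REINDEX is needed.
-- It takes an exclusive lock for the duration, so run it in a maintenance
-- window, and re-gather statistics afterwards.
CLUSTER links USING links_pkey;
ANALYZE links;
```

Like VACUUM FULL, this needs enough free disk space to hold a second copy of the table while it is being rewritten.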
[GENERAL] PostgreSQL Windows x64 Build
Hi,

I have been using PostgreSQL for quite a while now, and (purely out of interest) have been trying to build a Windows x64 version of it. I suspect I need either mingw-w64 or Visual Studio. I have already found x64 versions of some of the required tools. Unfortunately it seems 'impossible' to build an x64 version of PostgreSQL, due to a lot of 'choices' inside the code. I believe I do not understand enough of it at this time, but a lot of reading and a little help would make the chances of success much bigger. Does anyone have pointers on how I should go about this and what to do?

I am currently running Windows Vista x64 and have mingw-w64 and Visual Studio 2008 installed.

Thanks a lot in advance.

Serge Fonville
[GENERAL] Vacuum Full is *hideously* slow!
Hi.

Per this thread: http://archives.postgresql.org/pgsql-general/2008-11/msg00608.php -- I think I understood that the time had come for my db to have a VACUUM FULL. (Regular autovacuum etc is working.)

I know a full vacuum is slow. A simple Google search had suggested so. But I had no idea it would take HOURS! I started the process against a 5GB database (about 8.5 million rows in all) and it went on for more than an hour. I had to kill the process. I am now reindexing just to be sure.

Is this normal? If a vacuum full takes hours or even days, then what's the point?

I read here - http://archives.postgresql.org/pgsql-performance/2005-07/msg00375.php - that it's better to drop the indices, then vacuum, and then recreate the indices. Is this true? This is also a bad decision for production servers, but would this be better?

Thanks!
Re: [GENERAL] Seek within Large Object, within PL/* function?
Hello

I am not sure, but I think you are the first to try to work with large objects inside PL. The current LO interface works well on the client side, but probably doesn't work on the server side via the SPI interface. I am sure something is possible, but you would need to do some C hacking.

regards
Pavel Stehule

2008/11/15 Adam Seering <[EMAIL PROTECTED]>:
> Klint Gore wrote:
>>> I'm playing around with storing custom preprocessed data structures
>>> within Large Objects. I'd like to be able to write a custom function
>>> that will, within a query, let me select out particular bytestrings
>>> from the middle of a Large Object (within C, I think 'lo_lseek' and
>>> 'lo_read' would get me what I want).
>>
>> According to http://www.postgresql.org/docs/8.3/static/lo-funcs.html,
>> the functions are server side too
>
> Thanks for the link; still trying to figure out how to use these
> functions, though.
>
> The link that you sent suggests that these commands are really only
> useful for writing out data to server-side files, which isn't so much
> what I want. What I really want is to lo_open a file, then lo_lseek to
> a particular address and loread a constant number of bytes (to be
> returned as the query result), then lo_close the file.
>
> I'm currently stuck on the easy part: lo_open always returns 0, which
> gives me "ERROR: invalid large-object descriptor: 0" when I try to use
> it with loread.
>
> Any further thoughts?
>
> Thanks,
> Adam
Re: [GENERAL] Seek within Large Object, within PL/* function?
Klint Gore wrote:
>> I'm playing around with storing custom preprocessed data structures
>> within Large Objects. I'd like to be able to write a custom function
>> that will, within a query, let me select out particular bytestrings
>> from the middle of a Large Object (within C, I think 'lo_lseek' and
>> 'lo_read' would get me what I want).
>
> According to http://www.postgresql.org/docs/8.3/static/lo-funcs.html,
> the functions are server side too

Thanks for the link; still trying to figure out how to use these functions, though.

The link that you sent suggests that these commands are really only useful for writing out data to server-side files, which isn't so much what I want. What I really want is to lo_open a file, then lo_lseek to a particular address and loread a constant number of bytes (to be returned as the query result), then lo_close the file.

I'm currently stuck on the easy part: lo_open always returns 0, which gives me "ERROR: invalid large-object descriptor: 0" when I try to use it with loread.

Any further thoughts?

Thanks,
Adam
Re: [GENERAL] Delete cascade trigger runs security definer
Dean Rasheed <[EMAIL PROTECTED]> writes:
>> Referential integrity actions execute as the owner of the table, so
>> anything triggered by them would execute as the owner too.

> Hmm, that opens up a very nasty gotcha, as shown by the script
> below. What user1 does looks, at first sight, fairly innocuous.

Well, granting TRIGGER on one of your tables is never innocuous. That gives the recipient the ability to arbitrarily interfere with your use of the table, even without exploiting the fact that the trigger runs as you when you operate on the table.

Possibly we ought to document the security risks a bit better.

regards, tom lane
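A hedged illustration of Tom's point, using the table and role names from Dean's script: enumerate the privileges actually needed instead of GRANT ALL, and withhold TRIGGER unless the grantee is fully trusted.

```sql
-- Grant only what user2 needs; TRIGGER is deliberately omitted, so user2
-- cannot attach code to foo that would run with user1's rights via the
-- referential-integrity action.
GRANT SELECT, INSERT, UPDATE, DELETE ON foo TO user2;

-- And if an over-broad GRANT ALL was already issued, it can be narrowed:
REVOKE TRIGGER ON foo FROM user2;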
Re: [GENERAL] Database access over the Internet...
> What's messed up is that the solution given the user DOES work.

What's messed up is that a person asking for free help from a group of volunteers would insistently place unusual restrictions on how she will accept that help, then lecture those who tried to help but hadn't noticed her specific terms.

I wouldn't have been nearly as harsh in my reply, and probably wouldn't have replied at all, if she hadn't referred to the email coming to her as "terrorism". Now, granted, English may not be her first language, so I'll assume that's a poor translation and she really meant something closer to "harassment". But even so, her position is outstandingly absurd: to accuse people of harassing her for attempting to answer her question.

Perhaps someone should speak to her about the advantages of MySQL?

--
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice
Re: [GENERAL] Database access over the Internet...
On Sat, Nov 15, 2008 at 9:33 AM, Scott Ribe <[EMAIL PROTECTED]> wrote:
>> And what does this have to do with CC? The CCs come in without
>> passing the listserver and I do not want this mail!
>
> And if someone makes a mistake and accidentally emails you directly, WHILE
> TRYING TO HELP YOU AND ANSWER YOUR QUESTION, perhaps you should react
> gracefully instead of being a whiny rude jackass.

What's messed up is that the solution given the user DOES work. She just refuses to try it, because she assumes that the mailing list server doesn't see the exact same CC list as her email server. Well, it does, and it then drops the message outbound for her, so she ONLY gets it from the original sender.
Re: [GENERAL] Database access over the Internet...
> And what does this have to do with CC? The CCs come in without
> passing the listserver and I do not want this mail!

And if someone makes a mistake and accidentally emails you directly, WHILE TRYING TO HELP YOU AND ANSWER YOUR QUESTION, perhaps you should react gracefully instead of being a whiny rude jackass.

This message sent to the list at your request ;-)

--
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice
[GENERAL] pl/pgsql for loops
Hello

I have some "for x in select ..." loops which compare the current record with previous ones. Is it possible to check whether such a loop is on its final pass? Right now, I check the "found" variable after the loop and usually have to duplicate part of the loop's code.

--
Regards,
Tomasz Myrta
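One common workaround, sketched here under the assumption that buffering one row is acceptable: keep the previous record in a variable, and do the "final pass" work once after the loop instead of duplicating the loop body. The function and query are illustrative, not from the original post.

```sql
CREATE OR REPLACE FUNCTION compare_rows_demo() RETURNS void AS $$
DECLARE
    r         record;
    prev      record;
    have_prev boolean := false;
BEGIN
    FOR r IN SELECT i FROM generate_series(1, 5) AS s(i) LOOP
        IF have_prev THEN
            -- comparison of the previous row with the current one goes here
            RAISE NOTICE 'comparing % with %', prev.i, r.i;
        END IF;
        prev := r;
        have_prev := true;
    END LOOP;
    IF have_prev THEN
        -- prev now holds the last row: the final-pass logic lives here,
        -- once, instead of being duplicated inside the loop.
        RAISE NOTICE 'final row was %', prev.i;
    END IF;
END;
$$ LANGUAGE plpgsql;
```

There is no built-in "is this the last iteration?" flag in a FOR ... IN SELECT loop, so some variant of this buffering pattern is usually needed.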
Re: Res: [GENERAL] Archive files growth!!!
paulo matadr wrote:
> Let me see. For example, one insert (INSERT INTO table2 SELECT * FROM
> table1), where the table is 26 MB, makes 226 MB of archive files.
> (I made a test with the wal_buffers parameter: before, at the default
> value of 64kB, the same command made 640 MB of archives; after changing
> it to 1024kB, it makes 226 MB.)
> However, it's still confusing to me how 26 MB of inserts can generate
> 226 MB of archives.

OK, that's interesting, since those should be full WAL archives, so your problem isn't to do with unused WAL tails.

When you say that the table is 26 MB, how did you measure that? Is that the size of a dump of the table using pg_dump? If so, that is not the real size of the table as it is actually stored by PostgreSQL. The best way to find the size of the table is by asking PostgreSQL:

  SELECT relname, reltuples, relpages, relpages*8 AS size_kb
  FROM pg_class
  WHERE relname = 'tablename'
  ORDER BY relpages DESC;

(I'm assuming that your database uses the default 8kB page size.) This will be quite a bit bigger than the size of the dump.

I am also fairly sure that the WAL contains a record of what is done to the indexes as well as to the tables. If you have lots of indexes, especially multi-column indexes, on the table you are interested in, then the WAL files generated by an INSERT will be a lot bigger than the amount of data inserted.

It might help if you could upload some example data and SQL somewhere. Right now it is not possible to see what you are doing, so there is a lot of guesswork involved.

--
Craig Ringer
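To see how much of a table's on-disk footprint (and hence per-insert WAL volume) comes from indexes and TOAST, the size functions available since 8.1 give a quicker view than relpages arithmetic; the table name here is illustrative, borrowed from Paulo's example:

```sql
-- pg_relation_size covers only the main heap; pg_total_relation_size adds
-- indexes and TOAST. A large gap between the two usually explains why the
-- WAL generated by an INSERT far exceeds the raw data size.
SELECT pg_size_pretty(pg_relation_size('table1'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('table1')) AS with_indexes;
```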
Re: [GENERAL] Delete cascade trigger runs security definer
> Referential integrity actions execute as the owner of the table, so
> anything triggered by them would execute as the owner too.
>
> regards, tom lane

Hmm, that opens up a very nasty gotcha, as shown by the script below. What user1 does looks, at first sight, fairly innocuous. However, it opens him up completely, allowing user2 to do anything in his name.

Admittedly, granting ALL on a relation is not good practice without careful thought. But I wonder how many people do it just to save typing, especially if the tables in question aren't particularly important. I couldn't find anything in the documentation that said that referential integrity actions execute as the owner of the table. So how many people looking at this script would spot the danger?

Dean

-- Need 2 users
\c - postgres
DROP OWNED BY user1;
DROP OWNED BY user2;
DROP USER user1;
DROP USER user2;
CREATE USER user1;
CREATE USER user2;

-- First user
\c - user1
CREATE TABLE foo(a int PRIMARY KEY);
CREATE TABLE bar(a int REFERENCES foo ON DELETE CASCADE);
CREATE TABLE fud(a int);
GRANT ALL ON foo TO user2;
GRANT ALL ON bar TO user2;

-- Second user
\c - user2
CREATE OR REPLACE FUNCTION bar_log_fn() RETURNS trigger AS $$
BEGIN
    EXECUTE 'DROP TABLE fud';
    EXECUTE 'CREATE TABLE fud2(a int)';
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER bar_del_trigger BEFORE DELETE ON bar
    FOR EACH ROW EXECUTE PROCEDURE bar_log_fn();

INSERT INTO foo VALUES(1);
INSERT INTO bar VALUES(1);
DELETE FROM foo WHERE a=1;