Re: [GENERAL] weekday from timestamp?
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:pgsql-general-[EMAIL PROTECTED]]
> On Behalf Of ..::rDk::..
> Sent: Sunday, September 07, 2008 8:08 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] weekday from timestamp?
>
> I'm struggling with my dataset.
>
> I have a small PostgreSQL database with a timestamp column in the format
> YYYY:MM:DD HH:MM:SS for each record.
>
> How could I return the day of the week for every record?

http://www.postgresql.org/docs/8.0/interactive/functions-datetime.html

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
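From the datetime-functions page linked above, EXTRACT and to_char both answer the question; a minimal sketch (the records table and ts column are hypothetical names standing in for the poster's schema):

```sql
SELECT ts,
       EXTRACT(DOW FROM ts) AS weekday_number,  -- 0 = Sunday .. 6 = Saturday
       to_char(ts, 'FMDay') AS weekday_name     -- e.g. 'Sunday' (FM suppresses padding)
FROM records;
```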
[GENERAL] Getting rows in a very specific order
I'm considering using an array-of-ints column in a table which lists a row's ancestry. For example, if item 97 is contained within item 68, that item is contained within item 31, and that item is contained within item 1, then the value of the hierarchy column would be {1,31,68,97}; the numbers refer to the primary keys of the rows.

If I use the value of the hierarchy column in a query I can get all the rows that a given row is a descendant of (SELECT * FROM items WHERE itm_id IN (1,31,68,97), for example). However, I need the rows to be in the correct order, i.e. the root node first, child second, grandchild third, etc. I'm guessing the order can be set with an ORDER BY, but I've only ever used ORDER BY for ascending or descending order of a single column. Is there a way to retrieve the rows explicitly in the order they are listed in the hierarchy array?
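One way to get rows back in exactly the array's order is to join against the array positions and sort by position. A sketch using the items/itm_id names from the post (array_upper supplies the array length; this style works on the 8.x releases discussed in this thread):

```sql
SELECT i.*
FROM generate_series(1, array_upper('{1,31,68,97}'::int[], 1)) AS g(pos)
JOIN items i ON i.itm_id = ('{1,31,68,97}'::int[])[g.pos]
ORDER BY g.pos;
```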
[GENERAL] Postgres 8.3.1 RowExclusiveLock With JDBC XA
I am having a problem where row exclusive locks are causing applications and vacuums to hang. I shut down all applications, but the locks remain. I stop and start Postgres, but the locks are still there, and no pid is listed for them. This problem started showing up after switching to the XA datasource with the PostgreSQL JDBC driver. Is this causing problems?

psql nafis
Welcome to psql 8.3.1, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

select pg_class.relname, pg_locks.* from pg_class, pg_locks where pg_class.relfilenode = pg_locks.relation;

SNIP:

  pid  |       mode       | virtualtransaction | granted | locktype | database | relation
-------+------------------+--------------------+---------+----------+----------+----------
(null) | AccessShareLock  | -1/309809          | true    | relation |    16384 |    29478
(null) | AccessShareLock  | -1/309809          | true    | relation |    16384 |    29497
(null) | AccessShareLock  | -1/317897          | true    | relation |    16384 |    29145
(null) | RowShareLock     | -1/317897          | true    | relation |    16384 |    29239
(null) | AccessShareLock  | -1/317897          | true    | relation |    16384 |    29136
(null) | AccessShareLock  | -1/317931          | true    | relation |    16384 |    29647
(null) | RowExclusiveLock | -1/317931          | true    | relation |    16384 |    29647
(null) | AccessShareLock  | -1/309809          | true    | relation |    16384 |    29494
(null) | RowShareLock     | -1/317931          | true    | relation |    16384 |    29347
(null) | RowShareLock     | -1/317931          | true    | relation |    16384 |    29225
(null) | AccessShareLock  | -1/317931          | true    | relation |    16384 |    29147
(null) | AccessShareLock  | -1/317931          | true    | relation |    16384 |    29392
(null) | RowExclusiveLock | -1/317931          | true    | relation |    16384 |    29392
(null) | RowShareLock     | -1/317931          | true    | relation |    16384 |    29239
 11209 | AccessShareLock  | 3/7                | true    | relation |    16384 |    10969
(null) | AccessShareLock  | -1/317931          | true    | relation |    16384 |    29136
(null) | AccessShareLock  | -1/317931          | true    | relation |    16384 |    29145
(null) | AccessShareLock  | -1/309809          | true    | relation |    16384 |    29446
(null) | RowExclusiveLock | -1/309809          | true    | relation |    16384 |    29446
 11209 | AccessShareLock  | 3/7                | true    | relation |    16384 |    2663
(null) | AccessShareLock  | -1/309809          | true    | relation |    16384 |    29447
(null) | RowExclusiveLock | -1/309809          | true    | relation |    16384 |    29447
(null) | AccessShareLock  | -1/309809          | true    | relation |    16384 |    29466
(null) | AccessShareLock  | -1/317897          | true    | relation |    16384 |    29392
(null) | RowExclusiveLock | -1/317897          | true    | relation |    16384 |    29392
(null) | RowShareLock     | -1/317897          | true    | relation |    16384 |    29225
(null) | AccessShareLock  | -1/317897          | true    | relation |    16384 |    29147
(null) | AccessShareLock  | -1/309809          | true    | relation |    16384 |    29582
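Two follow-up queries that may help here (a sketch, not a definitive diagnosis): joining pg_locks on pg_class.oid rather than relfilenode (the two can diverge after a table rewrite) gives the same picture with names, and since the (null)-pid rows above carry "-1/..." virtual transaction IDs, it is worth checking pg_prepared_xacts - prepared (two-phase/XA) transactions hold their locks even across a server restart until they are committed or rolled back:

```sql
-- Locks with relation names; oid is the stable join key for pg_locks.relation
SELECT c.relname, l.pid, l.mode, l.granted, l.virtualtransaction
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation;

-- Prepared (XA) transactions survive restarts until resolved
SELECT * FROM pg_prepared_xacts;
-- ROLLBACK PREPARED 'gid';  -- releases one, using its gid from the query above
```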
[GENERAL] weekday from timestamp?
I'm struggling with my dataset. I have a small PostgreSQL database with a timestamp column in the format YYYY:MM:DD HH:MM:SS for each record. How could I return the day of the week for every record? Any help will be much appreciated.

Cheers,
r
[GENERAL] You need to rebuild PostgreSQL using --with-libxml.
Hi,

I am compiling PostgreSQL 8.3.2 with SQL/XML support on Red Hat Enterprise Linux 5.1, and I need to know the steps to configure it. If someone has a manual, please share. Thank you.
Re: [GENERAL] OS X library path issues for libpq (ver 8.3
Have you tried putting your environment variables in .MacOSX? I have some apps that need access to some PG environment variables, and I had to put them in a plist in the directory .MacOSX, e.g.

[mbp:~/.MacOSX]$ ls -al ~/.MacOSX
total 8
drwxr-xr-x    3 jerry jerry  102 Jun 25  2007 .
drwxr-xr-x+ 113 jerry jerry 3842 Sep  7 12:45 ..
-rw-r--r--    1 jerry jerry  334 Jun 25  2007 environment.plist

and

[mbp:~/.MacOSX]$ cat environment.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>PGDATABASE</key>
    <string>levan</string>
    <key>PGHOST</key>
    <string>localhost</string>
    <key>PGUSER</key>
    <string>levan</string>
</dict>
</plist>

The plist contents act like regular Unix-style environment variables for apps started from the Finder. Perhaps placing the path information for the dynamic loader in the plist would solve your problems.

Jerry
[GENERAL] pg_dumpall problem when roles have default schemas
I've run into a problem while migrating an existing 8.2.7 database to a new server running 8.3.3 (although I think the version numbers may not matter -- I think I've seen this problem in the past and just lived with it, since so much of PostgreSQL is so great!).

The problem stems from the fact that for certain roles I have defined default search paths, and the output of pg_dumpall includes role definitions first, then database definitions, then schema definitions. Thus, when piping the output (from legacy host 192.168.2.2) to populate the newly initialized cluster, by way of running (on the new host 192.168.2.3)

pg_dumpall -h 192.168.2.2 | psql

an error occurs in that first section when the script attempts to set a role-specific search path, because the schema named in the search path hasn't been created yet.

Not sure if there is some different way I should be using these tools to accomplish this, or if there is a work-around, or if this feature needs improvement.
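The ordering problem can be sketched as follows (the role and schema names here are hypothetical stand-ins, not from the actual dump):

```sql
-- Emitted in the roles section, near the top of pg_dumpall output:
ALTER ROLE report_user SET search_path TO reports;
-- fails at restore time: schema "reports" does not exist yet

-- One workaround: after the schemas have been restored, re-apply the setting
ALTER ROLE report_user SET search_path TO reports;  -- now succeeds
```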
[GENERAL] Hi
Hi,

First of all, a big thank you to all the guys at Postgres. I love you all for the whole system you have put up!

I just have one problem: I can't get my general mailing list subscription set to weekly or monthly. I want to have it as a digest; any digest interval of weekly or longer will be fine with me! I sent requests to majordomo, but both times it returned the requests as stalled, and when I checked at the link they sent, it was reported that "weekly", "monthly", and "week" are all not valid digest types. Please can you guide me on how to go about this procedure?

Another thing: it would be very helpful if we had a webpage for these settings where we could actually change them and save, rather than mailing them in.

Thank you,
Sincerely,
Harshad Pethe
Re: [GENERAL] Windows ODBC Driver
Stephen Frost wrote:
> * Bill Todd ([EMAIL PROTECTED]) wrote:
>> FWIW, I cannot get the ODBC driver to work correctly with ADO and the
>> OLE DB provider for ODBC. It sees TEXT fields as VARCHAR instead of
>> LONGVARCHAR. I do not know if the problem is at the ODBC level or the
>> ADO level, but test carefully if you are going to use TEXT fields.
>
> There's an option in the ODBC configuration settings to flip that back
> and forth, I believe... 'Text as LongVarChar'.
>
> Stephen

Been there, done that, does not work. The only way I could get a text field to be treated as LongVarChar was to check UnknownAsLongVarChar, and when I do that both VarChar and Text columns are treated as LongVarChar.

Bill
Re: [GENERAL] Fastest way to restore a database
On Fri, 12 Sep 2008, Robert Treat wrote:
> Don't forget to bump up checkpoint_timeout along with that... actually,
> I blogged a couple of times on this topic

And with that there's enough information to start a dedicated page on this topic:
http://wiki.postgresql.org/wiki/Bulk_Loading_and_Restores

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
Re: [GENERAL] psql scripting tutorials
> Ok then. Does anyone have any tips or best practices for scripting psql?
>
> I'll probably write some bash scripts to pull csv files over, then script
> psql to do a COPY into an import schema, then run a pl/pgsql procedure, er
> function, to do all ETL.
>
> Who else is doing something like this? Can psql access environment
> variables or command line params? Or do I have to have my bash script write
> a psql script every time?

Maybe you should try pltcl/pltclu - it's a very powerful (and my favorite) language for ETL within the PostgreSQL server (reading files, TCP, regular expressions, etc.). If you have X Window based boxes you can use the tk package and even use a graphical user interface (for example, a dialog box for parameter input) on remote boxes from pltclu (sometimes I do that).

For external ETL I am using Java Eclipse or the Eclipse RCP Framework and the embedded Python language (Jython) - a very important fact is that Jython scripts can control Eclipse widgets (for example, a ProgressBar).

Regards,
Blazej
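On the direct question in the quote - yes, psql can take values from the command line with -v and interpolate them with :name, so the bash script does not need to rewrite the SQL each time. A sketch (the table and function names below are hypothetical):

```sql
-- invoked as:  psql -v import_table=import_schema.invoices -f load.sql
-- load.sql:
COPY :import_table FROM '/tmp/invoices.csv' WITH CSV;
SELECT run_etl();  -- hypothetical pl/pgsql ETL function
```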
Re: [GENERAL] Fastest way to restore a database
Scott Ribe <[EMAIL PROTECTED]> writes:
>> The worry expressed upthread about the transaction being "too large" is
>> unfounded, btw. Unlike some other DBs, PG doesn't have a finite-size
>> undo log.
>
> Sure, it won't fail. But would there be some point at which it would become
> slower than multiple transactions? Or is it always faster (or at least as
> fast)?

I can't think of any reason it would be slower. There are certainly issues you could run into with very long transactions, like vacuum not being able to remove bloat elsewhere.

regards, tom lane
Re: [GENERAL] Fastest way to restore a database
> The worry expressed upthread about the transaction being "too large" is
> unfounded, btw. Unlike some other DBs, PG doesn't have a finite-size
> undo log.

Sure, it won't fail. But would there be some point at which it would become slower than multiple transactions? Or is it always faster (or at least as fast)?

--
Scott Ribe [EMAIL PROTECTED] http://www.killerbytes.com/ (303) 722-0567 voice
Re: [GENERAL] Fastest way to restore a database
On Friday 12 September 2008 14:32:07 Greg Smith wrote:
> On Fri, 12 Sep 2008, William Garrison wrote:
> > Is there a definitive list of things to do?
>
> That section of the documentation is pretty good:
> http://www.postgresql.org/docs/current/static/populate.html
>
> The main thing it's missing is a discussion of how to cut down on disk
> commit overhead by either using async commit or turning fsync off. If
> you've got a good caching controller that may not be needed though.
>
> The other large chunk of information it doesn't really go into is what
> server tuning you could do to improve general performance, which obviously
> would then help with loading as well.
> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server goes over
> much of that.
>
> > * Turn off full_page_writes
> > Don't write the WAL archives in a safe way. But we don't need WAL
> > archives during a restore. Future versions of postgres will let you turn
> > off WAL archives entirely
>
> Ideally you'd be using COPY such that the table was just created or
> truncated before loading, which (if archive_mode is off) keeps them from
> being WAL logged, as described in 14.4.7. If you do that and vastly
> increase checkpoint_segments, full_page_writes has minimal impact.
>
> > * Increase the checkpoint_segments parameter (the default is 3 - so...
> > maybe 10?)
>
> 64-256 is the usual range you'll see people using for bulk loading.

Don't forget to bump up checkpoint_timeout along with that... actually, I blogged a couple of times on this topic:

http://people.planetpostgresql.org/xzilla/index.php?/archives/133-Getting-faster-database-restores-on-postgresql-8.1.html
http://people.planetpostgresql.org/xzilla/index.php?/archives/223-Measuring-database-restore-times.html

A little old, but might be helpful.
--
Robert Treat
http://www.omniti.com
Database: Scalability: Consulting:
Re: [GENERAL] Fastest way to restore a database
William Garrison <[EMAIL PROTECTED]> writes:
> So... if I am using pg_dump and pg_restore with a compressed backup,
> then it is using COPY, correct? And I think that would follow a CREATE
> TABLE statement as mentioned in the first link... so no WAL files written?

Only if you use --single-transaction.

The worry expressed upthread about the transaction being "too large" is unfounded, btw. Unlike some other DBs, PG doesn't have a finite-size undo log.

regards, tom lane
Re: [GENERAL] Fastest way to restore a database
Thanks so much! So... if I am using pg_dump and pg_restore with a compressed backup, then it is using COPY, correct? And I think that would follow a CREATE TABLE statement as mentioned in the first link... so no WAL files written?

Greg Smith wrote:
> On Fri, 12 Sep 2008, William Garrison wrote:
>> Is there a definitive list of things to do?
>
> That section of the documentation is pretty good:
> http://www.postgresql.org/docs/current/static/populate.html
>
> The main thing it's missing is a discussion of how to cut down on disk
> commit overhead by either using async commit or turning fsync off. If
> you've got a good caching controller that may not be needed though.
>
> The other large chunk of information it doesn't really go into is what
> server tuning you could do to improve general performance, which obviously
> would then help with loading as well.
> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server goes over
> much of that.
>
>> * Turn off full_page_writes
>> Don't write the WAL archives in a safe way. But we don't need WAL
>> archives during a restore. Future versions of postgres will let you turn
>> off WAL archives entirely
>
> Ideally you'd be using COPY such that the table was just created or
> truncated before loading, which (if archive_mode is off) keeps them from
> being WAL logged, as described in 14.4.7. If you do that and vastly
> increase checkpoint_segments, full_page_writes has minimal impact.
>
>> * Increase the checkpoint_segments parameter (the default is 3 - so...
>> maybe 10?)
>
> 64-256 is the usual range you'll see people using for bulk loading.
>
>> * Increase the maintenance_work_mem setting to 512MB
>
> I haven't really seen any real improvement setting that over 256MB. If
> you've got RAM to waste it doesn't really matter if you set it too high
> though.
>
> --
> * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
Re: [GENERAL] connection timeouts and "killing" users
On Sep 11, 2008, at 12:02 PM, Scott Marlowe wrote:
> I don't think so. What might work best is to have two pg_hba.conf
> files, and link to each one. So one is pg_hba.conf.lockout and one is
> pg_hba.conf.normal, let's say. lockout is set to only answer to the
> postgres user. Switch the pg_hba.conf files, and do a pg_ctl stop;
> pg_ctl start or equivalent (/etc/init.d/postgresql stop / start) and
> then do your processing. Switch them back and restart pgsql again.

Note that if he's not manually killing off each of the client connections, only the first restart is necessary (to kill off the child connections), as a config reload will take care of pg_hba.conf changes.

Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
(415) 963-4410 x 260
Location: US/Pacific
IRC: mage2k
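A sketch of what the lockout variant of pg_hba.conf might contain (addresses and auth methods here are illustrative, not definitive) - only the postgres superuser can connect, while the usual rules live in pg_hba.conf.normal:

```
# pg_hba.conf.lockout -- only the postgres user may connect
# TYPE  DATABASE  USER      CIDR-ADDRESS   METHOD
local   all       postgres                 trust
host    all       postgres  127.0.0.1/32   md5
```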
Re: [GENERAL] Fastest way to restore a database
On Fri, 12 Sep 2008, William Garrison wrote:
> Is there a definitive list of things to do?

That section of the documentation is pretty good:
http://www.postgresql.org/docs/current/static/populate.html

The main thing it's missing is a discussion of how to cut down on disk commit overhead by either using async commit or turning fsync off. If you've got a good caching controller that may not be needed though.

The other large chunk of information it doesn't really go into is what server tuning you could do to improve general performance, which obviously would then help with loading as well. http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server goes over much of that.

> * Turn off full_page_writes
> Don't write the WAL archives in a safe way. But we don't need WAL
> archives during a restore. Future versions of postgres will let you turn
> off WAL archives entirely

Ideally you'd be using COPY such that the table was just created or truncated before loading, which (if archive_mode is off) keeps them from being WAL logged, as described in 14.4.7. If you do that and vastly increase checkpoint_segments, full_page_writes has minimal impact.

> * Increase the checkpoint_segments parameter (the default is 3 - so...
> maybe 10?)

64-256 is the usual range you'll see people using for bulk loading.

> * Increase the maintenance_work_mem setting to 512MB

I haven't really seen any real improvement setting that over 256MB. If you've got RAM to waste it doesn't really matter if you set it too high though.

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
[GENERAL] pg_restore parameters
Hello:

I am moving to a new production server and am testing my backup and restore procedures. Given a backup created with the following command:

C:\>C:\progFiles\PostgreSQL\8.2\bin\pg_dump -Fc -b -C -o -f E:\backupPostgres\benchxx_c20080912.backup -U postgres benchxx_c

What is the best way to restore the above backup? I tinkered with a couple of scenarios. The one that gave a clean restore was:

a) drop the database
b) restore using the following:

pg_restore -C -d template1 -U postgres "E:\data\postgres\benchxx_c20080912.backup"

Is this the correct way to do a restore? Note: the above E drives are on different machines.

Thanks for your kind assistance.
KD
Re: [GENERAL] TSearch2: find a QUERY that does match a single document
On Fri, 12 Sep 2008, Dmitry Koterov wrote:
> Hello.
>
> TSearch2 allows to search a table of tsvectors by a single tsquery. I need
> to solve the reverse problem. I have a large table of tsqueries, and I need
> to find all tsqueries in that table that match a single document tsvector:
>
> CREATE TABLE "test"."test_tsq" (
>   "id" SERIAL,
>   "q" TSQUERY NOT NULL,
>   CONSTRAINT "test_tsq_pkey" PRIMARY KEY("id")
> );
>
> insert into test.test_tsq(q)
>   select to_tsquery(g || 'x' || g) from generate_series(10, 90) as g;
>
> explain analyze select * from test.test_tsq where to_tsvector('40x40') @@ q

Why do you need tsvector @@ q? Much better to use tsquery = tsquery:

test=# explain analyze select * from test_tsq where q = '40x40'::tsquery;
                                                QUERY PLAN
-----------------------------------------------------------------------------------------------------------
 Seq Scan on test_tsq  (cost=0.00..16667.01 rows=1 width=38) (actual time=129.208..341.111 rows=1 loops=1)
   Filter: (q = '''40x40'''::tsquery)
 Total runtime: 341.134 ms
(3 rows)

Time: 341.478 ms

> This gets a strange explain analyze:
>
>                                                QUERY PLAN
> ------------------------------------------------------------------------------------------------------------
>  Seq Scan on test_tsq  (cost=0.00..17477.01 rows=800 width=36) (actual time=68.698..181.458 rows=1 loops=1)
>    Filter: ('''40x40'':1'::tsvector @@ q)
>  Total runtime: 181.484 ms
>
> No matter if I use a GIST index on test_tsq.q or not: the explain analyze
> result is the same. So, why "rows=800"? The table contains much more rows...

'800' is the number of estimated rows, which is not good, since you got only 1 row.

Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83
[GENERAL] Fastest way to restore a database
I know that PostgreSQL is slow at restoring databases, but there are some tricks for speeding it up. Here is a brief list I compiled from reading the docs and some forums. Is there a definitive list of things to do?

* Turn off fsync
  So it won't flush after every commit.
* Turn off full_page_writes
  Don't write the WAL archives in a safe way. We don't need WAL archives during a restore, and future versions of Postgres will let you turn off WAL archives entirely.
* Turn off archive_mode
  This disables the archiving of WAL logs.
* Increase the checkpoint_segments parameter (the default is 3 - so... maybe 10?)
  Increases the number of transactions that can happen before a WAL checkpoint.
* The --single-transaction parameter to pg_restore *might* speed it up
  One transaction is more efficient, but an entire DB restore might be too big of a transaction, so I'm not so sure about this option.
* Increase the maintenance_work_mem setting to 512MB
  Gives more memory to CREATE INDEX commands, which are part of the restore process.
* (PostgreSQL 8.3 only) Turn off synchronous_commit
  This makes it so that the database won't wait for the WAL checkpoint to be completed before moving on to the next operation. Again, we don't want WAL archiving during a restore anyway.

Are any of the above items not going to help? Anything I'm missing? Is there a way to disable foreign key constraints during the restore, since I know it is already a good database?

I am using PostgreSQL 8.2.9 on Win32.
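The list above maps directly onto postgresql.conf settings; a sketch of a restore-only configuration (values illustrative, not definitive - revert them after the restore finishes):

```
# postgresql.conf - temporary bulk-restore settings
fsync = off                   # skip flush after every commit
full_page_writes = off        # skip full-page WAL images
archive_mode = off            # no WAL archiving (8.3+; on 8.2 clear archive_command instead)
checkpoint_segments = 64      # fewer checkpoints during the load
maintenance_work_mem = 512MB  # more memory for CREATE INDEX
synchronous_commit = off      # 8.3+ only
```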
Re: [GENERAL] Converting string to IN query
On Fri, Sep 12, 2008 at 04:04:18PM +0100, Richard Huxton wrote:
> Andrus wrote:
> > String contains list of document numbers (integers) like:
> >
> > '1,3,4'
> >
> > How to SELECT documents whose numbers are contained in this string.
>
> > Numbers should be passed as single string literal since FYIReporting
> > RDLEngine does not allow multivalue parameters.
>
> Hmm - might be worth bringing that to their attention.

I'm probably missing something, but does PG?

> Try string_to_array(). Example:
>
> SELECT * FROM generate_series(1,10) s
> WHERE s = ANY (string_to_array('1,3,5,7', ',')::int[]);

I don't think you need the string_to_array function call, an array literal should do perfectly well here. I.e.

  WHERE s = ANY ('{1,2,3}'::int[]);

The literal can of course be a parameter as well:

  WHERE s = ANY ($1::int[]);

Sam
[GENERAL] TSearch2: find a QUERY that does match a single document
Hello.

TSearch2 allows to search a table of tsvectors by a single tsquery. I need to solve the reverse problem. I have a large table of tsqueries, and I need to find all tsqueries in that table that match a single document tsvector:

CREATE TABLE "test"."test_tsq" (
  "id" SERIAL,
  "q" TSQUERY NOT NULL,
  CONSTRAINT "test_tsq_pkey" PRIMARY KEY("id")
);

insert into test.test_tsq(q)
  select to_tsquery(g || 'x' || g) from generate_series(10, 90) as g;

explain analyze select * from test.test_tsq where to_tsvector('40x40') @@ q

This gets a strange explain analyze:

                                               QUERY PLAN
------------------------------------------------------------------------------------------------------------
 Seq Scan on test_tsq  (cost=0.00..17477.01 rows=800 width=36) (actual time=68.698..181.458 rows=1 loops=1)
   Filter: ('''40x40'':1'::tsvector @@ q)
 Total runtime: 181.484 ms

No matter if I use a GIST index on test_tsq.q or not: the explain analyze result is the same. So, why "rows=800"? The table contains much more rows...
Re: [GENERAL] Converting string to IN query
Andrus wrote:
> String contains list of document numbers (integers) like:
>
> '1,3,4'
>
> How to SELECT documents whose numbers are contained in this string.
> Numbers should be passed as single string literal since FYIReporting
> RDLEngine does not allow multivalue parameters.

Hmm - might be worth bringing that to their attention.

Try string_to_array(). Example:

SELECT * FROM generate_series(1,10) s
WHERE s = ANY (string_to_array('1,3,5,7', ',')::int[]);

Note that I'm casting it to an array of integers so the "= ANY" knows what types it will need to match.

--
Richard Huxton
Archonet Ltd
Re: [GENERAL] Converting string to IN query
Andrus wrote:
> SELECT * FROM invoices WHERE invoiceno IN ( '1,3,4' );
> but this causes error.

Change it to ( '1','3','4' ) or ( 1,3,4 ).

--
ACV
Re: [GENERAL] Autocommit, isolation level, and vacuum behavior
Tomasz Ostrowski wrote:
> On 2008-09-11 17:21, Jack Orenstein wrote:
>> The id > last_id trick doesn't work for me -- I don't have an index that
>> would support it efficiently.
>
> You do not have a primary key? If you do then you have an index as it is
> automatically created.

Sorry, I misspoke. I have an index, but preferred doing a scan without the index in this case.

Jack
Re: [GENERAL] Autocommit, isolation level, and vacuum behavior
Tomasz Ostrowski wrote:
> On 2008-09-11 18:03, Jack Orenstein wrote:
>>> When you do:
>>>   result = query("select something from sometable")
>>> then all rows of a result will be cached by a client program.
>>
>> I am very sure this is not happening. Maybe some rows are being cached
>> (specifying fetch size), but certainly not all of them. It used to, with
>> older drivers (7.4?), but I've been using 8.1 drivers (at least) for a
>> long time. Maybe some result set options you're using cause such memory
>> usage?
>
> Wanna bet?
>
> http://jdbc.postgresql.org/documentation/83/query.html#query-with-cursor
> | There a number of restrictions which will make the driver silently
> | fall back to fetching the whole ResultSet at once. (...) The
> | Connection must not be in autocommit mode. The backend closes cursors
> | at the end of transactions, so in autocommit mode the backend will
> | have closed the cursor before anything can be fetched from it.
>
> So, when you turn on autocommit then it is caching it all. Fetch size is
> ignored.

Well, that explains what I've been seeing (autocommit on scan producing behavior that looks like SERIALIZABLE). Not the behavior I would prefer, but I understand it now.

Jack
[GENERAL] Converting string to IN query
String contains a list of document numbers (integers) like:

'1,3,4'

How to SELECT documents whose numbers are contained in this string?

I tried:

create temp table invoices ( invoiceno int );
insert into invoices values (1);
insert into invoices values (2);
insert into invoices values (3);
insert into invoices values (4);

SELECT * FROM invoices WHERE invoiceno IN ( '1,3,4' );

but this causes an error. The numbers should be passed as a single string literal, since the FYIReporting RDLEngine does not allow multivalue parameters.

How to fix this so that the query returns invoices whose numbers are contained in the string literal? Can arrays be used to convert the string to a list, or is there any other solution?

Andrus.
Re: [GENERAL] psql scripting tutorials
Alvaro Herrera wrote:
> Doesn't say about variable expansion ... And it seems to be in a
> different realm, because the point is that the command is going to have
> a single destination (either \-processing or sending it to the server).
> Is psql being just lazy here and avoiding parsing the command?

The intent is to leave open an avenue to pass a command to the server without any interference from psql at all. I have never been very comfortable with overloading -c for that purpose, and it certainly confuses users from time to time. But that's the idea anyway.
Re: [GENERAL] Autocommit, isolation level, and vacuum behavior
On 2008-09-11 17:21, Jack Orenstein wrote:
> The id > last_id trick doesn't work for me -- I don't have an index that
> would support it efficiently.

You do not have a primary key? If you do then you have an index as it is automatically created. Watch this:

test=> create temporary table test ( id int primary key, data text );
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "test_pkey" for table "test"
CREATE TABLE
test=> insert into test (select i, 'this is a row number '||i::text
       from (select generate_series(1,100) as i) as q);
INSERT 0 100
test=> explain analyze select * from test where id>50 order by id limit 1;
                                                          QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..0.07 rows=1 width=36) (actual time=0.150..0.151 rows=1 loops=1)
   ->  Index Scan using test_pkey on test  (cost=0.00..23769.63 rows=322248 width=36) (actual time=0.148..0.148 rows=1 loops=1)
         Index Cond: (id > 50)
 Total runtime: 0.191 ms
(4 rows)

Regards
Tometzky
--
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
                                                      Winnie the Pooh