Re: [GENERAL] pg_dump 8.3.3 ERROR: invalid page header in block 2264419 of relation "pg_largeobject"

2017-05-26 Thread David Wall
On 5/25/17 6:30 AM, Tom Lane wrote: David Wall <d.w...@computer.org> writes: They do have a slave DB running via WAL shipping. Would that likely help us in any way? Have you tried taking a backup from the slave? It's possible that the corruption exists only on the master. We wil
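A warm standby on 8.3 cannot serve queries, so taking a dump from it means first bringing a copy of it out of recovery. A hedged sketch, assuming pg_standby with a trigger file (paths and database name hypothetical):

    # on a throwaway copy of the standby, end recovery via the trigger file
    touch /tmp/pgsql.trigger
    # once the postmaster finishes recovery and accepts connections:
    pg_dump -Fc mydb > salvage.pgdump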

Re: [GENERAL] pg_dump 8.3.3 ERROR: invalid page header in block 2264419 of relation "pg_largeobject"

2017-05-24 Thread David Wall
On 5/24/17 4:18 PM, Tom Lane wrote: David Wall <d.w...@computer.org> writes: We have not noted any issues, but when I ran a pg_dump on an 8.3.3 database, it failed after an hour or so with the error: 8.3.3? Yes, it's old. cat /proc/version Linux version 2.6.18-92.1.10.el5.xs5.0.0

[GENERAL] pg_dump 8.3.3 ERROR: invalid page header in block 2264419 of relation "pg_largeobject"

2017-05-24 Thread David Wall
We have not noted any issues, but when I ran a pg_dump on an 8.3.3 database, it failed after an hour or so with the error: ERROR: invalid page header in block 2264419 of relation "pg_largeobject" pg_dump: The command was: FETCH 1000 IN bloboid As we seem to have some data corruption issue,
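One common salvage route, not quoted in the thread and destructive (the damaged pages are zeroed, so any rows on them are lost): take a filesystem-level copy of the data directory first, then re-run the dump with zero_damaged_pages enabled. A sketch, names hypothetical:

    # requires superuser; PGOPTIONS passes the setting into pg_dump's session
    PGOPTIONS='-c zero_damaged_pages=on' pg_dump -Fc mydb > salvage.pgdump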

Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread David Wall
On 12/6/16 12:33 PM, Tom Lane wrote: John R Pierce writes: On 12/6/2016 12:10 PM, Rich Shepard wrote: I did not realize that a BLOB is not the same as a bytea (page 217 of the 9.6 PDF manual), and I cannot find BLOB as a postgres data type. Please point me in the right

Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread David Wall
On 12/6/16 11:12 AM, Eric Schwarzenbach wrote: On 12/06/2016 01:34 PM, Joshua D. Drake wrote: On 12/06/2016 10:30 AM, Rich Shepard wrote: My thinking is to not store these documents in the database, but to store them in subdirectories outside the database. Your thoughts? Due to the

Re: [GENERAL] pg_dump slower than pg_restore

2014-07-06 Thread David Wall
On 7/6/2014 9:06 AM, Tom Lane wrote: David Wall d.w...@computer.org writes: There's one row in pg_largeobject_metadata per large object. The rows in pg_largeobject represent 2KB pages of large objects (so it looks like your large objects are averaging only 8KB-10KB apiece). The metadata
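The arithmetic behind that estimate is reproducible: one pg_largeobject row per 2KB page, one pg_largeobject_metadata row per object. A sketch:

    -- rough upper bound, since an object's last page is usually partial
    SELECT (SELECT count(*) FROM pg_largeobject) * 2048.0
         / (SELECT count(*) FROM pg_largeobject_metadata) AS avg_object_bytes;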

Re: [GENERAL] pg_dump slower than pg_restore

2014-07-05 Thread David Wall
On 7/4/2014 11:30 AM, Bosco Rama wrote: Random thought: What OS kernel are you running? Kernels between 3.2.x and 3.9.x were known to have IO scheduling issues. This was highlighted most by the kernel in Ubuntu 12.04 (precise) as shown here:

Re: [GENERAL] pg_dump slower than pg_restore

2014-07-05 Thread David Wall
On 7/4/2014 9:18 PM, Tom Lane wrote: There are only 32 tables, no functions, but mostly large objects. Not sure how to know about the LOs, but from a quick check of the table sizes I estimate only 2GB, so 16GB could be LOs. There are 7,528,803 entries in pg_catalog.pg_largeobject. Hmm ... how

Re: [GENERAL] pg_dump slower than pg_restore

2014-07-04 Thread David Wall
On 7/4/2014 7:19 AM, Tom Lane wrote: You haven't given us much info about the contents of this database. Are there a lot of tables? functions? large objects? How many is a lot, if so? I'm suspicious that you're paying a penalty associated with pg_dump's rather inefficient handling of metadata

[GENERAL] pg_dump slower than pg_restore

2014-07-03 Thread David Wall
I'm running PG 9.3.4 on CentOS 6.4 and noted that backing up my database takes much longer than restoring it. That seems counter-intuitive to me because it seems like reading from a database should generally be faster than writing to it. I have a database that pg_database_size reports as

Re: [GENERAL] pg_dump slower than pg_restore

2014-07-03 Thread David Wall
On 7/3/2014 10:36 AM, Bosco Rama wrote: If those large objects are 'files' that are already compressed (e.g. most image files and pdf's) you are spending a lot of time trying to compress the compressed data ... and failing. Try setting the compression factor to an intermediate value, or even
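A sketch of that suggestion (database name hypothetical); -Z runs from 0 (no compression) to 9 (maximum):

    pg_dump -Fc -Z1 mydb > mydb.pgdump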

Re: [GENERAL] pg_dump slower than pg_restore

2014-07-03 Thread David Wall
On 7/3/2014 10:38 AM, Tim Clarke wrote: I'd also check the effect of those other run components; the vacuum's and other things that are only running with the backup and not during the restore. The vacuumlo, vacuum and analyze run before the pg_dump. I am not talking about any of the time

Re: [GENERAL] pg_dump slower than pg_restore

2014-07-03 Thread David Wall
On 7/3/2014 5:13 PM, Bosco Rama wrote: If you use gzip you will be doing the same 'possibly unnecessary' compression step. Use a similar approach to the gzip command as you would for the pg_dump command. That is, use one if the -[0-9] options, like this: $ pg_dump -Z0 -Fc ... | gzip -[0-9]
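Filled in, the pattern looks like this (names hypothetical); the point is to compress once, in whichever tool you prefer, rather than twice:

    pg_dump -Z0 -Fc mydb | gzip -1 > mydb.pgdump.gz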

Re: [GENERAL] pg_dump slower than pg_restore

2014-07-03 Thread David Wall
On 7/3/2014 6:26 PM, Bosco Rama wrote: Hmmm. You are using '--oids' to *include* large objects? IIRC, that's not the intent of that option. Large objects are dumped as part of a DB-wide dump unless you request that they not be. However, if you restrict your dumps to specific schemata and/or
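For the record: --oids dumps tables' OID columns and is unrelated to large objects; the switch that forces blobs into a schema- or table-restricted dump is -b/--blobs. A sketch, names hypothetical:

    pg_dump -Fc -b -n myschema mydb > mydb.pgdump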

Re: [GENERAL] pg_dump slower than pg_restore

2014-07-03 Thread David Wall
On 7/3/2014 10:13 PM, Bosco Rama wrote: Is the issue with S3 or just transfer time? I would expect that 'rsync' with the '--partial' option (or -P if you want progress info too) may help there. Don't know if rsync and S3 work together or what that would mean, but it's not an issue I'm
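A sketch of the rsync suggestion (host and path hypothetical); --partial keeps interrupted transfers on the receiver so they can resume:

    rsync --partial -avz mydb.pgdump.gz backup@offsite.example.com:/backups/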

Re: [GENERAL] pg_dump slower than pg_restore

2014-07-03 Thread David Wall
On 7/3/2014 11:47 AM, Eduardo Morras wrote: No, there's nothing wrong. All transparently compressed objects stored in the database (TOAST, large objects, etc.) are decompressed as pg_dump reads them, and then you gzip the data again. I don't know why it doesn't dump the compressed data directly.

[GENERAL] Warm standby (log shipping) from PG 8.3 to 9.3

2014-06-10 Thread David Wall
Is it safe to assume that my working PG 8.3 archive command on the master and recovery.conf (using contrib's pg_standby) on the standby will work the same under 9.3? That is, under PG 8.3, my master server uses: archive_mode = on archive_command = '~/postgresql/bin/copyWAL %p %f'
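That covers the master side; for comparison, a typical pg_standby restore_command on the standby looks like this (archive path and trigger file hypothetical):

    # recovery.conf on the standby
    restore_command = 'pg_standby -t /tmp/pgsql.trigger /wal_archive %f %p %r'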

Re: [GENERAL] Warm standby (log shipping) from PG 8.3 to 9.3

2014-06-10 Thread David Wall
On 6/10/2014 11:54 AM, hubert depesz lubaczewski wrote: On Tue, Jun 10, 2014 at 8:13 PM, David Wall d.w...@computer.org mailto:d.w...@computer.org wrote: Is it safe to assume that my working PG 8.3 archive command on the master and recovery.conf (using contrib's pg_standby

[GENERAL] Upgrading from PG 8.3.3 to 9.3.4 - FATAL: invalid value for parameter TimeZone: PST

2014-06-01 Thread David Wall
I am posting to both PG and the JDBC group since I am using JDBC to connect, so I'm not sure if this is a PG issue or a PG JDBC issue. I am upgrading from 8.3.3 to 9.3.4. I ran pg_dump on my 8.3.3 databases (which ran on CentOS 5.2, Java 6) and ran pg_restore on my 9.3.4 databases (which

Re: [GENERAL] Upgrading from PG 8.3.3 to 9.3.4 - FATAL: invalid value for parameter TimeZone: PST

2014-06-01 Thread David Wall
On 5/31/2014 11:47 PM, John R Pierce wrote: On 5/31/2014 11:41 PM, David Wall wrote: FATAL: invalid value for parameter TimeZone: PST I'd be using America/Los_Angeles as the timezone rather than PST, as the TLA timezones are ambiguous (CST is both Central Standard Time in the USA
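The fix is easy to verify from any session:

    SET TimeZone = 'America/Los_Angeles';
    SHOW TimeZone;  -- echoes the full zone name, not an abbreviation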

Re: [GENERAL] Upgrading from PG 8.3.3 to 9.3.4 - FATAL: invalid value for parameter TimeZone: PST

2014-06-01 Thread David Wall
On 6/1/2014 9:05 AM, Adrian Klaver wrote: I ask because a look at the PG JDBC code shows this, which did not show up until Sept 22, 2011. Not sure what release, but it looks like 9.2+: // Construct and send a startup packet. String[][] params = {

Re: [GENERAL] Permissions on large objects - db backup and restore

2013-04-04 Thread David Wall
On 4/3/2013 5:57 PM, Tom Lane wrote: $ pg_restore -? ... -O, --no-owner skip restoration of object ownership ... So there you have it. pg_restore just restored all the objects (blobs and otherwise) as owned by the user running it. I should think you'd have had issues with other things besides

[GENERAL] Permissions on large objects - db backup and restore

2013-04-03 Thread David Wall
When we upgraded from PG 8.4 to PG 9.2, we ran into a permissions issue with large objects as discussed here: http://postgresql.1045698.n5.nabble.com/Large-Object-permissions-lost-in-transfer-td4281604.html The basic solution was to do an ALTER LARGE OBJECT and set the OWNER TO using a script
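A sketch of such a script for 9.0+ (role name hypothetical): generate one ALTER statement per object from pg_largeobject_metadata and feed the output back through psql:

    SELECT 'ALTER LARGE OBJECT ' || oid || ' OWNER TO app_owner;'
      FROM pg_largeobject_metadata;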

Re: [GENERAL] Permissions on large objects - db backup and restore

2013-04-03 Thread David Wall
On 4/3/2013 3:14 PM, Tom Lane wrote: A 9.2-9.2 dump and restore certainly should preserve large object ownership (and permissions, if you've set any). In a quick check I do see ALTER LARGE OBJECT nnn OWNER TO ... commands in pg_dump's output for such a case. Are you sure this is really the

Re: [GENERAL] PG 8.4 to 9.2 upgrade issues with ownership of large objects

2012-12-17 Thread David Wall
On 12/16/2012 6:21 PM, Tom Lane wrote: David Wall d.w...@computer.org writes: On 12/16/2012 11:22 AM, Tom Lane wrote: Large objects didn't have privileges, nor owners, in 8.4. If you don't feel like fixing your apps right now, you can return to the previous behavior by setting the obscurely
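The setting being referred to is presumably lo_compat_privileges (9.0+), which restores the pre-9.0 behavior of skipping ownership and permission checks on large objects. A sketch (database name hypothetical):

    ALTER DATABASE mydb SET lo_compat_privileges = on;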

[GENERAL] PG 8.4 to 9.2 upgrade issues with ownership of large objects

2012-12-16 Thread David Wall
In past PG upgrades, we've done a pg_dump on the current version, then a pg_restore on the new version. But during the 8.4 to 9.2 upgrade (on Linux x64), we ran into issues with the permissions associated with the large objects after the restore. Is this something new or were we just lucky

Re: [GENERAL] PG 8.4 to 9.2 upgrade issues with ownership of large objects

2012-12-16 Thread David Wall
On 12/16/2012 11:22 AM, Tom Lane wrote: David Wall d.w...@computer.org writes: In past PG upgrades, we've done a pg_dump on the current version, then a pg_restore on the new version. But during the 8.4 to 9.2 upgrade (on Linux x64), we ran into issues with the permissions associated

Re: [GENERAL] UUID column as primary key?

2011-01-04 Thread David Wall
We're using UUID for primary keys in PG 8.4 without any issues. I have no real insights into the details or performance issues, but always figured it was stored as a binary 128-bit value, but with added benefits of being able to enter and view them using a standard string format. We don't
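A minimal sketch of that setup (table name hypothetical); the uuid type is 16 bytes internally but reads and writes the standard text format:

    CREATE TABLE txn (
        id   uuid PRIMARY KEY,
        body text
    );
    INSERT INTO txn (id, body)
    VALUES ('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11', 'example');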

Re: [GENERAL] Storing many big files in database- should I do it?

2010-04-29 Thread David Wall
Things to consider when /not /storing them in the DB: 1) Backups of DB are incomplete without a corresponding backup of the files. 2) No transactional integrity between filesystem and DB, so you will have to deal with orphans from both INSERT and DELETE (assuming you don't also update the

Re: [GENERAL] Storing many big files in database- should I do it?

2010-04-29 Thread David Wall
Huh??? Isn't that the point of using bytea or text datatypes? I could have sworn bytea does not use the large object interface; it uses TOAST. Or have I gone insane? You're not insane :) Put it another way: bytea values are not stored in the pg_largeobject catalog. I missed the part that

[GENERAL] VACUUM FULL vs backup/restore

2010-04-27 Thread David Wall
We are copying a production database and then paring it down dramatically (perhaps removing 99% of records, most of which are large objects). It seems my options are 1) VACUUM FULL with a reindex_db; or 2) backup, then restore. Is there anything one does better than the other? Our

Re: [GENERAL] Where do you store key for encryption

2009-11-17 Thread David Wall
In our open-esignforms project we use a layered approach for keys: a boot key for the application that requires dual passwords, which we combine into a single password for PBE encryption of the boot key. We then have session keys that are encrypted with the boot key, and

Re: [GENERAL] pg_dump on Linux with pg_restore on Windows?

2009-11-03 Thread David Wall
I know this used to be a problem, but I don't have a Windows server to test it on again. Can I do a pg_dump on Linux and then pg_restore it on Windows? We're using PG 8.3 now, but could certainly upgrade if that made the difference. Okay, maybe I answered my own question, though if

[GENERAL] pg_dump on Linux with pg_restore on Windows?

2009-11-03 Thread David Wall
I know this used to be a problem, but I don't have a Windows server to test it on again. Can I do a pg_dump on Linux and then pg_restore it on Windows? We're using PG 8.3 now, but could certainly upgrade if that made the difference. Thanks, David -- Sent via pgsql-general mailing list

[GENERAL] Operational performance: one big table versus many smaller tables

2009-10-26 Thread David Wall
If I have various one-up record types that are structurally similar (same columns) and are mostly retrieved one at a time by their primary key, is there any performance or operational benefit to having millions of such records split across multiple tables (say by their

Re: [GENERAL] When much of a DB is large objects - PG 8.4

2009-10-20 Thread David Wall
The bytea limit is 1GB (as are all datums in Postgres). pg_largeobject can go up to 2GB, but in either case you'd likely run into severe performance/scalability issues long before objects began approaching those sizes because of memory usage and other issues. With 100KB objects, though, you

[GENERAL] When much of a DB is large objects - PG 8.4

2009-10-19 Thread David Wall
We have a system in which a large amount of the data is stored as large objects now in PG 8.4. The relational aspects of these objects are stored using traditional tables, but we store a lot of binary data (files, images) and XML objects that are compressed and then encrypted. The ACID

[GENERAL] Handling large number of OR/IN conditions

2009-05-01 Thread David Wall
We have a database report function that seemed clean when the number of users was small, but as the number of users grow, I was wondering if anybody had any good ideas about how to handle OR or IN for SELECTs. The general scenario is that a manager runs reports that list all records that were

Re: [GENERAL] Handling large number of OR/IN conditions

2009-05-01 Thread David Wall
(quoted from Chris) Select field1,field2 FROM table1 inner join relationships on table1.creator_user_id = relationships.employee WHERE relationships.manager = ? (quoted from Steve) select table1.field1, table2.field2 from table1, reports where table1.creator_user_id = reports.peon and
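Laid out readably, both replies suggest the same pattern: replace the ever-growing IN list with a join against the manager-to-employee relationship table:

    SELECT t.field1, t.field2
      FROM table1 t
      JOIN relationships r ON r.employee = t.creator_user_id
     WHERE r.manager = ?;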

Re: [GENERAL] Array, bytea and large objects

2009-02-05 Thread David Wall
Thanks, Filip. If I have an unlimited number of name-value pairs that I'd like to get easy access to for flexible reports, could I store these in two arrays (one for name, the other for value) in a table so that if I had 10 name-value pairs or 200 name-value pairs, I could

[GENERAL] Array, bytea and large objects

2009-02-04 Thread David Wall
I am trying to assess the db issues surrounding several constructs allowed in PG 8.3, including ARRAY, BYTEA and large objects (LO). We store a lot of data as encrypted XML structures (name-value pairs mostly) that can be updated many times during its lifetime (most updates occur over several

Re: [GENERAL] Data Replication

2008-12-10 Thread David Wall
We've done warm standby as you indicate, and we've not needed anything special. On the primary's postgresql.conf we use: archive_command = '~/postgresql/bin/copyWAL %p %f' Our copyWAL script is just a wrapper for 'scp' since we want to copy the data encrypted over the network: #!/bin/bash
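The script itself is truncated in the archive; a hedged sketch of such a wrapper (host and paths hypothetical). Copying to a temporary name and renaming keeps the standby from reading half-copied segments, and a non-zero exit status makes PostgreSQL retry the segment:

    #!/bin/bash
    # invoked as: copyWAL %p %f  ($1 = local path, $2 = file name)
    scp "$1" standby:/wal_archive/"$2".tmp &&
        ssh standby mv "/wal_archive/$2.tmp" "/wal_archive/$2"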

[GENERAL] Get PG version using JDBC?

2008-10-15 Thread David Wall
Is there a way to get the PG version string from JDBC? I'm using PG 8.3. Thanks, David -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
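Any client, JDBC included, can get this with a plain query; JDBC also exposes it through DatabaseMetaData.getDatabaseProductVersion():

    SELECT version();      -- full version string
    SHOW server_version;   -- just the version number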

[GENERAL] Large objects oids

2008-06-10 Thread David Wall
Since large objects use OIDs, does PG 8.3 have a limit of 4 billion large objects across all of my various tables (actually, I presume OIDs are used elsewhere besides just large objects)? Is there any plan on allowing large objects to support more than 2GB? As data gets larger and larger, I

Re: [GENERAL] Large objects oids

2008-06-10 Thread David Wall
Tom Lane wrote: David Wall [EMAIL PROTECTED] writes: Since large objects use OIDs, does PG 8.3 have a limit of 4 billion large objects across all of my various tables Yup, and in practice you'd better have a lot less than that or assigning a new OID might take a long time

Re: [GENERAL] Open Source CRM - Options?

2008-05-28 Thread David Wall
What about SugarCRM? David -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] pg_standby / WAL archive-restore through system restarts

2008-05-13 Thread David Wall
Thanks, Guillaume. Can anyone confirm his answer to the question below about restarting the backup postmaster? It seems counter-intuitive since the backup server is only waiting for new WAL files to restore, it would seem it could be killed and restarted at any time and it should just

Re: [GENERAL] pg_standby / WAL archive-restore through system restarts

2008-05-13 Thread David Wall
Thanks, Guillaume. Can anyone confirm his answer to the question below about restarting the backup postmaster? IIRC, it works conveniently in 8.2 and up. Just do pg_ctl stop -m fast. Thanks, Tom. What about if the postmaster is just killed (kill pid, or kill -9 pid) or the

[GENERAL] pg_standby / WAL archive-restore through system restarts

2008-05-12 Thread David Wall
Just wanted to see if others would confirm my impressions about running WAL archiving and pg_standby restore. Server A (Primary): Runs PG 8.3 with WAL archiving enabled. Each WAL is copied over the network to Server B. (A previous 'tar backup' of the database along with the requisite psql

Re: [GENERAL] JDBC addBatch more efficient?

2008-04-21 Thread David Wall
The JDBC driver's batch processing is more efficient than regular execution because it requires fewer network roundtrips, so there's less waiting. The JDBC batch is broken into an internal batch size of 256 statements, and all of these are sent over to the server at once. That's great, Kris.

Re: [GENERAL] JDBC addBatch more efficient?

2008-04-21 Thread David Wall
Does this mean that the two features are independent of each other? In other words, can we say that a JDBC batch will limit information exchange between client and server, while Postgres prepared statements will optimize their execution? I've not used it yet, but my impression is that

[GENERAL] JDBC addBatch more efficient?

2008-04-20 Thread David Wall
Just checking if the JDBC library's batch processing code is more efficient with respect to the postgresql back end or not. Does it really batch the requests and submit them once over the link, or does it just send them to the database to be processed one at a time? Thanks, David -- Sent

Re: [GENERAL] postgre vs MySQL

2008-03-14 Thread David Wall
My understanding is that's not quite true. The client libraries are GPL, so you can't use them directly, but I don't see what would stop you using their ODBC/JDBC drivers with your non-GPL application (especially if you support other ODBC databases as well). The server can't be bundled in

Re: [GENERAL] postgre vs MySQL

2008-03-14 Thread David Wall
I imagine you can get round the second one by building your software so it supports PostgreSQL as well - that way you don't 'require customers to install MySQL'. Well, I'm not sure how they'd even know you were doing this, but as a commercial company, I'd suggest you not follow that advice

Re: [GENERAL] postgre vs MySQL

2008-03-13 Thread David Wall
What then? Could it be marketing or the sad results of a avalanche effect? Geee, there's a thought. What a wide variety of topics. One big difference for me is that MySQL used to be open source, but it no longer is. It's an odd hybrid OSS that barely makes sense to me since they claim

Re: [GENERAL] postgre vs MySQL

2008-03-12 Thread David Wall
Well, if you have a crappy system that cannot sustain concurrent load or even be backed up concurrently with regular operation, one solution is to write a kick-ass replication system. Still, it would be nice to have a kick-ass replication system for PG, too. We've been toying with WAL

Re: [GENERAL] standby questions

2008-02-08 Thread David Wall
2) if archive_command is activated and working, the primary server sends (preferably with rsync or some other atomic tool) the NEW WAL files to the standby server. Later, at some point, the primary server will delete these files when it considers them no longer necessary. Is 'scp'

Re: [GENERAL] standby questions

2008-02-08 Thread David Wall
That's correct. You have to do it that way or the system in recovery mode can start to consume the new segment file before it has been completely copied over. Does pg_standby take care of this by checking file sizes or the like? In my testing with scp, we never experienced any

Re: [GENERAL] Question for Postgres 8.3

2008-02-04 Thread David Wall
If you want to support multiple encodings, the only safe locale choice is (and always has been) C. If you doubt this, troll the archives for awhile --- for example, searching for locale+encoding in pgsql-bugs should provide plenty of amusing reading matter. This is most interesting. I think

[GENERAL] Autovacuum and stats_row_level

2008-01-28 Thread David Wall
Noted that to use autovacuum we need to turn on stats_row_level (along with stats_start_collector that is on by default). Since stats_row_level is off by default, I wonder what sort of overhead is incurred since it sounds like it could add up if it's storing additional stats information on
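For that era the relevant postgresql.conf lines were as follows (stats_row_level was merged into the always-on track_counts in 8.3, so these knobs no longer exist):

    stats_start_collector = on   # default
    stats_row_level = on         # required by autovacuum before 8.3
    autovacuum = on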

Re: [GENERAL] Postgresql + digital signature

2008-01-23 Thread David Wall
We're in a stage where I need to implement a mechanism to prevent data modification. I'm thinking of 'Digital Signatures' (maybe RSA) in each row. If there's a modification, the signature doesn't verify. Like all such solutions, the key (lame pun intended) is how you manage the

Re: [GENERAL] Stupid question about WAL archiving

2008-01-18 Thread David Wall
Which is exactly why I pointed out that using pg_standby's -k switch was the more reliable option. And supposedly even that switch is not needed once we can get to 8.3, which should be soon. Even the -k switch can be an issue since you don't really know how many you should keep around.

Re: [GENERAL] pg_dumping large objects

2007-09-24 Thread David Wall
Large objects are defined in the DDL as type 'OID'. Those with various drivers may not know the lo_* calls are being used, since in Java this takes place if you use the SQL BLOB type in JDBC and the field itself is defined as 'OID'. David Alvaro Herrera wrote: Morris Goldstein escribió:

Re: [GENERAL] OFFSET and LIMIT - performance

2007-06-28 Thread David Wall
Network transmission costs alone would make the second way a loser. Large OFFSETs are pretty inefficient because the backend generates and discards the rows internally ... but at least it never converts them to external form or ships them to the client. Rows beyond the LIMIT are not generated
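A common alternative, not from the thread: keyset pagination remembers the last key seen instead of counting rows, so nothing is generated and discarded (table and column names hypothetical):

    SELECT id, payload
      FROM items
     WHERE id > 12345   -- last id returned by the previous page
     ORDER BY id
     LIMIT 50;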

Re: [GENERAL] VACUUM ANALYZE extremely slow

2007-06-18 Thread David Wall
On Jun 17, 2007, at 2:15 PM, Sergei Shelukhin wrote: This is my first (and, by the love of the God, last) project w/pgsql and everything but the simplest selects is so slow I want to cry. This is especially bad with vacuum analyze - it takes several hours for a database of mere 15 Gb on a fast

Re: [GENERAL] Replication for PG 8 recommendations

2007-05-10 Thread David Wall
Hannes Dorbath wrote: Replicate the whole block device PostgreSQL sits on. For Linux, are you talking about something like DRBD? That would be nice in that it would also replicate the web app itself. David ---(end of broadcast)--- TIP 3:

[GENERAL] Replication for PG 8 recommendations

2007-05-09 Thread David Wall
Is there a preferred replication system for PG 8 db users? Obviously, we're looking for robustness, ease of operations/installation, low latency and efficient with system and network resources, with an active open source community being preferred. Thanks, David

Re: [GENERAL] Replication for PG 8 recommendations

2007-05-09 Thread David Wall
On Wed, 2007-05-09 at 14:40 -0700, David Wall wrote: Is there a preferred replication system for PG 8 db users? Obviously, we're looking for robustness, ease of operations/installation, low latency and efficient with system and network resources, with an active open source community

[GENERAL] reindexdb program error under PG 8.1.3

2006-05-25 Thread David Wall
When I create a brand new database and then run the reindexdb program under PG 8.1.3, I get the following error: reindexdb: reindexing of database failed: ERROR: could not open relation with OID 41675 I'm not sure what this error is since we have not even added any data yet. Is this

Re: [GENERAL] reindexdb program error under PG 8.1.3

2006-05-25 Thread David Wall
(repost to newsgroup since the other went directly to Mr. Lane's email address) I think you've done something to template1, even though you say you haven't. Out-of-the-box, an 8.1.3 database won't have any OIDs assigned above about 11000, so OID 41675 would have to belong to a

Re: [GENERAL] reindexdb program error under PG 8.1.3

2006-05-25 Thread David Wall
. Is that not the case here? Thanks, David Tom Lane wrote: David Wall [EMAIL PROTECTED] writes: Thanks, but I clearly lack that expertise. How do I find "a table with that OID"? Try "select * from pg_class where oid = 41675" and "select * from pg_attribu
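Completing the two diagnostic queries quoted above (OID as in the thread):

    SELECT relname, relkind FROM pg_class WHERE oid = 41675;
    SELECT attname, atttypid FROM pg_attribute WHERE attrelid = 41675;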

Re: [GENERAL] reindexdb program error under PG 8.1.3

2006-05-25 Thread David Wall
It really isn't necessary. That said, there is *something* going on with your db, so it might be prudent to figure out what it is. Thanks, Robert. Actually, I noted that each time I ran my backup script, the OID mentioned in the error was bigger than the previous number. Before I run the

Re: [GENERAL] Is difference between PostgreSQL and mySQL licences

2006-04-09 Thread David Wall
That doesn't sound consistent with the arrangements that MySQL AB expects for commercial users of their products. They indicate variously that: - If you are developing and distributing open source applications under the GPL, or some OSI-approved license, you are free to use MySQL(tm) for

[GENERAL] reindexdb script required in 8.1?

2006-03-07 Thread David Wall
I've upgraded from 8.0 to 8.1 and want to be sure I'm making the changes that are required. For my backups, I have removed the option --blobs because it says this is no longer needed. I guess the backup automatically includes blobs now? I have also been running the contributed 'reindexdb'
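From 8.1 on, reindexdb ships as a standard client program and is just a wrapper around the SQL command, so either form works (database name hypothetical):

    REINDEX DATABASE mydb;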

Re: [GENERAL] VACUUM is hanging - jdbc?

2001-05-07 Thread David Wall
Try not to leave open transactions hanging around in your Apache code. An idle database connection couldn't block VACUUM like that; it'd have to have been in the middle of a BEGIN block, or maybe even an unfinished query. It's possible that the JDBC library always creates a new transaction

Re: [GENERAL] Re: Postgresql table reloading

2001-05-05 Thread David Wall
In the table, you're storing the OID of the large object itself, so if you copy all the contents of the table, so far as I can see, yes, you'd still have the reference to your large object. Thanks. I did see this in a quick test, but I wasn't sure if the data would be removed when I dropped

Re: [GENERAL] VACUUM is hanging

2001-05-04 Thread David Wall
Try not to leave open transactions hanging around in your Apache code. An idle database connection couldn't block VACUUM like that; it'd have to have been in the middle of a BEGIN block, or maybe even an unfinished query. It's possible that the JDBC library always creates a new transaction

Re: [GENERAL] Re: Getting milliseconds out of TIMESTAMP

2001-04-22 Thread David Wall
Just curious, but what is the point of having times 'accurate' to the millisecond in a database? In my case it's a simple matter that I include a timestamp in a digital signature, and since my timestamp comes from Java (and most Unixes are the same), it has millisecond resolution "built in."

Re: [GENERAL] Re: Getting milliseconds out of TIMESTAMP

2001-04-22 Thread David Wall
"David Wall" [EMAIL PROTECTED] writes: The real question for me is that 7.1 docs say that the resolution of a timestamp is 8 bytes at "1 microsecond / 14 digits", yet I generally see -MM-DD HH-MM-SS.cc returned in my queries (both with pgsql and with JDBC).

[GENERAL] 7.1 dumps with large objects

2001-04-14 Thread David Wall
Wonderful job on getting 7.1 released. I've just installed it in place of a 7.1beta4 database, with the great advantage of not even having to migrate the database. It seems that 7.1 is able to handle large objects in its dump/restore natively now and no longer requires the use of the contrib

[GENERAL] LOs and pg_dump, restore, vacuum for 7.1

2001-04-10 Thread David Wall
Does 7.1 "natively" handle large objects in dumps, restores and vaccums? In 7.0.3 there was a contrib for LOs. Thanks, David ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

Re: [GENERAL] Data type for storing images?

2001-03-11 Thread David Wall
bytea is probably your best bet. The 8k limit is toast in 7.1, btw. If you're using JDBC, you may not be able to get bytea types to work. I was able to get the SQL type OID to work with JDBC's get/setBytes() methods. You may also want to use 7.1 since I believe its large object support is

Re: [GENERAL] how do you run your backups?

2001-01-26 Thread David Wall
Is there a way to start with yesterday's dump and load each transaction log dump in order until you have a complete restore of the database? Does recovery of pg_dump and the transaction logs work with a backup and then doing a rollforward on the transaction logs? What would be the procedure?

[GENERAL] 7.1 expected features list?

2001-01-24 Thread David Wall
Is there a place where I can see what the new features for 7.1 are expected to be? I'm hopeful for native BLOB support :-) David

Re: [GENERAL] MySQL has transactions

2001-01-24 Thread David Wall
Something I'm curious about: does MySQL support user-defined datatypes? No, it does not. If so, how hard is it to make a new datatype? I think datatype extensibility is one of the cooler features of Postgres ... Making new datatypes sounds good, but in practice how easy is it to do? I