On 5/25/17 6:30 AM, Tom Lane wrote:
David Wall <d.w...@computer.org> writes:
They do have a slave DB running via WAL shipping. Would that likely
help us in any way?
Have you tried taking a backup from the slave? It's possible that
the corruption exists only on the master.
We wil
On 5/24/17 4:18 PM, Tom Lane wrote:
David Wall <d.w...@computer.org> writes:
We have not noted any issues, but when I ran a pg_dump on an 8.3.3
database, it failed after an hour or so with the error:
8.3.3?
Yes, it's old.
cat /proc/version
Linux version 2.6.18-92.1.10.el5.xs5.0.0
We have not noted any issues, but when I ran a pg_dump on an 8.3.3
database, it failed after an hour or so with the error:
ERROR: invalid page header in block 2264419 of relation "pg_largeobject"
pg_dump: The command was: FETCH 1000 IN bloboid
As we seem to have some data corruption issue,
On 12/6/16 12:33 PM, Tom Lane wrote:
John R Pierce writes:
On 12/6/2016 12:10 PM, Rich Shepard wrote:
I did not realize that a BLOB is not the same as a bytea (page 217 of the
9.6 PDF manual), and I cannot find BLOB as a postgres data type. Please
point me in the right
On 12/6/16 11:12 AM, Eric Schwarzenbach wrote:
On 12/06/2016 01:34 PM, Joshua D. Drake wrote:
On 12/06/2016 10:30 AM, Rich Shepard wrote:
My thinking is to not store these documents in the database, but to store
them in subdirectories outside the database.
Your thoughts?
Due to the
On 7/6/2014 9:06 AM, Tom Lane wrote:
David Wall d.w...@computer.org writes:
There's one row in pg_largeobject_metadata per large object. The rows in
pg_largeobject represent 2KB pages of large objects (so it looks like
your large objects are averaging only 8KB-10KB apiece). The metadata
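A rough way to cross-check those numbers (a sketch; assumes a 9.x server where pg_largeobject_metadata exists, and ignores TOAST overhead):

SELECT count(*) AS large_objects,
       pg_size_pretty(pg_relation_size('pg_largeobject') / count(*)) AS avg_size
FROM pg_largeobject_metadata;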
On 7/4/2014 11:30 AM, Bosco Rama wrote:
Random thought: What OS kernel are you running? Kernels between
3.2.x and 3.9.x were known to have IO scheduling issues. This was
highlighted most by the kernel in Ubuntu 12.04 (precise) as shown
here:
On 7/4/2014 9:18 PM, Tom Lane wrote:
There are only 32 tables, no functions, but mostly large objects. Not
sure how to check on the LOs, but from a quick look at the table sizes I
estimate only 2GB, so 16GB could be LOs. There are 7,528,803 entries
in pg_catalog.pg_largeobject.
Hmm ... how
On 7/4/2014 7:19 AM, Tom Lane wrote:
You haven't given us much info about the contents of this database.
Are there a lot of tables? functions? large objects? How many is
a lot, if so?
I'm suspicious that you're paying a penalty associated with pg_dump's
rather inefficient handling of metadata
I'm running PG 9.3.4 on CentOS 6.4 and noted that backing up my database
takes much longer than restoring it.
That seems counter-intuitive to me because it seems like reading from a
database should generally be faster than writing to it.
I have a database that pg_database_size reports as
On 7/3/2014 10:36 AM, Bosco Rama wrote:
If those large objects are 'files' that are already compressed (e.g.
most image files and pdf's) you are spending a lot of time trying to
compress the compressed data ... and failing.
Try setting the compression factor to an intermediate value, or even
On 7/3/2014 10:38 AM, Tim Clarke wrote:
I'd also check the effect of those other run components; the vacuum's
and other things that are only running with the backup and not during
the restore.
The vacuumlo, vacuum and analyze run before the pg_dump. I am not
talking about any of the time
On 7/3/2014 5:13 PM, Bosco Rama wrote:
If you use gzip you will be doing the same 'possibly unnecessary'
compression step. Use a similar approach to the gzip command as you
would for the pg_dump command. That is, use one of the -[0-9] options,
like this: $ pg_dump -Z0 -Fc ... | gzip -[0-9]
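A concrete form of that, in case it is useful (the database name, compression level, and output file are illustrative):

$ pg_dump -Z0 -Fc mydb | gzip -1 > mydb.dump.gz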
On 7/3/2014 6:26 PM, Bosco Rama wrote:
Hmmm. You are using '--oids' to *include* large objects? IIRC, that's
not the intent of that option. Large objects are dumped as part of a
DB-wide dump unless you request that they not be. However, if you
restrict your dumps to specific schemata and/or
On 7/3/2014 10:13 PM, Bosco Rama wrote:
Is the issue with S3 or just transfer time? I would expect that
'rsync' with the '--partial' option (or -P if you want progress info
too) may help there.
Don't know if rsync and S3 work together or what that would mean, but
it's not an issue I'm
On 7/3/2014 11:47 AM, Eduardo Morras wrote:
No, there's nothing wrong. All transparently compressed objects stored
in the database (TOAST, large objects, etc.) are decompressed as pg_dump
accesses them, and then you gzip them again. I don't know why it
doesn't dump the compressed data directly.
Is it safe to assume that my working PG 8.3 archive command on the
master and recovery.conf (using contrib's pg_standby) on the standby
will work the same under 9.3?
That is, under PG 8.3, my master server uses:
archive_mode = on
archive_command = '~/postgresql/bin/copyWAL %p %f'
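On the standby side, the corresponding recovery.conf entry would be roughly this (a sketch; the archive directory is illustrative):

restore_command = '~/postgresql/bin/pg_standby ~/postgresql/walarchive %f %p %r'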
On 6/10/2014 11:54 AM, hubert depesz lubaczewski wrote:
On Tue, Jun 10, 2014 at 8:13 PM, David Wall d.w...@computer.org wrote:
Is it safe to assume that my working PG 8.3 archive command on the
master and recovery.conf (using contrib's pg_standby
I am posting to both PG and the JDBC group since I am using JDBC to
connect, so I'm not sure if this is a PG issue or a PG JDBC issue.
I am upgrading from 8.3.3 to 9.3.4. I ran pg_dump on my 8.3.3 databases
(which ran on CentOS 5.2, Java 6) and ran pg_restore on my 9.3.4
databases (which
On 5/31/2014 11:47 PM, John R Pierce wrote:
On 5/31/2014 11:41 PM, David Wall wrote:
FATAL: invalid value for parameter TimeZone: PST
I'd be using America/Los_Angeles as the timezone rather than PST,
as the TLA timezones are ambiguous (CST is both Central Standard Time
in the USA
On 6/1/2014 9:05 AM, Adrian Klaver wrote:
I ask because a look at the PG JDBC code shows this, which did not
show up until Sept 22, 2011. Not sure what release, but it looks like
9.2+:
// Construct and send a startup packet.
String[][] params = {
On 4/3/2013 5:57 PM, Tom Lane wrote:
$ pg_restore -?
...
  -O, --no-owner    skip restoration of object ownership
...
So there you have it. pg_restore just restored all the objects (blobs and
otherwise) as owned by the user running it. I should think you'd have had
issues with other things besides
When we upgraded from PG 8.4 to PG 9.2, we ran into a permissions issue
with large objects as discussed here:
http://postgresql.1045698.n5.nabble.com/Large-Object-permissions-lost-in-transfer-td4281604.html
The basic solution was to do an ALTER LARGE OBJECT and set the OWNER TO
using a script
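A rough sketch of that kind of script (the role and database names are illustrative; this is not the exact script we used):

$ psql -At -c "SELECT 'ALTER LARGE OBJECT ' || oid || ' OWNER TO app_user;' FROM pg_largeobject_metadata" mydb | psql mydb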
On 4/3/2013 3:14 PM, Tom Lane wrote:
A 9.2-9.2 dump and restore certainly should preserve large object
ownership (and permissions, if you've set any). In a quick check I do
see ALTER LARGE OBJECT nnn OWNER TO ... commands in pg_dump's output
for such a case. Are you sure this is really the
On 12/16/2012 6:21 PM, Tom Lane wrote:
David Wall d.w...@computer.org writes:
On 12/16/2012 11:22 AM, Tom Lane wrote:
Large objects didn't have privileges, nor owners, in 8.4. If you don't
feel like fixing your apps right now, you can return to the previous
behavior by setting the obscurely
In past PG upgrades, we've done a pg_dump on the current version, then a
pg_restore on the new version. But during the 8.4 to 9.2 upgrade (on
Linux x64), we ran into issues with the permissions associated with the
large objects after the restore.
Is this something new or were we just lucky
On 12/16/2012 11:22 AM, Tom Lane wrote:
David Wall d.w...@computer.org writes:
In past PG upgrades, we've done a pg_dump on the current version, then a
pg_restore on the new version. But during the 8.4 to 9.2 upgrade (on
Linux x64), we ran into issues with the permissions associated
We're using UUID for primary keys in PG 8.4 without any issues. I have
no real insights into the details or performance issues, but always
figured it was stored as a binary 128-bit value, with the added benefit of
being able to enter and view them using a standard string format. We
don't
Things to consider when /not/ storing them in the DB:
1) Backups of DB are incomplete without a corresponding backup of the files.
2) No transactional integrity between filesystem and DB, so you will
have to deal with orphans from both INSERT and DELETE (assuming you
don't also update the
Huh??? Isn't that the point of using bytea or text datatypes?
I could have sworn bytea does not use the large object interface; it uses
TOAST. Or have I gone insane?
You're not insane :)
Put it another way: bytea values are not stored in the pg_largeobject
catalog.
I missed the part that
We are copying a production database and then paring it down
dramatically (perhaps removing 99% of records, most of which are large
objects).
It seems my options are 1) VACUUM FULL with a reindexdb; or 2) backup,
then restore.
Is there anything one does better than the other? Our
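In command form the two options look roughly like this (a sketch; the database name is illustrative):

$ vacuumdb --full mydb && reindexdb mydb

$ pg_dump -Fc mydb > mydb.dump
$ dropdb mydb && createdb mydb && pg_restore -d mydb mydb.dump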
In our open-esignforms project we use a layered approach for keys in
which we have a boot key for the application that requires dual
passwords which we then combine into a single password for PBE
encryption of the boot key. We then have session keys that are
encrypted with the boot key, and
I know this used to be a problem, but I don't have a Windows server to
test it on again.
Can I do a pg_dump on Linux and then pg_restore it on Windows? We're
using PG 8.3 now, but could certainly upgrade if that made the
difference.
Okay, maybe I answered my own question, though if
If I have various record types that are one up records that are
structurally similar (same columns) and are mostly retrieved one at a
time by its primary key, is there any performance or operational benefit
to having millions of such records split across multiple tables (say by
their
the bytea limit is 1GB (as are all datums in postgres).
pg_largeobject can go up to 2GB, but in either case you'd likely run
into severe performance/scalability issues long before objects began
approaching those sizes because of memory usage and other issues. With
100KB objects though, you
We have a system in which a large amount of the data is stored as large
objects now in PG 8.4. The relational aspects of these objects are
stored using traditional tables, but we store a lot of binary data
(files, images) and XML objects that are compressed and then encrypted.
The ACID
We have a database report function that seemed clean when the number of
users was small, but as the number of users grow, I was wondering if
anybody had any good ideas about how to handle OR or IN for SELECTs.
The general scenario is that a manager runs reports that list all
records that were
(quoted from Chris)
Select field1,field2 FROM table1 inner join relationships on
table1.creator_user_id = relationships.employee WHERE
relationships.manager = ?
(quoted from Steve)
select table1.field1, table2.field2 from table1, reports where
table1.creator_user_id = reports.peon and
Thanks, Filip.
If I have an unlimited number of name-value pairs that I'd like
to get easy access to for flexible reports, could I store these in
two arrays (one for name, the other for value) in a table so that
if I had 10 name-value pairs or 200 name-value pairs, I could
I am trying to assess the db issues surrounding several constructs
allowed in PG 8.3, including ARRAY, BYTEA and large objects (LO).
We store a lot of data as encrypted XML structures (name-value pairs
mostly) that can be updated many times during its lifetime (most updates
occur over several
We've done warm standby as you indicate, and we've not needed anything
special.
On the primary's postgresql.conf we use:
archive_command = '~/postgresql/bin/copyWAL %p %f'
Our copyWAL script is just a wrapper for 'scp' since we want to copy the
data encrypted over the network:
#!/bin/bash
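# A sketch of what the rest of such a wrapper could look like (this is not
# the actual script; the standby host and archive path are illustrative).
# %p arrives as $1 and %f as $2. Copy to a temporary name first, then
# rename, so the standby never picks up a partially written file:
scp "$1" standby:/walarchive/.in-progress."$2" && \
    ssh standby "mv /walarchive/.in-progress.$2 /walarchive/$2"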
Is there a way to get the PG version string from JDBC? I'm using PG 8.3.
Thanks,
David
Since large objects use OIDs, does PG 8.3 have a limit of 4 billion
large objects across all of my various tables (actually, I presume OIDs
are used elsewhere besides just large objects)?
Is there any plan on allowing large objects to support more than 2GB?
As data gets larger and larger, I
Tom Lane wrote:
David Wall [EMAIL PROTECTED] writes:
Since large objects use OIDs, does PG 8.3 have a limit of 4 billion
large objects across all of my various tables
Yup, and in practice you'd better have a lot less than that or assigning
a new OID might take a long time
What about SugarCRM?
David
Thanks, Guillaume. Can anyone confirm his answer to the question below
about restarting the backup postmaster?
It seems counter-intuitive since the backup server is only waiting for
new WAL files to restore, it would seem it could be killed and restarted
at any time and it should just
Thanks, Guillaume. Can anyone confirm his answer to the question below
about restarting the backup postmaster?
IIRC, it works conveniently in 8.2 and up. Just do pg_ctl stop -m fast.
Thanks, Tom. What about if the postmaster is just killed (kill pid, or
kill -9 pid) or the
Just wanted to see if others would confirm my impressions about running
WAL archiving and pg_standby restore.
Server A (Primary): Runs PG 8.3 with WAL archiving enabled. Each WAL is
copied over the network to Server B. (A previous 'tar backup' of the
database along with the requisite psql
The JDBC driver's batch processing is more efficient than regular
execution because it requires fewer network roundtrips so there's less
waiting. The JDBC batch is broken into an internal batch size of 256
statements, and all of these are sent over to the server at once.
That's great, Kris.
Does this mean that the two features are independent of each
other?
In other words, can we say that the JDBC batch will limit information
exchange between client and server, while Postgres prepared statements
will optimize their execution?
I've not used it yet, but my impression is that
Just checking if the JDBC library's batch processing code is more
efficient with respect to the postgresql back end or not. Does it
really batch the requests and submit them once over the link, or does it
just send them to the database to be processed one at a time?
Thanks,
David
My understanding is that's not quite true. The client libraries are
GPL, so you can't use them directly, but I don't see what would stop
you using their ODBC/JDBC drivers with your non-GPL application
(especially if you support other ODBC databases as well). The server
can't be bundled in
I imagine you can get round the second one by building your software
so it supports PostgreSQL as well - that way you don't 'require
customers to install MySQL'.
Well, I'm not sure how they'd even know you were doing this, but as a
commercial company, I'd suggest you not follow that advice
What then? Could it be marketing or the sad results of an avalanche
effect? Geee, there's a thought.
What a wide variety of topics. One big difference for me is that MySQL
used to be open source, but it no longer is. It's an odd hybrid OSS
that barely makes sense to me since they claim
Well, if you have a crappy system that cannot sustain concurrent load or
even be backed up concurrently with regular operation, one solution is
to write a kick-ass replication system.
Still, it would be nice to have a kick-ass replication system for PG,
too. We've been toying with WAL
2) if archive_command is activated and working, the primary
server sends (preferably with rsync or some other atomic tool) the
NEW WAL files to the standby server. Later, at some point, the primary
server will delete these files when it considers that they are no longer
necessary.
Is 'scp'
That's correct. You have to do it that way or the system in recovery
mode can start to consume the new segment file before it has been
completely copied over yet.
Does pg_standby take care of this by checking file sizes or the like?
In my testing with scp, we never experienced any
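For what it's worth, rsync writes to a temporary file and renames it into place only when the transfer completes, so an archive_command along these lines (host and path are illustrative) sidesteps the partial-file problem:

archive_command = 'rsync -a %p standby:/walarchive/%f'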
If you want to support multiple encodings, the only safe locale choice
is (and always has been) C. If you doubt this, troll the archives for
awhile --- for example, searching for locale+encoding in pgsql-bugs
should provide plenty of amusing reading matter.
This is most interesting. I think
Noted that to use autovacuum we need to turn on stats_row_level (along
with stats_start_collector that is on by default). Since
stats_row_level is off by default, I wonder what sort of overhead is
incurred since it sounds like it could add up if it's storing additional
stats information on
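For reference, the settings in question in an 8.1/8.2-era postgresql.conf would look roughly like this (as I recall, the two stats parameters were replaced by track_counts in 8.3):

stats_start_collector = on
stats_row_level = on
autovacuum = on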
We're in a stage where I need to implement a mechanism to prevent the
data modification.
I'm thinking on 'Digital Signatures' (maybe RSA) in each row. If
there's a modification, the signature doesn't verify.
Like all such solutions, the key (lame pun intended) is how do you
manage the
Which is exactly why I pointed out that using pg_standby's -k switch
was the more reliable option.
And supposedly even that switch is not needed once we can get to 8.3,
which should be soon. Even the -k switch can be an issue since you
don't really know how many you should keep around.
Large objects are defined in the DDL as type 'OID'. Those with various
drivers may not know the lo_* calls are being used, since in Java this
takes place if you use the SQL BLOB type in JDBC and the field itself is
defined as 'OID'.
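In DDL terms that looks something like this (the table and column names are just an example):

CREATE TABLE esign_document (
    doc_id   bigint PRIMARY KEY,
    content  oid          -- large object reference; JDBC treats it as a BLOB
);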
David
Alvaro Herrera wrote:
Morris Goldstein wrote:
Network transmission costs alone would make the second way a loser.
Large OFFSETs are pretty inefficient because the backend generates and
discards the rows internally ... but at least it never converts them to
external form or ships them to the client. Rows beyond the LIMIT are
not generated
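A small illustration of the point (table and numbers are made up):

SELECT id, payload FROM items ORDER BY id LIMIT 50 OFFSET 100000;
-- the backend still generates and discards the first 100000 rows internally,
-- but rows beyond the 100050th are never generated at all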
On Jun 17, 2007, at 2:15 PM, Sergei Shelukhin wrote:
This is my first (and, by the love of God, last) project w/pgsql
and everything but the simplest selects is so slow I want to cry.
This is especially bad with vacuum analyze - it takes several hours
for a database of a mere 15 GB on a fast
Hannes Dorbath wrote:
Replicate the whole block device PostgreSQL sits on.
For Linux, are you talking about something like DRBD? That would be
nice in that it would also replicate the web app itself.
David
Is there a preferred replication system for PG 8 db users? Obviously,
we're looking for robustness, ease of operations/installation, low
latency and efficient with system and network resources, with an active
open source community being preferred.
Thanks,
David
On Wed, 2007-05-09 at 14:40 -0700, David Wall wrote:
Is there a preferred replication system for PG 8 db users? Obviously,
we're looking for robustness, ease of operations/installation, low
latency and efficient with system and network resources, with an active
open source community
When I create a brand new database and then run the reindexdb program
under PG 8.1.3, I get the follow error:
reindexdb: reindexing of database failed: ERROR: could not open
relation with OID 41675
I'm not sure what this error is since we have not even added any data
yet. Is this
(repost to newsgroup since the other went directly to Mr. Lane's email
address)
I think you've done something to template1, even though you say you
haven't. Out-of-the-box, an 8.1.3 database won't have any OIDs assigned
above about 11000, so OID 41675 would have to belong to a ... Is that not
the case here?
Thanks,
David
Tom Lane wrote:
David Wall [EMAIL PROTECTED] writes:
Thanks, but I clearly lack that expertise. How do I find "a table with
that OID"?
Try "select * from pg_class where oid = 41675" and
"select * from pg_attribu
It really isn't necessary. That said, there is *something* going on with your
db, so it might be prudent to figure out what it is.
Thanks, Robert. Actually, I noted that each time I ran my backup
script, the OID mentioned in the error was bigger than the previous number.
Before I run the
That doesn't sound consistent with the arrangements that MySQL AB
expect for commercial users of their products.
They variously indicate that:
- If you are developing and distributing open source applications
under the GPL, or some OSI-approved license, you are free to use
MySQL(tm) for
I've upgraded from 8.0 to 8.1 and want to be sure I'm making the changes
that are required.
For my backups, I have removed the option --blobs because it says this
is no longer needed. I guess the backup automatically includes blobs now?
I have also been running the contributed 'reindexdb'
Try not to leave open transactions hanging around in your Apache code.
An idle database connection couldn't block VACUUM like that; it'd have
to have been in the middle of a BEGIN block, or maybe even an unfinished
query.
It's possible that the JDBC library always creates a new transaction
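A quick way to spot such a connection (a sketch; these are the pre-9.2 pg_stat_activity column names):

SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';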
In the table, you're storing the OID of the large object itself, so if you
copy all the contents of the table, so far as I can see, yes, you'd still
have the reference to your large object.
Thanks. I did see this in a quick test, but I wasn't sure if the data would
be removed when I dropped
Just curious, but what is the point of having times 'accurate' to the
millisecond in a database?
In my case it's a simple matter that I include a timestamp in a digital
signature, and since my timestamp comes from Java (and most Unixes are the
same), it has millisecond resolution "built in."
"David Wall" [EMAIL PROTECTED] writes:
The real question for me is that 7.1 docs say that the resolution of a
timestamp is 8 bytes at "1 microsecond / 14 digits", yet I generally see
YYYY-MM-DD HH-MM-SS.cc returned in my queries (both with pgsql and with
JDBC).
Wonderful job on getting 7.1 released. I've just installed it in place of a
7.1beta4 database, with the great advantage of not even having to migrate
the database.
It seems that 7.1 is able to handle large objects in its dump/restore
natively now and no longer requires the use of the contrib
Does 7.1 "natively" handle large objects in dumps, restores and vaccums? In
7.0.3 there was a contrib for LOs.
Thanks,
David
bytea is probably your best bet. The 8k limit is toast in 7.1, btw.
If you're using JDBC, you may not be able to get bytea types to work. I was
able to get the SQL type OID to work with JDBC's get/setBytes() methods.
You may also want to use 7.1 since I believe its large object support is
Is there a way to start with yesterday's dump and load each transaction
log dump in order until you have a complete restore of the database?
Does recovery of pg_dump and the transaction logs work with a backup and
then doing a rollforward on the transaction logs? What would be the
procedure?
Is there a place where I can see what the new features for 7.1 are expected
to be? I'm hopeful for native BLOB support (smile)
David
Something I'm curious about: does MySQL support user-defined datatypes?
No, it does not.
If so, how hard is it to make a new datatype? I think datatype
extensibility is one of the cooler features of Postgres ...
Making new datatypes sounds good, but in practice how easy is it to do? I