KaiGai Kohei kai...@ak.jp.nec.com wrote:
I don't think this is necessarily a good idea. We might decide to treat
both things separately in the future, and having them represented
separately in the dump would prove useful.
I agree. From a design perspective, the single-section approach
(2010/02/09 20:16), Takahiro Itagaki wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
I don't think this is necessarily a good idea. We might decide to treat
both things separately in the future, and having them represented
separately in the dump would prove useful.
I agree. From design
(2010/02/09 21:18), KaiGai Kohei wrote:
(2010/02/09 20:16), Takahiro Itagaki wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
I don't think this is necessarily a good idea. We might decide to treat
both things separately in the future, and having them represented
separately in the dump
KaiGai Kohei kai...@ak.jp.nec.com wrote:
The attached patch fixed up the cleanup query as follows:
+ appendPQExpBuffer(dquery,
+                   "SELECT pg_catalog.lo_unlink(oid) "
+                   "FROM pg_catalog.pg_largeobject_metadata "
+                   "WHERE oid = %s;\n",
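For reference, the query text that call assembles can be sketched as follows (a Python mock-up; the helper name is hypothetical, and pg_dump substitutes the blob's OID for %s):

```python
def lo_unlink_cleanup_query(lo_oid: str) -> str:
    # Hypothetical helper mirroring the cleanup query above: unlink the
    # large object only if it still has a row in pg_largeobject_metadata,
    # so a --clean restore does not fail on a missing blob.
    return (
        "SELECT pg_catalog.lo_unlink(oid) "
        "FROM pg_catalog.pg_largeobject_metadata "
        "WHERE oid = %s;\n" % lo_oid
    )
```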
Takahiro Itagaki wrote:
KaiGai Kohei kai...@kaigai.gr.jp wrote:
default: both contents and metadata
--data-only: same
--schema-only: neither
However, it means large objects would be the only object class
that dumps its owner, ACL and comment even
(2010/02/08 22:23), Alvaro Herrera wrote:
Takahiro Itagaki wrote:
KaiGai Kohei kai...@kaigai.gr.jp wrote:
default: both contents and metadata
--data-only: same
--schema-only: neither
However, it means large objects would be the only object
(2010/02/05 13:53), Takahiro Itagaki wrote:
KaiGai Kohei kai...@kaigai.gr.jp wrote:
default: both contents and metadata
--data-only: same
--schema-only: neither
However, it means large objects would be the only object class
that dumps its owner, ACL and
(2010/02/04 0:20), Robert Haas wrote:
2010/2/1 KaiGai Kohei kai...@ak.jp.nec.com:
I again wonder whether we are on the right direction.
I believe the proposed approach is to dump blob metadata if and only
if you are also dumping blob contents, and to do all of this for data
dumps but not
(2010/02/04 17:30), KaiGai Kohei wrote:
(2010/02/04 0:20), Robert Haas wrote:
2010/2/1 KaiGai Kohei kai...@ak.jp.nec.com:
I again wonder whether we are on the right direction.
I believe the proposed approach is to dump blob metadata if and only
if you are also dumping blob contents, and to do
2010/2/4 KaiGai Kohei kai...@ak.jp.nec.com:
(2010/02/04 0:20), Robert Haas wrote:
2010/2/1 KaiGai Kohei kai...@ak.jp.nec.com:
I again wonder whether we are on the right direction.
I believe the proposed approach is to dump blob metadata if and only
if you are also dumping blob contents, and
Robert Haas wrote:
2010/2/4 KaiGai Kohei kai...@ak.jp.nec.com:
(2010/02/04 0:20), Robert Haas wrote:
2010/2/1 KaiGai Kohei kai...@ak.jp.nec.com:
I again wonder whether we are on the right direction.
I believe the proposed approach is to dump blob metadata if and only
if you are also
(2010/02/05 3:27), Alvaro Herrera wrote:
Robert Haas wrote:
2010/2/4 KaiGai Kohei kai...@ak.jp.nec.com:
(2010/02/04 0:20), Robert Haas wrote:
2010/2/1 KaiGai Kohei kai...@ak.jp.nec.com:
I again wonder whether we are on the right direction.
I believe the proposed approach is to dump blob
KaiGai Kohei kai...@kaigai.gr.jp wrote:
default: both contents and metadata
--data-only: same
--schema-only: neither
However, it means large objects would be the only object class
that dumps its owner, ACL and comment even if --data-only is given.
Is
(2010/02/05 13:53), Takahiro Itagaki wrote:
KaiGai Kohei kai...@kaigai.gr.jp wrote:
default: both contents and metadata
--data-only: same
--schema-only: neither
However, it means large objects would be the only object class
that dumps its owner, ACL and
(2010/02/05 13:53), Takahiro Itagaki wrote:
KaiGai Kohei kai...@kaigai.gr.jp wrote:
default: both contents and metadata
--data-only: same
--schema-only: neither
However, it means large objects would be the only object class
that dumps its owner, ACL and
2010/2/1 KaiGai Kohei kai...@ak.jp.nec.com:
I again wonder whether we are on the right direction.
I believe the proposed approach is to dump blob metadata if and only
if you are also dumping blob contents, and to do all of this for data
dumps but not schema dumps. That seems about right to me.
(2010/02/01 14:19), Takahiro Itagaki wrote:
As far as I read, the patch is almost ready to commit
except the following issue about backward compatibility:
* BLOB DATA
This section is same as existing BLOBS section, except for _LoadBlobs()
does not create a new large object before opening it
KaiGai Kohei kai...@ak.jp.nec.com wrote:
Can we remove such path and raise an error instead?
Also, even if we support the older servers in the routine,
the new bytea format will be another problem anyway.
OK, I'll fix it.
I think we might need to discuss about explicit version checks
(2010/02/02 9:33), Takahiro Itagaki wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
Can we remove such path and raise an error instead?
Also, even if we support the older servers in the routine,
the new bytea format will be another problem anyway.
OK, I'll fix it.
I think we might
The --schema-only with large objects might be unnatural, but the
--data-only with properties of large objects is also unnatural.
Which behavior is more unnatural?
I think large object metadata is a kind of row-based access control.
How do we dump and restore per-row ACLs when we support
KaiGai Kohei kai...@ak.jp.nec.com wrote:
The attached patch uses one TOC entry for each blob object.
This patch not only fixes the existing bugs, but also refactors
the dump format of large objects in pg_dump. The new format is
more similar to the format of tables:
SectionTables
KaiGai Kohei kai...@ak.jp.nec.com wrote:
The attached patch uses one TOC entry for each blob object.
When I'm testing the new patch, I found ALTER LARGE OBJECT command
returns ALTER LARGEOBJECT tag. Should it be ALTER LARGE(space)OBJECT
instead? As I remember, we had decided not to use
Takahiro Itagaki itagaki.takah...@oss.ntt.co.jp writes:
When I'm testing the new patch, I found ALTER LARGE OBJECT command
returns ALTER LARGEOBJECT tag. Should it be ALTER LARGE(space)OBJECT
instead? As I remember, we had decided not to use LARGEOBJECT
(without a space) in user-visible
(2010/01/28 18:21), Takahiro Itagaki wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
The attached patch uses one TOC entry for each blob object.
When I'm testing the new patch, I found ALTER LARGE OBJECT command
returns ALTER LARGEOBJECT tag. Should it be ALTER LARGE(space)OBJECT
KaiGai Kohei kai...@ak.jp.nec.com wrote:
When I'm testing the new patch, I found ALTER LARGE OBJECT command
returns ALTER LARGEOBJECT tag. Should it be ALTER LARGE(space)OBJECT
instead?
Sorry, I forgot to fix this tag when it was pointed out that LARGEOBJECT
should be LARGE(space)OBJECT.
The attached patch uses one TOC entry for each blob object.
It adds two new section types.
* BLOB ITEM
This section provides properties of a certain large object.
It contains a query to create an empty large object, and to restore
ownership of the large object, if necessary.
| --
| -- Name:
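The restore-side commands such a BLOB ITEM entry carries might look like the following sketch (the OID and role are illustrative; server-side lo_create(oid) creates an empty large object with that OID, and ALTER LARGE OBJECT ... OWNER TO restores ownership):

```python
def blob_item_restore_sql(lo_oid: int, owner: str) -> str:
    # Hypothetical sketch of the SQL a per-blob "BLOB ITEM" TOC entry
    # would emit on restore: create an empty large object with the
    # given OID, then restore its ownership if necessary.
    return (
        f"SELECT pg_catalog.lo_create({lo_oid});\n"
        f"ALTER LARGE OBJECT {lo_oid} OWNER TO {owner};\n"
    )
```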
Tom Lane t...@sss.pgh.pa.us wrote:
It might be better to try a test case with lighter-weight objects,
say 5 million simple functions.
A dump of that quickly settled into running a series of these:
SELECT proretset, prosrc, probin,
pg_catalog.pg_get_function_arguments(oid) AS funcargs,
Tom Lane t...@sss.pgh.pa.us wrote:
It might be better to try a test case with lighter-weight objects,
say 5 million simple functions.
Said dump ran in about 45 minutes with no obvious stalls or
problems. The 2.2 GB database dumped to a 1.1 GB text file, which
was a little bit of a
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Tom Lane t...@sss.pgh.pa.us wrote:
It might be better to try a test case with lighter-weight objects,
say 5 million simple functions.
Said dump ran in about 45 minutes with no obvious stalls or
problems. The 2.2 GB database dumped to a
Tom Lane t...@sss.pgh.pa.us wrote:
Did you happen to notice anything about pg_dump's memory
consumption?
Not directly, but I was running 'vmstat 1' throughout. Cache space
dropped about 2.1 GB while it was running and popped back up to the
previous level at the end.
-Kevin
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
Did you happen to notice anything about pg_dump's memory
consumption?
Not directly, but I was running 'vmstat 1' throughout. Cache
space dropped about 2.1 GB while it was running and popped back up
to
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
Did you happen to notice anything about pg_dump's memory
consumption?
I took a closer look, and there's some bad news, I think. The above
numbers were
Tom Lane t...@sss.pgh.pa.us wrote:
I'm not so worried about the amount of RAM needed as whether
pg_dump's internal algorithms will scale to large numbers of TOC
entries. Any O(N^2) behavior would be pretty painful, for
example. No doubt we could fix any such problems, but it might
take
Kevin Grittner kevin.gritt...@wicourts.gov writes:
I'm afraid pg_dump didn't get very far with this before:
pg_dump: WARNING: out of shared memory
pg_dump: SQL command failed
Given how fast it happened, I suspect that it was 2672 tables into
the dump, versus 26% of the way through 5.5
Tom Lane t...@sss.pgh.pa.us wrote:
It might be better to try a test case with lighter-weight objects,
say 5 million simple functions.
So the current database is expendable? I'd just as soon delete it
before creating the other one, if you're fairly confident the other
one will do it.
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Tom Lane t...@sss.pgh.pa.us wrote:
It might be better to try a test case with lighter-weight objects,
say 5 million simple functions.
So the current database is expendable?
Yeah, I think it was a bad experimental design anyway...
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Tom Lane t...@sss.pgh.pa.us wrote:
Do you have the opportunity to try an experiment on hardware
similar to what you're running that on? Create a database with
7 million tables and see what the dump/restore
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
So I'm not sure whether I can get to a state suitable for starting
the desired test, but I'll stay with it for a while.
I have other commitments today, so I'm going to leave the VACUUM
ANALYZE running and come back tomorrow morning to try the
Kevin Grittner kevin.gritt...@wicourts.gov writes:
... After a few minutes that left me curious just how big
the database was, so I tried:
select pg_size_pretty(pg_database_size('test'));
I did a Ctrl+C after about five minutes and got:
Cancel request sent
but it didn't return for
KaiGai Kohei kai...@kaigai.gr.jp writes:
(2010/01/23 5:12), Tom Lane wrote:
Now the argument against that is that it won't scale terribly well
to situations with very large numbers of blobs.
Even if the database contains a massive number of large objects, all that
pg_dump has to manage in RAM
KaiGai Kohei kai...@ak.jp.nec.com writes:
The attached patch is a revised version.
I'm inclined to wonder whether this patch doesn't prove that we've
reached the end of the line for the current representation of blobs
in pg_dump archives. The alternative that I'm thinking about is to
treat each
Tom Lane t...@sss.pgh.pa.us wrote:
Now the argument against that is that it won't scale terribly well
to situations with very large numbers of blobs. However, I'm not
convinced that the current approach of cramming them all into one
TOC entry scales so well either. If your large objects
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Tom Lane t...@sss.pgh.pa.us wrote:
We've heard of people with many tens of thousands of
tables, and pg_dump speed didn't seem to be a huge bottleneck for
them (at least not in recent versions). So I'm feeling we should
not dismiss the idea
Tom Lane t...@sss.pgh.pa.us wrote:
Do you have the opportunity to try an experiment on hardware
similar to what you're running that on? Create a database with 7
million tables and see what the dump/restore times are like, and
whether pg_dump/pg_restore appear to be CPU-bound or
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Tom Lane t...@sss.pgh.pa.us wrote:
Do you have the opportunity to try an experiment on hardware
similar to what you're running that on? Create a database with 7
million tables and see what the dump/restore times are like, and
whether
Tom Lane t...@sss.pgh.pa.us wrote:
Empty is fine.
I'll get started.
-Kevin
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
I'll get started.
After a couple false starts, the creation of the millions of tables
is underway. At the rate it's going, it won't finish for 8.2 hours,
so I'll have to come in and test the dump tomorrow morning.
-Kevin
(2010/01/23 5:12), Tom Lane wrote:
KaiGai Kohei kai...@ak.jp.nec.com writes:
The attached patch is a revised version.
I'm inclined to wonder whether this patch doesn't prove that we've
reached the end of the line for the current representation of blobs
in pg_dump archives. The alternative
(2010/01/21 16:52), Takahiro Itagaki wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
This patch renamed hasBlobs() to getBlobs(), and changed its
purpose. It registers DO_BLOBS, DO_BLOB_COMMENTS and DO_BLOB_ACLS
for each large object owner, if necessary.
This patch adds
KaiGai Kohei kai...@ak.jp.nec.com wrote:
I'm not sure whether we need to make groups for each owner of large objects.
If I remember right, the primary issue was separating the routines that dump
BLOB ACLS from the routines for BLOB COMMENTS, right? Why did you make the
change?
When
(2010/01/21 19:42), Takahiro Itagaki wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
I'm not sure whether we need to make groups for each owner of large objects.
If I remember right, the primary issue was separating the routines that dump
BLOB ACLS from the routines for BLOB COMMENTS, right? Why
The attached patch is a revised version.
List of updates:
- cleanup: getBlobs() was renamed to getBlobOwners()
- cleanup: BlobsInfo was renamed to BlobOwnerInfo
- bugfix: pg_get_userbyid() in SQL queries was replaced by username_subquery,
which contains the right subquery to obtain a username
KaiGai Kohei kai...@ak.jp.nec.com wrote:
This patch renamed hasBlobs() to getBlobs(), and changed its
purpose. It registers DO_BLOBS, DO_BLOB_COMMENTS and DO_BLOB_ACLS
for each large object owner, if necessary.
This patch adds DumpableObjectType DO_BLOB_ACLS and struct BlobsInfo. We
2009/12/22 KaiGai Kohei kai...@ak.jp.nec.com:
(2009/12/21 9:39), KaiGai Kohei wrote:
(2009/12/19 12:05), Robert Haas wrote:
On Fri, Dec 18, 2009 at 9:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Oh. This is more complicated than it appeared on the
(2009/12/21 9:39), KaiGai Kohei wrote:
(2009/12/19 12:05), Robert Haas wrote:
On Fri, Dec 18, 2009 at 9:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Oh. This is more complicated than it appeared on the surface. It
seems that the string BLOB COMMENTS
(2009/12/19 12:05), Robert Haas wrote:
On Fri, Dec 18, 2009 at 9:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Oh. This is more complicated than it appeared on the surface. It
seems that the string BLOB COMMENTS actually gets inserted into
custom dumps
(2009/12/18 15:48), Takahiro Itagaki wrote:
Robert Haas robertmh...@gmail.com wrote:
In both cases, I'm lost. Help?
They might be contrasted with the comments for myLargeObjectExists.
Since we use MVCC visibility in loread(), metadata for large objects
should also be visible under MVCC
2009/12/18 KaiGai Kohei kai...@ak.jp.nec.com:
(2009/12/18 15:48), Takahiro Itagaki wrote:
Robert Haas robertmh...@gmail.com wrote:
In both cases, I'm lost. Help?
They might be contrasted with the comments for myLargeObjectExists.
Since we use MVCC visibility in loread(), metadata for
On Fri, Dec 18, 2009 at 9:00 AM, Robert Haas robertmh...@gmail.com wrote:
2009/12/18 KaiGai Kohei kai...@ak.jp.nec.com:
(2009/12/18 15:48), Takahiro Itagaki wrote:
Robert Haas robertmh...@gmail.com wrote:
In both cases, I'm lost. Help?
They might be contrasted with the comments for
On Fri, Dec 18, 2009 at 1:48 AM, Takahiro Itagaki
itagaki.takah...@oss.ntt.co.jp wrote:
In both cases, I'm lost. Help?
They might be contrasted with the comments for myLargeObjectExists.
Since we use MVCC visibility in loread(), metadata for large objects
should also be visible under the MVCC rule.
Robert Haas robertmh...@gmail.com writes:
Oh. This is more complicated than it appeared on the surface. It
seems that the string BLOB COMMENTS actually gets inserted into
custom dumps somewhere, so I'm not sure whether we can just change it.
Was this issue discussed at some point before
Robert Haas robertmh...@gmail.com writes:
Part of what I'm confused about (and what I think should be documented
in a comment somewhere) is why we're using MVCC visibility in some
places but not others. In particular, there seem to be some bits of
the comment that imply that we do this for
On Fri, Dec 18, 2009 at 9:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Oh. This is more complicated than it appeared on the surface. It
seems that the string BLOB COMMENTS actually gets inserted into
custom dumps somewhere, so I'm not sure whether we
On Fri, Dec 18, 2009 at 9:51 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Part of what I'm confused about (and what I think should be documented
in a comment somewhere) is why we're using MVCC visibility in some
places but not others. In particular, there
2009/12/17 Takahiro Itagaki itagaki.takah...@oss.ntt.co.jp:
Robert Haas robertmh...@gmail.com wrote:
2009/12/16 KaiGai Kohei kai...@ak.jp.nec.com:
long desc: When turned on, privilege checks on large objects perform with
backward compatibility as 8.4.x or earlier
Robert Haas robertmh...@gmail.com wrote:
Another comment is I'd like to keep <link linkend="catalog-pg-largeobject-metadata">
for the first <structname>pg_largeobject</structname> in each topic.
Those two things aren't the same. Perhaps you meant <link
linkend="catalog-pg-largeobject">?
Oops,
On Thu, Dec 17, 2009 at 7:27 PM, Takahiro Itagaki
itagaki.takah...@oss.ntt.co.jp wrote:
Another comment is I'd like to keep <link linkend="catalog-pg-largeobject-metadata">
for the first <structname>pg_largeobject</structname> in each topic.
Those two things aren't the same. Perhaps you meant
Robert Haas robertmh...@gmail.com wrote:
In both cases, I'm lost. Help?
They might be contrasted with the comments for myLargeObjectExists.
Since we use MVCC visibility in loread(), metadata for large objects
should also be visible under the MVCC rule.
If I understand them, they say:
*
On Thu, Dec 10, 2009 at 10:41 PM, Takahiro Itagaki
itagaki.takah...@oss.ntt.co.jp wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
What's your opinion about:
long desc: When turned on, privilege checks on large objects perform with
backward compatibility as 8.4.x or earlier
(2009/12/17 7:25), Robert Haas wrote:
On Thu, Dec 10, 2009 at 10:41 PM, Takahiro Itagaki
itagaki.takah...@oss.ntt.co.jp wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
What's your opinion about:
long desc: When turned on, privilege checks on large objects perform with
2009/12/16 KaiGai Kohei kai...@ak.jp.nec.com:
(2009/12/17 7:25), Robert Haas wrote:
On Thu, Dec 10, 2009 at 10:41 PM, Takahiro Itagaki
itagaki.takah...@oss.ntt.co.jp wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
What's your opinion about:
long desc: When turned on, privilege checks
(2009/12/17 13:20), Robert Haas wrote:
2009/12/16 KaiGai Kohei kai...@ak.jp.nec.com:
(2009/12/17 7:25), Robert Haas wrote:
On Thu, Dec 10, 2009 at 10:41 PM, Takahiro Itagaki
itagaki.takah...@oss.ntt.co.jp wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
What's your opinion about:
Robert Haas robertmh...@gmail.com wrote:
2009/12/16 KaiGai Kohei kai...@ak.jp.nec.com:
long desc: When turned on, privilege checks on large objects perform with
backward compatibility as 8.4.x or earlier releases.
Mostly English quality, but there are some other issues
KaiGai Kohei kai...@kaigai.gr.jp wrote:
We no longer have any reason to keep the CASE ... WHEN and the subquery for
the given LOID. Right?
Ah, I see. I used your suggestion.
I applied the bug fixes. Our tools and contrib modules will always use
pg_largeobject_metadata instead of pg_largeobject to
KaiGai Kohei wrote:
What happens when
there is no entry in pg_largeobject_metadata for a specific row?
In this case, these rows become orphans.
So, I think we need to create an empty large object with the same LOID in
pg_migrator. It makes an entry in pg_largeobject_metadata without
writing
KaiGai Kohei kai...@ak.jp.nec.com wrote:
We have to reference pg_largeobject_metadata to check whether a certain
large object exists, or not.
That is the case when we create a new large object but write nothing.
OK, that makes sense.
In addition to the patch, we also need to fix
Takahiro Itagaki itagaki.takah...@oss.ntt.co.jp wrote:
In addition to the patch, we also need to fix pg_restore with the
--clean option. I added DropBlobIfExists() in pg_backup_db.c.
A revised patch attached. Please check further mistakes.
...and here is an additional fix for contrib modules.
KaiGai Kohei wrote:
Takahiro Itagaki wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
Tom Lane wrote:
Takahiro Itagaki itagaki.takah...@oss.ntt.co.jp writes:
<structname>pg_largeobject</structname> should not be readable by the
public, since the catalog contains data in large
KaiGai Kohei wrote:
We use SELECT loid FROM pg_largeobject LIMIT 1 in pg_dump. We could
use pg_largeobject_metadata instead if we only wanted to fix pg_dump,
but it's no surprise that other user applications use such queries.
I think allowing loid to be read is a balanced solution.
Bruce Momjian wrote:
KaiGai Kohei wrote:
Takahiro Itagaki wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
Tom Lane wrote:
Takahiro Itagaki itagaki.takah...@oss.ntt.co.jp writes:
<structname>pg_largeobject</structname> should not be readable by the
public, since the catalog contains
Takahiro Itagaki wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
We have to reference pg_largeobject_metadata to check whether a certain
large object exists, or not.
That is the case when we create a new large object but write nothing.
OK, that makes sense.
In addition to the patch, we
KaiGai Kohei kai...@ak.jp.nec.com wrote:
we still allow SELECT * FROM pg_largeobject ...right?
It can be solved by revoking all privileges from everyone in the initdb
phase. So, we should inject the following statement into setup_privileges().
REVOKE ALL ON pg_largeobject FROM PUBLIC;
Takahiro Itagaki wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
we still allow SELECT * FROM pg_largeobject ...right?
It can be solved by revoking all privileges from everyone in the initdb
phase. So, we should inject the following statement into setup_privileges().
REVOKE ALL ON
KaiGai Kohei kai...@ak.jp.nec.com wrote:
What's your opinion about:
long desc: When turned on, privilege checks on large objects perform with
backward compatibility as 8.4.x or earlier releases.
I updated the description as you suggested.
Applied with minor editorialization,
Takahiro Itagaki itagaki.takah...@oss.ntt.co.jp writes:
OK, I'll add the following description in the documentation of pg_largeobject.
<structname>pg_largeobject</structname> should not be readable by the
public, since the catalog contains data in large objects of all users.
This is going
Tom Lane wrote:
Takahiro Itagaki itagaki.takah...@oss.ntt.co.jp writes:
OK, I'll add the following description in the documentation of
pg_largeobject.
<structname>pg_largeobject</structname> should not be readable by the
public, since the catalog contains data in large objects of all
2009/12/10 KaiGai Kohei kai...@ak.jp.nec.com:
If so, we can inject a hardwired rule to prevent selecting from pg_largeobject
when lo_compat_privileges is turned off, instead of REVOKE ALL FROM PUBLIC.
it doesn't seem like a good idea to make that GUC act like a GRANT or
REVOKE on the case of
KaiGai Kohei kai...@ak.jp.nec.com wrote:
Tom Lane wrote:
Takahiro Itagaki itagaki.takah...@oss.ntt.co.jp writes:
<structname>pg_largeobject</structname> should not be readable by the
public, since the catalog contains data in large objects of all users.
This is going to be a
Takahiro Itagaki wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
Tom Lane wrote:
Takahiro Itagaki itagaki.takah...@oss.ntt.co.jp writes:
<structname>pg_largeobject</structname> should not be readable by the
public, since the catalog contains data in large objects of all users.
This is
Jaime Casanova jcasa...@systemguards.com.ec wrote:
besides, if a normal user can read from pg_class, why do we deny pg_largeobject?
pg_class and pg_largeobject_metadata contain only metadata of objects.
Tables and pg_largeobject contain actual data of the objects. A normal user
can read pg_class,
KaiGai Kohei wrote:
Takahiro Itagaki wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
Tom Lane wrote:
Takahiro Itagaki itagaki.takah...@oss.ntt.co.jp writes:
<structname>pg_largeobject</structname> should not be readable by the
public, since the catalog contains data in large objects of
KaiGai Kohei kai...@ak.jp.nec.com wrote:
The attached patch fixes these matters.
I'll start to check it.
We have to reference pg_largeobject_metadata to check whether a certain
large object exists, or not.
What is the situation where there is a row in pg_largeobject_metadata
and no
Takahiro Itagaki wrote:
KaiGai Kohei kai...@ak.jp.nec.com wrote:
The attached patch fixes these matters.
I'll start to check it.
Thanks,
We have to reference pg_largeobject_metadata to check whether a certain
large object exists, or not.
What is the situation where there is a row
Hi, I'm reviewing LO-AC patch.
KaiGai Kohei kai...@ak.jp.nec.com wrote:
Nothing is changed in the other code, including anything corresponding to
in-place upgrading. I'm waiting for suggestions.
I have a question about the behavior -- the patch adds ownership
management of large objects.
Takahiro Itagaki wrote:
Hi, I'm reviewing LO-AC patch.
KaiGai Kohei kai...@ak.jp.nec.com wrote:
Nothing is changed in the other code, including anything corresponding to
in-place upgrading. I'm waiting for suggestions.
I have a question about the behavior -- the patch adds ownership