Bruce,
On Tue, 12 Jun 2001, Bruce Momjian wrote:
> > Bruce,
> >
> > On Fri, 18 May 2001, Bruce Momjian wrote:
> >
> > > We have on the TODO list:
> > >
> > > * SELECT pg_class FROM pg_class generates strange error
> > >
> > > It passes the tablename as targetlist all the way to the executor
> My project requires the use of Big5/EUC_TW (two bytes per
> Chinese character).
>
> Unfortunately, Big5 code contains escape '\'.
PostgreSQL has been able to handle Big5 since 6.5. Create a database with the
encoding EUC_TW, and set the client-side encoding to BIG5. For example, set the
PGCLIENTENCODING environment variable.
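A minimal sketch of the setup described above (the database name is hypothetical, and this assumes a build compiled with multibyte support):

```sql
-- Create the database with server-side encoding EUC_TW
-- (requires PostgreSQL configured with multibyte support).
CREATE DATABASE big5test WITH ENCODING = 'EUC_TW';

-- Then, in a client session connected to big5test, ask the backend
-- to convert to and from Big5 on the wire:
SET CLIENT_ENCODING TO 'BIG5';
```

Setting the PGCLIENTENCODING environment variable before starting the client has the same effect as the SET command.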
[EMAIL PROTECTED] (Reinoud van Leeuwen) writes:
> Well as I read back the thread I see 2 different approaches to
> replication:
> ...
> I can think of some scenarios where I would definitely want to
> *choose* one of the options.
Yes. IIRC, it looks to be possible to support a form of async
replication
Anyone know of any alternatives to using pgAdmin to migrate a database
(schema and data) from Foxpro to PostgreSQL? pgAdmin worked fine on my
initial test database, but it was slow... very slow. I'd like to try to
migrate one of our production databases, where several tables have
200,000+ records.
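One common alternative for bulk loads of that size: export each FoxPro table to a delimited text file and load it with COPY, which is much faster than row-by-row INSERTs. The file path and table name below are hypothetical:

```sql
-- Bulk-load a tab-delimited dump produced by the source system.
-- COPY runs server-side, so this file must be readable by the backend.
COPY customers FROM '/tmp/customers.txt';
```

psql's \copy variant reads the file on the client side instead, which avoids the server-filesystem requirement.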
Philip Crotwell <[EMAIL PROTECTED]> writes:
> On a similar idea, has there been any thought to allowing regular backend
> processess to run at lower priority?
People suggest that from time to time, but it's not an easy thing to do.
The problem is priority inversion: a low-priority process acquires a lock,
and any high-priority process that needs that lock then has to wait on the
starved low-priority one.
Thomas Swan <[EMAIL PROTECTED]> writes:
> I think I missed what I was trying to say in my original statement. I
> think there's a way to use the existing API with performance benefits
> left intact.
> Take for example the table :
> create table foo (
>     foo_id serial,
>     foo_name varchar
On Tue, 12 Jun 2001 15:50:09 +0200, you wrote:
>
>> Here are some disadvantages to using a "trigger based" approach:
>>
>> 1) Triggers simply transfer individual data items when they
>> are modified, they do not keep track of transactions.
>> 2) The execution of triggers within a database imposes a performance
>> overhead to that database.
On Tue, 12 Jun 2001 13:36:02 -0400, you wrote:
>Anyone know of any alternatives to using pgAdmin to migrate a database
>(schema and data) from Foxpro to PostgreSQL? pgAdmin worked fine on my
>initial test database, but it was slow... very slow. I'd like to try to
>migrate one of our production
Bruce Momjian wrote:
Thomas Swan <[EMAIL PROTECTED]> writes:
I know that BLOBs are on the TODO list, but I had an idea.
I think you just rediscovered TOAST.
We have TOAST, and people want to keep large objects for performance. I think we could use an API that allows
I know that vacuum has come up in the past, and even saw the
discussion about putting a cron entry to have it run every once in a while,
but I don't remember seeing anything about having it kick off via a trigger
every so many inserts.
Is there a relative consensus for how often to
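One caveat for the trigger idea: VACUUM cannot run inside a transaction block, so it cannot be issued from within a trigger function; the cron approach mentioned above remains the practical option. A sketch (table name hypothetical):

```sql
-- Run manually or from a scheduled job. Not callable from a trigger,
-- because VACUUM refuses to run inside a transaction block.
VACUUM ANALYZE orders;
```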
> Here are some disadvantages to using a "trigger based" approach:
>
> 1) Triggers simply transfer individual data items when they
> are modified, they do not keep track of transactions.
I don't know about other *async* replication engines but Rserv
keeps track of transactions (if I understood
On Sat, 9 Jun 2001, Bruce Momjian wrote:
> > Philip Crotwell <[EMAIL PROTECTED]> writes:
> > > I was vacuuming, but as the owner of the database. When I do that there
> > > are messages that should have clued me in, like
> > > NOTICE: Skipping "pg_largeobject" --- only table owner can VACUUM it
>
> > Here are some disadvantages to using a "trigger based" approach:
> >
> > 1) Triggers simply transfer individual data items when they
> > are modified, they do not keep track of transactions.
> I don't know about other *async* replication engines but Rserv
> keeps track of transactions (if I
Not sure if this is the right place, but...
I am evaluating a move from FoxPro to PostgreSQL. So far, I like what I
see... alot. But, I have a data migration issue looming in the near
future that I need to address. The pgAdmin tool is nice, and works okay
on small databases, but I need to migrate
Here is a basic lo_copy routine.
It copies a large object from an existing large object.
PG_FUNCTION_INFO_V1(lo_copy);
Datum
lo_copy(PG_FUNCTION_ARGS)
{
Oid oldlobjId = PG_GETARG_OID(0);
LargeObjectDesc *lobj,*oldlobj;
int r
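Once compiled into a shared library, a function like the one above would presumably be registered and invoked along these lines; the library path and the OID are assumptions for illustration:

```sql
-- Register the C function; the .so path is a hypothetical location
-- for the compiled shared object.
CREATE FUNCTION lo_copy(oid) RETURNS oid
    AS '/usr/local/pgsql/lib/lo_copy.so' LANGUAGE 'C';

-- Copy an existing large object and get the new object's OID back.
SELECT lo_copy(123456);
```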
We have been researching replication for several months now, and
I have some opinions to share to the community for feedback,
discussion, and/or participation. Our goal is to get a replication
solution for PostgreSQL that will meet most needs of users
and applications alike (mission impossible the
> > I would be very interested in hearing about your experiences with
> > this...
Well, Eric thinks it works just spiffy. 8-)
Recall is written in C++, and is meant to be extensible. It was
extended for perl and the DBI layer.
Note that this hack for perl is not perfect, especially in the
Running postgres-7.0.3 on a RedHat 6.2 system:
Recently I updated the schema of one of our tables (create, insert
select, drop, rename). We have a boolean column "hitsingle" with a default
of 'f'.
media=> \d incantaaudioclipregistry
Table "incantaaudioclipregistry"
Attribute
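The create / insert-select / drop / rename sequence described above can be sketched as follows (column and table names other than hitsingle are hypothetical); note that a column default must be restated on the new table or it is silently lost in the rewrite, which would explain the symptom:

```sql
-- Rebuild a table with a changed schema, preserving the data.
CREATE TABLE newclipregistry (
    clipid    integer,
    hitsingle boolean DEFAULT 'f'   -- restate the default explicitly
);
INSERT INTO newclipregistry
    SELECT clipid, hitsingle FROM incantaaudioclipregistry;
DROP TABLE incantaaudioclipregistry;
ALTER TABLE newclipregistry RENAME TO incantaaudioclipregistry;
```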
Alex Pilosov <[EMAIL PROTECTED]> writes:
> Apparently, since there's no explicit function to cast from inet to cidr,
> postgresql assumes its always safe to do so, as they are
> binary-compatible.
Yes. I've thought for a while that it was a mistake to treat them as
binary-compatible. However, yo
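The problematic implicit cast is easy to see directly: because inet and cidr are treated as binary-compatible, the value is simply relabeled, with no re-check that the bits to the right of the netmask are zero:

```sql
-- An inet value may have host bits set below its /24 mask...
SELECT '192.168.1.5/24'::inet;

-- ...and can be cast straight to cidr, even though a cidr value is
-- not supposed to carry nonzero bits beyond its netmask.
SELECT '192.168.1.5/24'::inet::cidr;
```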
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Somehow I got marked down for this but I actually thought the split was
> useful. One reason was that you could restore template1 individually
> (which hasn't worked for a while).
If it ever did, which seems doubtful given the way build_indices work
> Imho an implementation that opens a separate client connection to the
> replication target is only suited for async replication, and for that a
> WAL based solution would probably impose less overhead.
Yes there is significant overhead with opening a connection to a
client, so Postgres-R c
Hello
I have hacked up a replication layer for Perl code accessing a
database through the DBI interface. It works pretty well with MySQL
(I can run pre-bender slashcode replicated, haven't tried the more
recent releases).
Potentially this hack should also work with Pg but I haven't tried
yet.
> Here are some disadvantages to using a "trigger based" approach:
>
> 1) Triggers simply transfer individual data items when they
> are modified, they do not keep track of transactions.
> 2) The execution of triggers within a database imposes a performance
> overhead to that database.
> 3) Tr
Zeugswetter Andreas SB <[EMAIL PROTECTED]> writes:
>> If not, does an Order-by force a sort even if an index has the correct
>> order to satisfy the order-by?
> If a btree index is chosen that satisfies the order by, the sort is
> avoided.
And, of course, selection of that index is encouraged,
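A quick way to check whether the sort is actually being avoided is EXPLAIN; if the planner picks the btree index, the plan shows an index scan with no separate Sort step above it (index name hypothetical):

```sql
-- Build a btree index matching the ORDER BY, then inspect the plan.
CREATE INDEX foo_name_idx ON foo (foo_name);
EXPLAIN SELECT * FROM foo ORDER BY foo_name;
```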
> which I believe is what the rserv implementation in contrib currently
> does ... no?
We tried rserv, PG Link (Joseph Conway), and PostgreSQL Replicator. All
these projects are trigger-based asynchronous replication. They all have
some advantages over the current functionality of Postgres-R
Bruce,
On Fri, 18 May 2001, Bruce Momjian wrote:
> We have on the TODO list:
>
> * SELECT pg_class FROM pg_class generates strange error
>
> It passes the tablename as targetlist all the way to the executor, where
> it throws an error about Node 704 unknown.
The problem is caused in tran
> which I believe is what the rserv implementation in contrib currently does
> ... no?
it's funny ... what is in contrib right now was developed in a weekend by
Vadim, put in contrib, yet nobody has either used it *or* seen fit to
submit patches to improve it ... ?
On Tue, 12 Jun 2001, Zeugswetter Andreas
> Frequently one wants a data set returned in the same order as the
> index used in the query. Informix (at least) has implicit order-by,
> which means that the data will be returned in collating order if the
> query forces use of the appropriate index.
>
> Does Postgresql do this?
Yes, but sam