> We're working
> to create a package of Oracle procedures and documentation for this
> process and hope to release it in the future.
Release early and often.
Cheers,
--
Tim Ellis
DBA, Gamet
We're currently in the process of migrating an e-commerce customer off a
24x7 Oracle system that includes standby databases,
production-to-development refresh processes, data warehousing, etc. Many of
the tables run in the millions of rows. One runs almost 20 million rows. Not
big in enterprise-cl
On Fri, Jul 05, 2002 at 09:13:20AM -0700, Christopher Smith wrote:
> I have been running postgres for a short while; however, when
> I run the vacuumdb or vacuum commands I do not understand why my
> memory usage is increased and never released.
>
> Can someone explain this? Are there memory leaks in this command?
We have noticed a moderate degradation of performance in our system after
upgrading from 7.1 to 7.2. We are attempting to narrow the list of
culprits (it could be different sizes of critical tables, or any of a
dozen things).
Where the 7.1 installation reports this in the postmaster log:
DEBUG:
First off, thanks for your help, guys.
I'm aware of the problems of over-allocating RAM, and I surely wouldn't
want to force the buffers into swap. (thanks, Curt, for
kern.ipc.shm_use_phys) On this particular system, though, it's doing
nothing except PG. 384 MB of RAM, I can give PG 160 of it, w
On Fri, 5 Jul 2002, Gaetano Mendola wrote:
> Stephan wrote:
> > number of rows returned). I think there was some question about
> > whether it was safe to do that optimization (ie,
> > is select * from (a union [all] b) where condition
> > always the same as
> > select * from a where condition
I have been running postgres for a short while; however, when I run the vacuumdb or vacuum commands I do not understand why my memory usage is increased and never released.
Can someone explain this? Are there memory leaks in this command? Is it possible to clear the memory cache for postgresql? Do y
In my previous message I forgot to say that there is also another file
to patch to re-enable the old stdin input method for pg_dump:
In addition to: src/bin/psql/common.c
You have to patch also: src/bin/pg_dump/pg_backup_db.c
Changes to apply are the same.
--
On Thu, 4 Jul 2002, Tom Lane wrote:
> I can think of very very few applications where CHAR(n) is really a
> sensible choice over VARCHAR(n).
text hashes such as MD5 and crypt, stock or serial numbers, automotive
VIN codes, invoice sequences, emulated bitmasks, etc. Lots of
industry-specific
Hi there,
I've got the same problem after upgrading from pgsql 7.1.2... My dump
scripts weren't working any more.
I've solved the problem by modifying the source code and recompiling
postgres.
I've simply commented out the /dev/tty fopen and re-enabled
stdin/stdout by default.
The modified sour
On Tue, 2 Jul 2002, Bruce Momjian wrote:
> Yes, we had complaints that people were running their script and they
> wouldn't be prompted for the password on their terminal. Researching,
> we found no applications that get passwords from stdin _if_ a
> controlling terminal (/dev/tty) can be opene
At 12:16 PM 7/4/02 -0400, Bruce Momjian wrote:
> > Well, I must admit we had some rain today, but after your answer sun came
> > from behind the clouds :)
Rain? I've heard about that. Something about moisture falling from the
sky. Haven't seen any here in Phoenix for several months.
On Thu, 4 Jul 2002, Bruce Momjian wrote:
> How about PGUSER/PGPASSWORD? That will work. This is assuming you
> don't have an OS (BSD?) that displays environment variables for a
> process.
Not BSD; Linux. And it works. Thanks.
Well, I must admit we had some rain today, but after your answer sun
We have a need to store text data which typically is just a hundred or so
bytes, but in some cases may extend to a few thousand. Our current field
has a varchar of 1024, which is not large enough. Key data is fixed-size
and much smaller in this same record.
Our application is primarily transa
We hope to use views as a way to give customers ODBC-based ad-hoc query
access to our database while enforcing security. The reason is that we do
not want to put data into separate tables by customer, but rather use a
customer ID as part of any query criteria on any table.
So the question is:
On Thu, Jul 04, 2002 at 10:00:01AM +0800, Raymond Fung wrote:
> Dear all,
> ...
> It has translated the 4 bytes constant (0x87654321) into a one byte
> char constant (within the single quotes) during pre-processing. Seems
> this happens only when the high bit of the constant is set (i.e. it
> won'
Tom Lane wrote:
> Curt Sampson <[EMAIL PROTECTED]> writes:
> >> I still cannot set PG's shared_buffers higher than 2 (160 MB):
>
> > Shared memory pages, IIRC, are locked, meaning that they cannot be
> > swapped.
>
> Is that really how it works on *BSD? That's great if so --- it's
> exactly
Hiya. I've installed Postgres 7.2 on a dedicated FreeBSD system with 384
MB RAM. Because the system will be doing nothing except PG, I'd like to
dump as much memory as possible into PG's shared memory.
I rebuilt the kernel with very large limits: 330 MB on the MAXDSIZ and
DFLDSIZ, and 330 MB for
At 04:49 PM 7/4/2002, Tom Lane wrote:
>John Moore <[EMAIL PROTECTED]> writes:
> > So I *suspect* I want to keep the data in the physical row, rather than
> > using TEXT and having it stored separately from the record.
>
>You seem to be reading something into the TEXT type that's not there;
>perhap
John Moore <[EMAIL PROTECTED]> writes:
> So I *suspect* I want to keep the data in the physical row, rather than
> using TEXT and having it stored separately from the record.
You seem to be reading something into the TEXT type that's not there;
perhaps you are carrying over associations from som