Christian Paul B. Cosinas wrote:
Hi Mark,
I have many functions, more than 100 functions in the database :) And I
am dealing with about 3 million records in one database.
And about 100 databases :)
LOL - sorry, misunderstood your previous message to mean you had
identified *one* query where
If I turn on stats_command_string, how much impact would it have on
PostgreSQL server's performance during a period of massive data
INSERTs? I know that the answer to the question I'm asking will
largely depend upon different factors, so I would like to know in which
situations it would be negligible.
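For context, a minimal sketch of how the setting is inspected and where its effect shows up, assuming a pre-8.3 server where stats_command_string still exists (it was later replaced by track_activities):

SHOW stats_command_string;     -- normally set in postgresql.conf
-- with it on, each backend's query text is published here:
SELECT procpid, current_query FROM pg_stat_activity;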
Hi All,
I am Kishore doing freelance development of J2EE applications.
We switched to PostgreSQL recently because of the advantages it has over other commercial databases. All went well until recently, when we began working on an application that needs to maintain a huge database.
I guess you should check whether a blob field and large object access is
suitable for you - no escaping etc., just raw binary large objects. AFAIK,
PQexecParams is not the right solution for you. Refer to the "Large Objects"
section: "28.3.5. Writing Data to a Large Object
The function
int lo_write(PGconn *conn, int fd, const char *buf, size_t len);
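For illustration, the same interface is also exposed as server-side SQL functions, so the write can be sketched without any client C code. The OID 16411 and descriptor 0 below are just example return values:

BEGIN;                              -- LO descriptors only live inside a transaction
SELECT lo_creat(-1);                -- creates a large object, returns its OID, say 16411
SELECT lo_open(16411, 131072);      -- 131072 = INV_WRITE; returns a descriptor, say 0
SELECT lowrite(0, 'raw bytes'::bytea);  -- write the payload
SELECT lo_close(0);
COMMIT;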
I want to correlate two index rows of different tables to find an
offset so that
table1.value = table2.value AND table1.id = table2.id + offset
is true for a maximum number of rows.
To achieve this, I have the two tables and a table with possible
offset values and execute a query:
SELECT value,
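The query in the preview is cut off. Purely as a sketch, one way to phrase it, assuming the candidate offsets live in a table offsets(ofs) (all names here are guesses):

-- Count matching row pairs for every candidate offset, keep the best one.
SELECT o.ofs, count(*) AS matches
  FROM offsets o, table1 t1, table2 t2
 WHERE t1.value = t2.value
   AND t1.id    = t2.id + o.ofs
 GROUP BY o.ofs
 ORDER BY matches DESC
 LIMIT 1;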
Hi to all those who replied. Thank you.
I monitored my database server a while ago and found out that memory is used
extensively when I am fetching records from the database. I use the command
"fetch all" in my VB code and put it in a recordset. Also with this command
the CPU is utilized extensively
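A common way to keep memory flat in this situation is to replace the single "fetch all" with a server-side cursor fetched in batches; a minimal sketch (table and cursor names are made up):

BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM some_large_table;
FETCH FORWARD 1000 FROM big_cur;   -- repeat until it returns no rows
CLOSE big_cur;
COMMIT;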
Alex Turner wrote:
This is possible with Oracle utilizing the keep pool
alter table t_name storage ( buffer_pool keep);
If Postgres were to implement its own caching system, this seems like
it would be easy to implement (beyond the initial caching effort).
Alex
On 10/24/05, Craig A. James
"Jim C. Nasby" wrote:
> Stefan Weiss wrote:
> ... IMO it would be useful to have a way to tell
> PG that some tables were needed frequently, and should be cached if
> possible. This would allow application developers to consider joins with
> these tables as "cheap", even when querying on columns
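PostgreSQL never grew an Oracle-style keep pool, but for what it's worth, much later releases (9.4+) added the pg_prewarm extension, which can at least pull such a table into cache up front:

CREATE EXTENSION pg_prewarm;
SELECT pg_prewarm('frequently_joined_table');  -- table name is illustrative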
Scott Marlowe wrote:
What's needed is a way for the application developer to explicitly say,
"This object is frequently used, and I want it kept in memory."
There's an interesting conversation happening on the linux kernel
hackers mailing list right about now that applies:
http://www.gossamer
Now this interests me a lot.
Please clarify this:
I have 5000 tables, one for each city:
City1_Photos, City2_Photos, ... City5000_Photos.
Each of these tables is: CREATE TABLE CityN_Photos (location text, lo_id largeobjecttypeiforgot)
So, what's the limit for these large objects? I heard I coul
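For what it's worth, the forgotten column type is just oid - large objects are referenced by their OID - so the sketched table would be:

CREATE TABLE City1_Photos (location text, lo_id oid);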
Kevin Grittner wrote:
In addition to what Mark pointed out, there is the possibility that a
query is running which is scanning a large table or otherwise bringing in a
large number of pages from disk. That would first use up all available
unused cache space, and then may start replacing some of your
frequently used data
Just to play devil's advocate here for a second, but if we have an
algorithm that is substantially better than just plain old LRU, which is
what I believe the kernel is going to use to cache pages (I'm no kernel
hacker), then why don't we apply that and have a significantly larger
page cache a la Oracle
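The knob for giving PostgreSQL itself a larger page cache is shared_buffers; a sketch of checking and raising it (the value is illustrative, and a server restart is required):

SHOW shared_buffers;
-- then edit postgresql.conf and restart; on servers of that era, e.g.:
--   shared_buffers = 262144        (counted in 8 kB pages, i.e. ~2 GB)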