Dear Michael,

We have had a similar problem using MaxDB with a Perl application over an ODBC connection. One special characteristic of the application is that in some specific tables (approx. 6 tables), around 200,000 records per table are inserted and deleted each day.

The other parts of the application contain slowly growing tables.

In the past I heard about a "ping" method that could lead to bad catalog cache hit rates like this.

Here is an answer from Dr. Thomas Kötter (SAP AG, Berlin):

--- Snip ---

Yes. The ping method was implemented with an SQLTables call. I don't know whether this is still the case. A simple "select * from dual" would be better for MaxDB. Maybe you can adjust this at your site.

--- Snap ---

We changed the ping method in the Perl ODBC module, and now we have a catalog cache hit rate of around 93%.
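
For reference, here is a minimal sketch of such an override (illustrative only, not the exact patch we made; it assumes DBD::ODBC and that your MaxDB installation exposes a DUAL table, per Dr. Kötter's suggestion above):

    use strict;
    use warnings;
    use DBI;

    # Force-load the ODBC driver so its ping() exists before we replace it.
    DBI->install_driver('ODBC');

    {
        no warnings 'redefine';
        # Replace the driver's ping() (historically an SQLTables catalog
        # call) with the cheap statement suggested above.
        *DBD::ODBC::db::ping = sub {
            my ($dbh) = @_;
            local $dbh->{RaiseError} = 0;   # a failed ping must not die
            local $dbh->{PrintError} = 0;
            my $sth = $dbh->prepare('SELECT * FROM DUAL') or return 0;
            my $ok  = $sth->execute ? 1 : 0;
            $sth->finish;
            return $ok;
        };
    }

With this in place, every $dbh->ping issued by the application's connection checking runs the cheap statement instead of the SQLTables catalog call.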

Perhaps this is helpful for you.

Best regards

   Hannes

--

Hannes Degenhart GPS - Gesellschaft zur Pruefung von Software mbH
                 Hoervelsinger Weg 54        D - 89081 Ulm - Germany
                 Pho. +49 731 96657 14       Fax. +49 731 96657 57
                 mailto:[EMAIL PROTECTED] Web: http://www.gps-ulm.de

Michael Jürgens wrote:
Hello,

I don't understand how to optimize the catalog cache.
In my database the hit rate is always around 50-60%.
I've set the catalog cache parameter (CAT_CACHE_SUPPLY) to 32000 pages, but there is no change.
If I add up the catalog cache values from all sessions, I get only 8000 pages.

How can I find out what's wrong - please help.


Best regards,

Michael