Dear all,

Once more we are struggling with a catalog cache hit rate of around 70%, which looks really bad compared to the values reported on this list.

Our machine is
- Processor: Intel Pentium IV, 3 GHz
- Memory:    2 GB RAM
- OS:        Debian Linux

MaxDB Settings (see the dbmcli sketch below the list):
- Version:           7.5.00.16

- Data Volume:       16 GB, 28% filled
- Log  Volume:        4 GB

- CACHE_SIZE:         25,000 pages (200 MB at 8 KB per page)
- CAT_CACHE_SUPPLY:  128,000 pages (1,024 MB)

- MAXUSERTASKS:     100
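
For completeness, this is roughly how we read and change these parameters
with dbmcli (a sketch only; the database name DEMODB and the dbm
user/password are placeholders, and a parameter change only takes effect
after a restart):

    dbmcli -d DEMODB -u dbm,dbm param_directget CAT_CACHE_SUPPLY
    dbmcli -d DEMODB -u dbm,dbm param_directput CAT_CACHE_SUPPLY 128000
    dbmcli -d DEMODB -u dbm,dbm db_offline
    dbmcli -d DEMODB -u dbm,dbm db_online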

Cache Hit Rates:
- Data Cache:      99.65%
- Catalog Cache:   69.64%

Application:

Perl application using an ODBC connection. One special characteristic of the application is that in a few specific tables (approx. 6) around 200,000 rows per table per day are inserted and deleted again.
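
For reference, the bulk inserts look roughly like this (a simplified
sketch; DSN, credentials, table and column names are placeholders). We
prepare the statement once with bind parameters and reuse the handle, so
it should only be parsed once per session:

    use strict;
    use warnings;
    use DBI;

    # Placeholder DSN and credentials for the MaxDB ODBC data source.
    my $dbh = DBI->connect('dbi:ODBC:maxdb_dsn', 'appuser', 'secret',
                           { RaiseError => 1, AutoCommit => 0 });

    my @batch = ({ id => 1, payload => 'a' }, { id => 2, payload => 'b' });

    # Prepare once, execute many times with bind values.
    my $ins = $dbh->prepare(
        'INSERT INTO import_data (id, payload) VALUES (?, ?)');
    $ins->execute($_->{id}, $_->{payload}) for @batch;
    $dbh->commit;
    $dbh->disconnect;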

The other parts of the application contain slowly growing tables.


In the past I've heard something about a "ping" method which could lead to bad catalog cache hit rates like this.
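
To make sure we are talking about the same thing: I assume this refers to
DBI's $dbh->ping, as used by pooling layers to test a connection before
each unit of work, roughly like this (sketch only):

    # Liveness check before reusing a pooled handle; if every ping ends
    # up as a statement that misses the catalog cache, many pings per
    # day could explain a low hit rate.
    unless ($dbh && $dbh->ping) {
        $dbh = DBI->connect('dbi:ODBC:maxdb_dsn', 'appuser', 'secret',
                            { RaiseError => 1 });
    }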

Perhaps someone can give us some hints about the "best" cache settings for this kind of workload.


Thanks and
best regards

   Hannes

--

Hannes Degenhart GPS - Gesellschaft zur Pruefung von Software mbH
                 Hoervelsinger Weg 54        D - 89081 Ulm - Germany
                 Pho. +49 731 96657 14       Fax. +49 731 96657 57
                 mailto:[EMAIL PROTECTED] Web: http://www.gps-ulm.de

