To suggest a solution, it would be important to know
how the elements of the table are accessed (by table index,
by a key field inside each table element that is searched for,
by a separate key table over which a binary search is done, or
by a tree structure that yields the table index, etc.), and:

what kind of update activity is done to the table entries?
Is it, for example, possible that short table entries are
replaced by longer ones, and how often does this occur?

I would suggest a solution where only the really needed parts of the
variable-length strings are stored, so that the storage needed is about
875 * 35,000 bytes (roughly 30 MB) plus some administration overhead. But
how exactly this is done depends on your answers to the questions above.
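To illustrate the idea: a minimal C sketch (not z/OS-specific; all names here are hypothetical) that appends only the actual bytes of each entry to one packed buffer and keeps a small offset/length index per entry, so total storage is close to the sum of the real entry lengths rather than max_entries * max_length:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical variable-length entry pool: entry data is appended to one
   growing byte buffer; a per-entry record holds its offset and length.
   Storage used is sum(actual lengths) plus index overhead. */
typedef struct {
    size_t offset;
    size_t length;
} entry_ref;

typedef struct {
    char      *bytes;      /* packed entry data */
    size_t     used, cap;  /* bytes used / bytes allocated */
    entry_ref *index;      /* one record per entry */
    size_t     count;      /* number of entries stored */
} var_pool;

static void pool_init(var_pool *p, size_t max_entries, size_t initial_bytes) {
    p->bytes = malloc(initial_bytes);
    p->cap   = initial_bytes;
    p->used  = 0;
    p->index = malloc(max_entries * sizeof(entry_ref));
    p->count = 0;
}

/* Append one entry; returns its index. The data buffer doubles when full. */
static size_t pool_add(var_pool *p, const void *data, size_t len) {
    while (p->used + len > p->cap) {
        p->cap  *= 2;
        p->bytes = realloc(p->bytes, p->cap);
    }
    memcpy(p->bytes + p->used, data, len);
    p->index[p->count].offset = p->used;
    p->index[p->count].length = len;
    p->used += len;
    return p->count++;
}

/* Look up an entry by index (no copy, just a pointer into the buffer). */
static const char *pool_get(const var_pool *p, size_t i, size_t *len_out) {
    *len_out = p->index[i].length;
    return p->bytes + p->index[i].offset;
}
```

Note that this simple append-only layout does not handle in-place updates where a short entry grows longer, which is exactly why the update questions above matter: replacement with longer entries would need free-space management or periodic compaction.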

Kind regards

Bernd




Am 21.12.2012 13:25, schrieb Donald Likens:
I have a table with variable-length entries that range from 94 bytes to 32K, with
an average length of 875 bytes. This table has a maximum size of 35,000
entries. I am thinking about using cells for this table but am concerned about the
impact on the system of getting over 1 gigabyte of storage (35K*32K). I am
putting this cell pool above the bar, but what about backing this storage with
AUX and page faults? Should I be concerned? If I don't use a cell pool the
memory usage is around 30M.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


