on 24/7/01 5:57 PM, Richard Gaskin at [EMAIL PROTECTED] wrote:

> I did an about-face on this yesterday, setting aside the multi-field
> solution to play with some column-truncation algorithms for display in a
> single field.  I've managed to get the speed down to about half of what it
> took to parse columns out.   It works great with monospaced fonts, and most
> of the time it's "close enough" with non-monospaced fonts.  Still a tad
> wonky, though, hence my desire to have such a function built in, so it can
> use font-metric calculations more accurately and more quickly.
> 
> 
> Of course, there's always new things to learn with MC, so if I've missed the
> Ultimate Way To Emulate A Multi-Column List, by all means let me know.

By no means the ultimate way, but a small wrinkle on the other ways (whether
truncating the text to fit tab stops or using multiple fields), which can
work better in some contexts.  I've used it where the number of rows in the
data is very large relative to the number of rows displayed at a time,
and/or where the data changes frequently.

In such contexts, I've detached the scrollbar from the field(s) altogether.
Have one field with tab stops, or multiple fields; but either way, make them
non-scrolling; with a separate scrollbar control aligned at the right edge
of the last one.  If the fields can display, say, 10 rows, give the scrollbar
a range from 1 to (n-9), where n is the number of rows in the source data.
When the user changes the scrollbar, select the new range of 10 rows from
the source data (in a global, or a property of eg the scrollbar, or in a
hidden field etc), and do the display conversion on the fly (truncating text
between tab stops, splitting into multiple fields, etc) - just for those ten
rows.
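A minimal sketch of that scheme in MetaCard script, under stated assumptions:
a global gSourceData holds the full data, a non-scrolling field "display"
shows 10 rows, and a scrollbar "lister" carries these handlers.  The names
and the 10-row window are illustrative, not from the original post.

```
-- Sketch of the detached-scrollbar approach, assuming the full data
-- lives in the global gSourceData and the visible window is 10 rows.
global gSourceData

on scrollbarDrag newPosition
  -- newPosition runs from 1 to (n - 9); see setupScrollbar below
  put line newPosition to (newPosition + 9) of gSourceData into tRows
  -- do any display conversion here (truncating between tab stops,
  -- splitting into multiple fields, etc) - just for these ten rows
  put tRows into field "display"
end scrollbarDrag

-- Call this whenever the data is loaded or changed, to size the
-- scrollbar's range to match the number of rows in the source.
on setupScrollbar
  put the number of lines of gSourceData into n
  set the startValue of scrollbar "lister" to 1
  set the endValue of scrollbar "lister" to max(1, n - 9)
end setupScrollbar
```

Because scrollbarDrag touches only the ten visible rows, the cost per scroll
stays constant no matter how large gSourceData grows.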

This doesn't solve issues like drag selection, and it completely rules out
drag scrolling etc.  But it does change your performance concerns.

Now you have a fixed performance penalty, which you pay every time the
user scrolls the display or the data is changed; but it's probably quite
small; and you've avoided taking a huge hit the first time you try to
display the data, or each time the data changes.  In my experience, if the
number of rows is small, on reasonable machines you get excellent
performance - and you know the performance won't change when the size of the
data increases.

  Ben Rubinstein               |  Email: [EMAIL PROTECTED]
  Cognitive Applications Ltd   |  Phone: +44 (0)1273-821600
  http://www.cogapp.com        |  Fax  : +44 (0)1273-728866


