> As with Roy, I am entirely for consistency of the API, and the work that you
> did to clean it up is Good.  But apr_off_t is the real, potential size of a
> bucket's data.
In other words, if the portability library isn't abstracting this under
the covers, then it isn't much of a portability library.  All of our sizes
should be apr_off_t until we get to the OS layer, at which point the messy
code of dealing with os_limit < bucket_size can be dealt with in the
inevitably butt-ugly and OS-specific ways.  Bill is absolutely right that
this is going to be nasty to get right, and will add overhead to platforms
where apr_off_t != apr_ssize_t, but the alternative is to define a
lowest-common-denominator interface and force every generator of buckets
to deal with the complexity of large-file buckets.

> (and yes, I could also agree with tossing the platform-specific apr_off_t
> and saying they are apr_int64_t or apr_uint64_t (if the latter, then we
> would need a const for the "-1" concept we have now))

Eww, yuck... the whole reason we have apr_off_t (instead of just using
plain old off_t) is to not be tied to a particular size -- I hear that
some platforms use a 32-bit off_t and another type for large-file
interfaces, using a separate interface library for the large-file calls.

....Roy
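
To make the os_limit < bucket_size point concrete, here is a minimal
sketch of how an OS-facing writer might clamp an apr_off_t byte count
down to the apr_size_t chunks a single call can accept.  This is not
from the original mail: the function name, buffer size, and per-call cap
are invented for illustration; only apr_file_read()/apr_file_write_full()
and the apr_off_t/apr_size_t types are the real APR pieces under
discussion.

    #include "apr_file_io.h"

    #define COPY_CHUNK ((apr_size_t)65536)   /* illustrative per-call cap */

    /* Copy 'total' bytes (an apr_off_t, possibly larger than any single
     * apr_size_t request) from 'in' to 'out' without ever asking the OS
     * for more than COPY_CHUNK bytes at a time. */
    static apr_status_t copy_off_t_span(apr_file_t *in, apr_file_t *out,
                                        apr_off_t total)
    {
        char buf[65536];
        apr_off_t done = 0;

        while (done < total) {
            apr_off_t remaining = total - done;
            /* Clamp the 64-bit count to what one read/write call takes. */
            apr_size_t chunk = (remaining > (apr_off_t)COPY_CHUNK)
                               ? COPY_CHUNK : (apr_size_t)remaining;
            apr_size_t written;
            apr_status_t rv;

            rv = apr_file_read(in, buf, &chunk);   /* chunk -> bytes read */
            if (rv != APR_SUCCESS)
                return rv;
            rv = apr_file_write_full(out, buf, chunk, &written);
            if (rv != APR_SUCCESS)
                return rv;
            done += (apr_off_t)chunk;
        }
        return APR_SUCCESS;
    }

The clamp-and-cast in the loop is the overhead Roy concedes on platforms
where apr_off_t != apr_ssize_t; where the two types are the same width it
costs essentially nothing, and either way it keeps the complexity at the
OS layer instead of pushing it onto every generator of buckets.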
