> On Jan 29, 2017, at 10:36, Xiaodi Wu <xiaodi...@gmail.com> wrote:
> 
> Hmm, interesting. I might be tempted to use a 40-bit type for large arrays, 
> but the performance hit for any useful computation over a large array would 
> probably tilt heavily in favor of plain 64-bit integers. What's your use case 
> for such a 40-bit type? And is it common enough to justify such a facility in 
> the stdlib vs. providing the tools to build it yourself?

I can think of two use cases. One, saving memory for large numbers of 
allocations, you already mentioned. The other is easing interaction with 
on-disk data. For example, if you're working with some format that has 24-bit 
ints, you could use "CompoundWhateverItWasCalled<Int8,Int16>". It doesn't make 
much difference when you're loading the data, but when you're writing it back 
out, you wouldn't have to worry about trimming that last byte, or about what 
to do if the value won't fit in 24 bits. Obviously you'd still have to handle 
overflow, but it would happen in the calculation that causes it rather than 
while you're busy doing something else.
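
Roughly what I have in mind, just as a sketch (the name is made up and 
everything's hard-coded; a real version would be generic and conform to the 
new fixed-width integer protocol):

// Hypothetical 24-bit unsigned integer: an 8-bit high part plus a
// 16-bit low part, just to show where the overflow check ends up.
struct Compound24 {
    var high: UInt8
    var low: UInt16

    // The range check happens in the calculation that produces the
    // value, not later while you're busy writing bytes back out.
    init(_ value: Int) {
        precondition(0 <= value && value < (1 << 24),
                     "value won't fit in 24 bits")
        high = UInt8(truncatingIfNeeded: value >> 16)
        low = UInt16(truncatingIfNeeded: value)
    }

    var value: Int { return (Int(high) << 16) | Int(low) }

    // Exactly three bytes, little-endian; no trimming a stray fourth.
    var littleEndianBytes: [UInt8] {
        return [UInt8(truncatingIfNeeded: low),
                UInt8(truncatingIfNeeded: low >> 8),
                high]
    }
}

let n = Compound24(0x123456)
assert(n.littleEndianBytes == [0x56, 0x34, 0x12])
// Compound24(1 << 24) would trap above, at the point of calculation.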

In terms of justification, probably all I can offer is that I don't think it 
would be materially harder or less efficient to implement this than to write a 
"DoubleWidth<T>" type... It's extra functionality for free, at least in terms 
of effort. It would increase the API surface, but not by much, assuming that 
"DoubleWidth" could just be a typealias. If I'm wrong about it being "that 
easy", then I don't think it'd be worth it. As you noted, it is somewhat niche.
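
Concretely, the typealias I'm picturing is something like this (the name 
"Compound" and the exact constraints are just my guess at how things shake out 
under the new integer protocols):

// A generic two-part integer, stored properties only; the arithmetic
// would be written once against these two fields.
struct Compound<High: FixedWidthInteger,
                Low: FixedWidthInteger & UnsignedInteger> {
    var high: High
    var low: Low
}

// Pairing a type with its own Magnitude doubles the width, so
// DoubleWidth wouldn't need a separate implementation:
typealias DoubleWidth<T: FixedWidthInteger> = Compound<T, T.Magnitude>

// The 24-bit on-disk case and 128-bit math share the same machinery:
typealias Int24 = Compound<Int8, UInt16>
typealias Int128 = DoubleWidth<Int64>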

- Dave Sweeris
