> On Nov 10, 2016, at 1:02 PM, Dave Abrahams <[email protected]> wrote:
>
> on Thu Nov 10 2016, Stephen Canon <scanon-AT-apple.com> wrote:
>
>>> On Nov 10, 2016, at 1:30 PM, Dave Abrahams via swift-evolution
>>> <[email protected]> wrote:
>>>
>>> on Thu Nov 10 2016, Joe Groff <jgroff-AT-apple.com> wrote:
>>>
>>>>> On Nov 8, 2016, at 9:29 AM, John McCall <[email protected]> wrote:
>>>>>
>>>>>> On Nov 8, 2016, at 7:44 AM, Joe Groff via swift-evolution
>>>>>> <[email protected]> wrote:
>>>>>>> On Nov 7, 2016, at 3:55 PM, Dave Abrahams via swift-evolution
>>>>>>> <[email protected]> wrote:
>>>>>>>
>>>>>>> on Mon Nov 07 2016, John McCall <[email protected]> wrote:
>>>>>>>
>>>>>>>>> On Nov 6, 2016, at 1:20 PM, Dave Abrahams via swift-evolution
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>> Given that we're headed for ABI (and thus stdlib API) stability, I've
>>>>>>>>> been giving lots of thought to the bottom layer of our collection
>>>>>>>>> abstraction and how it may limit our potential for efficiency. In
>>>>>>>>> particular, I want to keep the door open for optimizations that work on
>>>>>>>>> contiguous memory regions. Every cache-friendly data structure, even if
>>>>>>>>> it is not an array, contains contiguous memory regions over which
>>>>>>>>> operations can often be vectorized, that should define boundaries for
>>>>>>>>> parallelism, etc. Throughout Cocoa you can find patterns designed to
>>>>>>>>> exploit this fact when possible (NSFastEnumeration). Posix I/O bottoms
>>>>>>>>> out in readv/writev, and MPI datatypes essentially boil down to
>>>>>>>>> identifying the contiguous parts of data structures. My point is that
>>>>>>>>> this is an important class of optimization, with numerous real-world
>>>>>>>>> examples.
>>>>>>>>>
>>>>>>>>> If you think about what it means to build APIs for contiguous memory
>>>>>>>>> into abstractions like Sequence or Collection, at least without
>>>>>>>>> penalizing the lowest-level code, it means exposing UnsafeBufferPointers
>>>>>>>>> as a first-class part of the protocols, which is really
>>>>>>>>> unappealing... unless you consider that *borrowed* UnsafeBufferPointers
>>>>>>>>> can be made safe.
>>>>>>>>>
>>>>>>>>> [Well, it's slightly more complicated than that because
>>>>>>>>> UnsafeBufferPointer is designed to bypass bounds checking in release
>>>>>>>>> builds, and to ensure safety you'd need a BoundsCheckedBuffer—or
>>>>>>>>> something—that checks bounds unconditionally... but] the point remains
>>>>>>>>> that
>>>>>>>>>
>>>>>>>>> A thing that is unsafe when it's arbitrarily copied can become safe if
>>>>>>>>> you ensure that it's only borrowed (in accordance with well-understood
>>>>>>>>> lifetime rules).
>>>>>>>>
>>>>>>>> UnsafeBufferPointer today is a copyable type. Having a borrowed value
>>>>>>>> doesn't prevent you from making your own copy, which could then escape
>>>>>>>> the scope that was guaranteeing safety.
>>>>>>>>
>>>>>>>> This is fixable, of course, but it's a more significant change to the
>>>>>>>> type and how it would be used.
>>>>>>>
>>>>>>> It sounds like you're saying that, to get static safety benefits from
>>>>>>> ownership, we'll need a whole parallel universe of safe move-only
>>>>>>> types. Seems a cryin' shame.
>>>>>>
>>>>>> We've discussed the possibility of types being able to control
>>>>>> their "borrowed" representation.
>>>>>> Even if this isn't something we
>>>>>> generalize, arrays and contiguous buffers might be important enough
>>>>>> to the language that your safe BufferPointer could be called
>>>>>> 'borrowed ArraySlice<T>', with the owner backreference optimized
>>>>>> out of the borrowed representation. Perhaps Array's own borrowed
>>>>>> representation would benefit from acting like a slice rather than a
>>>>>> whole-buffer borrow too.
>>>>>
>>>>> The disadvantage of doing this is that it much more heavily
>>>>> penalizes the case where we actually do a copy from a borrowed
>>>>> reference — it becomes an actual array copy, not just a reference
>>>>> bump.
>>>>
>>>> Fair point, though the ArraySlice/Array dichotomy strikes me as
>>>> already kind of encouraging this—you might pass ArraySlices down into
>>>> your algorithm, but we encourage people to use Array at storage and
>>>> API boundaries, forcing copies.
>>>>
>>>> From a philosophical perspective of making systems Swift feel like
>>>> "the same language" as Swift today, it feels better to me to try to
>>>> express this as making our high-level safe abstractions efficient
>>>> rather than making our low-level unsafe abstractions safe.
>>>
>>> +1, or maybe 10
>>>
>>> What worries me is that if systems programmers are trying to get static
>>> guarantees that there's no ARC traffic, they won't be willing to handle
>>> a copyable thing that carries ownership.
>>
>> FWIW, we (frequently) only need a static guarantee of no ARC traffic
>> *within a critical section*. If we can guarantee that whatever ARC
>> operations need to be done happen in a precisely-controlled manner at
>> a known interface boundary, that’s often good enough.
>
> I don't think you can get those guarantees without static protection
> against escaping borrowed references, though, can you?
You shouldn't be able to do that without copying it, and copying a borrow
seems like it ought to at least be explicit.

-Joe
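
For concreteness, the BoundsCheckedBuffer that Dave alludes to upthread could
look roughly like the sketch below. The type name and the premise that its
bounds checks survive release builds come from his description; nothing here
is a shipping API.

/// A hypothetical wrapper over UnsafeBufferPointer whose subscript is checked
/// unconditionally, so an out-of-bounds index traps even in release builds.
struct BoundsCheckedBuffer<Element> {
    private let base: UnsafeBufferPointer<Element>

    init(_ base: UnsafeBufferPointer<Element>) {
        self.base = base
    }

    var count: Int { return base.count }

    subscript(index: Int) -> Element {
        // precondition is evaluated in all build configurations, unlike the
        // release-mode-elided checks on UnsafeBufferPointer's own subscript.
        precondition(index >= 0 && index < base.count, "index out of range")
        return base[index]
    }
}

As John notes, though, a struct like this is still freely copyable: a copy made
from a borrowed value can outlive the memory it points to, so the safety
argument only goes through once the type (or the borrow) is move-only.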
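
Joe's point about the ArraySlice/Array dichotomy is visible in today's Swift:
slicing shares the original storage (plus an owner reference), while converting
back to Array at a storage or API boundary copies the elements. A small
illustration, with sum and Record invented for the example:

// Passing a slice down into an algorithm shares the array's storage; no
// elements are copied, only a reference to the owning buffer is kept alive.
func sum(_ values: ArraySlice<Int>) -> Int {
    return values.reduce(0, +)
}

// Storing an Array at a storage/API boundary forces the elements to be
// copied out of whatever slice they came from.
struct Record {
    var samples: [Int]
}

let data = Array(1...1_000)
let middle = data[250..<750]                 // cheap view into data's buffer
let partialSum = sum(middle)                 // operates on the shared storage
let record = Record(samples: Array(middle))  // copies the 500 elements
print(partialSum)             // 250250
print(record.samples.count)   // 500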
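
Steve's "ARC at a known interface boundary" also has an approximation in the
current API surface: Array.withUnsafeBufferPointer hands the body a view of the
contiguous storage, and whatever retain/release is needed to keep the array
alive happens around the call rather than inside the loop. What it cannot give
is a static guarantee that the pointer does not escape the closure, which is
the gap Dave is pointing at.

let samples: [Float] = [0.5, 1.5, 2.5, 3.5]

// The body reads straight out of the array's contiguous storage with no
// reference counting; keeping `samples` alive is handled at the call boundary.
let total = samples.withUnsafeBufferPointer { buffer in
    buffer.reduce(0, +)
}
print(total)  // 8.0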
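
The static protection Dave is asking about, and the explicit copy Joe wants,
are roughly the shape of what later shipped as noncopyable types and the
borrowing/consuming parameter modifiers (Swift 5.9). A sketch in that spelling,
with the buffer type itself invented for the example:

// A hypothetical move-only wrapper; because the type is noncopyable, a
// borrowed value cannot be silently duplicated and escape the scope that
// guarantees its validity.
struct CheckedBuffer: ~Copyable {
    var bytes: [UInt8]
}

// `borrowing` grants temporary shared access without transferring ownership.
func checksum(_ buffer: borrowing CheckedBuffer) -> UInt8 {
    var result: UInt8 = 0
    for byte in buffer.bytes {
        result ^= byte
    }
    // let escaped = buffer   // error: a borrowed parameter cannot be consumed
    return result
}

func demo() {
    let buffer = CheckedBuffer(bytes: [0x12, 0x34, 0x56])
    print(checksum(buffer))   // prints 112 (0x70)
}
demo()

Getting data out of the borrow then requires writing the copy explicitly, for
instance by constructing a new CheckedBuffer from buffer.bytes, which is
exactly the visible cost Joe is arguing for.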
