Re: OT: Swift is now open source
On 2015-12-06 14:45, Michel Fortin wrote:

That only works if the actual underlying representation is UTF8 (or other single-byte encoding). String abstracts that away from you. But you can do this if you want to work with bytes:

let utf8View = str.utf8
utf8View[utf8View.startIndex.advancedBy(2) ..< utf8View.endIndex.advancedBy(-1)]

or:

let arrayOfBytes = Array(str.utf8)
arrayOfBytes[2 ..< arrayOfBytes.count-1]

All these just show that it's still too complicated ;)

container.indexOf(predicate)
container.indexOf { (element) in element == "p" }
container.indexOf { $0 == "p" }

container[index]

That's what I ended up doing.

--
/Jacob Carlborg
Re: Swift is coming, Swift is coming
On Tuesday, 24 November 2015 at 17:59:35 UTC, Joakim wrote: A Wired article about Swift coming to the server, particularly after the imminent open-sourcing, that also mentions D as an alternative, especially since it's written by the same guy who wrote about D for Wired last year: http://www.wired.com/2015/11/apples-swift-ios-programming-language-is-being-remade-for-data-centers/

Will be interesting to see how Swift does, a good natural experiment for those pushing D to focus on one niche before expanding, as Swift is doing really well on one of the most important development platforms today, iOS, before expanding onto the server. Of course, Apple is unlikely to really push it on the server, other than open-sourcing and accepting patches, so they have a built-in excuse if it doesn't do well. ;)

Pretty much what we guessed: they're expanding the Swift platform but Apple is not going to develop for Windows/Android, though others are free to do so -

"In open-sourcing Swift, Apple has two main goals in mind. The first and most obvious is to make Swift code more portable and versatile, enabling its use in projects outside of apps for Apple’s platforms. The company’s long-term vision is even more ambitious. “We think [Swift] is how really everyone should be programming for the next 20 years,” Federighi told Ars. “We think it’s the next major programming language.

“A number of developers, including enterprise developers like IBM, very early on as they began developing their mobile applications in Swift, really wanted to be able to take the talents that their developers were developing and even some of the code and be able to deploy it in the cloud, for instance,” Federighi continued. “We thought the best way [to enable that], ultimately, was open source.”

The other goal is educational: when developers put time into learning Swift (or when educators take the time to teach it), Apple wants those skills to be more broadly applicable.
“We’re working with educators, and many professors are very interested in teaching Swift because it’s such an expressive language that’s such a great way to introduce all sorts of programming concepts,” Federighi said. “And enabling it as open source makes it possible for them to incorporate Swift really as part of the core curriculum.” When we spoke with developers about the first year of Swift back in June, Swift’s teachability was definitely a major selling point. As useful as Swift might be to communicate programming ideas, it’s ultimately more useful to be able to take that knowledge and use it in multiple places." http://arstechnica.com/apple/2015/12/craig-federighi-talks-open-source-swift-and-whats-coming-in-version-3-0/
Re: Safe reference counting cannot be implemented as a library
On 01.11.2015 at 21:47, Martin Nowak wrote: On 10/27/2015 12:41 PM, Andrei Alexandrescu wrote: - I'm not a fan of adding yet another attribute, but as inference support is currently limited it seems we'd need an explicit attribute for public APIs.

I've very likely missed that part of the discussion - what were the reasons to not use "scope" for this?
Re: __traits(getAttributes, ...) gets attributes for the first overload only
On 12/06/2015 08:13 PM, Vladimir Panteleev wrote: On Sunday, 6 December 2015 at 02:00:30 UTC, Andrei Alexandrescu wrote: On 12/5/15 7:41 PM, John Colvin wrote: On Saturday, 5 December 2015 at 20:44:40 UTC, Andrei Alexandrescu wrote:

Working on the big-oh thing I noticed that for an overloaded function, __traits(getAttributes, ...) applied to overloaded functions only fetches attributes for the first syntactically present overload. Bug or feature? Andrei

In an ideal world I would want it to be an error to use __traits(getAttributes, ...) on anything ambiguous, would catch the odd bug. The current behaviour is dumb, but some union of attributes over the overload sets seems worse.

Yah, error is the way to go. -- Andrei

The potential problem I see with this idea is that adding an overload to an otherwise non-overloaded function might break some code elsewhere which queries the function's attributes. In other circumstances, adding an unambiguous overload is never a breaking change, right?

Well what if you add it before the existing function? -- Andrei
Re: __traits(getAttributes, ...) gets attributes for the first overload only
On Sunday, 6 December 2015 at 02:00:30 UTC, Andrei Alexandrescu wrote: On 12/5/15 7:41 PM, John Colvin wrote: On Saturday, 5 December 2015 at 20:44:40 UTC, Andrei Alexandrescu wrote:

Working on the big-oh thing I noticed that for an overloaded function, __traits(getAttributes, ...) applied to overloaded functions only fetches attributes for the first syntactically present overload. Bug or feature? Andrei

In an ideal world I would want it to be an error to use __traits(getAttributes, ...) on anything ambiguous, would catch the odd bug. The current behaviour is dumb, but some union of attributes over the overload sets seems worse.

Yah, error is the way to go. -- Andrei

The potential problem I see with this idea is that adding an overload to an otherwise non-overloaded function might break some code elsewhere which queries the function's attributes. In other circumstances, adding an unambiguous overload is never a breaking change, right?
Re: Complexity nomenclature
On 12/06/2015 07:50 PM, Timon Gehr wrote: On 12/06/2015 08:48 PM, Andrei Alexandrescu wrote:
> The next step up the expressiveness scale would be to have a sum-of-products representation.
>
> Proof of concept (disclaimer: hacked together in the middle of the night, and not tested thoroughly):
>
> http://dpaste.dzfl.pl/d1512905accd
>
> I think this general approach is probably close to the sweet spot. ...

Brilliant! ... I have noticed another thing. The comparison operator is an underapproximation (it sometimes returns NaN when ordering would actually be possible). E.g., O(n·m) ⊆ O(n²+m²), because n·m ≤ n²+m².

Interesting. It would be nice if the final version had a complete decision procedure for ⊆. I think it's okay to leave N^^2 + N1 and N + N1^^2 unordered. -- Andrei
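The inclusion O(n·m) ⊆ O(n²+m²) follows from 0 ≤ (n−m)², which gives 2nm ≤ n²+m². A quick numeric sanity check (Python used purely for illustration; this is not a proof):

```python
# Sanity check (not a proof): n*m <= n^2 + m^2 for all n, m >= 0,
# because 0 <= (n - m)^2 = n^2 - 2*n*m + m^2 implies 2*n*m <= n^2 + m^2.
def dominated(n, m):
    return n * m <= n * n + m * m

assert all(dominated(n, m) for n in range(200) for m in range(200))
print("ok")
```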
Re: Complexity nomenclature
On 12/06/2015 06:21 PM, Timon Gehr wrote: The implementation of power on BigO given does not actually work in general (there should have been an enforcement that makes sure there is only one summand). I'll think about whether and how it can be made to work for multiple summands.

No matter, you may always use runtime assertions - after all it's all during compilation. That's the beauty of it!

Define global constants K, N, and N1 through N7. K is constant complexity; all others are free variables. Then complexities are regular D expressions, e.g. BigO(N^2 * log(N)). On the phone, sorry for typos.

I generally tend to avoid DSLs based solely on operator overloading, as they don't always work and hence are awkward to evolve. Here, the biggest current nuisance is that the variable names are not very descriptive.

The bright side is you get to standardize names. If you limit names to K, N, and N1 through N7 then you can always impose on APIs the meaning of these. Another bright side is people don't need to learn a new, juuust slightly different grammar for complexity expressions - just use D. For example the grammar you defined allows log n without parens - what's the priority of log compared to power etc? Why is power ^ in this notation and ^^ in D? All these differences without a distinction are gratuitous. Just use D.

If we'll go with a log(BigO) function, we possibly want to make BigO closed under log without approximating iterated logarithms:

struct Term{
    Variable n;
    Fraction[] exponents; // exponents[0] is exponent of n,
                          // exponents[1] is exponent of log n,
                          // exponents[2] is exponent of log log n, ...
}

Then O(log(f^c*g^d)) = O(log(f)+log(g)) = O(log(f+g)) [1], and hence every BigO has a well-defined logarithm.

[1] O(log(f+g)) ⊆ O(log(f*g)) = O(log(f)+log(g)). O(log(f)+log(g)) ⊆ O(max(log(f),log(g))) = O(log(max(f,g))) ⊆ O(log(f+g)).

Yah, the stuff under log must be restricted. Here's the grammar I'm toying with:

Atom         ::= K | N | N1 | ... | N7
SimpleExp    ::= SimpleTerm ('+' SimpleTerm)*
SimpleTerm   ::= SimpleFactor ('*' SimpleFactor)*
SimpleFactor ::= Atom ('^^' double)? | '(' SimpleExp ')'
BigO         ::= Term ('+' Term)*
Term         ::= SimpleFactor ('*' 'log' '(' SimpleExp ')' ('^^' double)?)?

(I used regex notations for "optional" and "zero or more".) This is expressible with D's native operations (so no need for custom parsing) and covers, I think, what we need. It could be further simplified if we move some of the grammar's restrictions to runtime (e.g. no need for SimpleXxx, some expressions can be forced to be simple during "runtime").

Andrei
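As a rough sketch of how a sum-of-products comparison can work (Python used for illustration only; the dict-based term encoding and the names `term_le`/`bigo_subset` are my own, not taken from the dpaste proof of concept), comparing exponents pointwise per term gives exactly the kind of under-approximation discussed above:

```python
# Hypothetical sketch: a BigO value is a list of terms (sum of products);
# each term maps a variable name to its exponent.
def term_le(a, b):
    """True if monomial a is dominated by monomial b, comparing exponents pointwise."""
    return all(a.get(v, 0) <= b.get(v, 0) for v in set(a) | set(b))

def bigo_subset(f, g):
    """O(f) is contained in O(g) if every term of f is dominated by some term of g.
    This is an under-approximation: it cannot prove O(n*m) <= O(n^2 + m^2),
    which needs the AM-GM inequality rather than pointwise exponent comparison."""
    return all(any(term_le(t, u) for u in g) for t in f)

print(bigo_subset([{"n": 1}], [{"n": 2}]))                    # True:  O(n) within O(n^2)
print(bigo_subset([{"n": 1, "m": 1}], [{"n": 2}, {"m": 2}]))  # False: misses n*m <= n^2+m^2
```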
Re: Complexity nomenclature
On 12/06/2015 08:48 PM, Andrei Alexandrescu wrote:
> The next step up the expressiveness scale would be to have a sum-of-products representation.
>
> Proof of concept (disclaimer: hacked together in the middle of the night, and not tested thoroughly):
>
> http://dpaste.dzfl.pl/d1512905accd
>
> I think this general approach is probably close to the sweet spot. ...

Brilliant! ... I have noticed another thing. The comparison operator is an underapproximation (it sometimes returns NaN when ordering would actually be possible). E.g., O(n·m) ⊆ O(n²+m²), because n·m ≤ n²+m².

Interesting. It would be nice if the final version had a complete decision procedure for ⊆.
Re: Complexity nomenclature
On Sunday, 6 December 2015 at 23:49:00 UTC, Timon Gehr wrote: Yes, that's what you said later down the post. It's completely unrelated to the sentence you claimed was false.

I would assume that there would have to be a use case for something added to a standard library. Based on the presented use case it is like using the classifications "younger than 100" and "younger than 16", applying them randomly to individuals of the same age, and using the classifications for making decisions about whether they should be allowed to see adult movies or not. Putting an ordering on the classification is in that case useless.
Re: Complexity nomenclature
On 12/06/2015 10:35 PM, Ola Fosheim Grøstad wrote: On Sunday, 6 December 2015 at 14:39:05 UTC, Timon Gehr wrote: Are you complaining that the given implementation does not support 'min', or what are you trying to say here? I am saying that comparing bounds is not the same as comparing running time of implemented algorithms. Yes, that's what you said later down the post. It's completely unrelated to the sentence you claimed was false.
Re: Complexity nomenclature
On 12/06/2015 08:48 PM, Andrei Alexandrescu wrote:
> The next step up the expressiveness scale would be to have a sum-of-products representation.
>
> Proof of concept (disclaimer: hacked together in the middle of the night, and not tested thoroughly):
>
> http://dpaste.dzfl.pl/d1512905accd
>
> I think this general approach is probably close to the sweet spot. ...

Brilliant! My wife has a work emergency so I've been with the kids all day, but here's what can be done to make this simpler. Use D parsing and eliminate the whole parsing routine. Add, multiply, and power are all defined, so you only need log of BigO. ...

The implementation of power on BigO given does not actually work in general (there should have been an enforcement that makes sure there is only one summand). I'll think about whether and how it can be made to work for multiple summands.

Define global constants K, N, and N1 through N7. K is constant complexity; all others are free variables. Then complexities are regular D expressions, e.g. BigO(N^2 * log(N)). On the phone, sorry for typos.

I generally tend to avoid DSLs based solely on operator overloading, as they don't always work and hence are awkward to evolve. Here, the biggest current nuisance is that the variable names are not very descriptive.

If we'll go with a log(BigO) function, we possibly want to make BigO closed under log without approximating iterated logarithms:

struct Term{
    Variable n;
    Fraction[] exponents; // exponents[0] is exponent of n,
                          // exponents[1] is exponent of log n,
                          // exponents[2] is exponent of log log n, ...
}

Then O(log(f^c*g^d)) = O(log(f)+log(g)) = O(log(f+g)) [1], and hence every BigO has a well-defined logarithm.

[1] O(log(f+g)) ⊆ O(log(f*g)) = O(log(f)+log(g)). O(log(f)+log(g)) ⊆ O(max(log(f),log(g))) = O(log(max(f,g))) ⊆ O(log(f+g)).
Re: I hate new DUB config format
Example:

name =dedcpu
author =Luis Panadero Guardeño
author =Jin
targetType =none
license =BSD 3-clause
description =DCPU-16 tools
	=and other staff
subPackage
	name =lem1802
	description =Visual LEM1802 font editor
	targetType =executable
	targetName =lem1802
	excludedSourceFile =src/bconv.d
	excludedSourceFile =src/ddis.d
	lib
		name =gtkd
		platform =windows
	config
		name =nogtk
		platform =windows
	config
		name =gtk
		platform =posix
	dependency
		name =gtk-d:gtkd
		version ~> 3.2.0
Re: I hate new DUB config format
How about this format? https://github.com/nin-jin/tree.d
Re: Complexity nomenclature
On Sunday, 6 December 2015 at 14:39:05 UTC, Timon Gehr wrote: Are you complaining that the given implementation does not support 'min', or what are you trying to say here?

I am saying that comparing bounds is not the same as comparing running time of implemented algorithms. Insertion sort is both O(n^2) and O(n^3), but if you run it on a sorted array where each element has been swapped with neighbouring elements 16 times, then it is O(N). So these derived bounds are too loose to be useful; generic algorithms cannot make use of them beyond the trivial case.

BigO represents a set of functions. Comparing BigO checks for subset inclusion.

But what can you use it for? When you compose algorithms and even run an optimizer over it, then combining an O(N^2) with O(N) can turn into O(1). You need advanced compiler support for this to be valuable.

You can also get tighter bounds for specific input models. Yes, you can. Exactly, and when you compose/combine algorithms you often end up constraining the input model.
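The adaptivity point can be made concrete by counting comparisons. A minimal sketch (Python used for illustration; the function name is made up): insertion sort performs ~n²/2 comparisons on a reversed array but only n−1 on an already-sorted one, so the worst-case bound says little about nearly-sorted inputs.

```python
def insertion_sort_comparisons(a):
    """Insertion-sort a copy of a, counting element comparisons.
    Returns (sorted_list, comparison_count)."""
    a = list(a)
    count = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            count += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break  # element already in place: adaptive early exit
    return a, count

n = 1000
_, worst = insertion_sort_comparisons(list(range(n, 0, -1)))  # reversed input
_, best = insertion_sort_comparisons(list(range(n)))          # sorted input
print(worst, best)  # 499500 999
```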
Re: __traits(getAttributes, ...) gets attributes for the first overload only
On Sunday, 6 December 2015 at 02:00:30 UTC, Andrei Alexandrescu wrote: Yah, error is the way to go. -- Andrei https://issues.dlang.org/show_bug.cgi?id=15414
Re: Complexity nomenclature
Timon Gehr wrote:
> On 12/05/2015 09:48 PM, Andrei Alexandrescu wrote:
>> On 12/04/2015 10:24 PM, Timon Gehr wrote:
>>> void foo(@(BigO.linear) int n,@(BigO.linear) int m);
>>>
>>> But UDAs for parameters are not supported.
>>
>> That's actually pretty neat and easy to work around like this:
>>
>> void foo(int n, int m) @(BigOParam!2(BigO.linear, BigO.linear));
>>
>> In fact I went through the implementation but soon I hit a wall: what's
>> the _relationship_ between the two growths? It may be the sum O(m + n)
>> but also the product O(m * n). So the operator must be encoded as well.
>>
>> Then what do we do with more complex relationships like O((m + n) log n)
>> etc.
>>
>> Then once you get to some reasonable formula, what's the ordering on top
>> of these complexities? Probably difficult.
>> ...
>
> Some upper bounds are incomparable, so there would not be a total order.
> But that is not a problem.
>
>> I gave up on this for the time being. Ideas welcome.
>> ...
>
> The next step up the expressiveness scale would be to have a
> sum-of-products representation.
>
> Proof of concept (disclaimer: hacked together in the middle of the
> night, and not tested thoroughly):
>
> http://dpaste.dzfl.pl/d1512905accd
>
> I think this general approach is probably close to the sweet spot. (The
> implementation is not feature-complete yet, though. It would be nice if
> it supported automatically computing a new asymptotic runtime bound from
> asymptotic bounds on the arguments.)

Brilliant! My wife has a work emergency so I've been with the kids all day, but here's what can be done to make this simpler. Use D parsing and eliminate the whole parsing routine. Add, multiply, and power are all defined, so you only need log of BigO.

Define global constants K, N, and N1 through N7. K is constant complexity; all others are free variables. Then complexities are regular D expressions, e.g. BigO(N^2 * log(N)). On the phone, sorry for typos.
Re: DMD unittest fail reporting…
On Sun, 06 Dec 2015 12:11:08 +0100, Jacob Carlborg wrote: > I also don't think one should test using loops. Table based testing is quite handy in a number of circumstances as long as you're using a framework that makes it viable. Asserts that throw exceptions make it far less viable. One other reason it works in Go is that you already have a tradition of laboriously constructing descriptive error messages by hand due to the lack of stacktraces. But since Spock automates that for you, it would be more viable with Spock than with D's default unittests.
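For comparison, here is what native support for table-based testing looks like; Python's `unittest.subTest` (shown purely as an analogue to Go's Errorf style) reports each row separately instead of aborting the loop at the first failing assert:

```python
import unittest

# A minimal table-driven test: each row is a case, and a failing row is
# reported individually while the loop keeps running.
class TestSquare(unittest.TestCase):
    def test_table(self):
        cases = [(0, 0), (2, 4), (-3, 9)]
        for x, want in cases:
            with self.subTest(x=x):  # failure here does not stop the loop
                self.assertEqual(x * x, want)

# Run the case programmatically and report the outcome.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestSquare).run(result)
print(result.wasSuccessful())  # True
```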
Re: OT: Swift is now open source
On Thursday, 3 December 2015 at 17:19:03 UTC, Jack Stouffer wrote: On Thursday, 3 December 2015 at 17:13:49 UTC, Jack Stouffer wrote: https://github.com/apple/swift Fun Fact: in the time it took apple to open source this (announcement to now), D has had six open source releases (2.068 - 2.069.2). Multiple mentions of D in the last paragraphs of the recent Wired article about Swift finally getting open-sourced: http://www.wired.com/2015/12/apple-open-sources-its-swift-programming-language/
Re: Complexity nomenclature
On 12/06/2015 03:39 PM, Timon Gehr wrote: For our purposes, O(f) = { g | ∃c. ∀x⃗. |g(x⃗)| ≤ c·|f(x⃗)|+c }. Hm, this does not actually work too well. E.g., we want O(n+m) ⊆ O(n log m + m log n). This breaks down with that definition if we e.g. fix m=1 and let n vary. Anyway, I think this can be fixed.
Re: Complexity nomenclature
On 12/06/2015 08:59 AM, Ola Fosheim Grøstad wrote: On Sunday, 6 December 2015 at 03:24:24 UTC, Timon Gehr wrote: Some upper bounds are incomparable, so there would not be a total order. But that is not a problem.

It is a problem in all cases as you usually don't have an optimal bound.

Are you complaining that the given implementation does not support 'min', or what are you trying to say here?

And with your approach that will most certainly be guaranteed to happen.

How is that specific to my approach? I only showed a more expressive BigO implementation.

Comparing bounds does not mean you are comparing running time. ...

BigO represents a set of functions. Comparing BigO checks for subset inclusion. O(1) implies O(f(x)), O(N) implies O(N^2). For our purposes, O(f) = { g | ∃c. ∀x⃗. |g(x⃗)| ≤ c·|f(x⃗)|+c }. O(1) ⊆ O(f(x)), O(N) ⊆ O(N²). <= checks ⊆. == checks =. (The final implementation should use exact fractions instead of doubles.)

You can also get tighter bounds for specific input models.

Yes, you can.
Re: OT: Swift is now open source
On 2015-12-06 10:43:24 +, Jacob Carlborg said:

You're decoding characters (grapheme clusters) as you advance those indexes. Not really what I needed, for me it would be enough with slicing the bytes.

That only works if the actual underlying representation is UTF8 (or other single-byte encoding). String abstracts that away from you. But you can do this if you want to work with bytes:

let utf8View = str.utf8
utf8View[utf8View.startIndex.advancedBy(2) ..< utf8View.endIndex.advancedBy(-1)]

or:

let arrayOfBytes = Array(str.utf8)
arrayOfBytes[2 ..< arrayOfBytes.count-1]

It's called indexOf. (Remember, the index type is an iterator.) It does return an optional. It will work for any type conforming to the ContainerType protocol where Element conforms to Equatable. Like this:

let str = "Hello, playground"
let start = str.unicodeScalars.indexOf("p")!
let end = str.unicodeScalars.indexOf("g")!
str.unicodeScalars[start ..< end] // "play"
str.unicodeScalars[start ... end] // "playg"

I was looking for a method to return the first element matching a predicate.

container.indexOf(predicate)
container.indexOf { (element) in element == "p" }
container.indexOf { $0 == "p" }

If it's an iterator I would expect to be able to get the value it points to. I can't see how I can do that with an Index in Swift.

container[index]

The index is an iterator in the sense that it points at one location in the container and applies some container-related logic as you advance. But you still have to use the container to access its value. The index does not expose the value even when it knows about it internally. Not all index types are like that. Containers with random access normally use Int as their index type because it's sufficient and practical.

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca
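The "index of the first element matching a predicate" pattern discussed above maps directly onto a generator idiom; a Python analogue (illustration only, the helper name `index_of` is made up):

```python
# Analogue of container.indexOf(predicate): return the index of the first
# element satisfying pred, or None if no element matches (Swift's optional).
def index_of(seq, pred):
    return next((i for i, x in enumerate(seq) if pred(x)), None)

s = "Hello, playground"
print(index_of(s, lambda c: c == "p"))  # 7
print(index_of(s, lambda c: c == "z"))  # None
```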
Re: OT: Swift is now open source
On Sunday, 6 December 2015 at 10:43:24 UTC, Jacob Carlborg wrote: Not really what I needed, for me it would be enough with slicing the bytes.

I can't really test it, since I'm stuck on "Mavericks" and they stopped updating Xcode (the last version before they destroyed the UI). But this should work, no?

String(Array(str.characters)[1...2])

Guess now that it's open source I can finally play around with Swift 2+ though.
Re: DMD unittest fail reporting…
On 2015-12-05 12:12, Russel Winder via Digitalmars-d wrote: I put it the other way round: why do you want a stack trace from a failure of a unit test? The stack trace tells you nothing about the code under test that the test doesn't already tell you. All you need to know is which tests failed and why. This of course requires power asserts or horrible things like assertEqual and the like to know the state that caused the assertion to fail. For me, PyTest is the model system here, along with Spock, and ScalaTest. Perhaps also Catch.

ScalaTest will print a stack trace on failure, at least when I run it from inside Eclipse. So will RSpec, which I'm guessing ScalaTest is modeled after. In RSpec, with the default formatter it will print a dot for a passed test and an F for a failed test. Then at the end it will print the stack traces for all failed tests.

Just because some unittests have done something in the past doesn't mean it is the right thing to do. The question is what does the programmer need for the task at hand. Stack traces add nothing useful to the analysis of the test pass or fail.

I guess it depends on how you write your tests. If you only test a single function which doesn't call anything else that will work. But as soon as the function you're testing calls other functions a stack trace is really needed. What do you do when you get a test failure due to some exception/assertion thrown deep inside some code you have never seen before and have no idea how the execution got there?

I will be looking at dunit, specd and dcheck. The current hypothesis is though that the built in unit test is not as good as it needs to be, or at least could be.

The built-in runner is so bad it's almost broken.

--
/Jacob Carlborg
Re: DMD unittest fail reporting…
On 2015-12-06 00:09, Chris Wright wrote: But there are problems with saying that the builtin assert function should show the entire expression with operand values, nicely formatted. assert has to serve both unittesting and contract programming. When dealing with contract programming and failed contracts, you risk objects being in invalid states. Trying to call methods on such objects in order to provide descriptive error messages is risky. A helpful stacktrace might be transformed into a segmentation fault, for instance. Or an assert error might be raised while attempting to report an assert error.

assert is a builtin function. It's part of the runtime. That puts rather strict constraints on how much it can do. The runtime can't depend on the standard library, for instance, so if you want assert() to include the values that were problematic, the runtime has to include that formatting code. That doesn't seem like a lot on its own, but std.format is probably a couple thousand lines of code. (About 3,000 semicolons, including unittests.)

I would like these nicely formatted messages. I don't think it's reasonably practical to add them to assert. I'll spend some thought on how to implement them outside the runtime, for a testing framework, though I'm not optimistic on a nice API. Catch does it with macros and by parsing C++, and the nearest equivalent in D is string mixins, which are syntactically more complex. Spock does it with a compiler plugin. I know I can do it with strings and string mixins, but that's not exactly going to be a clean API.

Another good use case for AST macros.

--
/Jacob Carlborg
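One way to get descriptive failure messages without runtime or compiler support is a plain helper that receives the operands separately and formats them only on failure. A rough sketch (Python for illustration; the helper name `check` is made up, and this deliberately avoids calling methods on possibly-invalid objects beyond `repr`):

```python
import operator

# Sketch of a "power assert"-style helper living outside the runtime:
# the caller passes the operands separately, so the helper can report
# both values when the comparison fails.
def check(op_name, op, lhs, rhs):
    if not op(lhs, rhs):
        raise AssertionError(f"check failed: {lhs!r} {op_name} {rhs!r}")

check("==", operator.eq, 2 + 2, 4)  # passes silently
try:
    check("==", operator.eq, "abc", "abd")
except AssertionError as e:
    print(e)  # check failed: 'abc' == 'abd'
```

This trades syntax for simplicity: unlike Catch's macros or string mixins, it cannot show the original expression text, only the operand values.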
Re: DMD unittest fail reporting…
On 2015-12-05 21:44, Russel Winder via Digitalmars-d wrote: For the purposes of this argument, let's ignore crashes or manually executed panics. The issue is the difference in behaviour between assert and Errorf. assert in languages that use it causes an exception and this causes termination, which means execution of other tests does not happen unless the framework makes sure this happens. D unittest does not. Errorf notes the failure and carries on; this is crucially important for good testing using loops.

I think that the default test runner is completely broken for terminating the complete test suite if a test fails. Although I do think it should terminate the rest of the test that failed. I also don't think one should test using loops.

Very true, and that is core to the issue here. asserts raise exceptions which, unless handled by the testing framework properly, cause termination. This is at the heart of the problem. For data-driven testing some form of loop is required. The loop must not terminate if all the tests are to run. pytest.mark.parametrize does the right thing, as do normal loops and Errorf. D assert does the wrong thing.

Nothing says that you have to use assert in a unit test ;) I'm not sure what your data looks like or what you're actually testing. But when I had the need to test multiple values it was either a data structure, then I could do one assert for the whole data structure. Or I used multiple tests.

I think this is the evidence that proves that the current D testing framework is in need of work to make it better than it is currently.

Absolutely, the built in support is almost completely broken.

If a stacktrace is needed the testing framework is inadequate.

I guess it depends on how you write your tests. If you only test a single function which doesn't call anything else that will work. But as soon as the function you're testing calls other functions a stack trace is really needed.
What do you do when you get a test failure due to some exception/assertion thrown deep inside some code you have never seen before and have no idea how the execution got there?

dspecs: I'm not sure if you're referring to my "framework" [1] or this one [2]. But none of them will catch any exception; they behave just as the standard test runner. I would like to implement a custom runner that catches assertions and continues with the next tests.

and specd: This one seems to only catch "MatchException". So if any other exception is thrown, including assert error, it will have the same behavior as the standard test runner.

[1] https://github.com/jacob-carlborg/dspec
[2] https://github.com/youxkei/dspecs

--
/Jacob Carlborg
Re: OT: Swift is now open source
On 2015-12-06 03:02, Michel Fortin wrote:

Apple's API is still rather verbose and hard to discover, but that is not Swift's fault. They could have gone the D route by separating the method name from the selector:

extern(Objective-C) class Foo {
    void bar() @selector("thisIsMyReallyLongSelector:withAnotherSelector:");
}

You can do that in Swift too with @objc(some:selector:). But they (Apple) didn't choose to use that feature ;)

And for Swift 3 they do plan to give Swift-specific names to pretty much all methods in the Apple frameworks. https://github.com/apple/swift-evolution/blob/master/proposals/0005-objective-c-name-translation.md

Seems interesting.

You can be less verbose if you want:

let str = "Hello, playground"
str[str.startIndex.advancedBy(2) ..< str.endIndex.advancedBy(-1)]

I tried that but couldn't make it work. Not sure what I did wrong.

Also note that those special index types are actually iterators.

Aha, I didn't know that.

You're decoding characters (grapheme clusters) as you advance those indexes.

Not really what I needed, for me it would be enough with slicing the bytes.

It's called indexOf. (Remember, the index type is an iterator.) It does return an optional. It will work for any type conforming to the ContainerType protocol where Element conforms to Equatable. Like this:

let str = "Hello, playground"
let start = str.unicodeScalars.indexOf("p")!
let end = str.unicodeScalars.indexOf("g")!
str.unicodeScalars[start ..< end] // "play"
str.unicodeScalars[start ... end] // "playg"

I was looking for a method to return the first element matching a predicate.

If it's an iterator I would expect to be able to get the value it points to. I can't see how I can do that with an Index in Swift.

--
/Jacob Carlborg
Re: Complexity nomenclature
On Sunday, 6 December 2015 at 03:24:24 UTC, Timon Gehr wrote: Some upper bounds are incomparable, so there would not be a total order. But that is not a problem.

It is a problem in all cases as you usually don't have an optimal bound. And with your approach that will most certainly be guaranteed to happen. Comparing bounds does not mean you are comparing running time.

O(1) implies O(f(x)), O(N) implies O(N^2). You can also get tighter bounds for specific input models.