On 9 Dec 2013, at 14:51, Igor Elland igor.ell...@me.com wrote:
Are you taking into account that 见, ≠, and 見 are composed character sequences,
not individual unichars?
This method:
- (void)printString: (NSString *)line
{
NSLog(@"%s \"%@\" has characters:", __FUNCTION__, line);
On Dec 8, 2013, at 23:46 , Gerriet M. Denkmann gerr...@mdenkmann.de wrote:
NSString *b = @"见≠見"; // 0x89c1 0x2260 0x898b
So what are the results with:
NSString *b = @"见";
NSString *b = @"≠";
NSString *b = @"見";
?
And what’s the current locale? Does specifying an explicit locale
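A minimal, self-contained sketch of the experiment being suggested here (the default-options search against the same haystack, one character at a time):

```objc
#import <Foundation/Foundation.h>

int main(void)
{
    @autoreleasepool {
        NSString *haystack = @"见=見見"; // the 4-short string from the thread
        // Search for each of the three characters individually.
        for (NSString *needle in @[ @"见", @"≠", @"見" ]) {
            NSRange r = [haystack rangeOfString:needle];
            NSLog(@"%@ -> %@", needle, NSStringFromRange(r));
        }
    }
    return 0;
}
```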
On 9 Dec 2013, at 15:05, Quincey Morris quinceymor...@rivergatesoftware.com
wrote:
On Dec 8, 2013, at 23:46 , Gerriet M. Denkmann gerr...@mdenkmann.de wrote:
NSString *b = @"见≠見"; // 0x89c1 0x2260 0x898b
So what are the results with:
NSString *b = @"见";
NSString *b
On Dec 9, 2013, at 00:22 , Gerriet M. Denkmann gerr...@mdenkmann.de wrote:
but I have great difficulties imagining a place in this world where = is the
same as ≠.
NSDiacriticInsensitiveSearch → 见≠見 (3 shorts) occurs in 见=見見 (4 shorts) at {0, 3}
The latter suggests that the bar
I don't get the same result. 10.9.0, Xcode 5.0.2. I created an empty
command line utility, copied the code, and I get NSNotFound.
2013-12-09 02:50:19.822 Test[73850:303] main 见≠見 (3 shorts) occurs in 见=見見 (4 shorts) at {9223372036854775807, 0}
On Mon, Dec 9, 2013 at 2:43 AM, Gerriet M.
On 9 Dec 2013, at 16:00, Stephen J. Butler stephen.but...@gmail.com wrote:
I don't get the same result. 10.9.0, Xcode 5.0.2. I created an empty command
line utility, copied the code, and I get NSNotFound.
2013-12-09 02:50:19.822 Test[73850:303] main 见≠見 (3 shorts) occurs in
见=見見 (4
On Dec 9, 2013, at 01:00 , Stephen J. Butler stephen.but...@gmail.com wrote:
I don't get the same result. 10.9.0, Xcode 5.0.2. I created an empty
command line utility, copied the code, and I get NSNotFound.
2013-12-09 02:50:19.822 Test[73850:303] main 见≠見 (3 shorts) occurs in
见=見見 (4
OK, you are right. Copy+paste didn't preserve the compatibility character.
Does look like a bug of sorts, or at least something a unicode expert
should explain.
On Mon, Dec 9, 2013 at 3:20 AM, Gerriet M. Denkmann gerr...@mdenkmann.de wrote:
On 9 Dec 2013, at 16:00, Stephen J. Butler
Would converting each string to NFD (decomposedStringWithCanonicalMapping)
be an acceptable work around in this case?
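A sketch of the proposed workaround. One caveat worth noting: decomposedStringWithCanonicalMapping applies canonical (NFD) mapping only; folding compatibility characters — which is what this thread tripped over — would need the NFKD variant, decomposedStringWithCompatibilityMapping.

```objc
// Normalize both strings before searching. NFD handles canonical
// equivalence; swap in decomposedStringWithCompatibilityMapping (NFKD)
// if compatibility characters must also be folded.
NSString *needle   = [@"见≠見" decomposedStringWithCanonicalMapping];
NSString *haystack = [@"见=見見" decomposedStringWithCanonicalMapping];
NSRange r = [haystack rangeOfString:needle];
NSLog(@"range: %@", NSStringFromRange(r));
```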
On Mon, Dec 9, 2013 at 3:43 AM, Stephen J. Butler
stephen.but...@gmail.com wrote:
OK, you are right. Copy+paste didn't preserve the compatibility character.
Does look like a
Hi All,
I am taking video in my iPhone app at 1280x720; this turns out at about
40 MB/min. What I want is to drop the bit rate, not the size, using
AVAssetWriter/AVAssetReader. Is this possible, or even the right way of doing
this?
I am able to get what I want by dropping the bit rate in an
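For what it's worth, the usual way to control bit rate with AVAssetWriter is through the compression properties of the writer input's output settings; the numbers below are illustrative, not tuned:

```objc
#import <AVFoundation/AVFoundation.h>

// Keep the 1280x720 frame size but cap the average video bit rate.
NSDictionary *settings = @{
    AVVideoCodecKey  : AVVideoCodecH264,
    AVVideoWidthKey  : @1280,
    AVVideoHeightKey : @720,
    AVVideoCompressionPropertiesKey : @{
        AVVideoAverageBitRateKey : @(2 * 1000 * 1000) // ~2 Mbit/s
    }
};
AVAssetWriterInput *input =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:settings];
```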
On 9 Dec 2013, at 16:53, Stephen J. Butler stephen.but...@gmail.com wrote:
Would converting each string to NFD (decomposedStringWithCanonicalMapping) be
an acceptable work around in this case?
No, it would not. I am changing all my rangeOfString calls to use
NSLiteralSearch, which does not
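The switch described here looks roughly like this (haystack and needle are placeholder names):

```objc
// NSLiteralSearch compares the exact code units, so no Unicode
// normalization or compatibility folding is applied to either string.
NSRange r = [haystack rangeOfString:needle options:NSLiteralSearch];
if (r.location == NSNotFound) {
    NSLog(@"not found");
}
```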
On 9 Dec 2013, at 10:38, Gerriet M. Denkmann gerr...@mdenkmann.de wrote:
On 9 Dec 2013, at 16:53, Stephen J. Butler stephen.but...@gmail.com wrote:
Would converting each string to NFD (decomposedStringWithCanonicalMapping)
be an acceptable work around in this case?
No, it would not. I
On 6 Dec 2013, at 5:46 pm, Graham Cox graham@bigpond.com wrote:
OK, I’ve now tried this approach, and it’s much cleaner in that it works with
scrollers, with and without “responsive” scrolling (which appears to buffer
its contents) and also zooming. Code follows. In this case, drawing
On 9 Dec 2013, at 15:47, Graham Cox graham@bigpond.com wrote:
On 6 Dec 2013, at 5:46 pm, Graham Cox graham@bigpond.com wrote:
OK, I’ve now tried this approach, and it’s much cleaner in that it works
with scrollers, with and without “responsive” scrolling (which appears to
On 9 Dec 2013, at 5:01 pm, Mike Abdullah mabdul...@karelia.com wrote:
Maybe a dumb question: How about using CATiledLayer?
Well, I haven’t explored it very much, and certainly not in this context, but
it seems to me that it’s solving a different problem. It sounds similar, but
it’s not
I've been impressed with the thought you've put into this.
Probably a question you don't want at this point, because by now you're looking
for closure, but did you try different blend modes when calling DrawImage,
specifically the copy blend mode. I'm wondering if that might be faster as
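The blend-mode experiment being suggested is a few lines around the draw call (ctx, rect, and image are placeholders for the poster's own context and tile image):

```objc
// Copy mode replaces destination pixels outright instead of
// source-over compositing, which can skip a blend per pixel.
CGContextSaveGState(ctx);
CGContextSetBlendMode(ctx, kCGBlendModeCopy);
CGContextDrawImage(ctx, rect, image);
CGContextRestoreGState(ctx);
```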
On Dec 9, 2013, at 7:47 AM, Graham Cox graham@bigpond.com wrote:
This last step is where it all falls down, because this one call, to
CGContextDrawImage, takes a whopping 67% of the overall time for drawRect: to
run, and normal drawing doesn’t need this call (this is testing in a
On Dec 9, 2013, at 8:36 AM, Jens Alfke j...@mooseyard.com wrote:
On Dec 9, 2013, at 7:47 AM, Graham Cox graham@bigpond.com wrote:
This last step is where it all falls down, because this one call, to
CGContextDrawImage, takes a whopping 67% of the overall time for drawRect:
to run,
On 9 Dec 2013, at 5:36 pm, Jens Alfke j...@mooseyard.com wrote:
So if you can avoid it, you shouldn’t be doing your own rendering into
images. I haven’t been following the details of this thread, but my guess is
you’ll get better performance by drawing the tiles directly to the view, just
On 9 Dec 2013, at 5:17 pm, Kevin Meaney k...@yvs.eu.com wrote:
Probably a question you don't want at this point, because by now you're looking
for closure, but did you try different blend modes when calling DrawImage,
specifically the copy blend mode. I'm wondering if that might be faster as
On 9 Dec 2013, at 5:45 pm, David Duncan david.dun...@apple.com wrote:
If you have a buffer to draw into, then you can easily slice that buffer to
use between multiple graphics contexts, but you will fundamentally have to
draw them all into the source context at the end.
I wasn’t able to
On Dec 9, 2013, at 9:50 AM, Graham Cox graham@bigpond.com wrote:
By “slice the buffer”, I assume you mean set up a context on some region of
that buffer, but when I tried to do that, CGBitmapContextCreate[WithData]
would not accept my bytesPerRow value because it was inconsistent with
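For the record, the sub-region trick generally needs the data pointer offset to the tile's origin while bytesPerRow stays the stride of the whole buffer; a sketch, with buffer and the tile geometry assumed to exist:

```objc
size_t bytesPerPixel = 4; // assuming 32-bit premultiplied BGRA
uint8_t *tileOrigin = buffer + tileY * fullBytesPerRow
                             + tileX * bytesPerPixel;
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
// bytesPerRow is the FULL buffer's stride, so each row of this
// context steps over the neighbouring tiles' pixels.
CGContextRef tileCtx = CGBitmapContextCreateWithData(
    tileOrigin, tileWidth, tileHeight,
    8, fullBytesPerRow, cs,
    (CGBitmapInfo)kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host,
    NULL, NULL);
CGColorSpaceRelease(cs);
```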
On Dec 9, 2013, at 8:45 AM, David Duncan david.dun...@apple.com wrote:
One major impediment to this is that you cannot use the same graphics context
between multiple threads, and as such using the graphics context that AppKit
gives you forces you into a single threaded model.
Ah,
On Dec 9, 2013, at 8:50 AM, Graham Cox graham@bigpond.com wrote:
On 9 Dec 2013, at 5:45 pm, David Duncan david.dun...@apple.com wrote:
If you have a buffer to draw into, then you can easily slice that buffer to
use between multiple graphics contexts, but you will fundamentally have
On Dec 9, 2013, at 9:23 AM, Kyle Sluder k...@ksluder.com wrote:
On Dec 9, 2013, at 8:50 AM, Graham Cox graham@bigpond.com wrote:
On 9 Dec 2013, at 5:45 pm, David Duncan david.dun...@apple.com wrote:
If you have a buffer to draw into, then you can easily slice that buffer to
use
I think I’ve explored this as far as I can go. Here’s my wrap-up, for what
it’s worth to anyone. Not a lot, I expect.
The conclusion is, I don’t think it can be done with the current graphics
APIs with any worthwhile performance. Here’s my summary of why that is…
… This last step is
On 9 Dec 2013, at 7:03 pm, Seth Willits sli...@araelium.com wrote:
If all the drawRect is doing is making a single call to CGContextDrawImage
then it should rightly be 100% of the time, so that measure isn’t interesting
on its own. :)
Yes, that’s true. It’s hard to be totally objective,
The single CGContextDrawImage in drawRect: should end up essentially being a
memcpy which will be ridiculously fast
The bottleneck for image blitting is copying the pixels from CPU RAM to GPU
texture RAM. This is often a bottleneck in high-speed image drawing, and I know
that Quartz goes
On Mon, Dec 9, 2013, at 02:04 PM, Jens Alfke wrote:
The single CGContextDrawImage in drawRect: should end up essentially being
a memcpy which will be ridiculously fast
The bottleneck for image blitting is copying the pixels from CPU RAM to
GPU texture RAM. This is often a bottleneck in
I have a program that solves problems that are very computationally intensive. I
divide up the work and create an NSOperation for each segment. Then I put the
operations in NSOperationQueue, and start the queue. Expecting the job to take
three or four hours, I go to dinner.
When I return, and
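The setup being described, sketched below (WorkSegment and -solve are hypothetical stand-ins for the poster's own model):

```objc
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
for (WorkSegment *segment in segments) {
    [queue addOperationWithBlock:^{
        [segment solve]; // one computationally intensive chunk
    }];
}
[queue waitUntilAllOperationsAreFinished];
```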
On Dec 9, 2013, at 3:17 PM, Jim Elliott sjameselli...@me.com wrote:
I have a program that solves problems that are very computationally intensive.
I divide up the work and create an NSOperation for each segment. Then I put
the operations in NSOperationQueue, and start the queue. Expecting
There are also APIs to disable the new “app nap” power-saving feature in OS X
10.9 — look at NSProcessInfo.
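A minimal sketch of the NSProcessInfo activity API being referred to:

```objc
// Taking a user-initiated activity assertion prevents App Nap from
// throttling the process until the matching endActivity: call.
id<NSObject> token = [[NSProcessInfo processInfo]
    beginActivityWithOptions:NSActivityUserInitiated
                      reason:@"Long-running computation"];
// ... run the NSOperationQueue job ...
[[NSProcessInfo processInfo] endActivity:token];
```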
—Jens
___
Cocoa-dev mailing list (Cocoa-dev@lists.apple.com)
Please do not post admin requests or moderator comments to the list.
Contact the
The single CGContextDrawImage in drawRect: should end up essentially being a
memcpy which will be ridiculously fast, as long as your contexts/backing all
use the same color space and bitmap layout as the view’s context’s backing.
Definitely make sure they’re using the same color space
On 9 Dec 2013, at 23:32, Jens Alfke j...@mooseyard.com wrote:
There are also APIs to disable the new “app nap” power-saving feature in OS X
10.9 — look at NSProcessInfo.
My understanding is this is basically a wrapper around the lower level Power
Assertion APIs, which have been extended to
On Dec 9, 2013, at 2:27 AM, Damien Cooke dam...@smartphonedev.com wrote:
I am taking video in my iPhone app at 1280x720; this turns out at about
40 MB/min. What I want is to drop the bit rate, not the size, using
AVAssetWriter/AVAssetReader. Is this possible, or even the right way of doing
I occasionally get the following message when I use the Address Book API
to add a record to my sharedAddressBook, which then causes the add to fail:
is being ignored because main executable