Re: UIStackView: Variable Spacing

2016-07-06 Thread Daniel Stenmark
It’s not so much that adding a single dummy view wrecks us.  Our cell layout 
has a lot going on, with a fair amount of variable spacing and multiple views 
often being hidden and swapped out.  The UIStackView scrolling performance slog 
I’m seeing is just the sum of all that.

Sigh, oh well.  I guess that’s just another refactoring branch I’ll have to 
shelve for now.

Dan

> On Jul 6, 2016, at 4:45 PM, Roland King wrote:
> 
> 
>> On 7 Jul 2016, at 04:37, Daniel Stenmark wrote:
>> 
>> What’s the best way to achieve variable spacing between children of a 
>> UIStackView?  I know that a popular approach is to add an empty dummy view 
>> to act as padding, but this is being used in a UITableView cell, so 
>> scrolling performance is critical and the implicit constraints created by 
>> adding a ‘padding’ view are a death knell for us.  
>> 
>> Dan
> 
> 
> There’s no trick way to do it; you need some extra view one way or another. 
> 
> It’s a bit surprising that adding extra, fixed-size children to the stack 
> really adds that much overhead; that’s a few very simple constraints, all 
> constant, and shouldn’t really make that much difference. Perhaps the stack 
> view is being inefficient with the number of constraints it adds when you 
> add an extra child. You could take a look at the view hierarchy and see 
> whether that’s the case. 
> 
> You could try going the other way around and making your real elements 
> children of dummy views, so you get to add the simplest top/bottom-padding 
> constraints possible to those views; that may minimise the number of extra 
> constraints added, and you get to control it somewhat. But if your hierarchy 
> is such that it’s straining the constraint system performance-wise, whatever 
> way you try to do this is going to have similar performance.



Re: UIStackView: Variable Spacing

2016-07-06 Thread Roland King

> On 7 Jul 2016, at 04:37, Daniel Stenmark wrote:
> 
> What’s the best way to achieve variable spacing between children of a 
> UIStackView?  I know that a popular approach is to add an empty dummy view to 
> act as padding, but this is being used in a UITableView cell, so scrolling 
> performance is critical and the implicit constraints created by adding a 
> ‘padding’ view are a death knell for us.  
> 
> Dan


There’s no trick way to do it; you need some extra view one way or another. 

It’s a bit surprising that adding extra, fixed-size children to the stack 
really adds that much overhead; that’s a few very simple constraints, all 
constant, and shouldn’t really make that much difference. Perhaps the stack 
view is being inefficient with the number of constraints it adds when you add 
an extra child. You could take a look at the view hierarchy and see whether 
that’s the case. 

You could try going the other way around and making your real elements children 
of dummy views, so you get to add the simplest top/bottom-padding constraints 
possible to those views; that may minimise the number of extra constraints 
added, and you get to control it somewhat. But if your hierarchy is such that 
it’s straining the constraint system performance-wise, whatever way you try to 
do this is going to have similar performance. 
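
Roughly what I have in mind, as an untested Swift sketch (the names and the 
padding values are made up):

import UIKit

// Wrap a real element in a plain container so the only extra
// constraints are simple constant-padding ones.
func wrapped(_ content: UIView, topPadding: CGFloat = 0,
             bottomPadding: CGFloat = 0) -> UIView {
    let container = UIView()
    content.translatesAutoresizingMaskIntoConstraints = false
    container.addSubview(content)
    NSLayoutConstraint.activate([
        content.topAnchor.constraint(equalTo: container.topAnchor,
                                     constant: topPadding),
        content.bottomAnchor.constraint(equalTo: container.bottomAnchor,
                                        constant: -bottomPadding),
        content.leadingAnchor.constraint(equalTo: container.leadingAnchor),
        content.trailingAnchor.constraint(equalTo: container.trailingAnchor),
    ])
    return container
}

// e.g. stack.addArrangedSubview(wrapped(titleLabel, bottomPadding: 12))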

Re: UIStackView: Variable Spacing

2016-07-06 Thread Daniel Stenmark
No, adding additional horizontal or vertical spacing constraints to the 
UIStackView’s arranged subviews results in conflicts with UIStackView’s 
implicit constraints.

Dan

On Jul 6, 2016, at 3:52 PM, Quincey Morris wrote:

On Jul 6, 2016, at 15:41, Daniel Stenmark wrote:

This would require my UIStackView’s children to have children of their own, 
which just means even more layout constraints to resolve at scroll-time.

Can’t you set constraints between the stack view children and/or the parent? I 
thought I had done that.




Re: UIStackView: Variable Spacing

2016-07-06 Thread Quincey Morris
On Jul 6, 2016, at 15:41, Daniel Stenmark wrote:
> 
> This would require my UIStackView’s children to have children of their own, 
> which just means even more layout constraints to resolve at scroll-time.  

Can’t you set constraints between the stack view children and/or the parent? I 
thought I had done that.



Re: UIStackView: Variable Spacing

2016-07-06 Thread Daniel Stenmark
This would require my UIStackView’s children to have children of their own, 
which just means even more layout constraints to resolve at scroll-time.

Dan

On Jul 6, 2016, at 3:34 PM, Quincey Morris wrote:

On Jul 6, 2016, at 13:37, Daniel Stenmark wrote:

What’s the best way to achieve variable spacing between children of a 
UIStackView?

I’ve had success placing leading/trailing/top/bottom constraints on the child 
view, or on components within the child view, to siblings or parent as 
appropriate.



Re: UIStackView: Variable Spacing

2016-07-06 Thread Quincey Morris
On Jul 6, 2016, at 13:37, Daniel Stenmark wrote:
> 
> What’s the best way to achieve variable spacing between children of a 
> UIStackView?

I’ve had success placing leading/trailing/top/bottom constraints on the child 
view, or on components within the child view, to siblings or parent as 
appropriate.
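
To make that concrete, here is a hedged Swift sketch of one reading of this, 
pinning a component inside the child view so the child itself carries the 
extra spacing (names are illustrative, and the exact constraints will depend 
on the stack’s axis and alignment):

import UIKit

// Give a component inside the arranged child a bottom inset; the
// stack view’s own inter-item constraints are left untouched.
func padBottom(of inner: UIView, in child: UIView, by amount: CGFloat) {
    inner.translatesAutoresizingMaskIntoConstraints = false
    inner.bottomAnchor.constraint(equalTo: child.bottomAnchor,
                                  constant: -amount).isActive = true
}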


UIStackView: Variable Spacing

2016-07-06 Thread Daniel Stenmark
What’s the best way to achieve variable spacing between children of a 
UIStackView?  I know that a popular approach is to add an empty dummy view to 
act as padding, but this is being used in a UITableView cell, so scrolling 
performance is critical and the implicit constraints created by adding a 
‘padding’ view are a death knell for us.  
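
For concreteness, the dummy-view approach I mean looks roughly like this 
(illustrative Swift sketch, not our actual code):

import UIKit

// A zero-content spacer whose fixed height provides the gap between
// two arranged subviews of a vertical stack.
func insertSpacer(into stack: UIStackView, at index: Int, height: CGFloat) {
    let spacer = UIView()
    spacer.heightAnchor.constraint(equalToConstant: height).isActive = true
    stack.insertArrangedSubview(spacer, at: index)
}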

Dan


Re: Prioritize my own app's disk access

2016-07-06 Thread Jonathan Taylor
On 6 Jul 2016, at 18:01, Quincey Morris wrote:
> On Jul 6, 2016, at 03:06, Jonathan Taylor wrote:
>> 
>> a single lost frame will be fairly catastrophic for the scientific experiment
> 
> If this is genuinely your scenario, then nothing mentioned in this thread is 
> going to satisfy your requirements. It is pure whimsy to expect any 
> prioritization mechanism to ensure that the capture is buffered around random 
> unrelated user interactions with the Mac.

All fair points, but the fact is that in practice it works remarkably well at 
the moment. There is still some spare I/O and CPU capacity, and the 8GB of RAM 
*does* act as a very effective, and very large, buffer. There is a 
clear indication displayed if a backlog starts building, and even a backlog of 
1000 frames is easily recoverable without loss. 

A colleague had tried to transfer some old data to another machine for analysis 
during an experiment, and it became clear this was causing a backlog to build 
in the live recording. It was no problem just to cancel the Finder copy 
operation, and the live recording recovered and cleared the backlog. My 
question was just intended to explore whether there was something easy I could 
do to make that sort of low-priority background request more likely to work 
without causing conflicts. It sounds like the answer is probably not! (Though I 
have definitely learned some very useful stuff in the process)

Re: Prioritize my own app's disk access

2016-07-06 Thread Jens Alfke

> On Jul 6, 2016, at 3:06 AM, Jonathan Taylor wrote:
> 
> I should also clarify that (in spite of my other email thread running in 
> parallel to this) I am not doing any complex encoding of the data being 
> streamed to disk - these are just basic TIFF images and metadata.

Since you said previously that you use -[NSData writeToFile:…], it sounds like 
you’re creating a lot of files with one image in each. This is going to incur a 
lot of extra overhead for updating filesystem metadata: creating a file is 
pretty heavyweight compared to writing to a file. This is partly because HFS 
has greater durability guarantees for the filesystem itself than for data 
within files, so changes to filesystem structures are more expensive to write. 
And there’s also the overhead of the kernel calls for opening and closing the 
file.

(You can see this for yourself by comparing how long it takes the Finder or “cp 
-R” to copy a single 1GB file, vs. 1000 1MB files.)

TL;DR: It will be a lot more efficient to write all of the images+metadata to a 
single file. You can make up your own file format by just prefixing each image 
with a byte count. Or you can get a library that knows how to write Zip files 
and use that. (I don’t mean gzip; I mean the .zip archive format that contains 
multiple files.)
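
A minimal sketch of the byte-count idea (hypothetical names; error handling 
elided):

import Foundation

// Append each frame to one open file, prefixed with a fixed-width
// byte count, instead of creating one file per image.
final class FrameLog {
    private let handle: FileHandle

    init(url: URL) throws {
        _ = FileManager.default.createFile(atPath: url.path, contents: nil)
        handle = try FileHandle(forWritingTo: url)
    }

    func append(_ frame: Data) {
        var count = UInt64(frame.count).bigEndian
        handle.write(Data(bytes: &count, count: 8))  // 8-byte length prefix
        handle.write(frame)
    }

    deinit { handle.closeFile() }
}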

—Jens

Re: Prioritize my own app's disk access

2016-07-06 Thread Quincey Morris
On Jul 6, 2016, at 03:06, Jonathan Taylor wrote:
> 
> a single lost frame will be fairly catastrophic for the scientific experiment

If this is genuinely your scenario, then nothing mentioned in this thread is 
going to satisfy your requirements. It is pure whimsy to expect any 
prioritization mechanism to ensure that the capture is buffered around random 
unrelated user interactions with the Mac.

I assume you’ve accepted that power failure, hard OS crash, etc. will 
necessitate restarting the experiment. However, excuse me for saying so, but I 
think you’re crazy if you let users log in to this Mac while an experiment is 
running. You need a dedicated Mac in a locked room.

If for some reason it’s infeasible to dedicate a Mac to the capture, then your 
best option is to tape a sign over the Mac’s screen saying “Experiment running, 
don’t use this Mac”. That’s going to work better than writing code, and it has 
the side benefit of making it not your fault if someone ruins the experiment by 
ignoring the sign.

 

Re: Prioritize my own app's disk access

2016-07-06 Thread Alastair Houghton
On 6 Jul 2016, at 11:06, Jonathan Taylor wrote:
> 
> Hopefully my detail above explains why I really do not want to drop frames 
> and/or use a ring buffer. Effectively I have a buffer pool, but if I exhaust 
> the buffer pool then (a) something is going badly wrong, and (b) I prefer to 
> expand the buffer pool as a last-ditch attempt to cope with the backlog 
> rather than terminating the experiment right then and there.

Better, I think, to design it with a worst case in mind to start with, 
particularly if you know how much RAM is in the machine you’re using.  
Expanding the buffer space either means you under-sized your buffer pool in the 
first place, or the I/O system is simply not fast enough (so you’ll have to 
stop anyway).  I suppose if it’s really just a last-ditch attempt to capture 
what you can, it’s OK in that context, but it’s still unclear whether it’ll be 
a significant benefit in practice.

>> Without knowing exactly how much video data you’re generating and what 
>> encoder you’re using (if any), it’s difficult to be any more specific, but 
>> hopefully this gives you some useful pointers.
> 
> As I say, there is no encoding going on in this particular workflow. Absolute 
> maximum data rates are of the order of 50MB/s, but [and this is a non-optimal 
> point, but one that I would prefer to stick with] this is split out into a 
> sequence of separate files, some of which are as small as ~100kB in size.

OK, well, FWIW, typical spinning disks tend to run at around 80MB/s if you 
write efficiently; I’d say that 50MB/s in separate 100KB files is actually 
pretty good going, but I think you’d be much better off writing a single data 
stream to a single file, and splitting it out into individual frames later if 
necessary.  In the context of TIFF files, it might be worth pointing out that 
they can already hold multiple images, so you could store the entire sequence 
in a single TIFF if you wanted (depending on what software you are using to 
process it after that).
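
Via ImageIO that might look something like this (a rough sketch, assuming the 
frames are available as CGImages; error handling elided):

import Foundation
import ImageIO
import CoreServices  // kUTTypeTIFF

// Write a whole sequence of frames into a single multi-image TIFF.
func writeSequence(_ frames: [CGImage], to url: URL) -> Bool {
    guard let dest = CGImageDestinationCreateWithURL(url as CFURL,
                                                     kUTTypeTIFF,
                                                     frames.count, nil)
    else { return false }
    for frame in frames {
        CGImageDestinationAddImage(dest, frame, nil)
    }
    return CGImageDestinationFinalize(dest)
}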

Also, if you’re saving lots of individual image files, I’d strongly recommend 
turning Spotlight’s indexer off for the folder you’re saving them into; 
otherwise you’re going to have extra traffic from that.  Likewise, you might 
want to exclude the folder from Time Machine backups (if you’re using those) 
or—possibly better—turn Time Machine off completely.

RAIDs and SSDs can be much faster, so you might also want to consider using 
those for storage, particularly if you’re determined to stick with individual 
files.

Kind regards,

Alastair.

--
http://alastairs-place.net



Re: Prioritize my own app's disk access

2016-07-06 Thread Jonathan Taylor
Thanks for your reply Alastair. Definitely interested in thinking about your 
suggestions - some responses below that will hopefully help clarify:

> The first thing to state is that you *can’t* write code of this type with the 
> attitude that “dropping frames is not an option”.  Fundamentally, the problem 
> you have is that if you generate video data faster than it can be saved to 
> disk, there is only so much video data you can buffer up before you start 
> swapping, and if you swap you will be dead in the water --- it will kill 
> performance to the extent that you will not be able to save data as quickly 
> as you could before and the result will be catastrophic, with far more frames 
> dropped than if you simply accepted that there was the possibility the 
> machine you were on was not fast enough and would have to occasionally drop a 
> frame.

I should clarify exactly what I mean here. Under normal circumstances I know 
from measurements that the I/O can keep up with the maximum rate at which 
frames can be coming in. I very rarely see any backlog at all reported, but 
might occasionally see a transient glitch (if CPU load momentarily spikes) of 
the order of a 10MB backlog that is soon cleared. With that as the status quo, 
and 8GB of RAM available, something has gone badly, badly wrong if we enter VM 
swap chaos. 

When I say "dropping frames is not an option", what I mean is that a single 
lost frame will be fairly catastrophic for the scientific experiment, and so my 
priorities in order are: (1) ensure the machine specs leave plenty of headroom 
above my actual requirements, (2) try and do anything relatively simple I can 
do to ensure my code is efficient and marks threads/operations/etc as high or 
low priority where possible, (3) identify stuff that the user should avoid 
doing (which looks like it includes transferring data off the machine while a 
recording session is in progress - hence this email thread!), (4) not worry too 
much about what to do when we *have* already ended up with a catastrophic 
backlog (i.e. whether to drop frames or do something else), because at that 
point we have failed in the sense that the scientific experiment will basically 
need to be re-run.
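
For (2), the sort of marking I mean is along these lines (a sketch only; the 
queue names are made up):

import Foundation

// Keep the disk-writing queue at a higher quality-of-service class
// than the realtime analysis work running alongside it.
let diskQueue = DispatchQueue(label: "capture.disk-writer",
                              qos: .userInitiated)
let analysisQueue = DispatchQueue(label: "capture.analysis",
                                  qos: .utility,
                                  attributes: .concurrent)

// For background I/O of our own, a thread's disk priority can also be
// lowered with setiopolicy_np(IOPOL_TYPE_DISK, IOPOL_SCOPE_THREAD,
// IOPOL_THROTTLE) from <sys/resource.h>.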

I should also clarify that (in spite of my other email thread running in 
parallel to this) I am not doing any complex encoding of the data being 
streamed to disk - these are just basic TIFF images and metadata. The encoding 
I referred to in my other thread is optional offline processing of 
previously-recorded data.

> The right way to approach this type of real-time encoding problem is as 
> follows:
> 
> 1. Use statically allocated buffers (or dynamically allocated once at encoder 
> or program startup).  DO NOT dynamically allocate buffers as you generate 
> data.
> 
> 2. Knowing the rate at which you generate video data, decide on the maximum 
> write latency you need to be able to tolerate.  This (plus a bit as you need 
> some free to encode into) will tell you the total size of buffer(s) you need.

OK.

> 3. *Either*
> 
>   (i)  Allocate a ring buffer of the size required, then interleave encoding 
> and issuing I/O requests.  You should keep track of where the 
> as-yet-unwritten data starts in your buffer, so you know when your encoder is 
> about to hit that point.  Or
> 
>   (ii) Allocate a ring *of* fixed size buffers totalling the size required; 
> start encoding into the first one, then when finished, issue an I/O request 
> for that buffer and continue encoding into the next one.  You should keep 
> track of which buffers are in use, so you can detect when you run out.
> 
> 4. When issuing I/O requests, DO NOT use blocking I/O from the encoder 
> thread.  You want to be able to continue to fetch video from your camera and 
> generate data *while* I/O takes place.  GCD is a good option here, or you 
> could use a separate I/O thread with a semaphore, or any other asynchronous 
> I/O mechanism (e.g. POSIX aio, libuv and so on).
> 
> 5. If you find yourself running out of buffers, drop frames until buffer 
> space is available, and display the number of frame drops to the user.  This 
> is *much* better than attempting to use dynamic buffers and then ending up 
> swapping, which is I think what’s happening to you (having read your later 
> e-mails).

I am making good use of GCD here (and like it very much!). There are quite a 
few queues involved, and one is a dedicated disk-writing queue. The main 
CPU-intensive work going on in parallel with this is some realtime image 
analysis, but this is running on a concurrent queue.

Hopefully my detail above explains why I really do not want to drop frames 
and/or use a ring buffer. Effectively I have a buffer pool, but if I exhaust 
the buffer pool then (a) something is going badly wrong, and (b) I prefer to 
expand the buffer pool as a last-ditch attempt to cope with the backlog rather 
than terminating the experiment right then and there.
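
The pool itself is shaped roughly like this (a simplified sketch, not the 
real code; names are made up):

import Foundation

// A fixed set of reusable frame buffers, grown only as a last-ditch
// measure when the pool is exhausted.
final class BufferPool {
    private var free: [NSMutableData]
    private let bufferSize: Int
    private let lock = NSLock()

    init(count: Int, bufferSize: Int) {
        self.bufferSize = bufferSize
        free = (0..<count).map { _ in NSMutableData(length: bufferSize)! }
    }

    func take() -> NSMutableData {
        lock.lock(); defer { lock.unlock() }
        if let buffer = free.popLast() { return buffer }
        // Exhausted: something has gone badly wrong, but growing is
        // preferred here to dropping a frame outright.
        return NSMutableData(length: bufferSize)!
    }

    func give(_ buffer: NSMutableData) {
        lock.lock(); defer { lock.unlock() }
        free.append(buffer)
    }
}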

> Without knowing exactly how much video data you’re generating and what encoder 
> you’re using (if any), it’s difficult to be any more specific, but hopefully 
> this gives you some useful pointers.

Re: Prioritize my own app's disk access

2016-07-06 Thread Alastair Houghton
On 5 Jul 2016, at 13:36, Jonathan Taylor wrote:
> 
> This is a long shot, but I thought I would ask in case an API exists to do 
> what I want. One of the roles of my code is to record video to disk as it is 
> received from a camera. A magnetic hard disk can normally keep up with this, 
> but if the user is also doing other things on the computer (e.g. long file 
> copy in the Finder) then we are unable to keep up, and accumulate an 
> ever-increasing backlog of frames waiting to be saved. This eventually leads 
> to running out of memory, thrashing, and an unresponsive computer. Dropping 
> frames is not an option. In this case, the computer is a dedicated 
> workstation running my code, so it *is* correct for me to consider my code to 
> be the number 1 priority on the computer.

Let’s start this again, because I think the fundamental problem here is that 
you’re going about this the wrong way.  Whether you use Cocoa or not is, I 
think, largely an irrelevance (I *wouldn’t* for this kind of task, but I see no 
fundamental reason why performance should be a problem just because you choose 
to e.g. use NSMutableData to manage your buffer space, *provided* you do it 
right).

The first thing to state is that you *can’t* write code of this type with the 
attitude that “dropping frames is not an option”.  Fundamentally, the problem 
you have is that if you generate video data faster than it can be saved to 
disk, there is only so much video data you can buffer up before you start 
swapping, and if you swap you will be dead in the water --- it will kill 
performance to the extent that you will not be able to save data as quickly as 
you could before and the result will be catastrophic, with far more frames 
dropped than if you simply accepted that there was the possibility the machine 
you were on was not fast enough and would have to occasionally drop a frame.

The right way to approach this type of real-time encoding problem is as follows:

1. Use statically allocated buffers (or dynamically allocated once at encoder 
or program startup).  DO NOT dynamically allocate buffers as you generate data.

2. Knowing the rate at which you generate video data, decide on the maximum 
write latency you need to be able to tolerate.  This (plus a bit as you need 
some free to encode into) will tell you the total size of buffer(s) you need.

3. *Either*

   (i)  Allocate a ring buffer of the size required, then interleave encoding 
and issuing I/O requests.  You should keep track of where the as-yet-unwritten 
data starts in your buffer, so you know when your encoder is about to hit that 
point.  Or

   (ii) Allocate a ring *of* fixed size buffers totalling the size required; 
start encoding into the first one, then when finished, issue an I/O request for 
that buffer and continue encoding into the next one.  You should keep track of 
which buffers are in use, so you can detect when you run out.

4. When issuing I/O requests, DO NOT use blocking I/O from the encoder thread.  
You want to be able to continue to fetch video from your camera and generate 
data *while* I/O takes place.  GCD is a good option here, or you could use a 
separate I/O thread with a semaphore, or any other asynchronous I/O mechanism 
(e.g. POSIX aio, libuv and so on).

5. If you find yourself running out of buffers, drop frames until buffer space 
is available, and display the number of frame drops to the user.  This is 
*much* better than attempting to use dynamic buffers and then ending up 
swapping, which is I think what’s happening to you (having read your later 
e-mails).

Without knowing exactly how much video data you’re generating and what encoder 
you’re using (if any), it’s difficult to be any more specific, but hopefully 
this gives you some useful pointers.
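
To make 3(ii), 4 and 5 concrete, a rough Swift sketch (illustrative only; a 
real version needs shutdown and error handling):

import Foundation

// A ring of fixed-size buffers, a serial GCD queue so writes never
// block the capture thread, and frame-dropping when no buffer is free.
final class FrameWriter {
    private var freeBuffers: [NSMutableData]
    private let writeQueue = DispatchQueue(label: "frame-writer")
    private let lock = NSLock()
    private(set) var droppedFrames = 0

    init(bufferCount: Int, bufferSize: Int) {
        freeBuffers = (0..<bufferCount).map { _ in
            NSMutableData(length: bufferSize)!
        }
    }

    // Called from the capture thread; never blocks on I/O.
    func submit(_ frame: Data, to handle: FileHandle) {
        lock.lock()
        guard let buffer = freeBuffers.popLast() else {
            droppedFrames += 1             // point 5: drop, and count it
            lock.unlock()
            return
        }
        lock.unlock()
        buffer.length = 0
        buffer.append(frame)               // copy out of the capture buffer
        writeQueue.async {                 // point 4: write off-thread
            handle.write(buffer as Data)
            self.lock.lock()
            self.freeBuffers.append(buffer)  // recycle once written
            self.lock.unlock()
        }
    }
}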

Kind regards,

Alastair.

--
http://alastairs-place.net

