Keith hi,
On 12 Jan 2009, at 22:51, Schultz Keith J. wrote:

Hi Julius,

If I understand your problem correctly, you are:
        1) processing a very large number of integers
        2) using highly optimized code, in that:
            a) you are manipulating the data directly via pointers
            b) the data in memory is expected to be in a specific
              order/structure
            c) the data is stored on disk in pure binary form,
              that is, in the same format as in memory
That's about right.
I'm creating a painting system that I eventually want to make use of all the available pixels on large display panels, e.g. 1600 x 1000 pixel resolution. The shape and contents of each brushstroke change over time, and the contents themselves are complex, e.g. random patterns. Effectively there is very little redundancy, both within each frame and across any image sequence. I don't know all the problems that trying to pump images of this size onto the screen at, say, 15 or 25 fps will entail, so I'm adopting a gradualistic approach to program development. If I can, I want to anticipate difficulties well enough to at least maintain a stable overall program structure.
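
As a rough back-of-the-envelope check (and this assumes 32-bit RGBA pixels, which may well not be what I end up using), the raw display bandwidth works out something like this:

    #include <stdio.h>

    /* Rough bandwidth estimate, assuming 4 bytes per pixel (RGBA).
       The pixel format is an assumption, purely for illustration. */
    int main(void)
    {
        const unsigned long width = 1600, height = 1000;
        const unsigned long bytesPerPixel = 4;
        const unsigned long fps = 25;

        unsigned long perFrame  = width * height * bytesPerPixel; /* 6,400,000 bytes, about 6.4 MB */
        unsigned long perSecond = perFrame * fps;                 /* 160,000,000 bytes, about 160 MB/s */

        printf("per frame:  %lu bytes\n", perFrame);
        printf("per second: %lu bytes\n", perSecond);
        return 0;
    }

So even before any processing, the display side alone is on the order of 160 MB/s at 25 fps.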

However, pushing this data out onto the screen is not the main problem. The main problem is that I need to have the picture, with all its dynamics, displayed as I am painting, and that I want to keep my options when painting reasonably open, for instance the ability to edit stroke shape variation and colour variation after a stroke has been painted. Essentially, colour variation can be thought of as a movie. A very simple (13 MB) early example may be seen here:
http://animatedpaint.co.uk/nestaMovies/pearCity4Half.mpg.
Without going into details, I need to use lots of data, and there's a fair bit of disk I/O and processing. With every increase in manipulative freedom and image complexity comes a corresponding increase in the data and processing requirements.


Several years back I optimized the code of a C program and gained a speedup by a factor of 100 by doing the above and doing the pointer arithmetic by hand for accessing the data in the structure, instead of using
builtins and standard structures.
Yes, this can be a very good way to go. However, I was getting a bit scared off by my lack of familiarity with using GC on malloc'd data, which I'm over now, in fact no problem, but working by oneself with no one to discuss the simplest of things can blow anxieties up from molehills to veritable Everests.
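
Just so we're talking about the same thing, the kind of access I have in mind is a flat malloc'd buffer of fixed-size records with hand-rolled pointer arithmetic, roughly along these lines (only a sketch, and the record layout of three 32-bit ints is invented purely for illustration):

    #include <stdlib.h>
    #include <stdint.h>

    /* Sketch: a flat buffer of fixed-size records, addressed by hand-rolled
       pointer arithmetic rather than through an array of structs.  Three
       32-bit fields per record is an invented layout, just for illustration. */
    enum { FIELDS_PER_RECORD = 3 };

    static int32_t *allocateRecords(size_t recordCount)
    {
        return malloc(recordCount * FIELDS_PER_RECORD * sizeof(int32_t));
    }

    /* Pointer to the start of record i; its fields are then p[0], p[1], p[2]. */
    static int32_t *recordAt(int32_t *base, size_t i)
    {
        return base + i * FIELDS_PER_RECORD;
    }

So a field read becomes recordAt(buffer, i)[2] rather than going through a struct array, which I take to be the sort of thing behind your factor-of-100 gain.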


So you do not need to worry about the size of your data, just how you access it. I had to have the program work on different architectures with different word sizes. The initial data were in text form, so the conversion to integer was easy. The trick was to use the sizeof operator to get the correct values for the pointer math.
Right, I'll pay attention to these.
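
For the disk side I'm picturing something like the following, where the on-disk bytes are just the in-memory bytes and all the sizes come from sizeof rather than hard-coded numbers (again only a sketch, with an invented raw-int32 file layout):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    /* Sketch: read back a block of 32-bit integers that were written to disk
       in exactly the form they have in memory.  Using a fixed-width type and
       sizeof keeps the pointer math and the file format independent of the
       machine's word size. */
    static int32_t *readRawInts(const char *path, size_t count)
    {
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return NULL;

        int32_t *buffer = malloc(count * sizeof *buffer);
        if (buffer != NULL && fread(buffer, sizeof *buffer, count, f) != count) {
            free(buffer);
            buffer = NULL;
        }
        fclose(f);
        return buffer;
    }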


As far as stuffing two 32-bit values into a 64-bit value to avoid possible context switching goes, it is probably a very bad trade-off, as the handling of such values and doing any kind of math with them will hurt you badly speed-wise, with no space savings.
Yes, which is why the earlier advice I received on using standard C types has been a big relief to me.
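
Just to check I've understood what I'm avoiding: I take the packing you mean to be something like the following, where every access pays for shifts and masks while the two halves still occupy the same eight bytes they would as two plain 32-bit ints (sketch only):

    #include <stdint.h>

    /* Sketch of the packing being warned against: two 32-bit values held in
       one 64-bit word.  Every read or write costs a shift and/or a mask, and
       there is no space saving over two separate int32_t values. */
    static uint64_t pack(uint32_t high, uint32_t low)
    {
        return ((uint64_t)high << 32) | low;
    }

    static uint32_t unpackHigh(uint64_t v) { return (uint32_t)(v >> 32); }
    static uint32_t unpackLow(uint64_t v)  { return (uint32_t)(v & 0xFFFFFFFFu); }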


Of course, if you can do the math with bitwise operations directly, you could process two integers at one time. But I do not know exactly what you are up to.

Hope this helps.

        Keith.


Yes, thanks loadsa.
Essentially everything I'm doing is very simple. It is just that there's an awful lot of data and it could all become very complex indeed if I didn't continually struggle to stop it going that way.

best wishes
Julius

http://juliuspaintings.co.uk


