On Mon, Feb 21, 2011 at 3:49 PM, Ken Thomases <k...@codeweavers.com> wrote:
> On Feb 21, 2011, at 5:07 PM, Jonathan Taylor wrote:
>
>> So as far as my situation goes, what I'm after is an efficient way of
>> starting from a path to some sort of raster image and obtaining a raw
>> pixel buffer I can read. [Obviously if actually reading from disk then
>> the disk will be the bottleneck, but it's often in the disk cache, in
>> which case the extra data copy associated with [NSBitmapImageRep
>> getBitmapDataPlanes] is a problem.] I was rather liking the
>> file-format-agnostic abilities of initWithContentsOfFile. Any thoughts?
>
> What makes you think the data copy is "extra"?
>
> If you read that AppKit release note, you'll see that NSImage may not be
> decoding the file contents right off the bat. For example, it mentions
> that "[i]f you initialize a NSImage from a JPEG file, then draw it in a
> PDF, you should get a PDF of the same file size as the original JPEG".
> In other words, NSImage is keeping the image data in its original
> JPEG-compressed format, not as a rasterized bitmap.
>
> So, you may be seeing the one necessary/required data copy during
> decoding of the image, not anything extra.

The other issue, as mentioned in those release notes, is pixel format. It's almost never correct to call bitmapData on a rep you did not create yourself with the NSBitmapImageRep initializer that explicitly specifies the pixel format.

It sounds like you want to do this:

(1) Read an image file into an NSBitmapImageRep
(2) Directly access the data buffer of the NSBitmapImageRep

The problem with that is that you are almost certainly hardcoding the expected pixel format (e.g., 8-bit-per-component ARGB, host endian) into your app. You probably think this is safe because you created the image file in the first place. However! ImageIO, the framework underlying all image file reading and writing on Mac OS X, makes no promises about the relationship between the file format and the in-memory pixel format. Certainly you know that many file formats are compressed, whereas the buffer returned from the bitmapData method of NSBitmapImageRep is not. Further, there are many on-disk pixel formats that are not supported by Quartz in memory.

Besides that, you might figure that the OS will never reduce the number of pixel formats, so if you're currently getting something closely related to what's in the file, you'd expect to keep getting that in the future. That's not the case, though… CoreGraphics has talked about standardizing everything with less data than 32-bit host-endian ARGB up to 32-bit host-endian ARGB right when it's read in, to simplify and speed up later stages of the graphics pipeline. There are formats for which that happens today (I think RGB JPEG data comes in as ARGB as of 10.6, perhaps?). That change was made as part of work in 10.6 to support decoding only rectangular blocks of an image throughout the graphics pipeline. Asking for bitmapData also defeats that pipeline and forces the whole image into memory.

Plus, it's pretty hard to write code that works directly with pixel data without putting things in a standard colorspace. RGBA (.3, .5, .6, 1.0) is a different color in sRGB vs. generic RGB. For most pixel-processing algorithms, you'll get different visual results if you naively apply the same mathematics in different colorspaces.

Anyway, the point is that, in all likelihood, you do need to draw your image into a bitmap in a known format in order to work with the pixels.
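Untested, but the basic pattern looks roughly like this, assuming 8-bit-per-sample RGBA in calibrated RGB is an acceptable working format for you (swap in whatever format and colorspace you actually need):

#import <Cocoa/Cocoa.h>
#include <math.h>

NSBitmapImageRep *BitmapInKnownFormat(NSString *path)
{
    NSImage *image = [[[NSImage alloc] initWithContentsOfFile:path] autorelease];
    if (!image) return nil;

    NSSize size = [image size];
    NSInteger width  = (NSInteger)ceil(size.width);
    NSInteger height = (NSInteger)ceil(size.height);

    /* A rep whose pixel format *we* chose: 8 bits per sample, RGBA,
       meshed (non-planar), calibrated RGB. */
    NSBitmapImageRep *rep = [[[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:width
                      pixelsHigh:height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:0   /* let AppKit choose */
                    bitsPerPixel:0] autorelease];
    if (!rep) return nil;

    /* Draw the image into the rep. */
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:
        [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
    [image drawInRect:NSMakeRect(0, 0, width, height)
             fromRect:NSZeroRect
            operation:NSCompositeCopy
             fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    /* [rep bitmapData] is now height rows of [rep bytesPerRow] bytes,
       each pixel 8-bit RGBA, regardless of what the file contained. */
    return rep;
}

Passing NSZeroRect as the source rect draws the whole image, and the draw is where the decode-plus-convert cost actually lands.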
You probably only need to do this once though… if you're seeing repeated copying in bitmapData, that suggests you're alternately drawing and asking for pixel data from the image. If you're never modifying the data, you could instead store the data on the side in addition to the rep. That storage can be done via a subclass, a wrapper object, a global dictionary mapping rep objects to data, objc associative storage, etc.

Or, look at just drawing the image into a bitmap whenever you need to see the data. That's less work for you to implement, and you may find that it performs well, depending on what was really costing you before. You could consider reusing one bitmap to draw into from multiple images, to avoid repeatedly malloc'ing and free'ing memory.
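For what it's worth, the reuse-one-bitmap idea might look roughly like this with a CGBitmapContext (just a sketch off the top of my head, untested; kMaxSide is a made-up bound you'd pick based on your own images, and this relies on the 10.6 ability to pass NULL data to CGBitmapContextCreate):

#import <Cocoa/Cocoa.h>

static const size_t kMaxSide = 2048;   /* placeholder upper bound */

void ProcessImages(NSArray *paths)
{
    /* One reusable 32-bit host-endian premultiplied-ARGB sRGB context,
       allocated once for the whole batch. */
    CGColorSpaceRef space = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    CGContextRef ctx = CGBitmapContextCreate(NULL, kMaxSide, kMaxSide, 8, 0,
        space, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
    CGColorSpaceRelease(space);

    for (NSString *path in paths) {
        NSImage *image = [[[NSImage alloc] initWithContentsOfFile:path] autorelease];
        CGImageRef cgImage = [image CGImageForProposedRect:NULL context:nil hints:nil];
        if (!cgImage) continue;

        size_t w = MIN(CGImageGetWidth(cgImage), kMaxSide);
        size_t h = MIN(CGImageGetHeight(cgImage), kMaxSide);

        /* Clear out the previous image, then draw the new one. */
        CGContextClearRect(ctx, CGRectMake(0, 0, kMaxSide, kMaxSide));
        CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cgImage);

        /* CGBitmapContextGetData(ctx) now holds this image's pixels,
           CGBitmapContextGetBytesPerRow(ctx) bytes per row, in the format
           chosen up front.  No per-image malloc/free. */
        unsigned char *pixels = CGBitmapContextGetData(ctx);
        (void)pixels;   /* ... read pixels here ... */
    }

    CGContextRelease(ctx);
}

The per-image work is just the draw; the buffer format is whatever you asked for when you created the context.

-Ken
Cocoa Frameworks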