GitHub user dieppe opened a pull request:

    https://github.com/apache/cordova-plugin-camera/pull/185

    [iOS] Major refactor

    This is a rebased version of the previous pull request:
https://github.com/apache/cordova-plugin-camera/pull/101

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jnuine/cordova-plugin-camera ios-refactor

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/cordova-plugin-camera/pull/185.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #185
    
----
commit 5124898c237e5db78128d2b97ef74587b2690dd7
Author: Clément Vollet <[email protected]>
Date:   2015-06-05T16:46:49Z

    Major refactor
    
    The main change is that metadata is now correctly handled in most
    (all?) cases. I also think some bugs have been fixed, but I did not
    check the bug list ;)
    
    As a foreword, I want to say that I am far from being an expert in
    either Objective-C or the iOS APIs (I have only made some basic
    patches before). So please correct me if I said or did anything wrong.
    
    As a general design decision, I tried to break the whole workflow into
    small, readable methods that either return something (I chose to use
    reference passing when returning more than one result, which I find
    clearer than returning an NSDictionary whose contents are specified
    nowhere), or follow the completionBlock / resultBlock / failureBlock
    convention that Apple's APIs seem to use.
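
    A hypothetical sketch of that block convention (the method and helper
    names below are illustrative, not taken from the actual patch):

```objc
// Hypothetical sketch of the completionBlock / resultBlock / failureBlock
// convention described above; processImage:, applyEditsToImage: and
// updatedMetadata:forImage: are illustrative names, not the actual patch.
- (void)processImage:(UIImage *)image
            metadata:(NSDictionary *)metadata
         resultBlock:(void (^)(UIImage *processed, NSDictionary *newMetadata))resultBlock
        failureBlock:(void (^)(NSError *error))failureBlock
{
    NSError *error = nil;
    // Reference passing (an "out" parameter) when a helper produces more
    // than one result, instead of an anonymous NSDictionary.
    UIImage *processed = [self applyEditsToImage:image error:&error];
    if (processed == nil) {
        failureBlock(error);
        return;
    }
    resultBlock(processed, [self updatedMetadata:metadata forImage:processed]);
}
```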
    
    There are three main paths from the image retrieval to the result.
    
    1. The user wants the original image from the PhotoLibrary or from the
    SavedPhotoAlbum with a destination type NATIVE_URI.
    2. The user wants a modified version of the image, wherever it comes
    from.
    3. The user wants the geolocation.
    
    There are also some combinations of options that are disallowed because
    they don’t make any sense:
    1. If the user wants an image with source type CAMERA and destination
    type NATIVE_URI, it must be saved to the photo album. An image that is
    not in the album does not have a NATIVE_URI.
    2. If the user wants an image with source type PHOTOLIBRARY or
    SAVEDPHOTOALBUM that needs editing (resize, orientation correction)
    and a destination type of NATIVE_URI, it must also be saved to the
    photo album. Otherwise, we would return the original image with no
    modifications.
    
    As we can see, if we want a destination type of NATIVE_URI, there are
    only two possible paths: either we want an unmodified version of a
    library image, or we need to save to the photo album.
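
    Those constraints could be sketched as a single validation step
    (CDVPictureOptions and the enum values mirror the plugin's options,
    but the helper method and error codes here are illustrative):

```objc
// Hypothetical validation sketch for the two disallowed combinations;
// optionsRequireEditing: and the error codes are illustrative.
- (NSError *)validateOptions:(CDVPictureOptions *)options
{
    BOOL wantsNativeUri = (options.destinationType == DestinationTypeNativeUri);
    if (!wantsNativeUri || options.saveToPhotoAlbum) {
        return nil; // saving to the album always yields a valid NATIVE_URI
    }
    if (options.sourceType == UIImagePickerControllerSourceTypeCamera) {
        // 1. A camera image that is never saved has no NATIVE_URI.
        return [NSError errorWithDomain:@"CDVCamera" code:1 userInfo:nil];
    }
    if ([self optionsRequireEditing:options]) {
        // 2. Edits cannot be reflected in the original asset's NATIVE_URI.
        return [NSError errorWithDomain:@"CDVCamera" code:2 userInfo:nil];
    }
    return nil;
}
```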
    
    Regarding metadata, there is only one case where we do not need it:
    when the destination type is NATIVE_URI, the source is the library,
    and no edits have to be done.
    
    Otherwise, we need to fetch it either from the
    UIImagePickerControllerMediaMetadata key of the info dictionary
    provided by UIImagePickerController (when the source type is CAMERA),
    or from the ALAsset corresponding to the library image (see
    https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIImagePickerControllerDelegate_Protocol/index.html#//apple_ref/doc/constant_group/Editing_Information_Keys).
    This actually introduces asynchronicity in the metadata retrieval,
    since we first need to fetch the ALAsset using
    UIImagePickerControllerReferenceURL. *We cannot use UIImage as it
    strips out most (all?) of the metadata*.
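
    As a sketch, assuming the ALAssetsLibrary API available at the time
    (deprecated since iOS 9) and illustrative completion/failure blocks:

```objc
// Sketch of the asynchronous metadata fetch described above;
// completionBlock and failureBlock are illustrative names.
#import <AssetsLibrary/AssetsLibrary.h>

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
NSURL *referenceURL = info[UIImagePickerControllerReferenceURL];
[library assetForURL:referenceURL
         resultBlock:^(ALAsset *asset) {
             // The full metadata (EXIF, GPS, ...) lives on the asset's
             // default representation; a UIImage would have stripped it.
             NSDictionary *metadata = [[asset defaultRepresentation] metadata];
             completionBlock(metadata);
         }
        failureBlock:^(NSError *error) {
            failureBlock(error);
        }];
```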
    
    Once we have retrieved the metadata, we need to modify it for every
    transformation of the image (orientation, resize, GPS).
    
    In the previous code, saving was done after computing the CDVResult.
    This is no longer the case, as we need access to the saved ALAsset
    when returning a NATIVE_URI. Saving is now done after all image
    editing has occurred.
    
    Since the metadata retrieval and the album saving are done
    asynchronously, the processing pipeline itself is made asynchronous.
    We can use resultBlock and failureBlock to get the processed image
    with its corresponding metadata, or to get the saved asset's
    NATIVE_URI.
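
    For instance, the save step can hand the asset URL straight to the
    result block (everything except the ALAssetsLibrary method itself is
    an illustrative name):

```objc
// Sketch of the save step: ALAssetsLibrary's completion block hands back
// the saved asset's URL, which is exactly the NATIVE_URI to return.
// processedImage, updatedMetadata, resultBlock and failureBlock are
// illustrative names.
[library writeImageToSavedPhotosAlbum:processedImage.CGImage
                             metadata:updatedMetadata
                      completionBlock:^(NSURL *assetURL, NSError *error) {
                          if (error != nil) {
                              failureBlock(error);
                          } else {
                              resultBlock(assetURL); // the NATIVE_URI of the saved asset
                          }
                      }];
```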
    
    CropToSize is not a valid option (it is always set to NO anyway), so
    it has been removed.
    
    The documentation lacks clarity on two points (see FIXME #1 & #2):
    - Does the quality option apply to library images as well? If so, we
    cannot use NATIVE_URI when quality is set, unless we save to the
    photo album.
    - Does the orientation correction apply to library images as well?
    Same consequence. I actually removed that option at first, but that
    would mean reading the EXIF data in the JS code and orienting the
    image appropriately (except for NATIVE_URIs, since the web view does
    that automagically, but orientation correction does not make sense
    there).
    
    - usesGeolocation does not make sense for an image with a source type
    other than CAMERA (except maybe in some fringe cases?)
    - Fixed a bug where allowsEdit + saveToPhotoAlbum + NATIVE_URI caused
    the image to be saved in the wrong orientation (despite the metadata
    having the correct orientation, AFAICT).
    
    WARNING:
    I did not test the usesGeolocation option due to a lack of time (and,
    frankly, a lack of interest;).
    
    ACKNOWLEDGMENT:
    Shoutout to @athibaud, who also took part in the refactoring effort.
    Any bugs are on him! ;)

----

