Hello all.

I have an app in which the user can transform an image (rotate,
translate, scale). On top of this image there is a mask, so the user
fits the portion of the image he wants inside the mask.

Then I need to compose an image based on what is inside the mask. The
problem is that the composed image must be 900w x 567h px, and the mask
is 1/3 of those dimensions.

So this is what I do.

For the transformation I'm using gesture recognizers.
When I want to compose the image, these are the steps I take:

 UIGraphicsBeginImageContext(size);  // size is a CGSize of 900x567
 CGContextRef ctx = UIGraphicsGetCurrentContext();
 CGContextSetAllowsAntialiasing(ctx, YES);
 CGContextSetShouldAntialias(ctx, YES);

 // Apply the user's accumulated gesture transform.
 CGContextConcatCTM(ctx, _imageView.transform);

 // Draw the image centered on the context origin so the transform
 // pivots around the image center.
 [_imageView.image drawAtPoint:CGPointMake(-_imageView.image.size.width / 2,
                                           -_imageView.image.size.height / 2)];

 UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
 UIGraphicsEndImageContext();


So I get an image of the right size, but its content is completely
different from what was inside the mask while the user was editing.

I have tried scaling and translating the context, but no luck. I get
close results, but the position and scale are still wrong.
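
My understanding is that the context needs to map the mask rect onto
the full 900x567 output before the view transform is applied. Here is
a minimal sketch of what I mean, assuming a hypothetical _maskFrame
ivar holding the mask's frame in the same superview coordinate space
as _imageView, and assuming the image is displayed 1:1 and centered in
the view:

 CGSize size = CGSizeMake(900, 567);
 UIGraphicsBeginImageContextWithOptions(size, NO, 0);
 CGContextRef ctx = UIGraphicsGetCurrentContext();

 // 1. Scale so the mask rect fills the whole output context
 //    (the mask is 1/3 of the output, so this is a 3x scale).
 CGContextScaleCTM(ctx,
                   size.width / _maskFrame.size.width,
                   size.height / _maskFrame.size.height);

 // 2. Move the origin to the mask's top-left corner.
 CGContextTranslateCTM(ctx, -_maskFrame.origin.x, -_maskFrame.origin.y);

 // 3. Recreate the image view's placement: move to its center,
 //    apply the user's transform (UIView transforms pivot around
 //    the view center), then draw the image centered.
 CGContextTranslateCTM(ctx, _imageView.center.x, _imageView.center.y);
 CGContextConcatCTM(ctx, _imageView.transform);

 CGSize imgSize = _imageView.image.size;
 [_imageView.image drawAtPoint:CGPointMake(-imgSize.width / 2,
                                           -imgSize.height / 2)];

 UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
 UIGraphicsEndImageContext();

The scale and translate steps come before the concat so that
everything drawn afterwards is expressed in superview points, and
whatever sat under the mask lands on the output. Is that the right
order of operations, or am I still missing a step?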

I would appreciate any help you can provide on this matter, as I don't
know what I'm missing...

Thanks in advance

Gustavo