We are not looking to allow the application to retrieve the preview
frames during video recording.  The callback I'm talking about is for
another feature; I'll email you the general feature description
directly.

Another thought occurred to me: does the current architecture allow
the displayed preview size to be different from the encoder video
size?  It seems as if this may not be the case, but maybe I'm
misunderstanding something here.  Before I understood that the media
recorder reconnects with camera services and effectively takes over
from the application, I thought that the architecture could
potentially allow the application to dictate the displayed frame size
while the media recorder would dictate the video encoding frame size.
A use case where these differing frame sizes could be useful is when
encoding at very low resolution, say in order to send the video over
MMS, but allowing the preview frame size to be larger so that the
preview is not just a tiny box in the middle of a large display.
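
To make the use case concrete, here is roughly what I had hoped the
architecture would allow (a sketch only; the surface holder and the
output path are placeholders, and I realize the recorder's reconnect
may simply take over the preview configuration today):

    import android.hardware.Camera;
    import android.media.MediaRecorder;
    import android.view.SurfaceHolder;

    public class MmsRecordingSketch {
        public void start(SurfaceHolder previewHolder) throws Exception {
            // The application configures a large preview so the
            // viewfinder fills the display rather than sitting in a
            // tiny box.
            Camera camera = Camera.open();
            Camera.Parameters params = camera.getParameters();
            params.setPreviewSize(480, 320);
            camera.setParameters(params);
            camera.setPreviewDisplay(previewHolder);
            camera.startPreview();

            // The media recorder encodes at QCIF, small enough to
            // send over MMS.
            MediaRecorder recorder = new MediaRecorder();
            recorder.setCamera(camera);
            recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
            recorder.setVideoSize(176, 144);
            recorder.setVideoFrameRate(15);
            recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H263);
            recorder.setOutputFile("/sdcard/mms-clip.3gp");
            recorder.prepare();
            recorder.start();
        }
    }

The point is simply that the setPreviewSize and setVideoSize calls
describe different resolutions; whether the framework honors that
today is exactly my question.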

Other than the frame size/format/rate, what other parameters would
need to be locked down in order to maintain the camera services to
media recorder contract?  Could these parameters be passed from the
media recorder to camera services/HAL outside of the normal
application path (camera parameters), thus indicating an active video
encoding use case and allowing the HAL to reject changes that might
affect that use case?  This could remove the need for the reconnect
and still allow Camera/App additions to function.
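
To make that question concrete, here is the kind of arrangement I was
picturing on the camera services side (purely illustrative;
RecordingLock and the parameter key strings are hypothetical, not
existing framework classes):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical sketch: the media recorder registers the parameter
    // keys its encode path depends on, and camera services rejects
    // application changes to those keys while recording is active,
    // instead of reconnecting and taking over the whole camera.
    public class RecordingLock {
        private final Set<String> lockedKeys;

        public RecordingLock(String... keys) {
            this.lockedKeys = new HashSet<String>(Arrays.asList(keys));
        }

        public boolean allowsChange(String key) {
            return !lockedKeys.contains(key);
        }
    }

    // For example, the recorder might lock down frame size/format/rate:
    //
    //   RecordingLock lock = new RecordingLock(
    //           "preview-size", "preview-format", "preview-frame-rate");
    //
    //   // Camera services would consult the lock before forwarding an
    //   // application parameter change to the HAL:
    //   if (!lock.allowsChange("preview-size")) {
    //       // reject or defer the application's request
    //   }

Everything else (for example the additional callback I mentioned)
would continue to flow through to the HAL as it does when no
recording is in progress.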

Steve.

On Feb 18, 10:54 am, Dave Sparks <davidspa...@android.com> wrote:
> I don't want to add another API to MediaRecorder.
>
> We can add another parameter to the setParameter interface in the
> camera HAL to indicate if the call comes from the owner of the camera.
> The camera driver is then responsible for preventing other callers
> from changing anything that might affect the owner's use case. This
> provides flexibility to vendors to add new features without explicitly
> adding them to a whitelist in the CameraService or the MediaRecorder.
>
> Which callback did you want to get at the application level? It's not
> practical to bring the video frames back to the app because the
> encoder node needs to control the release of frames.
>
> On Feb 18, 7:21 am, steve2641 <steve2...@gmail.com> wrote:
>
>
>
> > Dave,
>
> > Can you explain a little further on what you mean when you say "enable
> > a subset of safe APIs"?
>
> > Since a good percentage of the interface configuration is handled by
> > the Camera Parameters, and the Camera Parameters are pretty much a
> > direct interface between the App and the Camera HAL, I'm not sure how
> > the framework could limit these interactions without having a direct
> > hand in how the Camera HAL processes the parameter changes.
>
> > Also, in my case I need the app to continue to receive an additional
> > camera callback.  Will this be handled by this future release?  The
> > other thought we had was adding the controls and callbacks for this
> > new feature to the Media Recorder API as well as the Camera API, but
> > generally we don't like the idea of having to duplicate the
> > interfaces.  Would it make any sense to have a secondary API (say
> > Camera Effects) for those camera related features that are common
> > between the two use cases and should/could be usable by the
> > application in both cases?
>
> > Steve.
>
> > On Feb 17, 10:00 am, Dave Sparks <davidspa...@android.com> wrote:
>
> > > We cannot allow the application to make any changes to the camera that
> > > could potentially violate the contract between the camera and the
> > > media recorder. For example, let's say that the video frame size is
> > > set to CIF, and the application changes it to QCIF in the middle of a
> > > recording. This will cause the encoder to read beyond the end of the
> > > frame and probably crash the media server process.
>
> > > In a future release, we will enable a subset of safe APIs that the
> > > application can use while video recording is in progress.
>
> > > On Feb 17, 6:20 am, steve2641 <steve2...@gmail.com> wrote:
>
> > > > Hi,
>
> > > > In the cupcake baseline and beyond, the potential exists for an
> > > > application to create and configure a Camera object and then pass that
> > > > object to the media recorder to be used to deliver preview frames to
> > > > be encoded into a video file.  This is fine except for the fact that
> > > > the media recorder does a "reconnect" to the Camera object, thus
> > > > seemingly disconnecting the application from the camera object.
>
> > > > Does this "reconnect" prevent the application from any further
> > > > interaction with the camera device?
>
> > > > I'm pretty sure, but please correct me if I'm wrong, that at a minimum
> > > > the "reconnect" will prevent the application from being able to
> > > > receive any callbacks that may be of interest.  For instance, say an
> > > > implementation adds a new capability to the camera interface which
> > > > includes a callback to denote completion of a requested task.  When
> > > > the camera object is passed off to the media recorder, the application
> > > > seems to lose access to these expanded features that may be
> > > > desired during the capture of the video.  We have such an enhancement
> > > > planned, but I don't feel comfortable disclosing the details in a
> > > > public forum.
>
> > > > It would seem more appropriate, at least in my case, that the media
> > > > recorder "reconnect" be changed to a very specific "request frames"
> > > > call which would not take over all of the potential callbacks that
> > > > could occur, but simply tell Camera Services that the video engine
> > > > requests the preview frames.  With this, Camera Services could continue
> > > > to service the application as it does when no video record use case is
> > > > running.  The addition would be that Camera Services would also have
> > > > an additional client to pass frames to, the video engine.  When Camera
> > > > Services receives a preview frame from HAL it would pass it to the
> > > > Surface Flinger, potentially pass it to the application (when
> > > > requested), and potentially send it to the video engine (also, when
> > > > requested).
>
> > > > In the case where the application does not provide a Camera object, the
> > > > media recorder would still have to create its own camera object.
> > > > That's a hook that seems very clean and clearly needed.
>
> > > > Comments?
>
> > > > Thanks,
>
> > > > Steve.
>
> > > - Show quoted text -- Hide quoted text -
>
> - Show quoted text -