> On Sat, Sep 26, 2009 at 6:32 PM, Guennadi Liakhovetski
> <g.liakhovet...@gmx.de> wrote:
>> On Sat, 26 Sep 2009, Dongsoo, Nathaniel Kim wrote:
>>
>>> On Fri, Sep 25, 2009 at 3:07 AM, Guennadi Liakhovetski
>>> <g.liakhovet...@gmx.de> wrote:
>>> > Hi Hans
>>> >
>>> > Thanks for keeping us updated. One comment:
>>> >
>>> > On Wed, 23 Sep 2009, Hans Verkuil wrote:
>>> >
>>> >> In the afternoon we discussed the proposed timings API. There was no
>>> >> opposition to this API. The idea I had to also use this for sensor
>>> >> setup turned out to be based on a misconception of how S_FMT relates
>>> >> to sensors. ENUM_FRAMESIZES basically gives you the possible
>>> >> resolutions to which the scaler hidden inside the bridge can scale
>>> >> the native sensor resolution. It does not enumerate the various
>>> >> native sensor resolutions, since there is only one. So S_FMT really
>>> >> sets up the scaler.
>>> >
>>> > Just as Jinlu Yu noticed in his email, this doesn't reflect the real
>>> > situation, I am afraid. You can use binning and skipping on the
>>> > sensor to scale the image, and you can also use the bridge to do the
>>> > scaling, as you say. Worse than that, there are also cases where
>>> > there are _several_ ways to perform scaling on the sensor, among
>>> > which one can freely choose, and the host can scale too. And indeed
>>> > it makes sense to scale at the source to save bandwidth and thus
>>> > increase the framerate. So, what I'm currently doing on sh-mobile is
>>> > to try to scale on the client in the best possible way, and then use
>>> > bridge scaling to produce the exact result.
>>> >
>>>
>>> Yes, I agree with you. It is highly necessary to provide a clear
>>> method that indicates which device should do the scaling job. When I
>>> use application processors that provide a camera peripheral with a
>>> scaler inside and an external ISP attached, there is no way to use
>>> both scaler features; I just have to choose one of them.
>>
>> Well, I don't necessarily agree; in fact, I do use both scaling
>> engines in my sh setup. The argument is as mentioned above: bus usage
>> and framerate optimisation. So, what I am doing is: I try to scale on
>> the sensor as close to the target as possible, and then scale further
>> on the host (SoC). This works well, only the calculations are not
>> trivial. But you only have to perform them once during setup, so it's
>> not time-critical. It might be worth implementing such calculations
>> somewhere central to reduce the chance of errors in specific drivers.
>> Same with cropping.
>>
>
> I think that is a good approach. And considering image quality, I
> should also bypass the host scaler when the user requests exactly a
> resolution supported by the external camera ISP, because some of the
> scalers embedded in camera interfaces are very poor in image quality
> and performance, and may thus reduce the framerate as well. So the
> user can choose "with scaler" or "without scaler".

There are two ways of doing this: one is to have a smart driver that
attempts to do the best thing automatically (soc-camera, uvc, gspca);
the other is to give the application writer full control of the SoC's
capabilities through the media controller. Through the media controller
you will be able to set up the sensor scaler and a SoC scaler
independently.

For a digital camera, for example, you probably want to control the
hardware from the application in order to get the very best results,
rather than let the driver do it.

Regards,

         Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom
