Stagefright already supports OMX codecs in the Froyo release. This is
how we integrate hardware codecs and 3rd party codecs.
On Aug 24, 7:53 am, Lakshman lakshma...@gmail.com wrote:
Hi Dave Sparks,
As you mentioned in the Q&A session at the last Google I/O, the Stagefright media
framework
Does your raw frame match the format that the encoder is expecting?
I think PV encoder uses YCbCr420 planar format.
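As a rough sketch of what that format implies (function and key names here are illustrative, not OpenCORE API): in YCbCr 4:2:0 planar layout the Y plane is full resolution and the Cb/Cr planes are each subsampled 2x2, so a frame occupies 1.5 bytes per pixel:

```python
def yuv420p_layout(width, height):
    """Plane sizes and offsets for a YCbCr 4:2:0 planar frame.

    Y is full resolution; Cb and Cr follow it, each at a quarter of
    the luma plane's size (2x2 chroma subsampling).
    """
    y_size = width * height
    c_size = (width // 2) * (height // 2)
    return {
        "y_offset": 0,
        "cb_offset": y_size,            # Cb plane starts right after Y
        "cr_offset": y_size + c_size,   # Cr plane follows Cb
        "frame_size": y_size + 2 * c_size,  # 1.5 bytes per pixel total
    }

# A QCIF (176x144) frame: 176 * 144 * 3 / 2 = 38016 bytes.
print(yuv420p_layout(176, 144)["frame_size"])  # 38016
```

If the raw frames handed to the encoder use a different layout (e.g. semi-planar or interleaved), the decoded output typically shows up as a solid green or corrupted image, which matches the symptom described above.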
On Jun 26, 4:04 am, scs sek scs...@gmail.com wrote:
Hi,
I am not able to see the playback of the clip recorded using the PV
H264 encoder. I can see only a green patch. I am
Your gstreamer audio sink should be an AudioTrack, not ALSA.
On Jun 22, 10:44 pm, Nilly ni...@oriolesoftware.com wrote:
hi,
Android uses ALSA by default.
I want to use the ALSA sink in GStreamer audio; since Android is already using the ALSA
device, I am getting a DEVICE BUSY error.
But somehow I got out of
The software decoders don't use multiple buffers because there is no
benefit since the decoders run synchronously.
For hardware decoders, you specify the number of buffers during the
OMX negotiation. We have used anywhere from 4 to 12 buffers depending
on the specific codec (AVC usually needs
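A toy model of that negotiation (the field name mirrors OMX's nBufferCountMin, but this is not the real IL call sequence): the client reads the port definition and then sets an actual buffer count no lower than the component's advertised minimum:

```python
def negotiate_buffer_count(requested, n_buffer_count_min):
    """Clamp the client's requested buffer count to the component's
    advertised minimum, as the OMX port definition requires."""
    return max(requested, n_buffer_count_min)

# A hypothetical AVC decoder advertising a minimum of 4 buffers:
print(negotiate_buffer_count(2, 4))   # request is raised to 4
print(negotiate_buffer_count(12, 4))  # 12 is accepted as-is
```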
It looks like your OpenCORE project is out of sync. Which branch are
you building from?
There were some changes we made internally in the Cupcake release that
PV didn't have when they did the OC 2.0 release. I wonder if you have
a mix of old and new.
On Jun 5, 5:38 pm, Alan Cramer
Sreedhar
On Thu, May 7, 2009 at 9:50 PM, Dave Sparks davidspa...@android.com wrote:
I don't think SurfaceFlinger has anything to do with the problem. You
can check this by looking at the window size requested by VideoView
after the video size is determined. If the requested view size
Does the media server process have write access to the directory where
you are trying to create the file?
On May 29, 7:31 pm, max max.xi...@gmail.com wrote:
Hi Guru,
I am working on enabling sound recording on my device.
I put all the Android stuff on a 4GB SD card; it is formatted using
With the Cupcake release, by default we do software color conversion
in the video MIO. If your hardware supports it, you can override this
and use the hardware CC. This is the way it is done on the G1 and
other msm7k based devices.
In the release after Donut, you'll be able to use a GL shader to do
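For illustration, the kind of per-pixel work the software color converter does can be sketched like this (a BT.601-style fixed-point YCbCr-to-RGB565 conversion; the exact constants used in the video MIO may differ):

```python
def yuv_to_rgb565(y, cb, cr):
    """Convert one video-range YCbCr pixel to RGB565 (BT.601 fixed-point)."""
    c, d, e = y - 16, cb - 128, cr - 128
    clamp = lambda v: max(0, min(255, v))
    r = clamp((298 * c + 409 * e + 128) >> 8)
    g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8)
    b = clamp((298 * c + 516 * d + 128) >> 8)
    # Pack the 8-bit channels into 5/6/5 bits.
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

# Video-range white (Y=235, Cb=Cr=128) packs to 0xFFFF; black (Y=16) to 0x0000.
print(hex(yuv_to_rgb565(235, 128, 128)), hex(yuv_to_rgb565(16, 128, 128)))
```

Doing this on the CPU for every frame is expensive, which is why overriding it with a hardware color converter (as on msm7k) or, later, a GL shader is attractive.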
You're not really going to find that information from the logs. The
call flow involves 3 different processes acting in concert. The only
way to understand it is to dig into the source code.
On May 17, 9:53 pm, somu somuaric...@gmail.com wrote:
Hi, I would like to know how to enable logs for
the rescaling of QVGA to VGA using the configurations and
setup mentioned in 2. How can this be done? Please suggest some ideas.
Regards,
Sreedhar
On Fri, May 1, 2009 at 5:58 AM, Dave Sparks davidspa...@android.com wrote:
This capability is already built into the framework. You just scale
the SurfaceView to the desired size. You probably want to adjust the
height and width to maintain the aspect ratio of the original
material. The rescaling is handled in SurfaceFlinger by the blitter
engine. In fact, if you use
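The height/width adjustment mentioned above amounts to a letterbox fit; a minimal sketch (the function name is ours, not a framework API):

```python
def fit_keep_aspect(video_w, video_h, view_w, view_h):
    """Largest size that fits inside the view while preserving the
    video's aspect ratio (letterbox/pillarbox fit)."""
    scale = min(view_w / video_w, view_h / video_h)
    return int(video_w * scale), int(video_h * scale)

# QVGA (320x240) into a VGA (640x480) view fills it exactly;
# into a 480x320 view it pillarboxes to 426x320.
print(fit_keep_aspect(320, 240, 640, 480))  # (640, 480)
print(fit_keep_aspect(320, 240, 480, 320))  # (426, 320)
```

SurfaceFlinger's blitter then stretches the video buffer to whatever view size you request, so computing the view size this way is all the application needs to do.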
I believe you need android.permission.ACCESS_SURFACE_FLINGER in your
manifest.
On Apr 24, 8:22 am, Guian guiandou...@gmail.com wrote:
I'm porting my app to the T-Mobile G1. This app uses OpenGL (using the
GLSurfaceView from the API demos).
My app works fine on the emulator, with these minor errors
This should happen automatically. Did you reset the data partition?
On Apr 16, 2:51 am, Luca Belluccini lucabellucc...@gmail.com wrote:
I built the installer image for x86 (added include frameworks/base/
data/sounds/OriginalAudio.mk in mk files).
The ogg files are correctly placed under
than the video size, requiring
SurfaceFlinger to scale up or down.
On Apr 1, 8:54 am, jerryfan2000 jerryfan1...@gmail.com wrote:
Hi Mr. Sparks,
I have the same problem on an Eee PC using the i915 video driver.
On Mar 24, 11:17 am, Dave Sparks davidspa...@android.com wrote:
The video bit rate is a little higher than the G1 officially supports.
What device are you testing it on?
On Mar 23, 10:26 am, chiaminghu...@gmail.com wrote:
Dear Dave Sparks,
Thanks for your help.
This is the media file information for the file I am testing regarding this issue.
writing decoders.
On Jan 9, 11:36 AM, Dave Sparks davidspa...@android.com wrote:
I'm guessing a bit here because I haven't looked too deeply into it,
but I believe you'll need an FLV parser/demuxer plus a VP6 decoder.
The parser would have to be integrated as a native OpenCore node
SurfaceFlinger is the 2D abstraction layer. If you remove it, you will
break all the native code that talks to it. You will have to rewrite
the WindowManager code. It's a fundamental part of the system.
What do you want to achieve?
On Feb 26, 2:48 am, F H expelia...@googlemail.com wrote:
As I
This is probably best asked in android-framework, the Packet Video
engineers are more likely to answer.
However, OMX enforces pretty strict timing on interface calls, I
believe all calls must return within 20 msecs. I can't think of many
encoder/decoder nodes that can satisfy that timing
prevent the need for reconnect and still allow
Camera/App additions to function.
Steve.
On Feb 18, 10:54 am, Dave Sparks davidspa...@android.com wrote:
I don't want to add another API to MediaRecorder.
We can add another parameter to the setParameter interface in the
camera HAL
the
interfaces. Would it make any sense to have a secondary API (say
Camera Effects) for those camera related features that are common
between the two use cases and should/could be usable by the
application in both cases?
Steve.
On Feb 17, 10:00 am, Dave Sparks davidspa...@android.com wrote:
We cannot allow the application to make any changes to the camera that
could potentially violate the contract between the camera and the
media recorder. For example, let's say that the video frame size is
set to CIF, and the application changes it to QCIF in the middle of a
recording. This will
The binder error message might be a symptom of the problem and not the
cause. Do you have a log?
On Feb 11, 10:16 am, Manish Sharma manishsharm...@gmail.com wrote:
Hi All,
We have integrated our decoder into PVPlayer and are able to play back mp4/3gp
files.
Sometimes we have observed a pop-up
-porting/browse_thread/thread/c...
http://groups.google.com/group/android-porting/browse_thread/thread/9...
On Feb 10, 7:03 pm, Dave Sparks davidspa...@android.com wrote:
Got it. That's a new requirement. Write up a bug and we'll try to
include it in a future release.
On Feb 10, 11:45 am
When you call autoFocus(), you get a callback after focus is completed
with a success or fail indicator. This is where you would display your
in-focus indicator in the view finder. If the user releases the
shutter button, then the app does nothing. If the user presses the
shutter button, you call
You are posting in the android-porting list. You need to be more
explicit about your situation.
Which hardware platform?
Which branch of the Android tree?
Do you have a log?
On Feb 10, 12:40 am, forest forest...@gmail.com wrote:
when I set DEBUG=true in
No, SCO output is not routed through AudioFlinger.
On Feb 10, 10:38 am, anand b anand@gmail.com wrote:
Hi,
I have some questions on how AudioFlinger works. From the code, it
looks like all mixing is done in AudioFlinger on the user side.
Is it possible to bypass this for some routes?
, Dave Sparks davidspa...@android.com wrote:
Are you using floating point in your codec?
On Feb 2, 1:42 am, Manish Sharma manishsharm...@gmail.com wrote:
Hi Dave,
With the software decoder this is not happening. Most likely the issue is
with our video
driver.
Regards,
Manish
On Sat, Jan
disabled the h/w decoding and enabled only display, then the playback is a bit
improved. We are using a tasklet to process the interrupts in the video driver. We
have observed that this is causing the issue. Any pointers?
Thanks and Regards,
Manish
On Tue, Feb 3, 2009 at 3:53 AM, Dave Sparks
davidspa
These all sound like symptoms of the same problem.
We don't have anything internally using the ALSA drivers and I haven't
looked at them myself. Maybe somebody from WindRiver can lend a hand.
On Jan 10, 12:05 am, Neo naveenkrishna...@gmail.com wrote:
On Jan 9, 11:31 pm, Dave Sparks davidspa
Are you using floating point in your codec?
On Feb 2, 1:42 am, Manish Sharma manishsharm...@gmail.com wrote:
Hi Dave,
With the software decoder this is not happening. Most likely the issue is with our
video driver.
Regards,
Manish
On Sat, Jan 31, 2009 at 7:06 AM, Dave Sparks davidspa
Something is obviously generating a lot of softirqs. Does this happen
when you use the software codec?
On Jan 29, 11:45 pm, Manish Sharma manishsharm...@gmail.com wrote:
Hi All,
We have replaced the video decoder of PV player with our H/w video decoder
and are able to display the video.
It doesn't seem difficult to deal with dynamic changes in preview. The
application will make a request through setParameters() and this may
result in a change in the preview heap.
It's too late to get this change into Cupcake, but we should be able
to work on this in the next release.
On Jan
Hi Steve,
I think your first approach is fine, i.e. having the camera return the
stride through a read-only key/value pair.
I'd like to understand more about dynamic changes in the preview heap.
Is it the case that the preview heap size may change while preview is
active? Also, is the preview
Agreed, this is all technically feasible.
However, as I said before, I don't think Skia currently supports
rendering a portion of a JPEG image to RGB565 for display.
On Jan 21, 6:17 am, anandb anand@gmail.com wrote:
Yep, I agree that for the displaying part, a smaller resized image
could
Yes, that works fine for capturing a large image.
But what if you want to look at the image after capture to see if the
subject's eyes were closed? Now you need to render large image to
display (i.e. convert from JPEG to RGB).
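One common workaround is to decode the JPEG at a reduced, power-of-two subsampling factor just large enough to cover the screen (the same idea as BitmapFactory's inSampleSize; the sketch below is ours):

```python
def sample_size(src_w, src_h, dst_w, dst_h):
    """Largest power-of-two subsampling factor that still keeps both
    decoded dimensions at or above the requested display size."""
    n = 1
    while src_w // (n * 2) >= dst_w and src_h // (n * 2) >= dst_h:
        n *= 2
    return n

# Reviewing a 2048x1536 capture on a 320x480 screen: decode at 1/2
# size (1024x768); halving again would drop below the screen height.
print(sample_size(2048, 1536, 320, 480))  # 2
```

Decoding at a fraction of full resolution keeps the post-capture review fast and avoids allocating a full-size RGB buffer just to check whether the subject's eyes were closed.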
On Jan 20, 6:48 pm, hanchao3c hancha...@gmail.com wrote:
If using
SurfaceFlinger is the surface compositor and PixelFlinger is the
blitter.
On Jan 17, 5:08 am, Chen Yang sunsety...@gmail.com wrote:
Hi Mathias:
Just interested to know: what's the relationship between SurfaceFlinger
and PixelFlinger?
Meanwhile, is there some document on
We are moving to a density independent pixel representation, so the
specific screen resolution is irrelevant.
On Jan 16, 12:10 am, anand b anand@gmail.com wrote:
Hi,
WVGA can refer to displays of the following resolutions:
800x480 or 854x480 or 864x480
Can you please clarify what
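For context, density independence means layouts are specified in dp and converted to physical pixels against a 160 dpi baseline; a sketch of the conversion (the exact rounding rule here is an assumption):

```python
def dp_to_px(dp, dpi):
    """Density-independent pixels to physical pixels:
    1 dp = 1 px at the 160 dpi baseline density."""
    return round(dp * dpi / 160)

# The same 100 dp widget: 100 px on a 160 dpi HVGA panel,
# 150 px on a 240 dpi WVGA panel.
print(dp_to_px(100, 160), dp_to_px(100, 240))  # 100 150
```

This is why the specific WVGA variant (800x480 vs 854x480 vs 864x480) does not matter to applications that size their UI in dp.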
/dev/eac is only necessary if you are using the generic user space
audio driver. If you are using an ALSA driver, /dev/eac should not come
into play at all unless you are running your code in emulation.
On Jan 9, 2:24 am, Anil Sasidharan anil...@gmail.com wrote:
Hi leemgs,
I guess
I'm guessing a bit here because I haven't looked too deeply into it,
but I believe you'll need an FLV parser/demuxer plus a VP6 decoder.
The parser would have to be integrated as a native OpenCore node. You
could integrate the codec as a native OpenCore node or as an OpenMax
codec using the IL
, Say I want to record something, or I want to transmit something, or
even extreme cases such as wanting to hear it over Bluetooth?
Am I right in this regard?
regards,
Pavan
On Wed, Dec 31, 2008 at 12:12 PM, Dave Sparks davidspa...@android.com wrote:
If the radio output is exposed
The abstraction layers in Android are admittedly inconsistent. Audio
and camera both use a C++ pure virtual interface (e.g.
AudioHardwareInterface.h), while other places like LED's and blitter
functions have a C struct of function pointers.
Because we run the actual device image in the emulator,
Video record is not supported in master or release-1.0 and is only
somewhat working in cupcake. What branch are you using? How are you
generating the frames from the camera?
On Dec 18, 12:32 am, forest...@gmail.com wrote:
when starting the video recorder, dalvikvm crashes; logcat
It's exactly as Andrew described it. There are two codec registries,
one for vendor supplied codecs and one for OpenCore software codecs.
If it is unable to find a vendor codec that satisfies the input/output
requirements, it falls back to OpenCore software codec. The reason for
that is that you
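The lookup order can be modeled like this (registry contents and codec names below are made up for illustration):

```python
def pick_codec(mime, vendor_registry, software_registry):
    """Prefer a vendor-supplied (typically hardware) codec; fall back
    to the OpenCore software registry when no vendor entry matches."""
    if mime in vendor_registry:
        return vendor_registry[mime]
    return software_registry.get(mime)

# Hypothetical registry contents:
vendor = {"video/avc": "OMX.vendor.avc.decoder"}
software = {"video/avc": "pv_avc_soft", "video/mp4v-es": "pv_m4v_soft"}

print(pick_codec("video/avc", vendor, software))      # vendor codec wins
print(pick_codec("video/mp4v-es", vendor, software))  # falls back to software
```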
?
Thx
John
On Wed, Dec 10, 2008 at 10:04 AM, Dave Sparks [EMAIL PROTECTED] wrote:
It's exactly as Andrew described it. There are two codec registries,
one for vendor supplied codecs and one for OpenCore software codecs.
If it is unable to find a vendor codec that satisfies the input/output
You could hack up a parser that takes a stream of raw WMA frames. I'd
probably start from the WAVE file parser.
On Dec 8, 9:04 pm, Yogi [EMAIL PROTECTED] wrote:
Hi All,
I have integrated a WMA codec into Android (i.e., it compiles successfully), but
I want to test the integrated codec without a parser.
The AudioHardwareInterface abstraction layer allows for user space
drivers. You need an implementation that matches your driver. Some
want ALSA, some want OSS, still others want some proprietary interface
with user space code that isn't required to be open sourced under GPL.
By abstracting the
it is decided whether something goes under hardware
or not.
Also, how does one hook up the 2D and 3D hardware acceleration? I saw
GPUHardware under surfaceflinger and Blithardware.cpp under ui. Is the
interface defined somewhere
thanks
mohan
On 12/2/08, Dave Sparks [EMAIL PROTECTED] wrote:
that how we are going to use our h/w codecs in
Android?
It would be appreciated if you could explain the solution
using the example of the G1 itself, which you said uses h/w
codecs.
On Nov 26, 8:13 pm, Dave Sparks [EMAIL PROTECTED] wrote:
There are two ways to do this:
1. Integrate
There are two ways to do this:
1. Integrate your codecs into the PV OpenCore framework. You can
either use OpenMax, which is the way that the G1 h/w decoders are
integrated, or you can write your own decoder node class based on PV's
built-in classes. For encoders in the current code, you'll need
You have two choices for taking advantage of your h/w acceleration:
1. Integrate your codecs into the OpenCore framework. You can do this
using the existing OpenMax decoder node, or you can adapt one of PV's
native decoder nodes to work with your hardware.
2. Implement your own media player
below code into init.rc
But the result is the same. T.T
on early-init
device /dev/pcmC0D0p 0666 root audio
Who can help me? :)
On 11/26/08, Dave Sparks [EMAIL PROTECTED] wrote:
AudioHardwareGeneric tries to open /dev/eac. If /dev/eac fails to open
or respond correctly to I/O
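The probe-and-fall-back behavior can be sketched as follows (the paths and ordering are illustrative; the real AudioHardwareGeneric code is C++ and only knows about /dev/eac):

```python
import os

def open_audio_device(paths):
    """Try each candidate device node in order; return (path, fd) for
    the first one that opens, or None if none can be opened."""
    for path in paths:
        try:
            # O_NONBLOCK so probing a busy device node doesn't hang.
            fd = os.open(path, os.O_RDWR | os.O_NONBLOCK)
            return path, fd
        except OSError:
            continue
    return None

# Hypothetical probe order: generic driver node first, then an ALSA node.
result = open_audio_device(["/dev/eac", "/dev/snd/pcmC0D0p"])
print("no audio device available" if result is None else result[0])
```

If the open fails (or I/O on the node misbehaves), the generic driver has nothing to fall back to, which is why boards using ALSA need an AudioHardwareInterface implementation that matches their own driver.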