You misread the code (although I admit it is confusing). There are 2
different "sequence" variables.
One is LayerBase::sequence, used to disambiguate the ordering of
layers, as you guessed. The other one is LayerBase::State::sequence,
which is entirely different and is used during transactions as
It uses the magnetic north.
There are plans to have a way to compensate for the magnetic declination.
Mathias
On Mon, Feb 9, 2009 at 2:56 PM, sterling...@gmail.com
wrote:
>
> SENSOR_ORIENTATION defines its azimuth rotation around the z-axis with
> 0 degrees equal to North. The tilt compensated
till the kernel obviously OpenGl in
> between. ?
>
> I have struggled with it a lot but still the complete picture is not yet
> clear to me :(
Mathias
>
> Regards
> Nimit
>
> On Mon, Feb 9, 2009 at 2:33 PM, Mathias Agopian
> wrote:
>>
>> On Mon, Feb 9,
Can you give me a brief idea of which components / layers / frameworks
>> need to be modified
>>
>> in Android to support h/w acceleration (if I want to modify it).
>>
>> On Mon, Feb 9, 2009 at 1:12 PM, Mathias Agopian
>> wrote:
>>>
>>> On Sun,
I don't know when it'll
be ready. "As soon as possible" is the best answer I can give.
Mathias
> Thanks & Regards
> Nimit
>
>
> On Mon, Feb 9, 2009 at 12:08 PM, Mathias Agopian
> wrote:
Android is not ready at the moment to work with a different GPU than
that of the G1. We're working on it :-)
Mathias
On Sun, Feb 8, 2009 at 10:31 PM, Nimit Manglick wrote:
> Hi,
>
> I have my Android working on a 2.6.24 kernel on a TI OMAP 3530 EVM, now I want
> to
>
> enable hardware acceleration
Hello,
On Fri, Feb 6, 2009 at 5:36 AM, F H wrote:
>
> Hi Mathias!
>
> Can you outline how LayerBuffers are intended to work in SurfaceFlinger?
They are layers that get the content from "somewhere else". The
content is pushed from an external entity as it becomes ready.
> - Are these buffers st
ngles, but the diagonal line is visible for some
>>> frames. This could be possible if the 2 triangles of the displayed
>>> frame are from 2 different video frames.
>>>
>>> So we are suspecting that the screen is not composited completely by
>>> SurfaceF
d
> frame are from 2 different video frames.
>
> So we are suspecting that the screen is not composited completely by
> SurfaceFlinger. Can you give some hints?
I don't know what's going on, but it surely looks like your /dev/fb
synchronization doesn't work.
mathias
Could you give more details? Which hardware is this?
Could be a bug in your display driver or GL driver.
Mathias
On Mon, Feb 2, 2009 at 6:27 AM, anand b wrote:
>
> Hi,
>
> We are seeing some artefacts while running the OpenGL application
> "Translucent GLSurfaceView" on our board. Could you please
On Sat, Jan 31, 2009 at 4:54 AM, gan wrote:
>
> Hi:
>
> I added some logging in SurfaceFlinger, and based on it I think (thanks
> for correcting me if something is wrong):
> a)Java "SurfaceSession" would create(native side) a
> "mSharedHeapAllocator" for each Surface client,
> This is done at native Cli
it will improve, but we're still not planning to support iostream or exceptions.
mathias
On Fri, Jan 30, 2009 at 5:50 AM, F H wrote:
> Android's STL support seems a bit limited - I can't see things like: list,
> iostream, fstream, string, vector, map, ...
>
> Is this support something that will
Hi Ken,
You can have several "entities" (be it drivers, or userspace code)
write into the *same* /dev/input/event#.
Then, on the application side, the "data" sensor HAL module can read
from that /dev/input node, which it receives through the data_open()
"fd" parameter. There is no need to poll(
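A minimal sketch of the reader side, assuming a Linux evdev node. The scale factor and event codes here are illustrative; the real HAL uses whatever mapping its kernel driver defines.

```c
#include <linux/input.h>
#include <unistd.h>

/* Hypothetical scale: 256 LSB per g. The real value comes from the
 * sensor part's datasheet. */
#define ACC_SCALE (9.80665f / 256.0f)

/* Convert one raw evdev value to m/s^2. */
static float acc_value(int raw)
{
    return raw * ACC_SCALE;
}

/* Blocking read of one event from the fd handed over via data_open().
 * A single blocking reader like this needs no poll() loop of its own. */
static int read_sensor_event(int fd, struct input_event *ev)
{
    ssize_t n = read(fd, ev, sizeof(*ev));
    return n == (ssize_t)sizeof(*ev) ? 0 : -1;
}
```

Because several drivers can write into the same /dev/input/event#, the HAL only ever needs this one fd to see all of them.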
t because this is
> hardware specific, tailoring it to our own APIs won't upset anything?
Mathias
>
>
> On 1/26/09, Mathias Agopian wrote:
>>
>> On Mon, Jan 26, 2009 at 8:45 AM, F H wrote:
>>> Hi Mathias,
>>>
>>> - It seems to me that
te a pointer to its clients which live in
a different process and even internally it may not always make sense
to use a pointer when using h/w accelerated graphics: a fd+offset is
one step in the right direction in terms of abstraction, but it's not
ambitious enough and as I said we will pr
uh? don't you want <string.h> instead?
mathias
On Fri, Jan 23, 2009 at 9:31 AM, F H wrote:
> Hi All,
>
> I'm building some stuff and am including linux/string.h for prototyping &
> get errors. The file I'm picking up is pretty empty
> (bionic/libc/kernel/common/linux/string.h). Is it meant to be?
>
> Tha
have to build your own "everything" parallel to the
current mechanisms. I don't have enough information about your system.
Personally I would advise against going this route for now.
Mathias
>
> On 1/22/09, Mathias Agopian wrote:
>>
>> On Thu, Jan 22, 2009 a
Can you elaborate? What information?
Mathias
>
> Fred.
>
> On 1/22/09, Mathias Agopian wrote:
>>
>> Hi,
>>
>> On Tue, Jan 20, 2009 at 3:37 AM, F H wrote:
>>> Hi Mathias,
>>>
>>> Android typically creates two buffers per surface. Presumably
Hi,
On Tue, Jan 20, 2009 at 3:37 AM, F H wrote:
> Hi Mathias,
>
> Android typically creates two buffers per surface. Presumably this is so
> that one of them can be locked by an application for rendering while the
> other (complete buffer) is available to SurfaceFlinger for compositing?
correct
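That ping-pong arrangement can be sketched like this. The names are illustrative; SurfaceFlinger's real bookkeeping lives in its shared control block, not in a plain struct.

```c
/* Two buffers per surface: the client draws into the "back" buffer while
 * SurfaceFlinger composites the "front" one; a swap just flips an index. */
struct double_buffered_surface {
    void *buffer[2];
    int back; /* index of the buffer the client renders into */
};

static void *back_buffer(struct double_buffered_surface *s)
{
    return s->buffer[s->back];
}

static void *front_buffer(struct double_buffered_surface *s)
{
    return s->buffer[1 - s->back];
}

static void swap_buffers(struct double_buffered_surface *s)
{
    /* The just-finished back buffer becomes the new front buffer. */
    s->back = 1 - s->back;
}
```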
llowed to "see" only the pages they need.
You would have to replicate this mechanism if your kernel allocator
cannot create multiple heaps. The easiest way would be to start from
our pmem driver and modify it to suit your needs.
Mathias
>
> Thanks,
> Fred.
>
>
> On
ment such a sensor.
Mathias
>
> Thanks,
> Anand
>
> On Fri, Jan 16, 2009 at 3:43 AM, Mathias Agopian
> wrote:
>>
>> On Thu, Jan 15, 2009 at 1:04 PM, Ken Schultz wrote:
>>>
>>> Mathias,
>>>
>>> I am looking to use the cupcake version of
Hi,
On Thu, Jan 15, 2009 at 9:53 AM, F H wrote:
> I have a few questions regarding integration of an accelerated capability
> into Android.
Before we go further, I would like to point out that there is no
OpenGL ES driver model in Android 1.0. We're trying very hard to
finalize it for the cupc
On Thu, Jan 15, 2009 at 4:09 PM, Dianne Hackborn wrote:
> Multiple screen support has really not been implemented at all, so you
> probably have a fair amount of work ahead of you. Mathias can probably help
> you some more, but one thing to think about is you probably want to have the
> window m
s.h currently *is* the documentation. I'll be happy to
answer questions here if need be.
Mathias
>
> Thanks,
> Ken
>
> On Jan 9, 1:26 am, Mathias Agopian wrote:
>> Hi,
>>
>> Android doesn't "require" yaw / pitch / roll per se. In theory, well
On Tue, Jan 13, 2009 at 12:41 AM, pramod gurav wrote:
> On Sat, Jan 10, 2009 at 4:11 AM, Mathias Agopian
> wrote:
>>
>> On Fri, Jan 9, 2009 at 5:31 AM, Sean McNeil wrote:
>>>
>>> What you are trying to do cannot be done. Accelerometers give you a
>>&
drivers according to android.
>> The sensors on my h/w are ak8973(magnetic x,y,z) and lis302dl(acc
>> x,y,z).
>> They do not give the yaw, pitch roll.
>> I could get the values on my android application.
>> But can't get the yaw, pitch and roll. I was expecting that Android can
On Fri, Jan 9, 2009 at 1:26 AM, pramod gurav wrote:
>
> On Fri, Jan 9, 2009 at 12:56 PM, Mathias Agopian
> wrote:
>>
>> Hi,
>>
>> Android doesn't "require" yaw / pitch / roll per se. In theory, well
>> written applications should check f
Hi,
Android doesn't "require" yaw / pitch / roll per se. In theory, well-written
applications should check for the presence of these sensors.
Unfortunately, in Android 1.0 there wasn't an easy way to integrate a
new sensor h/w.
I think it is more sane to target the "cupcake" release of Android,
Hi,
On Tue, Dec 30, 2008 at 3:52 PM, camako wrote:
>
>
> I've seen the messages about Android not being ready for h/w
> accelerated OpenGL|ES, etc.. We have an existing OpenGL|ES driver
> (with an EGL implementation that supports full screen apps only) which
> we've enabled Android's surfacefli
Hello,
this is a very crude mechanism (which is probably temporary) to load
an alternate debug version of the OpenGL ES library. This can be used
by certain vendors for debugging applications or their library.
The debug library is optional.
Mathias
On Mon, Dec 22, 2008 at 6:25 PM, Alan wrote
On Mon, Dec 15, 2008 at 11:50 PM, Luca Belluccini
wrote:
>
> If I want to create a new device, conforming to Android HAL, I should
> use one of the following approaches:
> 1. App - Runtime Service - lib
> 2. App - Runtime Service - Native Service - lib
> 3. App - Runtime Service - Native Da
On Tue, Dec 16, 2008 at 2:47 AM, anandb wrote:
>
> I had a few questions on graphics and video ..
>
> 1. Is the blitter on G1 aware of mmu and works with virtual memory?
no.
> 2. Is openGL mainly used for composing textures which have been
> created by app/blitter? And as a fallback when blitte
You cannot replace libGLES_CM.so; this is part of the system. Instead
you need to rename your library "libhgl.so" and it will be
automatically loaded by the system.
I doubt it will work though unless you are using the correct NativeWindowType.
Android is not ready at this point to use a different
format's description there (just swap the values for R and
B). Make sure EGLDisplaySurface.cpp returns your new format.
And it should just work :-)
Mathias
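For a 16-bit surface, "just swap the values for R and B" amounts to exchanging the two 5-bit fields in the packed pixel. A sketch (these helpers are illustrative, not the pixelflinger format-table code):

```c
#include <stdint.h>

/* RGB565: red in the top 5 bits, green in the middle 6, blue in the low 5. */
static uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* BGR565: same layout with the red and blue fields swapped. */
static uint16_t pack_bgr565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((b >> 3) << 11) | ((g >> 2) << 5) | (r >> 3));
}
```

The green field is unaffected, which is why only the R and B entries of the format description need to change.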
?
> Thanks
>
>
> On Dec 8, 3:50 pm, Mathias Agopian <[EMAIL PROTECTED]> wrote:
>> Hi,
>>
>> I talked
that it would
> break GL_RGB? So if the LCDC needs a BGR framebuffer, but Android gives an
> RGB one, what can I do to work around it? Could Android give a BGR display?
>
>
>
> On Dec 5, 8:44 am, David Given <[EMAIL PROTECTED]> wrote:
>> Mathias Agopian wrote:
>>
>> [...
On Sun, Dec 7, 2008 at 7:20 PM, Ben Leslie <[EMAIL PROTECTED]> wrote:
>
> Hi Brian,
>
> I was wondering if this is something that can be changed to make the
> logs more easy for people to parse. i.e: Can we set things up to not
> try and load libhgl when it is not required, and same for pmem? This
pixelflinger won't work on MIPS. It generates only ARM code at the
moment, fixing that won't be an easy task. It will actually revert to
the C reference implementation which will be too slow (I mean it).
mathias
On Sat, Dec 6, 2008 at 6:14 PM, Jean-Baptiste Queru <[EMAIL PROTECTED]> wrote:
>
> W
On Fri, Dec 5, 2008 at 4:19 AM, Anson <[EMAIL PROTECTED]> wrote:
> Hello all:
>
> I noticed that in fakeCamera.cpp, the fakeCamera generates some so-called
> 'YUV422SP' image data,
> from the program for data generating,
> the format is something like YUV422P format,
>
> I don't know why the fakeC
= tmp2|(tmp0<<5);
>>
>> > > rgb |= (tmp0<<16);
>>
>> > > *( (uint32_t*)(pDst+dst_pitch) )= rgb;
>>
>> > > //load the top two pixels
>> > > Y = *pY++;
>>
>
On Thu, Dec 4, 2008 at 3:55 PM, David Given <[EMAIL PROTECTED]> wrote:
> Mathias Agopian wrote:
> [...]
>> The notion of "high" byte and "low" byte in a 24-bits frame buffer is
>> a little odd. Since these are not 32 bits or 16 bits numbers, I'm no
in that order. One byte
of red, one byte of green, followed by one byte of blue. R,G,B... btw,
this order happens to be OpenGL's GL_RGB internal format for textures.
You *cannot* change load_store.cpp like that, you have now broken
OpenGL's glTexImage2D() :-(
Mathias
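That byte order can be made concrete with a tiny sketch: a 24-bit GL_RGB pixel is three consecutive bytes in memory, R then G then B, independent of CPU endianness because each component is stored as its own byte.

```c
#include <stdint.h>

/* Store one 24-bit GL_RGB pixel: bytes land in memory as R, G, B in that
 * order, on any endianness, because they are written one byte at a time. */
static void store_rgb888(uint8_t *dst, uint8_t r, uint8_t g, uint8_t b)
{
    dst[0] = r;
    dst[1] = g;
    dst[2] = b;
}
```

Swapping dst[0] and dst[2] here is exactly the change that would break glTexImage2D() for GL_RGB textures, since texture uploads go through the same stores.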
>
> On Dec 3,
outines in pixelflinger/codeflinger/load_store.cpp
mathias
On Wed, Dec 3, 2008 at 12:39 AM, FlyCry <[EMAIL PROTECTED]> wrote:
>
> Yes, red and blue.
> Thanks!
>
>> On Dec 3, 4:20 pm, Mathias Agopian <[EMAIL PROTECTED]> wrote:
>> On Tue, Dec 2, 2008 at 11:45 PM, FlyCr
t even the configs), but do as efficient a conversion as possible on the
>> final swap - potentially easy to do if that final swap can be singled out.
>>
>> Mathias Agopian
>> <[EMAIL PROTECTED]
>> gl
On Tue, Dec 2, 2008 at 2:55 PM, <[EMAIL PROTECTED]> wrote:
>
> We are considering rendering video into video planes/layers supported
> by the OMAP3430. Does Android have infrastructure in place for
> supporting hardware video planes separate from the graphics plane?
> I've looked around SurfaceFl
ed, just the
implementation of the EGLNativeWindowType.
Unless the panel cannot be configured to 565 (that would be crazy), I
wouldn't go down that road, if it's not going to improve visual
quality.
>
>
>
>
>
>
> Mathias Agopian
> <[E
ld be able to handle). We just need to be
absolutely sure that whichever value we pick won't conflict with
future versions of the platform (I already added a few formats post
1.0).
Mathias
>
>
>
>
>
>
> Mathias Agopian
> &
n't believe the framebuffer cannot be configured to 32-bits like this:
bb00gg00rr00
this wouldn't cost anything more in h/w (just more address space, but
who cares?), and it would be a lot more efficient from a software
point of view.
mathias
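One possible reading of the bb00gg00rr00 suggestion is: each 6-bit component padded up to its own byte, so software only ever touches whole bytes. This packing is purely illustrative; the actual register layout depends on the LCD controller.

```c
#include <stdint.h>

/* Pack an 18-bpp (6:6:6) pixel so each component occupies the top 6 bits
 * of its own byte: blue in byte 2, green in byte 1, red in byte 0.
 * Hypothetical layout, sketched from the "bb00gg00rr00" suggestion. */
static uint32_t pack_666_bytewise(uint8_t r6, uint8_t g6, uint8_t b6)
{
    return ((uint32_t)(b6 & 0x3F) << 18) |
           ((uint32_t)(g6 & 0x3F) << 10) |
           ((uint32_t)(r6 & 0x3F) << 2);
}
```

The appeal is that converting from 888 is then just three shifts and stores per pixel, with no cross-byte bit fiddling.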
> Thanks.
>
Hi,
On Mon, Dec 1, 2008 at 8:22 PM, FlyCry <[EMAIL PROTECTED]> wrote:
>
> My board has an LCD of 18 bpp, but the Android UI is 16 bpp. So the
> display is abnormal when Android runs. Could Android be configured to 18
> bpp? And how to do it?
> Thanks for anyone attention to this topic.
What's the for
On Wed, Nov 26, 2008 at 2:10 AM, Phil HUXLEY <[EMAIL PROTECTED]> wrote:
>
> Thanks Mathias,
>
> - So in the world of enabling GL rendering and software rendering to the
> same surface (and related copies), what are the points that require the
> buffer to be copied and does the copy need to go both
Hi,
On Wed, Nov 26, 2008 at 2:20 AM, Pivotian <[EMAIL PROTECTED]> wrote:
>
> sorry Mathias for repeating my question, but every time I post this
> question it's overridden by some other questions and didn't get
> your attention. I have some doubts regarding video codecs:
>
> I suppose that Androi
ly encourage you to try to make your h/w work with the
"trick" above where you make a copy of the framebuffer upon
eglSwapBuffers(), this will require you to do at least half of the
work needed for the "real thing".
Mathias
>
>
>
>
>
>
>
opencore provided by Android.
>> So you mean to say that with above configuration of hardware its not
>> currently possible to enable Android to use hardware accelerator
>> instead of android software codecs.
>>
>> On Nov 26, 10:42 am, Mathias Agopian <[EMAIL PRO
no board, so without the real hardware, how
>> am I going to know that? As I specified earlier, do I have to make any
>> changes anywhere in the Android code to support hardware acceleration
>> apart from adding the "libhgl.so" file?
>>
>> On Nov 26, 3:03 am,
Hi,
You need to supply "libhgl.so", which must be a regular OpenGL ES
library. libGLES_CM.so will load it at runtime and defer to it for 3D
h/w acceleration.
Implementing your own h/w accelerated libhgl.so right now is very
difficult because Android is not ready for this just yet. This is an
ar
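The loading scheme can be sketched as below. The path and the fallback behavior are illustrative; the real loader inside libGLES_CM.so resolves a whole table of GL entry points rather than just opening the library.

```c
#include <dlfcn.h>
#include <stddef.h>

/* Sketch of how a wrapper like libGLES_CM.so can defer to a vendor
 * library at runtime. Returns NULL when no h/w library is installed,
 * in which case the caller falls back to the software renderer. */
static void *load_hgl(void)
{
    return dlopen("libhgl.so", RTLD_NOW);
}
```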
Hi,
On Wed, Nov 19, 2008 at 3:31 PM, rd.jones <[EMAIL PROTECTED]> wrote:
>
> Does bionic's C++ library support RTTI (e.g. for dynamic_cast)? I did
> not see anything in the CAVEATS file about RTTI. Then again, I didn't
> see anything about the lack of STL support either, but found out the
> hard
Hi,
On Tue, Nov 18, 2008 at 11:15 PM, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
>
> the camera provides YUV422 data, but OpenGL "just shows the Y plane of
> YUV buffers" (frameworks\base\libs\surfaceflinger\LayerBase.cpp
> 624), so I must convert YUV422 to RGB565. Camera preview is OK but a
> little slow. Why
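A fixed-point per-pixel conversion like the one being discussed can be sketched as follows. The coefficients are a common BT.601-style integer approximation, not necessarily the ones the poster used; a fast converter would also process pixel pairs, since YUV422 shares one U/V pair between two Y samples.

```c
#include <stdint.h>

/* Convert one YUV pixel (8-bit video-range components) to RGB565 using
 * integer BT.601-style coefficients. */
static uint16_t yuv_to_rgb565(int y, int u, int v)
{
    int c = y - 16, d = u - 128, e = v - 128;
    int r = (298 * c + 409 * e + 128) >> 8;
    int g = (298 * c - 100 * d - 208 * e + 128) >> 8;
    int b = (298 * c + 516 * d + 128) >> 8;

    /* Clamp to [0, 255] before packing. */
    if (r < 0) r = 0; else if (r > 255) r = 255;
    if (g < 0) g = 0; else if (g > 255) g = 255;
    if (b < 0) b = 0; else if (b > 255) b = 255;

    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```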
On Mon, Nov 17, 2008 at 12:11 AM, Sean McNeil <[EMAIL PROTECTED]> wrote:
>
> Benno wrote:
>> On Nov 4, 1:01 pm, Mathias Agopian <[EMAIL PROTECTED]> wrote:
>>
>>> Hi Ben,
>>>
>>> On Nov 1, 8:11 pm, "Ben Leslie" <[EMAIL PROTECTE
Hi,
There is a bug in the current version of SurfaceFlinger that prevents
it from working on systems which don't have page-flipping capabilities.
This should be fixed soon. Note that even when it is fixed, it will
degrade performance a bit and may cause some tearing.
Mathias
On Tue, Nov 11, 2008 at
Hello,
The errors that matter here are:
E/GLLogger( 1553): validate_display_surface:779 error 300d (EGL_BAD_SURFACE)
E/SurfaceFlinger( 1553): not enough memory for layer bitmap size=234668032
D/MemoryDealer( 1553): LayerBitmap (0x870d8, size=8388608)
D/MemoryDealer( 1553): 0: 00087110 | 0x