Re: [MeeGo-dev] Fwd: [pulseaudio-discuss] [Alsa-user] Pops/Crackles Messing up my audio...
On 08/08/11 14:32, Nasa wrote:

Hi, Does anyone have any insight into the question asked by Col? (See below). Thanks, Nasa

----- Original Message -----
'Twas brillig, and Nasa at 07/08/11 21:26 did gyre and gimble:

Hi, I was hoping I could get some help troubleshooting some audio quality issues I am running into. Specifically, I am getting a lot of *pops/crackles* when I play audio files. It shows up when I move from 2-channel to 4/5-channel audio (via pavucontrol) - I am using a USB X-Fi sound card with MeeGo IVI. I have tried changing the resample method (going from ffmpeg to high-quality) and setting tsched=0, neither making any noticeable difference. The only things I saw in /var/log/messages that seemed relevant were:

Aug 6 16:59:26 localhost pulseaudio[570]: alsa-sink.c: ALSA woke us up to write new data to the device, but there was actually nothing to write!
Aug 6 16:59:26 localhost pulseaudio[570]: alsa-sink.c: Most likely this is a bug in the ALSA driver 'snd_usb_audio'. Please report this issue to the ALSA developers.
Aug 6 16:59:26 localhost pulseaudio[570]: alsa-sink.c: We were woken up with POLLOUT set -- however a subsequent snd_pcm_avail() returned 0 or another value < min_avail.

I would follow the advice and talk to the ALSA developers. Ensure the driver is up-to-date. Stefan

And this was spat out of dmesg:

[ 21.857184] ALSA sound/usb/mixer.c:2110: status interrupt: c0 00
[ 21.897058] ALSA sound/usb/mixer.c:2110: status interrupt: c0 00
[ 21.961062] ALSA sound/usb/mixer.c:2110: status interrupt: c0 00

I have already eliminated the speakers and the amp (played music from a different source and they sounded fine). Let me know what else I should post (I could record the sound, if that would be useful). Thanks in advance.

While this could be an issue in the ALSA driver itself, can I ask if you're applying any volume changes to your streams or sink? I personally am not; however, the MeeGo project may be.
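For anyone reproducing the tsched=0 test mentioned above: the flag goes on the ALSA module line in /etc/pulse/default.pa. The module-udev-detect line below is how stock configs load ALSA devices; the exact module line may differ per image.

```
# /etc/pulse/default.pa -- disable PulseAudio's timer-based scheduling
# for ALSA devices and fall back to interrupt-driven scheduling:
load-module module-udev-detect tsched=0
```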
There are over 250 patches against the base pulseaudio package (0.9.22), of which some seem to have something to do with volume settings:

0249-bluetooth-Fix-HSP-volume-handling.patch
0248-bluetooth-restore-original-sco_-sink-src-set_volume-.patch
0247-bluetooth-fix-set_volume_cb-on-sco-over-pcm.patch
0240-volume-proxy-small-lib-to-allow-communicating-volume.patch
0224-pactl-Accept-more-volume-specification-formats.patch
0223-sink-input-Add-volume_writable-to-pa_sink_input.patch
0190-alsa-mixer-Refactoring-merge-element_mute_volume-ele.patch
0189-alsa-mixer-Implement-constant-volume.patch
0140-dbus-Always-accept-mono-volumes-when-setting-device-.patch
0133-volume-Add-Orc-based-optimised-volume-scaling.patch
0132-volume-Fix-sample-array-size-for-tests.patch
0131-volume-Make-tests-use-only-valid-volumes.patch
0130-alsa-mixer-Fix-a-git-am-cockup-in-b0f72311.patch
0129-volume-Add-a-PA_VOLUME_UI_MAX-define-for-the-recomme.patch
0124-introspect-Client-side-implementation-for-has_volume.patch
0118-Revert-Add-volume-ramping-feature-envelop-fix.patch
0117-Revert-Add-volume-ramping-feature-sink-input-modific.patch
0116-Revert-Add-volume-ramping-feature-sink-modification.patch
0114-Revert-core-volume-ramping-fix.patch
0107-virtual-sink-Add-a-modarg-for-forcing-flat-volume.patch
0106-virtual-sink-Add-a-modarg-for-enabling-volume-sharin.patch
0105-Implement-the-volume-sharing-feature.patch
0090-Allow-read-only-or-non-existing-sink-input-volume.patch
0042-core-Use-volume_change_safety_margin-when-rewinding-.patch
0022-volume-Trivial-cosmetics-remove-a-space.patch
0011-alsa-sink-take-base-volume-into-account-when-applyin.patch
0001-volume-Add-explicit-checks-for-ARMv6-instructions.patch
0001-fix-the-assumption-that-volume-is-always-positive.patch

So you can see all the patches and what's in them here: https://build.pub.meego.com/package/files?package=Pulseaudio&project=home%3Anasa (not that I'm expecting and/or requesting you do that -- I just put this here for reference). Nasa

BTW: I
will try your suggestion when I get home this evening.

There could be a problem with optimized paths for software volume adjustments. You can disable these optimisations via a special environment var: PULSE_NO_SIMD=1. It's worth checking this to see if it's that area that's at fault. Col

___ pulseaudio-discuss mailing list pulseaudio-disc...@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/pulseaudio-discuss
___ MeeGo-dev mailing list MeeGo-dev@meego.com http://lists.meego.com/listinfo/meego-dev http://wiki.meego.com/Mailing_list_guidelines
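The PULSE_NO_SIMD hint above is plain environment plumbing; a minimal sketch of wiring it up follows (the child shell stands in for the daemon, since actually restarting pulseaudio needs a live audio stack):

```shell
# In real use you would restart the daemon with the variable set:
#   pulseaudio -k && PULSE_NO_SIMD=1 pulseaudio --start
# PULSE_NO_SIMD=1 tells PulseAudio to skip its optimised software-volume
# paths and use the plain C code instead, which isolates that area.
PULSE_NO_SIMD=1 sh -c 'echo "PULSE_NO_SIMD=$PULSE_NO_SIMD"'
# prints: PULSE_NO_SIMD=1
```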
Re: [MeeGo-dev] play tcp src in qml video element
On 07/20/11 21:14, Rathi, Somya wrote: How can I play a tcp video src in qml?

Sorry, that question lacks a lot of detail. Anyway, if you want to play a video, just use the URI pointing to it; gstreamer will figure out how to play it. Add error handling to your application to understand why it won't play if it doesn't. Stefan

Thanks, Somya

___ MeeGo-dev mailing list MeeGo-dev@meego.com http://lists.meego.com/listinfo/meego-dev http://wiki.meego.com/Mailing_list_guidelines
Re: [MeeGo-dev] missing dist-upgrade path on major releases
Hi,

On 07/18/11 17:58, Kok, Auke-jan H wrote: On Mon, Jul 18, 2011 at 6:38 AM, Stefan Kost enso...@hora-obscura.de wrote:

hi, there is a long standing bug https://bugs.meego.com/show_bug.cgi?id=9205 regarding a missing upgrade path for new major releases. Why does MeeGo want to be special here (technical/legal/other issue)? I'd really like to know; the current status is quite embarrassing. Is there a practical intermediate solution available? Like installing all updates and then updating the repositories to change the 1.1 to 1.2 in the repo URL and running a dist-upgrade? Has anyone tried that?

We're working on bringing solutions in the long term. The focus for 1.3 will be on utilizing the btrfs features we can easily enable: device reset (factory reset) will be possible, and the shell installer will be able to install into a new btrfs subvolume on an existing installation - effectively saving all your data (and potentially even your OLD os install as well, as a bonus). Doing an in-place OS upgrade is probably something we'll not work on, as there are too many risks involved. zypper dup works well, but the nature of rpm updates outside the tested update path is almost always guaranteed to give problems. Auke

What are the problems in particular you see here? I am running an (rpm based) opensuse installation on my home computer, which I have upgraded since version 6.2. I know that it can cause issues, but in that case one could still reinstall. So far I have always been able to resolve the conflicts. Stefan

___ MeeGo-dev mailing list MeeGo-dev@meego.com http://lists.meego.com/listinfo/meego-dev http://wiki.meego.com/Mailing_list_guidelines
[MeeGo-dev] missing dist-upgrade path on major releases
hi, there is a long standing bug https://bugs.meego.com/show_bug.cgi?id=9205 regarding a missing upgrade path for new major releases. Why does MeeGo want to be special here (technical/legal/other issue)? I'd really like to know; the current status is quite embarrassing. Is there a practical intermediate solution available? Like installing all updates and then updating the repositories to change the 1.1 to 1.2 in the repo URL and running a dist-upgrade? Has anyone tried that? Stefan

___ MeeGo-dev mailing list MeeGo-dev@meego.com http://lists.meego.com/listinfo/meego-dev http://wiki.meego.com/Mailing_list_guidelines
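The repo-retargeting step suggested above can be sketched in shell. The repo file and URL below are illustrative; a real attempt would edit /etc/zypp/repos.d/*.repo in place and then run zypper ref && zypper dup:

```shell
# Work on a scratch copy instead of the live /etc/zypp/repos.d/.
mkdir -p /tmp/repos.d
cat > /tmp/repos.d/meego-core.repo <<'EOF'
[meego-core]
baseurl=http://repo.meego.com/MeeGo/releases/1.1/core/repos/ia32/packages/
EOF

# Point every 1.1 repo URL at 1.2, keeping .bak backups of the originals:
sed -i.bak 's|/1\.1/|/1.2/|g' /tmp/repos.d/*.repo
grep baseurl /tmp/repos.d/meego-core.repo
# then: zypper ref && zypper dup
```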
Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list
hi,

On 25.03.2011 18:28, Cory T. Tusar wrote: On 03/25/2011 11:23 AM, Edward Hervey wrote: On Fri, 2011-03-25 at 10:58 -0400, Cory T. Tusar wrote: On 03/17/2011 06:57 AM, Stefan Kost wrote: snip

In 7 Transparency you need to highlight what your proposal adds to the existing features.
* Transport protocol: handled e.g. by gstreamer already; standards like DLNA specify subsets for interoperability already.
* Transparent Encapsulation and Multiplexing: could you please elaborate why one would need the non-automatic mode. I think it does not make sense to let the application specify what format the stream is in, if the media-framework can figure it out (in almost all of the cases). In some corner cases one can e.g. use custom pipelines and specify the format (e.g. a ringtone playback service might do that if it knows the format already).

As a possible example (pulled from recent experience), automagic determination of stream parameters takes time (and CPU cycles). A non-automatic mode would be (was) helpful in instances where the application knows exactly what type of stream to expect, and there is a requirement for an absolute minimum of start-up time between the user pressing the Play button and video appearing on the screen.

A lot of improvement has gone into GStreamer over the past year to speed up the pre-roll/typefinding/setup of playback pipelines. This was mainly to get gst-discoverer to be faster than exiftool at getting information about media files, which it now is ... considering it also decodes the first audio/video frame(s). The only case I can think of where you would gain time would be for live mpeg-ts streams, where you could provide the PAT/PMT information which you would have cached previously (in order not to have to wait for the next occurrence). But that would still require you to wait for the next keyframe to appear unless you already have a few seconds of live back-buffer on the machine (in which case you would also have cached PAT/PMT).
Did you have another use-case in mind?

Pretty much the above, or slight variations thereof. Short version: there were product requirements regarding startup time and display of the first keyframe received over the network within N milliseconds. Explicit knowledge of stream type when constructing the decode pipeline proved helpful in meeting those requirements (this particular case was with a GStreamer pipeline on Moblin). I'm not arguing against automatic detection - it's what works, and works well in a vast majority of cases - just leave the power-user option of explicitly specifying codec use / buffer sizing / etc. available for times when it's needed.

Maybe we could progress by having a requirement in featurezilla? Also I wonder how far we are off the target. I believe before changing things it would be good to have a test case at hand that shows how much the target is missed and that avoiding auto-detection would meet the target (by saving enough time).

* Transparent Target: What's the role of the UMMS here? How does the URI make sense here? Are you suggesting to use something like opengl://localdisplay/0/0/854/480? MAFW was introducing renderers, where a local renderer would render locally and one could e.g. have a UPnP DLNA renderer or a media recorder.
* Transparent Resource Management: That makes a lot of sense and so far was planned to be done in Qt MultimediaKit.
* Attended and Non Attended execution: This sounds like having a media recording service in the platform.

8 Audio Video Control
This is a media player interface. Most of the things make sense. Below are those that might need more thinking:
* Codec Selection: please don't. This is something that we need to solve below and not push to the application or even to the user.

Agreed, in part.
As a general rule, the underlying detection and codec selection should be transparent to an application, however there are corner cases where this may not be desirable, and specific selection of a codec may be necessary. Consider a system which has an external (to the main CPU) PowerDrain-5000(tm) video processor capable of both MPEG-2 and MPEG-4 decode. If the system is in a commanded low-power state, it may be more prudent to decode standard-definition MPEG-2 content in software on the main CPU and leave the external video processor powered-down. However, when decode of MPEG-4 content is desired, soft-decode may not be feasible and the external video hardware needs to be used. In instances, as above, where the system has multiple codecs (hardware and software) capable of decoding given content, is there envisioned some method of specifying codec priority so that a given method of decode is used preferentially? Yes, with playbin2/decodebin2 you can change the order of codecs/plugins being used. By default it will use the one
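To picture the rank mechanism referred to above: decodebin2 sorts candidate element factories by rank and tries the highest first, so base adaptation can prefer (or avoid) a hardware decoder like the hypothetical PowerDrain-5000 by adjusting its rank (in real GStreamer via gst_plugin_feature_set_rank() or decodebin2's autoplug-* signals). A toy shell model of that selection rule - the element names, caps strings and ranks here are made up, not real GStreamer API:

```shell
# Toy rank-based autoplugging: among elements whose caps match the
# stream, pick the one with the highest rank.
select_decoder() {
  printf '%s\n' \
    'ffdec_mpeg4      video/mpeg4 128' \
    'powerdrain_mpeg4 video/mpeg4 256' \
    'ffdec_mpeg2      video/mpeg2 128' |
  awk -v caps="$1" '$2 == caps && $3 > best { best = $3; name = $1 }
                    END { print name }'
}
select_decoder video/mpeg4   # higher-rank hardware decoder wins
select_decoder video/mpeg2   # only one match, so it is chosen
```

Raising or lowering one rank flips the preference without touching the application, which is the point of keeping codec selection below the application layer.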
Re: [MeeGo-dev] how to run meego-ux's meego-app-camera on MeeGo 1.2 Netbook?
On 04.04.2011 02:27, Niels Mayer wrote: After doing

$ zypper ar http://download.meego.com/live/devel:/meego-ux/Trunk/devel:meego-ux.repo
$ zypper clean --all ; zypper --gpg-auto-import-keys refresh
$ zypper in meego-app-camera meego-app-video meego-app-photos meego-app-notes meego-app-im meego-app-email meego-app-conacts meego-app-calendar meego-app-calculator meego-app-browser meego-app-clocks meego-ux-panels meego-ux-panels-photos meego-ux-panels-friends meego-ux-panels-music meego-ux-panels-mytablet meego-ux-panels-video meego-ux-panels-web meego-ux-panels-meta-tablet meego-ux-appgrid meego-ux-content meego-ux-content-socialweb

which also installs dependencies: meego-handset-sound-theme meegolabs-ux-components meego-qml-launcher meego-ux-media meego-ux-media-models meego-ux-theme mkcal mlite qtgst-qmlsink telepathy-farstream

I attempt to launch these apps with

$ meego-qml-launcher --app appname

which looks for /usr/share/appname/main.qml, e.g.

meego-qml-launcher --opengl --fullscreen --app meego-app-calculator
meego-qml-launcher --opengl --fullscreen --app meego-app-calendar
meego-qml-launcher --opengl --fullscreen --app meego-app-camera
meego-qml-launcher --opengl --fullscreen --app meego-app-clocks
meego-qml-launcher --opengl --fullscreen --app meego-app-contacts
meego-qml-launcher --opengl --fullscreen --app meego-app-email
meego-qml-launcher --fullscreen --opengl --app meego-app-im
meego-qml-launcher --opengl --fullscreen --app meego-app-music
meego-qml-launcher --opengl --fullscreen --app meego-app-notes
meego-qml-launcher --opengl --fullscreen --app meego-app-photos
meego-qml-launcher --opengl --fullscreen --app meego-app-tasks
meego-qml-launcher --opengl --fullscreen --app meego-app-video
meego-qml-launcher --opengl --fullscreen --app meego-ux-appgrid
meego-qml-launcher --opengl --fullscreen --app meego-ux-app-photos
meego-qml-launcher --opengl --fullscreen --app meego-ux-panels
meego-qml-launcher --opengl --fullscreen --app meego-ux-settings
Although many of the simple tablet-ux apps work on the Lenovo s10-3t running 1.2 netbook alpha, the one I really want to re-use (for http://code.google.com/p/ytd-meego/wiki/CitizenJournalismWithYoutubeDirectForMeego ) is meego-app-camera. Unfortunately, when I run it, I get the following -- is this a bug or am I doing something wrong? ...

$ meego-qml-launcher --opengl --fullscreen --app meego-app-camera
Adding Master Pointer: Virtual core pointer ( 2 )
Skipping non-Touch device: Virtual core XTEST pointer ( 4 )
Adding ATTACHED touch device: Cando Corporation Cando 10.1 Multi Touch Panel with Controller ( 11 )
Skipping non-Touch device: SynPS/2 Synaptics TouchPad ( 14 )
loaded the Generic plugin
Loaded the MeeGo sensor plugin
Request for interface not granted...
Request for interface not granted...
Warning: Object::connect: No such signal QXIMInputContext::inputMethodAreaChanged(QRect)
Warning: Object::connect: No such signal LauncherApp::localeSettingsChanged()
Warning: Object::connect: (sender name: 'meego-qml-launcher')
Warning: Object::connect: No such signal LauncherApp::windowListUpdated(QListWindowInfo)
Warning: Object::connect: (sender name: 'meego-qml-launcher')
Debug: Instantiating VolumeControlPrivate
Debug: Settings*
Debug: Flash Mode: 0
Debug: Capture Mode: 0
Debug: /dev/video0 Lenovo EasyCamera
Debug: /dev/video0 Lenovo EasyCamera
Debug: Setting camera to /dev/video0
Debug: Camera caps: 64
Debug: Supported maximum optical zoom 1
Debug: Supported maximum digital zoom 10
Debug: Metadata is not available
Debug: Audio input: alsa:null - Discard all samples (playback) or generate zero samples (capture)
Debug: Audio input: alsa:pulse - PulseAudio Sound Server
Debug: Audio input: alsa:default - Default
Debug: Audio input: alsa:front:CARD=Intel,DEV=0 - HDA Intel, CONEXANT Analog Front speakers
Debug: Audio input: alsa:surround40:CARD=Intel,DEV=0 - HDA Intel, CONEXANT Analog 4.0 Surround output to Front and Rear speakers
Debug: Audio input:
alsa:surround41:CARD=Intel,DEV=0 - HDA Intel, CONEXANT Analog 4.1 Surround output to Front, Rear and Subwoofer speakers
Debug: Audio input: alsa:surround50:CARD=Intel,DEV=0 - HDA Intel, CONEXANT Analog 5.0 Surround output to Front, Center and Rear speakers
Debug: Audio input: alsa:surround51:CARD=Intel,DEV=0 - HDA Intel, CONEXANT Analog 5.1 Surround output to Front, Center, Rear and Subwoofer speakers
Debug: Audio input: alsa:surround71:CARD=Intel,DEV=0 - HDA Intel, CONEXANT Analog 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
Debug: Audio input: pulseaudio: - PulseAudio device.
Debug: Default source: alsa:null
Debug: Using default resolution
Debug: Using default FPS
Debug: Codec: video/theora
Debug: Codec: video/mpeg2
Debug: Codec: video/mpeg1
Debug: Codec: video/mjpeg
Debug: Codec:
Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list
Hi,

On 24.03.2011 23:57, Dominig Ar Foll wrote: Apologies for responding in dual posting. I would like as much as possible to concentrate that type of chat on the TV mailing list, but I would not enjoy leaving a question open.

Regarding the already existing projects: are you aware of MAFW on Maemo5? http://www.grancanariadesktopsummit.org/node/219 The implementation might not be perfect, but the concept behind it is sane.

No, I did not know it, and I thank you for the link. I have so far only written a requirement specification, and any ideas to move toward a good implementation specification are welcome. I will dig into their documentation.

MAFW on Fremantle was mostly done by Igalia. Some of the people now work on the Grilo open source project, where you might be able to talk to them (on IRC).

The picture in 6 Position in MeeGo looks quite arbitrary to me. Do the colors have a special semantics (maybe add a small legend below)?

No, the colours are just imported from a slide and were just there to help identify the blocks. The main idea of that graph is to make very clear that the proposed concept does not plan to create a universal audio/video pipeline but has the goal of being able to integrate multiple video pipelines under a unified umbrella. In particular it aims at enabling non-open-source pipelines to coexist with public pipelines.

In 7 Transparency you need to highlight what your proposal adds to the existing features.

The chapter 7) Transparency regroups the need to provide certain types of services in a transparent manner to the application. My goal is to enable applications to play multimedia content without knowing much about that content. E.g. if you write an application which needs to access a live TV service but you live in the US, you will have a different pipeline once that same application is run in Europe. The requirement of transparency is applied to the type of source and target, in a very similar manner as when you print on Linux today.
Your application knows very little about the printer but still can print.

Which part of the pipeline do you think is not well handled right now? If you have concrete examples for illustration, I would encourage you to add them. I believe architecturally we don't miss anything major here.

* Transport protocol: handled e.g. by gstreamer already; standards like DLNA specify subsets for interoperability already.

I am afraid that GStreamer cannot do today everything that I would love it to do. It does pretty well on most internet formats, but Playbin2 has a very limited set of supported services when it comes to Broadcast or IPTV. Furthermore, by default it does not support any smooth streaming feature or protection.

The gstreamer people already have a smooth streaming implementation. There are two things: 1) missing architecture to implement a feature, 2) missing implementation of a certain feature. I believe the gstreamer architecture is pretty solid for adding extra streaming protocols, containers, codecs etc. Regarding content protection, I believe it should be done outside of gstreamer. As I said, it is not media specific. One idea would be to implement a virtual file system with the related access rights and process isolation. This would allow running an unmodified media pipeline.

But I agree that GStreamer is a great tool and I would certainly see it as one of the strong candidates to implement the first open source audio/video pipeline under a UMMS framework.

Just to be clear - I am not saying that gstreamer is the tool for everything. But integrating too many things in parallel might not be beneficial either. Thus your document needs to improve on pointing out the missing parts (explicitly). Then people can help you to identify existing implementations (or where they believe the feature should be added). Then we can also identify things that are completely missing. We also have to keep in mind that people need to be able to understand our multimedia stack.
Right now I think it makes sense:

QtMultimediaKit
* high level qt/c++ api that focuses on particular use cases
* might apply constraints to keep the api simple

QtGStreamer
* the full feature set of gstreamer bound to a qt style api

GStreamer
* high level api (playbin2, camerabin(2), decodebin2, gnonlin, rtpbin, ...)
* open and closed multimedia components

Kernel
* audio/video i/o, network, accelerated codecs, ...

* Transparent Encapsulation and Multiplexing: could you please elaborate why one would need the non-automatic mode. I think it does not make sense to let the application specify what format the stream is in, if the media-framework can figure it out (in almost all of the cases). In some corner cases one can e.g. use custom pipelines and specify the format (e.g. a ringtone playback service might do that if it knows the format already).

Multimedia
Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list
Hi again,

On 25.03.2011 12:07, Stefan Kost wrote: Hi, On 24.03.2011 23:57, Dominig Ar Foll wrote: snip

There is no architectural issue regarding Blu-ray in our stack. It is more a legal mess. I believe that, given you have the time and money, you could write Blu-ray support for gstreamer in a similar fashion as we have the dvd support now.

The idea presented here is that the UMMS can decide which pipeline to call on depending on the URL or the detected stream type, without requiring prior knowledge from the application about the pipeline configuration/selection.

That means you would like to have a bunch of different multimedia frameworks on a device and then use an appropriate one depending on the URI, e.g. use gstreamer for some formats and use mplayer for some others. While that might sound like a good idea, I don't think it is one, for several reasons:
- you will need to abstract the different apis (well, that's actually your proposal)
- you increase size and complexity of the multimedia stack
- more difficult to debug (e.g. different tools needed)
- testing is a lot more difficult
- users might get annoyed by small incompatibilities (seeking works differently depending on the media)
- you need to do base adaptation several times (integrate codecs, rendering etc. in several frameworks)

There might be more reasons that speak against such an approach, but already the last one would be major enough for me.

Can't resist: while speaking with a colleague he came up with a good metaphor for the above. Is there an ideal car? No, there isn't. Thus instead of fixing the car to be what we want, we have a simpler solution - we take 3 cars, string them together, put an extra seat on the roof and proxy the controls. Then we can drive using the Fiat in the city, using the Porsche on the motorway and using the van when we need more space for transportation. Of course finding a parking space gets tricky ...
Stefan ___ MeeGo-dev mailing list MeeGo-dev@meego.com http://lists.meego.com/listinfo/meego-dev http://wiki.meego.com/Mailing_list_guidelines
Re: [MeeGo-dev] Meego Video API's
On 18.03.2011 14:16, Dominig ar Foll wrote: What is the kernel that will be on the future MeeGo tablets?

This will depend on what hardware platform you are talking about. For Intel devices we have people creating Moorestown, Oaktrail, and Pinetrail based devices, with Moorestown and Oaktrail having the same hardware decode capabilities but Pinetrail having a very different solution (or no solution, depending on the OEM). Moorestown/Oaktrail is enabled via vaapi (with gstreamer elements wrapping vaapi). Most app developers will have no need to do anything other than use the Qt/Mobility API. If the app needs to build an exotic pipeline then they can directly work with gstreamer, and if the app already has its own concept of a pipeline infrastructure then the app could choose to directly talk to VAAPI. Some Pinetrail based tablets (like the WeTab/ExoPC) have a third party hd video decoder. There is no support for this in meego. I have no idea if somebody is planning to add support, but some kind of linux solution exists, since this is integrated into the Linux that comes on the WeTab.

For your info, in the TV project we have a serious need to treat video well, and for that reason we have started a thread discussing the need for a unified video player which can hide the complexity of sourcing the video and managing especially the hardware (including overlay). Currently a functional spec is posted and an implementation spec will follow soon. If you are not following the meego-tv mailing list you can get the archive from here: http://lists.meego.com/pipermail/meego-tv/2011-March/12.html

Please also follow up on the replies to your proposal.
Stefan

-- Dominig ar Foll MeeGo TV Open Source Technology Centre Intel SSG

___ MeeGo-dev mailing list MeeGo-dev@meego.com http://lists.meego.com/listinfo/meego-dev http://wiki.meego.com/Mailing_list_guidelines
Re: [MeeGo-dev] Meego Video API's
hi,

On 04.03.2011 21:12, ext Soussi, Slim wrote: Hi All, I would like to know what APIs are available for video decoding acceleration on MeeGo Tablets? Can we access VAAPI? Can we access OpenMax IL?

For application developers this all should not matter. We are encouraging adaptation vendors to provide gstreamer plugins. gstreamer provides a stable plugin api that they can build their plugins against. This way the plugins will work in various meego versions. The licensing model would allow them to ship the plugins as binary only if needed. It's up to the vendor whether they talk to their video accelerator solution directly via a kernel driver (v4l2 is making interesting progress here) or whether they use extra layers such as openmax or vaapi in between. Stefan

What is the kernel that will be on the future MeeGo tablets? Thanks a lot for your answers. Best regards,

S l i m | S O U S S I EMEA SSG Scale Programs Community Manager Intel AppUpSM developer program

___ MeeGo-dev mailing list MeeGo-dev@meego.com http://lists.meego.com/listinfo/meego-dev http://wiki.meego.com/Mailing_list_guidelines
Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list
Hi Dominig,

On 16.03.2011 11:08, ext Dominig ar Foll wrote: Hello, As many of you may not follow the TV mailing list, I post you information about a proposal which might also present interest outside of the TV world. When you start to play video on Linux you realise quickly that no solution can cover the full need of playing video for a real full TV application. If you try to create a Consumer Electronics (CE) device, you likely use a System on Chip (SoC) which provides a concept of video overlay which provides a great help for performance but makes any integration very specific. The proposed specification has been written for TV but could be valuable to any device which either wants to be fully compliant with TV requirements when playing videos or uses a SoC with hardware acceleration. The proposed specification aims at creating a generic service for Linux (I will start with MeeGo TV) which could unify the play-out of a video and make transparent the support of various hardware helpers provided by some chips, and in particular the support of overlay video common on SoCs. Please let me know your feedback, preferably on the MeeGo TV mailing list. The file can be found here: http://wiki.meego.com/File:Meego_Unified_MultiMedia_Service_V0.4.odt

Regarding the already existing projects: are you aware of MAFW on Maemo5? http://www.grancanariadesktopsummit.org/node/219 The implementation might not be perfect, but the concept behind it is sane.

The picture in 6 Position in MeeGo looks quite arbitrary to me. Do the colors have a special semantics (maybe add a small legend below)?

In 7 Transparency you need to highlight what your proposal adds to the existing features.
* Transport protocol: handled e.g. by gstreamer already; standards like DLNA specify subsets for interoperability already.
* Transparent Encapsulation and Multiplexing: could you please elaborate why one would need the non-automatic mode.
I think it does not make sense to let the application specify what format the stream is in, if the media-framework can figure it out (in almost all of the cases). In some corner cases one can e.g. use custom pipelines and specify the format (e.g. a ringtone playback service might do that if it knows the format already).

* Transparent Target: What's the role of the UMMS here? How does the URI make sense here? Are you suggesting to use something like opengl://localdisplay/0/0/854/480? MAFW was introducing renderers, where a local renderer would render locally and one could e.g. have a UPnP DLNA renderer or a media recorder.
* Transparent Resource Management: That makes a lot of sense and so far was planned to be done in Qt MultimediaKit.
* Attended and Non Attended execution: This sounds like having a media recording service in the platform.

8 Audio Video Control
This is a media player interface. Most of the things make sense. Below are those that might need more thinking:
* Codec Selection: please don't. This is something that we need to solve below and not push to the application or even to the user.
* Buffer Strategy: same as before. Buffering strategy depends on the use-case and media. The application needs to express whether it's a media-player/media-editor/... and from that we need to derive this.

9 Restricted Access Mode
Most of those are needed as platform-wide services. E.g. Parental Control would also be needed for Internet access.

11 Developer and Linux friendly
* Backwards compatible ...: My suggestion is to take inspiration from existing components, but only do any emulation if someone really needs that. It is usually possible to some extent, but what's the point?
* Device and Domain independence: Again, how does UMMS improve the situation here?

12 Typical use cases
I think it would be helpful to have before and after stories here to highlight the benefits of your concept.
13 D-Bus
Be careful with generic statements like "D-Bus can be a bit slow". Stick with facts and avoid myths.

14 QT-Multimedia
Seriously, don't even consider stacking it on top of qt-multimedia. We're still embedded. You could propose to implement it as part of Qt Multimedia though (or have it at the same level).

15 GStreamer
It is GStreamer (with an upper case 'S') :) In general please spell check the section. Regarding the three weak points:
* smooth fast forward is a seek event with a rate > 1.0. There might be elements not properly implementing that, but I fail to understand how you can fix that in higher layers instead of in the elements. It might make sense to define extended compliance criteria for base adaptation vendors to ensure consistent behavior and features.
* DRM can be implemented outside of GStreamer. Still, I don't fully understand what the issue here might be.
* Push/pull: gstreamer is a library; you can do lots of things with it. If you want to use it to broadcast media you can do that very well. Some known examples: rygel (upnp media server), gst-rtsp-server.

Just to clarify on the terminology - media
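On the fast-forward point above, the semantics of a rate > 1.0 seek can be shown with toy arithmetic (not GStreamer API): after the seek, media position advances rate times as fast as running time.

```shell
# Toy trickmode clock: a rate-2.0 seek that started at media position 10 s.
rate=2.0
start=10
for t in 0 1 2 3; do
  awk -v r="$rate" -v s="$start" -v t="$t" \
    'BEGIN { printf "running=%ds media=%.1fs\n", t, s + r * t }'
done
# last line printed: running=3s media=16.0s
```

Elements that claim trickmode support but ignore the rate in their timestamp handling break exactly this mapping, which is why the fix belongs in the elements.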