On Tue, Nov 15, 2011 at 10:59 PM, Carsten Haitzler <ras...@rasterman.com> wrote:
>
> On Tue, 15 Nov 2011 11:28:28 -0200 Gustavo Sverzut Barbieri
> <barbi...@profusion.mobi> said:
>
> > On Tue, Nov 15, 2011 at 6:42 AM, Carsten Haitzler <ras...@rasterman.com>
> > wrote:
> > > On Tue, 15 Nov 2011 04:13:32 -0200 Gustavo Sverzut Barbieri
> > > <barbi...@profusion.mobi> said:
> > >
> > >> I really don't know why I bother to explain these things I know will
> > >> get nowhere.
> > >
> > > i've already said we can support it. i just disagree that that is a first
> > > port of call or the only port of call. i disagree that we totally rely on
> > > PA for everything we need audio-wise.
> >
> > As I said, maybe not clear enough: "we can't rely on PA" is the worst
> > part. We use that to motivate us to create something new out of
> > nowhere instead of helping other freesoftware projects. The "excuse"
> > is often no time and higher priority things.
> >
> > I know the history, if it was not this behavior then E would never exist, 
> > etc.
> >
> > But this is a bad practice:
> >
> >     - what first looks like simple (20% of the work that maps to 80%
> > of requirements), will end consuming our already scarce time (the 80%
> > of the time that maps to 20% of requirements). This results in more
> > work to do in the long run for a minimal initial save. If you stop to
> > think, Embryo is an example of this, now we figured out it was better
> > to use Lua. :-S
>
> actually i've spent more time on lua than embryo. much more. embryo still is
> massively smaller and leaner too. lua also still can't abort execution with
> infinite loops. embryo can. and let me be clear... EMBRYO WAS BASED ON ANOTHER
> OPEN SOURCE PROJECT... EXACTLY YOUR ARGUMENT. i didn't write it. the compiler
> is still 99% the same as where it came from. i did almost rewrite the entire
> runtime vm though. and it got a lot smaller in the process. my REQUIREMENTS
> were "i need something that does: if (x) then do y else do z." that was it.
> embryo met and exceeded that by a vast vast vast margin. at the time it was a
> choice of write my own mini logic engine inside edje or re-use another one. i
> re-used an open source project. i chose the one that had the lowest footprint
> and least intrusiveness as i wasn't in the mood for having a big fat execution
> environment.

if you had taken Lua at the time, as you wished, maybe they would
already have added the preemption you need by now, no?

Same for JS engines; AFAIR these things were implemented there because
browsers wanted to avoid bad scripts breaking them. You could have
saved everyone's time :-)
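
(Just to illustrate the point: the abort-on-infinite-loop part can
already be handled from the embedding side with a count hook. A minimal
sketch, assuming stock Lua 5.1; the names script_guard/run_untrusted
and the instruction budget are mine, not anything from edje:)

  #include <lua.h>
  #include <lauxlib.h>

  #define INSTRUCTION_BUDGET 100000  /* arbitrary limit for this sketch */

  /* called every INSTRUCTION_BUDGET VM instructions; raises an error,
   * which aborts the running script via the enclosing pcall */
  static void script_guard(lua_State *L, lua_Debug *ar)
  {
      (void)ar;
      luaL_error(L, "script exceeded its instruction budget, aborting");
  }

  static int run_untrusted(lua_State *L, const char *code)
  {
      int err;

      /* arm the guard before running untrusted code, disarm after */
      lua_sethook(L, script_guard, LUA_MASKCOUNT, INSTRUCTION_BUDGET);
      err = luaL_dostring(L, code);
      lua_sethook(L, NULL, 0, 0);
      return err;
  }

Not preemption proper, but it is the usual way embedders keep runaway
scripts from hanging the process.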


> > >     - this mindset plays against our own project. If we stimulate
> > people to play Not-Invented-Here syndrome, we suffer as one day we'll
> > be the other peer. We need more people to collaborate on our code
> > base, right? But we keep telling people it's better to start something
> > from scratch instead of helping others! Then we have examples like
> > turran's enesim/eon, instead of incrementally helping Evas he decided
> > to go a different new route and we've lost a developer. :-S
>
> that has nothing to do with this topic at all.

it does, but let's ignore it


> >     - relations with other projects. If you go to conferences, many
> > > developers hate us in a multitude of ways (when they care to know what
> > is E/EFL). One of the reasons is that we play the bitch and do not
> > report or send patches, instead recreating stuff. This keeps away
> > possible contributors as well "they're not helping me, I'm not helping
> > them". Maybe it was the case with Xrender, I don't know. Maybe it was
> > with glib? But I'm seeing it now with PulseAudio/Canberra and I'm
> > saying it loud :-)
>
> and for audio we're using another open source project that actually DOES have
> the features we want. in fact using 4 of them. libogg, libflac, libsndfile and
> libremix... if you didn't notice. maybe you're just upset we're not using your
> project of choice.

It's not about a project of choice or not, it's about the correct
layer or not. Just think of all the problems you'll have to deal with,
like enumerating the sound cards, being able to select where to play,
how to handle bluetooth speakers, allowing the user to selectively
disable sound classes, etc.
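
(To make the "correct layer" point concrete, enumerating and picking
outputs is already covered by PA's introspection API. A rough sketch of
listing the sinks, error handling stripped and the client name made up:)

  #include <stdio.h>
  #include <pulse/pulseaudio.h>

  /* called once per sink; a non-zero eol marks the end of the list */
  static void sink_cb(pa_context *c, const pa_sink_info *i, int eol, void *data)
  {
      pa_mainloop *ml = data;
      (void)c;
      if (eol) { pa_mainloop_quit(ml, 0); return; }
      printf("sink #%u: %s (%s)\n", i->index, i->name, i->description);
  }

  static void state_cb(pa_context *c, void *data)
  {
      pa_mainloop *ml = data;
      switch (pa_context_get_state(c)) {
      case PA_CONTEXT_READY:
          pa_operation_unref(pa_context_get_sink_info_list(c, sink_cb, ml));
          break;
      case PA_CONTEXT_FAILED:
      case PA_CONTEXT_TERMINATED:
          pa_mainloop_quit(ml, 1);
          break;
      default:
          break;
      }
  }

  int main(void)
  {
      pa_mainloop *ml = pa_mainloop_new();
      pa_context *ctx = pa_context_new(pa_mainloop_get_api(ml), "sink-list-demo");

      pa_context_set_state_callback(ctx, state_cb, ml);
      pa_context_connect(ctx, NULL, PA_CONTEXT_NOFLAGS, NULL);
      pa_mainloop_run(ml, NULL);
      pa_context_unref(ctx);
      pa_mainloop_free(ml);
      return 0;
  }

Moving a stream to one of those sinks is then just
pa_context_move_sink_input_by_index() / _by_name(); bluetooth speakers
show up as ordinary sinks once paired.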


> as for xrender... that'd involve a massive detour into the internals of x - at
> the time pixman wasn't even a library. making any changes would involve
> recompiling and restarting xservers. no better way for e to have been delayed
> by years. not to mention having to work on multiple drivers too.

ok, fair.


> > Particularly about the last point: I know we don't live in the
> > wonderland. Some project maintainers are very hard to work with and
> > changes are just rejected for no reason (hummmm... reminds me of our
> > last behaviors?) and in these cases it may be worth it to fork, do and
> > prove it's right, having the possibility to merge back someday, or at
> > least get more developers on the bandwagon.
>
> and if we fork pulse and then require the new features we have - we have a
> competing pulse that people have to replace their existing installation of,
> which invariably no one will do, so we're stuck with the lowest common
> denominator. i'm sitting and staring at a library that already meets our 
> needs.
> you just don't like that.

using your own terms "I've been burned by it before" :-D


> > Technically (I'm ignoring bureaucratic and personal reasons) maybe if
> > it was done this way, we'd be using glib and had avoided all the work
> > on ecore/eina, with a faster glib that could be speeding up gnome apps
> > as well?
>
> after having fought day in and out within the gnome dev world i wasn't going 
> to
> touch it. i spent a year of my life being told i was wrong only in the end to
> be told "shit. you're right. help!". too late. i wasn't going to use glib at
> all because i already saw things i didn't like - i didn't like the timers 
> using
> sec+msec vals - a double was much more convenient. there are no constructs
> like idle enterers in ecore. there were no animators. i disagreed with other
> parts of glib i know i'd have to ignore, wrap, or change. i didn't want to
> bother dealing with that.

ok, I already anticipated this explanation above... everyone knows
that story by now. Maybe it's the same communication problems we have
now?


> this has nothing to do with NIH. this has to do with me wanting specific
> capabilities and the best way of getting them is not depending on PA as of
> today. it is easier to use a library that already has them. like remix. 
> remix's
> output can go to PA. it happens to via alsa anyway.
>
> > But as the world is not 100% technical, we have to deal with persons
> > and the line is blurry. But if we always use the excuse "I'be been
> > burned before" and applying the same old rules to different people,
> > we'll suffer.
> >
> > At least for Lennart, he is a bit like you raster. He is hard to get
> > along, but he does listen and will accept help. :-)
>
> lennart is a good guy. i actually like him. what i DON'T want to do is go
> implement a higher level feature DEPENDING on changes in a lower level library
> that may take months or years before everyone has it. not only that we have to
> convince him of a feature that has yet to be used by anyone. we implement it
> high up first and THEN push it down. THEN convince lennart these features are
> worth it - provide patches to make his life easy. then hope he agrees. in the
> meantime we still have a working feature anyway - even without PA... just on
> good old alsa ... or on jack.. or OSS... or whatever other audio layer is
> required.

And that's life, isn't it beautiful? By the same logic you just used,
you'd not have me in the project, or Cedric, or many other devs :-P

See, I had to do graphics for maemo. I had 2 options:
    - write my own canvas
    - use some existing canvas
The first option was very simple, all I needed I could do in a week.
It would take me more than one week just to EVALUATE all the existing
canvases.
    I could have used that as my excuse to go write my own canvas; it
would have been nice, pleasant and would have made my job even more
"assured" as I'd be required to maintain my own work! Triple awesome.
    But no, I spent the time to research canvases... found Evas! Great!
Right? No, it was like a snail on Nokia devices! Give up? What an
excuse to go back to my wish to write my own canvas...
    But no, I spent the time again and profiled. Found what was slow
and fixed it. That benefited everyone, proved EFL would run fine on
Nokia, and that prompted the people at OpenMoko! Guess what, it even
benefited you... it got you back on the E road, working full time on
it and all.
    But it also benefited me. While the initial requirements were low
on my side, the extra work paid for itself when I got Edje and others
as a "bonus".

That said, looking back I could see all that as a huge waste of time.
Or as a great investment :-)


> > >> Saying that what you want could be easily worked with them is also out
> > >> of question, there is always the "no time" and "bigger fish to fry", I
> > >> know the drill...
> > >
> > > i've been burned before. i waited so long for xrender to go nowhere. 
> > > luckily
> > > i didn't make it a core required rendering back-end. not going to depend 
> > > on
> > > something like that again. if PA doesn't have the feature today - i think
> > > it's unwise to depend on it maybe having it some time in the future.
> >
> > PA does what a generic sound system is supposed to do. The track
> > programming and sequencing should be done on your side, otherwise
> > you'll be increasing complexity. But loading the samples there and
> > requesting properties should be fine. If not, then why not help there?
> > It is not more work, it is less. It's like sound loading. We could
> > submit loading samples from eet.
>
> it is being done "our side". it just so happens the output from an existing
> open source library is an audio stream already mixed.

Then you're also mixing even when it's not required. Got a phone call
and your stream was corked? You keep consuming CPU while the call is
in place, etc. There are problems you're overlooking, believe me. You
only put in the 20% of the time on this; who will do the 80% that's
left?

As I said before, "been burned by it".
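
(The phone call case, concretely: the server can ask the client to cork
its stream when higher-priority audio starts, and the client should stop
mixing until it is uncorked. A rough sketch of the client side; the
edje_audio_* hooks are hypothetical placeholders for whatever pauses the
local mixer:)

  #include <string.h>
  #include <pulse/pulseaudio.h>

  void edje_audio_pause_mixing(void);   /* hypothetical */
  void edje_audio_resume_mixing(void);  /* hypothetical */

  /* react to cork/uncork requests coming from the server's policy */
  static void stream_event_cb(pa_stream *s, const char *name,
                              pa_proplist *pl, void *data)
  {
      pa_operation *o = NULL;
      (void)pl; (void)data;

      if (!strcmp(name, PA_STREAM_EVENT_REQUEST_CORK)) {
          /* e.g. a call started: stop mixing, stop burning CPU */
          o = pa_stream_cork(s, 1, NULL, NULL);
          edje_audio_pause_mixing();
      } else if (!strcmp(name, PA_STREAM_EVENT_REQUEST_UNCORK)) {
          o = pa_stream_cork(s, 0, NULL, NULL);
          edje_audio_resume_mixing();
      }
      if (o) pa_operation_unref(o);
  }

  /* call once after pa_stream_new(), before connecting it for playback */
  void setup_cork_handling(pa_stream *s)
  {
      pa_stream_set_event_callback(s, stream_event_cb, NULL);
  }

If all the mixing is done behind PA's back, none of this policy can
reach the individual sounds.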


> > It would be nice, more projects supporting and indirectly marketing
> > Eet! Maybe being aware of it people will use it more?
>
> do it AFTER we have it working to demonstrate why we need it.

THERE you have a point. I find this one a good thing, really. If this
proves to be true, then it will be awesome.

But don't believe you can throw your idea at the mailing list, run
away, and it will solve itself. We all hate it when people do that to
us; it never works :-)


> > >> that we don't provide sound feedback and he is waiting it since forever.
> > >
> > > oh that's all - yes. because no one has stepped up and done it. someone
> > > did. i said i wasn't going to do it before e17 release because i didn't
> > > want to be distracted by it. :)
> >
> > I did and was immediately rejected because it was not a dream-way.
> > Maybe it is the reason it was never done before?
>
> by the same token if i didn't reject what i thought was not what i wanted, e
> would not be what it is. it'd be gtk or qt based. so.. shall we now move to 
> qt?
> it'd also be in python now if we did everything your way. it's my job to make
> e
> what it is.

WTF is this argument? Really?!

And python: I know you hate it, but if it weren't for Python-EFL,
things would be very different over here. For sure, if it wasn't for
python-efl we would not have Canola, BlueMaemo or most of OpenMoko. It
was the biggest selling point of EFL; it was what brought EFL to the
public awareness it has today.

You dislike python, you've killed python-efl development (no point in
doing something that will always be deprecated)... but really, you
shouldn't overlook the impact of having python in efl.



> > >> > that is true, but we have a much bigger problem already with that and
> > >> > images.
> > >>
> > >> having a problem does not justify to introduce another ;-) Before you
> > >> had 1 problem, now you'll have two.
> > >
> > > unless PA is going to get sequenced multi-track audio... we can't do
> > > everything via PA. we have a requirement for a more general solution.
> > > well.. i have a requirement. we can use PA when and where appropriate. we
> > > can use canberra when and where appropriate. i'm not going to limit 
> > > designs
> > > to just what these happen to do. well not limit, because i care a lot 
> > > about
> > > audio.
> >
> > Not asking you to limit. Just get the required bits at the correct
> > places. PA already does sample loading, playback and mixing. All you
> > need in your side is control of such playback. Likely you can specify
> > a sequencing playback, but that may be more subject to discussion and
> > can be left out for now, merging it when you prove you're right.
>
> and PA does not do:
>
> * sequencing (timing N channels to have synchronous playback - PA's api can 
> only
> play a sample at request, not at a timepoint).

should be simple and generally useful. doubt it wouldn't be accepted


> * variable playback speed (only can play at samples samplerate)

should be simple and generally useful. doubt it wouldn't be accepted


> * envelopes (volume, pan)

can't this be controlled from the outside? I need to discuss this with
people; I'll meet some tomorrow and can discuss it with them.
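
(What I mean by "from the outside", roughly: once the sample/stream is
playing, its volume and pan are just sink-input properties the client
can drive. A sketch, assuming you already know the sink-input index idx
and the stream's channel map:)

  #include <pulse/pulseaudio.h>

  /* set volume (linear, 0.0..1.0) and balance (-1.0 left .. 1.0 right)
   * of one playing stream, identified by its sink-input index */
  static void set_stream_envelope(pa_context *ctx, uint32_t idx,
                                  const pa_channel_map *map,
                                  double volume, float balance)
  {
      pa_cvolume cv;
      pa_operation *o;

      pa_cvolume_set(&cv, map->channels, pa_sw_volume_from_linear(volume));
      pa_cvolume_set_balance(&cv, map, balance);

      o = pa_context_set_sink_input_volume(ctx, idx, &cv, NULL, NULL);
      if (o) pa_operation_unref(o);
  }

An envelope over time would then just be the client re-issuing this as
the sound plays; whether that is smooth enough is exactly the thing to
discuss with them.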


> * compression for large samples

AFAIU PA supports compressed samples, which is particularly useful for
HW-accelerated decoders and for low-bandwidth sinks (think bluetooth's
A2DP). So if I understand you correctly, it should be supported.


> > >> for instance PA allows for sound what you'd like to have with images
> > >> (central daemon to load stuff), but we're not using it as there is no
> > >> time. Then we create something else that then we need to create
> > >> something else again to match. That rule of "we can always solve a
> > >> problem by creating another abstraction layer". PA would not work
> > >> everywhere, so create a layer to abstract it away, but that would be
> > >> the role of PA :-S
> > >
> > > but it doesn't do what i want from an audio subsystem - not everything. so
> > > either i decide to limit what edje does to just what canberra does... or 
> > > PA
> > > does, or i can do more if i just deal with the audio mixing locally in 
> > > edje
> > > and just punt out audio stream data. this is moot if we support both 
> > > paths -
> > > powerful/complex path and simple one, so where is the argument? i want to 
> > > do
> > > the powerful one first as the simple one is a subset case.
> >
> > Fortunately you're only into graphics and audio. If you were into
> > kernel, bluetooth, networking... we'd never be able to run it on Linux
> > :-D
>
> if i was into networking ... we'd have a network control gadget that could set
> static ip's, auto-configure proxies, vpn's, etc. :) and bluetooth would 
> actually
> be able to not just pair devices but do obex transfers and more. :)

bla bla bla...

  - static ip, proxy = being done, will be supported by e17. I have
part of the code written, but I spent some time fixing my btrfs /home
:-D  The recent commits from Lucas were the missing infrastructure.
Now just the dialog needs to be fixed.

  - vpn = I'd need to check how the work in connman itself is going.
Likely it will not be done, as I don't need it. If you do need it,
it's easier to drop a file at /var/lib/connman with the configuration
and use it (see the sketch after this list) :-)

  - bluez = it was an experiment from gustavo padovan, just committed
because it was better than nothing.
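
(For the vpn/static-ip case above, the kind of file I mean is a connman
provisioning file. Everything below is made up for illustration and the
exact syntax is from memory, so check the connman docs for your
version:)

  # /var/lib/connman/office.config  (hypothetical)
  [service_office]
  Type = wifi
  Name = OfficeAP
  Passphrase = not-a-real-passphrase
  IPv4 = 192.168.1.42/255.255.255.0/192.168.1.1
  Nameservers = 192.168.1.1

Drop it in and connman picks it up (or on restart), and the service
comes up pre-configured without any UI.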

anyway, if you were into networking, these things would likely be in
the same state: started but not finished, missing bits here and there
-- bigger fish to fry! ;-)


> are you volunteering to add all these features to PA? you're going to find 
> that
> a hard push - pa as an api itself doesn't handle file containers at all. it
> simply allows you to provide "pcm" audio (varying pcm/ulaw formats). to do this
> through client uploads, the pa server would now have to load them back-door
> from the file directly. will this be acceptable to lennart? i suspect it won't
> without a lot of convincing and examples. or then you have to add it to the pa
> library client api.. and for the same reasons it probably will get rejected
> too. then you have to do synchronised playback - eg play sample x at timepoint
> y and have the pa server handle that. in addition you'll have to add playback
> speed control on play start as well as envelopes per sample. again - it'll add
> more api and will lennart accept? and at this point there still isn't a 
> working
> example of why it's needed. i can go on... are you willing to do this though?
> remix already does most of this. maybe pa should use remix to accomplish this
> and then a lot of the code done that uses remix can be transplanted?

sa...@profusion.mobi - we're looking for projects on this front, we'd
LOVE to do it ;-)

as for your technical points, PA already uses libsamplerate, they have
multiple properties for playback, and I don't see why having a few
more useful ones would be bad. And yes, if PA is unable to do this due
to lack of code, linking libremix in would not be that bad.
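
(And to be concrete about "loading the samples there and requesting
properties": the client uploads the decoded PCM into PA's sample cache
once and then triggers it by name with a proplist. A rough sketch,
error handling and the stream's lifetime management left out; the
upload_t type and the function names are mine:)

  #include <pulse/pulseaudio.h>

  /* one sample-cache upload; the PCM is already decoded on our side
   * (eet/sndfile/whatever), interleaved and matching the pa_sample_spec */
  typedef struct {
      const char *pcm;
      size_t left;     /* bytes still to send */
  } upload_t;

  /* the server asks for data: feed it, seal the cache entry when done */
  static void upload_write_cb(pa_stream *s, size_t want, void *data)
  {
      upload_t *u = data;
      size_t n = want < u->left ? want : u->left;

      if (n > 0) {
          pa_stream_write(s, u->pcm, n, NULL, 0, PA_SEEK_RELATIVE);
          u->pcm += n;
          u->left -= n;
      }
      if (u->left == 0)
          pa_stream_finish_upload(s);
  }

  /* upload a decoded buffer into the sample cache under "name";
   * caller keeps the returned stream and unrefs it once terminated */
  static pa_stream *upload_sample(pa_context *ctx, const char *name,
                                  const pa_sample_spec *ss, upload_t *u)
  {
      pa_stream *s = pa_stream_new(ctx, name, ss, NULL);

      pa_stream_set_write_callback(s, upload_write_cb, u);
      pa_stream_connect_upload(s, u->left);
      return s;
  }

  /* later, whenever the sound is wanted: trigger it by name, with
   * properties the server's policy (roles, filters, ...) can act on */
  static void play_cached(pa_context *ctx, const char *name)
  {
      pa_proplist *pl = pa_proplist_new();
      pa_operation *o;

      pa_proplist_sets(pl, PA_PROP_MEDIA_ROLE, "event");
      o = pa_context_play_sample_with_proplist(ctx, name, NULL,
                                               PA_VOLUME_NORM, pl, NULL, NULL);
      if (o) pa_operation_unref(o);
      pa_proplist_free(pl);
  }

The sequencing/speed/envelope bits would still need the new API we're
discussing, but the load-once/play-many and per-sound properties part
is there today.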


--
Gustavo Sverzut Barbieri
http://profusion.mobi embedded systems
--------------------------------------
MSN: barbi...@gmail.com
Skype: gsbarbieri
Mobile: +55 (19) 9225-2202
