Re: [PD-dev] from t_symbol to t_class

2013-01-04 Thread Charles Henry
On Fri, Jan 4, 2013 at 11:36 AM, Jonathan Wilkes  wrote:

> - Original Message -
> > From: IOhannes zmölnig 
> > To: pd-dev@iem.at
> > Cc:
> > Sent: Friday, January 4, 2013 4:43 AM
> > Subject: Re: [PD-dev] from t_symbol to t_class
> >
> > On 01/04/2013 07:19 AM, Miller Puckette wrote:
> >>  I think you're safe calling vmess() to pass no arguments to clip_new
> >>  (for example) - the worst that can happen is the "return value"
> >>  (the global "newest") is zero.  If not it's a proper Pd object you
> >>  can use zgetfn() on to test it for messages.
> >>
> >>  Main problem I see with this is that some classes like "select" and "list"
> >>  are actually several classes that share a name (and which one gets created
> >>  depends on the arguments sent to vmess())
> >
> >
> > also, what happens if the object in question accesses some hardware resource?
> > e.g. [pix_video] will try to grab an available video-source, potentially
> > locking a hardware device.
> > in theory this should not be a problem, as the object will hopefully be
> > freed by [classinfo] asap, but in practice it might have all kinds of
> > side-effects, starting from short lockups of Pd to launching rockets.
>
> I'd prefer to just inspect the class without creating a new instance but I
> can't figure out how.
>
> Are all t_gobj linked together in one big list?
>
> -Jonathan
>

I think the t_gobj are in a list per canvas--see the canvas_dodsp method in
g_canvas.c for how it's done.

Classes in Pd exist without having an instance of the class.  Once the
setup method is called (which for externals will require you to create an
instance), the class data structure is created and loaded with all its
methods.
However, I don't know how to get a pointer to the class, when all you know
is the symbol.

Within a class, the methods are stored in an array "c_methods".  See the
class definition in m_imp.h.  You can dump out all the symbols that have
methods by looping over the array from 0 to (c_nmethod-1) and accessing the
method's "me_name".

Chuck
___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] shared class data and functions

2012-11-14 Thread Charles Henry
On Wed, Nov 14, 2012 at 1:17 AM, Jonathan Wilkes  wrote:

> I have three classes:
> foo, bar, bow
>
> Foo has a function:
>
> void foo_blah(t_foo *x, t_symbol *s, t_int argc, t_atom *argv)
> {
> if(x->x_member == 1) do_something...
> }
>
> Bar and bow both have x->x_member, too, and I want all three
> to use the same function so I don't have to copy it two more times.
> Is there a way to do this:
>
> void foo_blah(t_pd *x, t_symbol *s, t_int argc, t_atom *argv)
> {
> if we can cast x to t_foo, t_bar or t_bow then
> check if x has x->member equal to 1, and if so then do_something...
>
> }
>
> which I can call by sending t_foo, t_bar or t_bow as the first
> arg to that function?
>

Pd classes are nested data structures.  To be consistent and use this trick
to your advantage, define your classes' data structures to have a parent
data structure.  Note that t_object is another name for t_text.  This is
all in m_pd.h.

t_pd<-t_gobj<-t_text

You define the first element of your class struct as a t_object or t_text.
Then, you can cast any pointer to an instance of your class as a t_text *.
Likewise, every t_text pointer can be cast as a t_gobj *, and every t_gobj *
as a t_pd *.

Now, in order to have foo, bar, and bow have the same data structure
element "member", create this class:

typedef struct _parent {
    t_object my_object;  //Does this name matter?
    t_int member;
} t_parent;

Then, your other classes work the same way: pointers to foo, bar and bow
can be cast as pointers to t_parent.  Then, you're absolutely sure that
((t_parent *)x)->member exists and can be read/written.
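A minimal sketch of that trick (the names t_parent, t_foo and "member" here
just follow the example above; the shared function and its registration are
hypothetical):

#include "m_pd.h"

typedef struct _parent
{
    t_object p_obj;     /* t_object header always comes first */
    t_int member;
} t_parent;

typedef struct _foo
{
    t_object x_obj;     /* same layout as t_parent up to "member" */
    t_int member;
    t_float x_extra;    /* class-specific fields follow */
} t_foo;

/* one function shared by foo, bar and bow: take a t_pd * and cast */
static void shared_blah(t_pd *x, t_symbol *s, int argc, t_atom *argv)
{
    if (((t_parent *)x)->member == 1)
        post("%s: member is 1", s->s_name);
}

Each class would then register shared_blah with class_addmethod() as usual.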

If you don't like that approach--just make sure the "t_int member" occurs
first after the t_object in all three class definitions.  The compiler
turns member access into pointer arithmetic.  For example,
typedef struct xyz {
    int x;
    int y;
    int z;
} t_xyz;

t_xyz data;
t_xyz *instance=&data;

The compiler turns
instance->x into *((int *)instance)
instance->y into *((int *)instance + 1)
instance->z into *((int *)instance + 2)

So you see why member needs to be in the same location in each class.

Chuck
___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] non-2^n blocksizes (was Re: [ pure-data-Feature Requests-3578019 ] I'd like to...)

2012-10-23 Thread Charles Henry
On Tue, Oct 23, 2012 at 1:02 PM, Miller Puckette  wrote:
> all the 'ugens' actually look at the allocated
> size of input/output signals to determine the number of points to calculate.

Okay--I see where this goes now.  You just pass the signal data
structure to the "dsp" method and the "dsp" method is responsible for
putting the perform routine on the chain with s_vecsize.

The code in the block_set() function is pretty much the same as what
happens in signal_new() when you feed it a non-power-of-two vector
size, except it doesn't get stored in the signal data structure:
    if (calcsize)
        if ((vecsize = (1 << ilog2(calcsize))) != calcsize)
            vecsize *= 2;
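As a small self-contained illustration of that rounding (the local my_ilog2
here is just a stand-in for the ilog2 in d_ugen.c): a calcsize of 48 rounds
up to a vecsize of 64.

#include <stdio.h>

static int my_ilog2(int n)   /* floor(log2(n)), like ilog2 in d_ugen.c */
{
    int r = -1;
    while (n > 0) r++, n >>= 1;
    return r;
}

int main(void)
{
    int calcsize = 48, vecsize = 0;
    if (calcsize)
        if ((vecsize = (1 << my_ilog2(calcsize))) != calcsize)
            vecsize *= 2;            /* 32 != 48, so round up to 64 */
    printf("calcsize %d -> vecsize %d\n", calcsize, vecsize);
    return 0;
}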

So, to make it work--you'd have to add s_calcsize to the signal data
structure, and then, each compatible "dsp" routine would need to use
s_calcsize in place of s_vecsize.

But it seems to be practically useless.  It's misleading to users to
think they're getting a non-2^n blocksize.  The calcsize is after all
set by the block~ and switch~ objects in the argument we think of as
blocksize.

Chuck

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] non-2^n blocksizes (was Re: [ pure-data-Feature Requests-3578019 ] I'd like to...)

2012-10-23 Thread Charles Henry
That's not completely unusual for things in the dspcontext struct.
Some of them (I think "dc_toplevel" is another one) get stored there
but not used--because the value gets set and used in the very same
function.
Over the weekend, I went digging for calcsize but gave up (I also
wanted to prove or disprove it gets used).  I think it must be in
"ugen_doit" where N gets set for the ugen being scheduled.

Chuck


On Tue, Oct 23, 2012 at 1:02 PM, Miller Puckette  wrote:
> Sure enough... a quick search for dc_calcsize verifies that it's not
> used anywhere although set - all the 'ugens' actually look at the allocated
> size of input/output signals to determine the number of points to calculate.
>
> I'm not sure how to fix this - and anyway I don't have any real patches that
> use this 'feature' that would permit me to test it :)
>
> M
>
> On Tue, Oct 23, 2012 at 07:41:08PM +0200, IOhannes m zmölnig wrote:
>> On 10/23/2012 06:30 PM, Miller Puckette wrote:
>> >Hi all -
>> >
>> >block sizes in subpatches are restricted to being a power of two multiple
>> >or submultiple of the containing patch.  So the only context in which a
>> >non-power-of-two blocksize is allowed is if it's specified in the top-level
>> >patch.
>> >
>> >Then, of course, dac~ and adc~ will no longer work as they need block size
>> >of 64.
>> >
>>
>> yes, i'm aware of that.
>> i only wanted to say that in real life, i haven't been able to
>> construct a patch that would do non-power-of-two processing, even if
>> it was in a top-level patch without any fancy input/output.
>> e.g. claude's example patch doesn't do non-power-of-two
>> block-processing but instead falls back to the next greater 2^n
>> blocksize.
>>
>> so if somebody (miller?) could post a simplistic patch that really
>> does block-processing with an odd number of samples, that would be
>> great.
>>
>>
>> fgmadsr
>> IOhannes

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] [OT] spambots that deliver compliments?

2012-04-03 Thread Charles Henry
On 4/3/12, András Murányi  wrote:
> Maybe it tries to inject javascript for xss
> (http://en.wikipedia.org/wiki/Cross-site_scripting) or php or mysql to
> be eventually executed on the server? Or we're just being malicious ;)
>
> András

LOL--I get it now.  Probably, sourceforge has some intelligence (more
than me) to strip out script tags.  When I view the page source,
there's no additional text.  So, that seems likely--as long as the
motivation is to hack websites.
Thanks for the explanation.

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


[PD-dev] [OT] spambots that deliver compliments?

2012-04-03 Thread Charles Henry
I'm somewhat confused by these recent comments to the bug tracker
(tickets 3514520, 3514538, 3514563).  There appears to be a spambot
out there that just enters non-descript compliments into web forms.

Who would write such a thing?  I thought spambots were created for
phishing scams, selling fake products, or boosting link counts to
unscrupulous websites (which are motivations I can understand even if
I disagree with them).  This one appears to have no purpose at all
except to provide encouragement to bloggers.  It's just weird.

Chuck

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] per-thread storage in Pd in support of pdlib - discussion?

2012-02-14 Thread Charles Henry
On 2/13/12, Mathieu Bouchard  wrote:
> Do you understand what I say, or you just repeat what I was replying to ?

I thought I understood--was there something I missed?  The point of
the original remark is that you always lose some of your potential
computing power when trying to use multiple resources.  You contrast
with the capability of parallel computing to accomplish a certain
amount of work in less time.  I don't want to argue with you--these
are just the two sides of the coin.

> If it's not going to be read, I may as well not write it.

I had actually typed out and then deleted some more things about a
successful project that accelerated computing time from weeks to
hours, but I thought they were boring.  And then I was late for a
meeting with the same group I was about to write about!  Succeeding at
wasting my time indeed! -- Let's not make it parallel after all.

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] per-thread storage in Pd in support of pdlib - discussion?

2012-02-13 Thread Charles Henry
On 2/11/12, Mathieu Bouchard  wrote:
> Le 2012-01-26 à 14:45:00, Charles Henry a écrit :
>
>> When talking about cluster computing, I had someone once ask: "Is that a
>> case where the whole is greater than the sum of its parts?" "It's less.
>> Always less."
>
> Depends on how you count it. You may also see it as a bunch of computers
> in which 0 computer can do task T in time N, but they can join together to
> form 1 (or more) computer(s) that can do task T in time N or less. In that
> sense, it's infinitely more powerful. This way of seeing it is much more
> important in realtime apps than in batch-compute-over-the-weekend apps.
>
> It's like how one ninja turtle alone can't beat a certain evil monster,
> but with teamwork, they can. ;)

You just always lose on efficiency whenever you use several threads or
multiple nodes.  Best case is "less than or equal" to the sum of its
parts, and equal only when all the things you want to do are
independent.

It's easy to see the potential for doing fast calculations when building a
cluster... and then get disappointed by how much of it gets
wasted.  Look at that: a user just put 64 threads on one node and it
spends all its time switching contexts.  erm... one user just tried to
write 500 output files at once to the /home filesystem, and no one has
been able to log in for hours.

I'll go back to wasting my time, and see if I can make it parallel ;)

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] per-thread storage in Pd in support of pdlib - discussion?

2012-01-26 Thread Charles Henry
On Wed, Jan 25, 2012 at 5:32 PM, Peter Brinkmann wrote:

> I don't think users have anything to gain from fine-grained control of
> threads.  That seems like an optimization hint that may or may not be
> helpful, depending on a lot of factors that are not obvious and will differ
> from machine to machine.  In any case, I don't want to have to think about
> threads when patching any more than I want to think about, say, NEON
> optimizations.

I'm still making the case here:
Suppose you're writing a patch and you run up against the limitations
of a single-threaded process.  Then, you take some portion in a
sub-patch and drop in a "thread~" object.  You're able to selectively
add the functionality where it matters to you *and* only when you
actually need it.

The generalizable case is much preferable, I agree, but as you say
further on, an automatic approach might incur significant
overhead--and may not be appropriate for all applications.

The design rationale for PdCUDA (in progress..grumble) is to expose
the programming costs through benchmarks and measurement tools, while
providing user-level control.  It's a different sort of problem--where
mixtures of different kinds of processors are concerned, the
application being designed may not be appropriate for one kind (e.g.
recursive filters on CUDA will run slowly, so just don't put them
there!).

Again... my head's in audio.  I'm still puzzling over the other ideas on topic.

>> > I believe it's much simpler than that.  It should be enough to just do a
>> > topological sort of the signal processing graph; that'll tell you which
>> > objects are ready to run at any given time, and then you can parallelize
>> > the
>> > invocation of their perform functions (or not, depending on how many
>> > processors are available).  I don't think there's any need to explicitly
>> > synchronize much; tools like OpenMP should be able to handle this
>> > implicitly.
>> > Cheers,
>> >  Peter
>>
>> For that--the dspchain (an array of int*) makes a very bad structure.
>> So, you'll want to re-write a handful of functions and data structures
>> around having multiple concurrent branches of computations.  I
>> actually really like this problem :D  I can picture a linked list of
>> dspchains to do this.  But... the description of the sort algorithm
>> really will determine what the data structure ought to be.
>
>
> Hmm, I think that's a pretty standard scheduling problem.  All the necessary
> information is already in Pd, and it's being used when dsp_chain is
> computed.  It's just a matter of representing it as a dependency tree rather
> than a list, and then traversing the tree in the correct order.

I'm going to expand on this a bit here (I think we're on the same
page, but that just doesn't say enough about it).

For clarification/scope (and correct me if I'm wrong here), the
algorithm in d_ugen.c works this way within each dspcontext (see
functions ugen_done_graph and ugen_doit):
Create a list of all ugens in the current context.  Loop over the
list, and for all ugens with no inlet connections, call
ugen_doit--which calls the "dsp" function and proceeds depth-first,
scheduling other connected ugens that have no unfilled inlets.
If any ugens aren't scheduled, throw an error and make sure outlet
signals are new and set to zero in the parent context.
Delete all the ugens in the graph.

The dependency information is currently just being used as an
intermediate state while creating the dsp_chain structure (upon which
there is no strict dependency information).

The tree structure needs to be generated in place of a single
dsp_chain.  It would just be made up of short dsp_chain structures,
each one representing a depth-first branch (within which the
dependency is essential--these are strictly serial portions, run in a
single thread.  Yay!-code from dsp_tick goes here!).  Then, on a
higher level, you dispatch/schedule those branches.  At the higher
level (the tree), you need to represent the dependencies between
branches, which (at first guess) is a doubly-linked list with
breadth-wise links being potentially concurrent.

hmm... I guess that won't work for block_prolog and epilog as
written but it would be fine for most perform routines.

You wouldn't be using the dependency information in real-time to try
to make decisions what to schedule--just generate the new data
structure that tells how the program runs.
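To make that concrete, here is a rough sketch of what such a structure might
look like--all names are hypothetical, not Pd API.  Each branch holds a
strictly serial chain that is run exactly the way dsp_tick() walks dsp_chain,
and the links record which branches must finish first:

#include "m_pd.h"

typedef struct _dspbranch
{
    t_int *b_chain;                /* serial chain, terminated like dsp_chain */
    int b_chainsize;
    int b_ndepends;                /* number of branches that must finish first */
    struct _dspbranch **b_depends;
    struct _dspbranch *b_next;     /* breadth-wise link: potentially concurrent */
} t_dspbranch;

/* run one branch serially, the same loop as dsp_tick() */
static void dspbranch_run(t_dspbranch *b)
{
    t_int *ip;
    for (ip = b->b_chain; ip; )
        ip = (*(t_perfroutine)(*ip))(ip);
}

A scheduler would then walk the breadth-wise links and only dispatch a branch
once everything in its b_depends list has completed.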

Now, that description neglects other concerns I know Miller has
wrestled with--threadsafe delwrite/delread, tabwrite/tabread,
throw/catch, more?

> The real question is whether there's anything to gain from this at all, or
> whether the overhead from parallelization will destroy any gains.  I always
> remember the cautionary tale of a complex system that was carefully designed
> to work with an arbitrary number n of threads, until it was profiled and the
> designers found that it works best when n == 1.

When talking about cluster computing, I had someone once 

Re: [PD-dev] per-thread storage in Pd in support of pdlib - discussion?

2012-01-25 Thread Charles Henry
On Wed, Jan 25, 2012 at 11:46 AM, Peter Brinkmann wrote:
>
> Hi Chuck,
> Check out the early bits of this thread --- various use cases already came
> up along the way:
> http://lists.puredata.info/pipermail/pd-dev/2012-01/017992.html.  The short
> version is that libpd is being used in such a wide range of settings that
> you can come up with legitimate use cases for pretty much anything (single
> Pd instance shared between several threads, multiple Pd instances in one
> thread, and anything in between).  At the level of the audio library, it's
> impossible to make good assumptions about threading.

Hi Peter

That's the part I really don't understand, and I don't really have a
clear picture of how you want to be able to control/choose between
those cases.
I can also see how there could be more capabilities tied to having
multiple threads generally.    But specifically, I can't say.  I have
no clue.

>> I remember a conversation with IOhannes in August about
>> multi-threading audio via sub-canvas user interface object (propose
>> thread~ akin to block~).  If all you're after is audio
>> multi-threading--there's no need for multiple instances of Pd.
>> Threads could be used to start a portion of the dsp chain, running
>> asynchronously, and then join/synchronize with Pd when finished.
>
>
> I don't think a patch is the place where decisions about threading should be
> made.  Threading is an implementation detail that users shouldn't have to
> worry about, and besides, whether you have anything to gain from threading
> will depend on a number of factors that users won't necessarily be able to
> control or even know about.

I have a different view.  Every sort of use for Pd is like writing a
program--you should assume Pd users are writing programs with every
sort of tool you give them--the flipside to having to control
threading explicitly is that you get to control how finely grained the
threading is.  Putting it on the patching level is just the user
interface--and it can work out nicely for grouping.  Even if you have
some automatic tools, you may still want to have explicit control
through another available interface (e.g. for debugging).

>> What this would look like:  Add a thread_prolog, thread_epilog, and
>> thread_sync function. The thread_prolog function that occurs before
>> block_prolog, starts a thread running the portion of dsp chain
>> contained within, and returns the pointer to the function following
>> the thread_epilog.  The thread_epilog function that occurs after
>> block_epilog--waits for synchronization and returns.
>>
>>
>> What's the difficult part: You would need to have a good ordering of
>> the dsp chain to take advantage of concurrency--each subcanvas having
>> a thread~ object needs to kick off as early as possible, followed by
>> objects that have no dependence on its output.  Secondly, you'd need
>> to put thread_sync on the dsp chain immediately before you will
>> encounter functions with data dependencies.
>
>
> I believe it's much simpler than that.  It should be enough to just do a
> topological sort of the signal processing graph; that'll tell you which
> objects are ready to run at any given time, and then you can parallelize the
> invocation of their perform functions (or not, depending on how many
> processors are available).  I don't think there's any need to explicitly
> synchronize much; tools like OpenMP should be able to handle this
> implicitly.
> Cheers,
>  Peter

For that--the dspchain (an array of int*) makes a very bad structure.
So, you'll want to re-write a handful of functions and data structures
around having multiple concurrent branches of computations.  I
actually really like this problem :D  I can picture a linked list of
dspchains to do this.  But... the description of the sort algorithm
really will determine what the data structure ought to be.

Re-writing dsp_tick() is nearly sacrilege to me... beautiful bit of
code there, but that would have to be done according to whatever you
do to dspchain.

Chuck

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] per-thread storage in Pd in support of pdlib - discussion?

2012-01-25 Thread Charles Henry
On Sat, Jan 14, 2012 at 3:04 PM, Miller Puckette  wrote:
> To Pd dev -
>
> For some time the good folks who brought us pdlib have been asking how
> one could make it possible to run several instances of Pd in a single
> address space.

Maybe I have on my audio-colored glasses--but that's just where I see
things happening.

I remember a conversation with IOhannes in August about
multi-threading audio via sub-canvas user interface object (propose
thread~ akin to block~).  If all you're after is audio
multi-threading--there's no need for multiple instances of Pd.
Threads could be used to start a portion of the dsp chain, running
asynchronously, and then join/synchronize with Pd when finished.

What this would look like:  Add a thread_prolog, thread_epilog, and
thread_sync function. The thread_prolog function that occurs before
block_prolog, starts a thread running the portion of dsp chain
contained within, and returns the pointer to the function following
the thread_epilog.  The thread_epilog function that occurs after
block_epilog--waits for synchronization and returns.
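Here is a hedged sketch of what those routines might look like using
pthreads--none of this is existing Pd code, and the t_threadcontext struct and
helper names are made up.  Creating a thread per block is only for
illustration; a real version would keep a persistent worker and signal it with
condition variables.  In this sketch the join happens in thread_sync.

#include "m_pd.h"
#include <pthread.h>

typedef struct _threadcontext
{
    t_int *tc_chain;     /* first word of the enclosed dsp-chain portion */
    t_int *tc_resume;    /* word on the main chain just after thread_epilog */
    pthread_t tc_thread;
} t_threadcontext;

static void *threadcontext_run(void *z)
{
    t_threadcontext *tc = (t_threadcontext *)z;
    t_int *ip;
    for (ip = tc->tc_chain; ip; )            /* same loop as dsp_tick() */
        ip = (*(t_perfroutine)(*ip))(ip);
    return (0);
}

/* sits before block_prolog: start the worker, let the main chain skip ahead */
static t_int *thread_prolog(t_int *w)
{
    t_threadcontext *tc = (t_threadcontext *)(w[1]);
    pthread_create(&tc->tc_thread, 0, threadcontext_run, tc);
    return (tc->tc_resume);
}

/* sits after block_epilog, last in the worker's portion: returning zero
   ends the worker's loop above */
static t_int *thread_epilog(t_int *w)
{
    return (0);
}

/* sits on the main chain just before anything that needs the worker's output */
static t_int *thread_sync(t_int *w)
{
    t_threadcontext *tc = (t_threadcontext *)(w[1]);
    pthread_join(tc->tc_thread, 0);
    return (w+2);
}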

What's the difficult part: You would need to have a good ordering of
the dsp chain to take advantage of concurrency--each subcanvas having
a thread~ object needs to kick off as early as possible, followed by
objects that have no dependence on its output.  Secondly, you'd need
to put thread_sync on the dsp chain immediately before you will
encounter functions with data dependencies.

What this approach would provide: a user interface to control audio
threading, without having any duplication of global
variables/arrays/symbols, etc.  It would put the threading operations
closer to the calculations to be performed, and so eliminate many
problems as being out-of-scope.

I've been looking at asynchronous dsp chain operations for my PdCUDA
project (which is several months behind where I expected to be)--the
basic problem is the same.  I haven't gotten that far yet, but someday
I'm going to get to the point where improvements will be made by
changing the order in which perform functions are placed on the dsp
chain.

Maybe this doesn't fit what you want--so I'd refer back to problem
definition.  What's the point of threading and the usage case you have
in mind?

Chuck

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] double precision Pd: .patch files, tests and benchmarks

2011-10-03 Thread Charles Henry
On Mon, Oct 3, 2011 at 10:19 AM, IOhannes m zmoelnig  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 2011-10-03 16:31, Hans-Christoph Steiner wrote:
>>
>>> These all sound like good ideas to try.  My only concern is that we
>>> might let the deployment issues distract from the issues at hand about
>>> getting it actually working first.
>
> i'm definitely with you here.
> what is still missing in terms of "getting it actually working first"?
>
> afaict, katja's patches do make pd itself double-precision ready (the
> patches might have to be reviewed as far as coding-style is concerned
> though)
>
> otoh, i wouldn't start "porting" externals before we have a deployment
> strategy.
>
>
>
> one important thing missing right now, is how to compile Pd in a given
> precision without having to edit m_pd.h
> technically i think that the define stuff and the like should go into a
> separate file "types.h" (probably "m_types.h") which is generated from
> m_types.h.in during configure time, and which is included by m_pd.h
> (which should remain non-generated)
> the question is, what miller would think of such a thing.

Would you prefer to set the types at configure time through a file--or
for example by adding a -DDOUBLE compiler flag?  The affected
locations of code defining the types could just use #ifdef DOUBLE.
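For example, something like this (a sketch of the -DDOUBLE idea only--not
what m_pd.h actually does):

/* pick the floating-point types at compile time; -DDOUBLE selects doubles */
#ifdef DOUBLE
typedef double t_float;     /* message/computation float type */
typedef double t_sample;    /* audio signal sample type */
#else
typedef float t_float;
typedef float t_sample;
#endif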

In either case, the configure option seems necessary.  It still seems
an open question how best to offer the double-precision types to
externals developers.

In some cases, the setup() function allocates memory, which needs to
be aware of the data type size.
Otherwise, memory for signals is allocated through Pd's DSP graph
generation routines, so the only change to externals is to compile
their perform routines with the given data type.
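What "compile perform routines with the given data type" means in practice:
write the loop in terms of t_sample, and the same source builds at either
precision.  (The gain_tilde_perform name here is just a made-up example.)

static t_int *gain_tilde_perform(t_int *w)
{
    t_sample *in  = (t_sample *)(w[1]);
    t_sample *out = (t_sample *)(w[2]);
    int n = (int)(w[3]);
    while (n--)
        *out++ = *in++ * 0.5;    /* works whether t_sample is float or double */
    return (w+4);
}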

Adding additional methods seems unnecessary--unless specific
performance problems can be avoided.


>
>>> In terms of packaging, I can see having 64-bit distros run
>>> double-precision Pd for all packages, and 32-bit distros run single
>>> precision.  That should cover the bulk of situations, the other
>>> situations can be covered by custom builds.  Having all the 64-bit
>>> packages use double-precision Pd is of course going to happen after a
>>> while, once we have the bugs worked out.  Here I can see an advantage of
>>> the monolithic Pd-extended package: its an easy, self-contained test bed.
>
> definitely, the traditional Pd-extended will have an easier time here.
>
> nevertheless, the advent of ~/pd-externals for the user has made things
> significantly more complicated in terms of "just works".
>
> fgmasdr
> IOhannes

I don't anticipate any problems with running double-precision (64-bit
float) Pd on a 32-bit system, in principle, as long as the correct
data types are set independently for pointers (t_int must match the
pointer size) and for signals (the size of t_sample).

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] canvas class polymorphism

2011-03-24 Thread Charles Henry
On Wed, Mar 23, 2011 at 4:40 PM, Hans-Christoph Steiner wrote:

>
> I think its going to be quite difficult to have a single object running in
> the CUDA/GPU while the rest of the patch runs on the CPU in regular Pd.  My
> guess is that the best first step would be to implement a basic Pd in CUDA,
> then work from there about integrating it.  Perhaps then you could use the
> [pd~] model.
>

That's given me something to think about.  How it might be done: functions
need to be declared __device__ to run as serial processes on the GPU.
There's no loss of what you can do in C on the GPU--it's just a much slower
clock, and threading is different.  Then, all the dynamic memory allocation
routines would need to be changed.

The functions to be called from the host would be declared __global__.

It sounds cumbersome, but possible.  Perhaps it could be done for
libpd--but then I know there are still tens of thousands of lines of code
in libpd.  I'm not a libpd user, so I don't know how I'd test it without
the gui/editor.  That's just conjecture, but I can't see a good way to
simplify it yet.

I think it's better to apply CUDA only to the major vectorizable operations,
which means signal operations--but even then, I couldn't just apply it to
*all* signal operations.  So, I'd try to add the basic compatibility stuff
for managing signals allocated on the GPU... and then go back and try to
incrementally add new signal externals based on the Pd math tilde objects.


> For examples of reimplmentations of Pd, check out ZenGarden (C++) and
>  Webpd (Javascript) http://mccormick.cx/projects/WebPd/
>
> .hc
>

I've been having fun playing with WebPd--but I'm not needing that tool just
yet.  Soon... soon...



>
> On Mar 23, 2011, at 5:20 PM, Charles Henry wrote:
>
> Hi, hc
>
> Let me explain a little further here.  The end goal is to have an external
> library that allows one to create externals that use memory on GPUs.  Big
> idea here is that once you've got a system for handling the memory
> allocation and dsp sorting in *exactly* the same way as Pd, then you can
> write externals for CUDA or CL in a way that's consistent with existing
> externals.
>
> With Pd handling the signal memory allocation inside of d_ugen.c and called
> from canvas_dodsp, I wanted for my external library to have its own canvas
> class and different methods for handling the memory allocation differently.
> In fact, I think of that as being the key class to create the library.
>
> I worked through it for a while, and I think it's just plain impossible to
> have another canvas class in an external library, unless there's something
> good I don't understand.  And I really want to understand :)
>
> So, since then, I've been thinking that I'd have to modify the pd-vanilla
> source code.  I've got something that loads a CUDA device and gets me to run
> code during canvas_new, canvas_free, and canvas_dsp.  I'm still trying to
> organize around the idea of having an external library to load and make as
> few changes directly in the vanilla source code.
>
> Chuck
>
> On Wed, Mar 23, 2011 at 1:51 PM, Hans-Christoph Steiner wrote:
>
>>
>> What's the end goal here? You want an object that acts like a
>> t_canvas/t_glist?
>>
>> .hc
>>
>>
>> On Mar 13, 2011, at 3:31 PM, Charles Henry wrote:
>>
>>  I've been working through my CUDA Pd project, and I ran into the problem
>>> of making externals that copy the canvas class.
>>>
>>> My first idea was that I wanted a completely separate class with
>>> different methods using glist.  Calls from Pd looking for t_canvas work just
>>> fine, but functions like pd_findbyclass that look for canvas_class fail.  I
>>> started mucking around in the pd src, and I think it's just too difficult
>>> and would make onerous changes that I don't like.
>>>
>>> Is there something I'm not getting about canvas classes and externals?
>>>
>>> My second approach to creating an external library is to modify glist by
>>> adding an "unsigned int gl_hascuda" variable.  I'd still prefer solutions
>>> that make use of entirely external libraries over modifying src, but this
>>> small change gets me half the way there.  Then, I just need to write the
>>> creator functions and the class works.
>>>
>>> Chuck
>>> ___
>>> Pd-dev mailing list
>>> Pd-dev@iem.at
>>> http://lists.puredata.info/listinfo/pd-dev
>>>
>>
>>
>>
>>
>>
>>
>> 
>>
>> "[T]he greatest purveyor of violence in the world today [is] my own
>> government." - Martin Luther King, Jr.
>>
>>
>>
>>
>
>
>
> 
>
> "A cellphone to me is just an opportunity to be irritated wherever you
> are." - Linus Torvalds
>
>
___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] canvas class polymorphism

2011-03-23 Thread Charles Henry
Hi, hc

Let me explain a little further here.  The end goal is to have an external
library that allows one to create externals that use memory on GPUs.  Big
idea here is that once you've got a system for handling the memory
allocation and dsp sorting in *exactly* the same way as Pd, then you can
write externals for CUDA or CL in a way that's consistent with existing
externals.

With Pd handling the signal memory allocation inside of d_ugen.c and called
from canvas_dodsp, I wanted for my external library to have its own canvas
class and different methods for handling the memory allocation differently.
In fact, I think of that as being the key class to create the library.

I worked through it for a while, and I think it's just plain impossible to
have another canvas class in an external library, unless there's something
good I don't understand.  And I really want to understand :)

So, since then, I've been thinking that I'd have to modify the pd-vanilla
source code.  I've got something that loads a CUDA device and gets me to run
code during canvas_new, canvas_free, and canvas_dsp.  I'm still trying to
organize around the idea of having an external library to load and make as
few changes directly in the vanilla source code.

Chuck

On Wed, Mar 23, 2011 at 1:51 PM, Hans-Christoph Steiner wrote:

>
> What's the end goal here? You want an object that acts like a
> t_canvas/t_glist?
>
> .hc
>
>
> On Mar 13, 2011, at 3:31 PM, Charles Henry wrote:
>
>  I've been working through my CUDA Pd project, and I ran into the problem
>> of making externals that copy the canvas class.
>>
>> My first idea was that I wanted a completely separate class with different
>> methods using glist.  Calls from Pd looking for t_canvas work just fine, but
>> functions like pd_findbyclass that look for canvas_class fail.  I started
>> mucking around in the pd src, and I think it's just too difficult and would
>> make onerous changes that I don't like.
>>
>> Is there something I'm not getting about canvas classes and externals?
>>
>> My second approach to creating an external library is to modify glist by
>> adding an "unsigned int gl_hascuda" variable.  I'd still prefer solutions
>> that make use of entirely external libraries over modifying src, but this
>> small change gets me half the way there.  Then, I just need to write the
>> creator functions and the class works.
>>
>> Chuck
>> ___
>> Pd-dev mailing list
>> Pd-dev@iem.at
>> http://lists.puredata.info/listinfo/pd-dev
>>
>
>
>
>
>
>
> 
>
> "[T]he greatest purveyor of violence in the world today [is] my own
> government." - Martin Luther King, Jr.
>
>
>
>
___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


[PD-dev] canvas class polymorphism

2011-03-14 Thread Charles Henry
I've been working through my CUDA Pd project, and I ran into the problem of
making externals that copy the canvas class.

My first idea was that I wanted a completely separate class with different
methods using glist.  Calls from Pd looking for t_canvas work just fine, but
functions like pd_findbyclass that look for canvas_class fail.  I started
mucking around in the pd src, and I think it's just too difficult and would
make onerous changes that I don't like.

Is there something I'm not getting about canvas classes and externals?

My second approach to creating an external library is to modify glist by
adding an "unsigned int gl_hascuda" variable.  I'd still prefer solutions
that make use of entirely external libraries over modifying src, but this
small change gets me half the way there.  Then, I just need to write the
creator functions and the class works.

Chuck
___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] CUDA discussion

2010-02-24 Thread Charles Henry
Sorry for the double post...  I realize I'm not checking the list name
correctly each time--so the thread is a little polluted.

I'm still studying the Pd source code and trying to figure out the
best place to tie in CUDA functions.  I think what I'd like to do is
create a "special" canvas that owns CUDA pd objects, handles their
memory allocation and adds them to the dsp graph.  Is this on the
right track?

Chuck

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


[PD-dev] help with d_ugen.c

2009-11-12 Thread Charles Henry
Hi, list

I'm reading through d_ugen.c and I mostly understand what it says.  I
was wondering if anybody has already written a paper to tell what is
in the Pd DSP internals, such as scheduling the DSP tree, or data
caching strategies.

I am most curious about the behavior of "borrowed signals" and signals
that are on the signal_freelist and signal_usedlist.  Am I correct in
assuming these are the data structures used to determine caching of
data between DSP perform routines?
Is there a way to specify an external with a "borrowed" signal on the
inlet, but having an output signal that is not borrowed (or
vice-versa)?

Chuck

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] CUDA discussion

2009-11-03 Thread Charles Henry
> (proposed) incremental milestones:
> 1.  Create an external that checks GPUs and hands back error messages to Pd.
> 2.  Create an external that initializes GPUs.
> 3.  Create an external that performs host<->device memory transfer and
> runs an operation.
> 4.  Create an external that performs an operation and compares the
> time it takes against the same operation on CPU.  At this point, it
> should be possible to identify and hopefully quantify the potential
> speedup on GPU, and decide whether or not it is worth it.

add:
5.  Create an external that accepts *in as a pointer to host memory,
and returns as *out a pointer to gpu memory.
6.  Create an external that accepts *in as a pointer to gpu memory,
and returns as *out a pointer to host memory.
7.  Create an external that performs an operation in gpu memory and
returns a pointer to gpu memory without any host<->gpu memory
transfers.

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] CUDA discussion

2009-11-03 Thread Charles Henry
> top-down design issues:
> 1.  The essential CUDA<->Pd functions should be made separate from
> CUDA based Pd externals, with a separate header file, and compilable
> to shared and static libraries.
> 2.  The set of CUDA<->Pd extensions needs to be able to manage
> multiple devices, including device query, initialization and setting
> global parameter sets per GPU.  Most likely, this means a custom data
> structure and object based method system.
> 3.  Compilation--how to create the build system and handle
> dependencies for a library of CUDA based externals.  Management of
> CUDA libraries, CUDA-rt and CUDA-BLAS especially.
> 4.  Testing and initialization.  At setup time, a CUDA based external
> should be able to find out if it is legal and ready to run.
> 5.  Abstraction of major device and memory operations.  What makes up
> a sufficient and complete set of operations?  This is a list that is
> most likely to be grown through experimentation, but a good
> preliminary list of operations will help get things started on the
> right footing.
> 6.  Performance.  How to profile or benchmark and make comparisons
> between implementations?  The single greatest performance issue that I
> have identified is caching on GPU.  host<->device memory transfers can
> be eliminated in some cases, allowing CUDA based externals to follow
> one another in the DSP tree with faster scheduling and runtime
> performance.

add:
7.  Namespace.  Should be able to duplicate existing objects with
unified variations on names.

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


[PD-dev] CUDA discussion

2009-11-02 Thread Charles Henry
Dear list,

I'd like to start a conversation about CUDA and Pd.

For those of you who don't know, CUDA is NVIDIA's C-based programming
model for doing single precision floating point calculations on NVIDIA
GPUs.  Blocks of data are copied to GPU device memory and operations
are performed on that data by thread blocks, in multiples of 32
threads.  A complete set of floating point math functions is available
for CUDA.  The CUDA compiler nvcc works very well alongside gcc.

I've been studying it at work, but have not coded anything for Pd yet.
There are a lot of performance issues based on tiny details in
the documentation--the implementation of CUDA-based externals could be
made fairly simple for developers if a complete set of CUDA<->Pd
extensions were coded from the beginning.

Any project worth doing is worth doing right.  So, I want to figure
out if:  a) it's worth doing and b) how to do it right.

I've got a first draft of top-down design issues, and I'd like to make
a list of incremental milestones that would prove the concept is
sound.

top-down design issues:
1.  The essential CUDA<->Pd functions should be made separate from
CUDA based Pd externals, with a separate header file, and compilable
to shared and static libraries.
2.  The set of CUDA<->Pd extensions needs to be able to manage
multiple devices, including device query, initialization and setting
global parameter sets per GPU.  Most likely, this means a custom data
structure and object based method system.
3.  Compilation--how to create the build system and handle
dependencies for a library of CUDA based externals.  Management of
CUDA libraries, CUDA-rt and CUDA-BLAS especially.
4.  Testing and initialization.  At setup time, a CUDA based external
should be able to find out if it is legal and ready to run.
5.  Abstraction of major device and memory operations.  What makes up
a sufficient and complete set of operations?  This is a list that is
most likely to be grown through experimentation, but a good
preliminary list of operations will help get things started on the
right footing.
6.  Performance.  How to profile or benchmark and make comparisons
between implementations?  The single greatest performance issue that I
have identified is caching on GPU.  host<->device memory transfers can
be eliminated in some cases, allowing CUDA based externals to follow
one another in the DSP tree with faster scheduling and runtime
performance.

(proposed) incremental milestones:
1.  Create an external that checks GPUs and hands back error messages to Pd.
2.  Create an external that initializes GPUs.
3.  Create an external that performs host<->device memory transfer and
runs an operation.
4.  Create an external that performs an operation and compares the
time it takes against the same operation on CPU.  At this point, it
should be possible to identify and hopefully quantify the potential
speedup on GPU, and decide whether or not it is worth it.

That's enough for now.  I'd like to know if anyone else has been
thinking along similar lines (CUDA has been out for, like, 2 years or
so now, so I bet that many people know about it), and if you have any
input on the design issues.

Chuck

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] [PD] Feedback discussion

2009-09-08 Thread Charles Henry
I've got a project that I've put on the shelf and haven't finished.  I
wanted to figure out how to create stable acoustic feedback, and I can
show that a single long-length (on the order of RT60 room
reverberation time) FIR filter can be used to equalize the acoustic
feedback path.  I never succeeded in all my experiments except to
create really loud noises that I thought were going to destroy my
loudspeakers, burn down my house, etc...  Perhaps other people have
been more successful with acoustic feedback.

The primary problem I found was that simple inverse and regularized
least-squares methods would produce "ringing" and non-compact
solutions that would result in instability when applied.  I don't have
time to share all the math tonight, but I will help with the
"discussion" part of all this :)

Chuck

On Tue, Sep 8, 2009 at 9:25 PM, Jerome Covington wrote:
> I'm interested to know who's been working with feedback, and if anyone
> has any patches they've developed, or that others have developed that
> they think is exemplary.
>
> --
> Regards,
> Jerome Covington
>  .  .  .  .   :   .  .  .  .   :
> "define audio development"
>
> ___
> pd-l...@iem.at mailing list
> UNSUBSCRIBE and account-management -> 
> http://lists.puredata.info/listinfo/pd-list
>

___
Pd-dev mailing list
Pd-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] Fwd: Allocating memory in externals

2007-11-08 Thread Charles Henry
Sounds interesting.  You probably want to read m_memory.c to see how
freebytes, copybytes, and the rest work.  See it here:
http://pure-data.cvs.sourceforge.net/pure-data/pd/src/m_memory.c?view=log

Chuck

On Nov 8, 2007 7:01 PM, Mike McGonagle <[EMAIL PROTECTED]> wrote:
> forgot to send to the list...
>
>
> -- Forwarded message --
> From: Mike McGonagle <[EMAIL PROTECTED]>
> Date: Nov 8, 2007 7:00 PM
> Subject: Re: [PD-dev] Allocating memory in externals
> To: Charles Henry <[EMAIL PROTECTED]>
>
>
> I guess my real issue is that I am using an embedded piece of software
> which provides its own set of memory functions, so I was just
> wondering if there are any issues with mixing all these different
> memory models.
>
> Thanks,
>
> Mike
>
>
> On 11/8/07, Charles Henry <[EMAIL PROTECTED]> wrote:
> > Hi, Mike,
> >   There are a few Pd-specific calls, although there is no harm AFAICT
> > (I have used them before, but I am not an expert) with using malloc(),
> > calloc(), or alloca().  In m_pd.h (v. 0.40.2 for example), you will
> > find:
> > /* --- memory management  */
> > EXTERN void *getbytes(size_t nbytes);
> > EXTERN void *getzbytes(size_t nbytes);
> > EXTERN void *copybytes(void *src, size_t nbytes);
> > EXTERN void freebytes(void *x, size_t nbytes);
> > EXTERN void *resizebytes(void *x, size_t oldsize, size_t newsize);
> >
> > for the pd-specific memory calls.
> >
> > Chuck
> >
> > On Nov 8, 2007 6:27 PM, Mike McGonagle <[EMAIL PROTECTED]> wrote:
> > > Hello all,
> > >
> > > I was wondering if there are any guidelines to allocating memory in an
> > > external? Is there any harm to using malloc/free? Or is there some
> > > PD-specific calls that should be used?
> > >
> > >
> > > Mike M
> > >
> > >
> > > --
> > > Help the Environment, Plant a Bush back in Texas!
> > >
> > > "I place economy among the first and most important republican
> > > virtues, and public debt as the greatest of the dangers to be feared.
> > > To preserve our independence, we must not let our rulers load us with
> > > perpetual debt."
> > > -- Thomas Jefferson, third US president, architect and author (1743-1826)
> > >
> > > "Give Peace a Chance" -- John Lennon (9 October 1940 – 8 December 1980)
> > >
> > > Peace may sound simple—one beautiful word— but it requires everything
> > > we have, every quality, every strength, every dream, every high ideal.
> > > —Yehudi Menuhin (1916–1999), musician
> > >
> > > If you think you can, or you think you can't, you are probably right.
> > > —Mark Twain
> > >
> > > "Art may imitate life, but life imitates TV."
> > > Ani DiFranco
> > >
> > > ___
> > > PD-dev mailing list
> > > PD-dev@iem.at
> > > http://lists.puredata.info/listinfo/pd-dev
> > >
> >
>
>
> --
> Help the Environment, Plant a Bush back in Texas!
>
> "I place economy among the first and most important republican
> virtues, and public debt as the greatest of the dangers to be feared.
> To preserve our independence, we must not let our rulers load us with
> perpetual debt."
> -- Thomas Jefferson, third US president, architect and author (1743-1826)
>
> "Give Peace a Chance" -- John Lennon (9 October 1940 – 8 December 1980)
>
> Peace may sound simple—one beautiful word— but it requires everything
> we have, every quality, every strength, every dream, every high ideal.
> —Yehudi Menuhin (1916–1999), musician
>
> If you think you can, or you think you can't, you are probably right.
> —Mark Twain
>
> "Art may imitate life, but life imitates TV."
> Ani DiFranco
>
>
> ___
> PD-dev mailing list
> PD-dev@iem.at
> http://lists.puredata.info/listinfo/pd-dev
>

___
PD-dev mailing list
PD-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] Allocating memory in externals

2007-11-08 Thread Charles Henry
Hi, Mike,
  There are a few Pd-specific calls, although there is no harm AFAICT
(I have used them before, but I am not an expert) with using malloc(),
calloc(), or alloca().  In m_pd.h (v. 0.40.2 for example), you will
find:
/* --- memory management  */
EXTERN void *getbytes(size_t nbytes);
EXTERN void *getzbytes(size_t nbytes);
EXTERN void *copybytes(void *src, size_t nbytes);
EXTERN void freebytes(void *x, size_t nbytes);
EXTERN void *resizebytes(void *x, size_t oldsize, size_t newsize);

for the pd-specific memory calls.
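A small usage sketch of those calls (the t_buf struct and function names here
are made up for illustration):

#include "m_pd.h"

typedef struct _buf
{
    t_float *b_vec;
    int b_n;
} t_buf;

static void buf_resize(t_buf *x, int newsize)
{
    if (!x->b_vec)
        x->b_vec = (t_float *)getbytes(newsize * sizeof(t_float));
    else
        x->b_vec = (t_float *)resizebytes(x->b_vec,
            x->b_n * sizeof(t_float), newsize * sizeof(t_float));
    x->b_n = newsize;
}

static void buf_free(t_buf *x)
{
    if (x->b_vec)
        freebytes(x->b_vec, x->b_n * sizeof(t_float));  /* note: size required */
}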

Chuck

On Nov 8, 2007 6:27 PM, Mike McGonagle <[EMAIL PROTECTED]> wrote:
> Hello all,
>
> I was wondering if there are any guidelines to allocating memory in an
> external? Is there any harm to using malloc/free? Or is there some
> PD-specific calls that should be used?
>
>
> Mike M
>
>
> --
> Help the Environment, Plant a Bush back in Texas!
>
> "I place economy among the first and most important republican
> virtues, and public debt as the greatest of the dangers to be feared.
> To preserve our independence, we must not let our rulers load us with
> perpetual debt."
> -- Thomas Jefferson, third US president, architect and author (1743-1826)
>
> "Give Peace a Chance" -- John Lennon (9 October 1940 – 8 December 1980)
>
> Peace may sound simple—one beautiful word— but it requires everything
> we have, every quality, every strength, every dream, every high ideal.
> —Yehudi Menuhin (1916–1999), musician
>
> If you think you can, or you think you can't, you are probably right.
> —Mark Twain
>
> "Art may imitate life, but life imitates TV."
> Ani DiFranco
>
> ___
> PD-dev mailing list
> PD-dev@iem.at
> http://lists.puredata.info/listinfo/pd-dev
>

___
PD-dev mailing list
PD-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] copybytes - partial array?

2007-09-16 Thread Charles Henry
It will not work.  Here is our function prototype:
EXTERN void *copybytes(void *src, size_t nbytes);

The thing to see here is that it doesn't actually copy the data to a
location you specify.  It creates a copy of the data, somewhere in
memory, and returns a pointer to the location.  If you want to have a
contiguous array of numbers, you'd have to copy them one at a time
from one array to the other anyhow.

I suggest just using getbytes() and for loops to do the assignment.
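A sketch of that suggestion, using Ed's example of elements 0-10 followed by
elements 20-30 (the function name and everything around it is made up):

#include "m_pd.h"

static t_float *copy_two_chunks(t_float *src)
{
    int n = 11 + 11, i, j = 0;     /* elements 0-10 and 20-30, inclusive */
    t_float *dst = (t_float *)getbytes(n * sizeof(t_float));
    for (i = 0; i <= 10; i++) dst[j++] = src[i];
    for (i = 20; i <= 30; i++) dst[j++] = src[i];
    return (dst);   /* caller frees with freebytes(dst, n * sizeof(t_float)) */
}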

Chuck


On 9/16/07, Ed Kelly <[EMAIL PROTECTED]> wrote:
> Hi again to all devs,
>
> Is there any way to use copybytes to take a chunk out of an array? Say, to
> copy elements 0-10 of an array into another buffer, then concatenate
> elements 20-30 onto the end of it?
>
> B3st
> Ed++
>
>
> Lone Shark "Aviation" out now on
> http://www.pyramidtransmissions.com
> http://www.myspace.com/sharktracks
>
> ___
> PD-dev mailing list
> PD-dev@iem.at
> http://lists.puredata.info/listinfo/pd-dev
>
>

___
PD-dev mailing list
PD-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] does pd use dual buffer approach ?

2007-08-18 Thread Charles Henry
>   However, then I created a canvas that covered up some things to make
> a GOP patch.  Ummm apparently I did things in the wrong order, so it
> crashes now, every time.

nope, it's not the canvas--I've confirmed that much.  I will need
to re-organize and re-do the patch... that will take a few days...
but take my word, you can make long filters without audio dropouts...

Chuck

___
PD-dev mailing list
PD-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] does pd use dual buffer approach ?

2007-08-18 Thread Charles Henry
Hey, there, Sergei
  I was just working on an example to show you that it can be done.  I
have worked with this quite a bit, and have successfully used filters
as long as 1.5 seconds @ 44kHz using my dinky little 1.6GHz
Sempron.  Although the original implementation was a subband adaptive
filter using a wavelet transform, and many other externals I wrote to do
it, it DID use pd's own blocking/dsp scheduling system.  So I, for one,
think it is possible, depending upon your system (also, I was using
Linux, by the way).
  However, then I created a canvas that covered up some things to make
a GOP patch.  Ummm apparently I did things in the wrong order, so it
crashes now, every time.
I will try to recover the work I just did, and get back to you as soon as I can.

Chuck



On 8/18/07, Sergei Steshenko <[EMAIL PROTECTED]> wrote:
>
> --- Miller Puckette <[EMAIL PROTECTED]> wrote:
>
> > Depending on the OS, you can get at least 100 milliseconds of buffering and
> > perhaps much more... so it should be fine.
> >
> > cheers
> > Miller
> >
>
> Well, the OS is Linux (SUSE 10.2 to be precise) and I'm interested in FFT
> transforms taking, say, 3 seconds.
>
> Will pd allocate the necessary buffers/threads ?
>
> Can pd be configured to cope with such latencies ?
>
> If yes, at run time or at compile time ?
>
> Thanks,
>   Sergei.
>
> Applications From Scratch: http://appsfromscratch.berlios.de/
>
>
>
> ___
> PD-dev mailing list
> PD-dev@iem.at
> http://lists.puredata.info/listinfo/pd-dev
>

___
PD-dev mailing list
PD-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] pointers

2007-07-09 Thread Charles Henry
On 7/8/07, Mathieu Bouchard <[EMAIL PROTECTED]> wrote:
> On Sat, 7 Jul 2007, Charles Henry wrote:
>
> > I was wondering if you could clarify for me what types of data
> > structures you are pointing to, because I could think of an application
> > that this would solve.
>
> In modern GridFlow, "grid" messages (the only special messages in
> GridFlow) contain one single fake pointer which is always pointing to a
> GridOutlet structure. When an object receives that, it calls ->callback(x)
> on the structure, where x has to be a GridInlet*. This subscribes the
> GridInlet to the streaming that will be performed by the GridOutlet as
> soon as the message handler returns.

I think I understand now.  You pass pointers to structures.  That
makes a lot of sense.

> As of now, this is always ever done by grid.c, so actual grid-handling
> pd classes named like [#whatever] never know about this and it could
> actually change from version to version (and it did). When patching, you
> are not allowed to assume anything about the contents of the grid message.
>
> > For a while now, I was wondering how to include a wave packets
> > transform in pd.  The wave++ library has a wave packets transform,
> > where the structure of the transform is provided as a tree, and the
> > tree is a class.
>
> We've had an experimental datatype in LTI (a C++ library wrapped by
> GridFlow) for pointing to special (non-gridoutlet) structs like that but
> it's not like we're so proud of it.
>
> Btw, if that's a balanced tree, it's rather easy to flatten it into an
> array in a predictable way. You've thought about that? (Are you talking
> about wavelets?)

The wave packets transform is a wavelet transform, in which each of
the branches with greater than 1 point can be expanded in the same
manner.  The big idea of using one is to do a best basis search, which
depends on the signal characteristics.  So, the best basis can be
drastically different from one signal to another.
I am sure there is a way to flatten the tree into an array, with
various extra parameters that tell you how to traverse the tree by
moving to different indexes of the array.  You're certainly correct.
I really hadn't thought of that before.
The existing wave++ library has functions on classes.  So flattening
the tree for passing, only to un-flatten it afterwards, is less
efficient than passing a pointer to the tree.
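For what it's worth, the usual flattening for a complete binary tree is the
binary-heap layout--nothing specific to wave++ or GridFlow, just the standard
index arithmetic:

static int tree_left(int i)   { return 2*i + 1; }
static int tree_right(int i)  { return 2*i + 2; }
static int tree_parent(int i) { return (i - 1)/2; }
/* a complete tree of depth d fits in an array of (1 << (d+1)) - 1 nodes,
   with the root at index 0 */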

But really, it's a pretty impractical example.
There would be little need to have these specialized structures as
externals, to be passed to other externals.  A series of objects that
perform some kind of wave packets analysis/processing would only be
compatible with each other.  If you wanted to do some kind of
processing on the packets, you would have to write another external
for it, just to decode the structure anyway.  It's really just a
curiosity (as far as I can tell)
Thanks for describing to me how you pass "fake" pointers :)

Chuck

___
PD-dev mailing list
PD-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


Re: [PD-dev] pointers

2007-07-09 Thread Charles Henry
I was wondering if you could clarify for me what types of data
structures you are pointing to, because I could think of an
application that this would solve.

For a while now, I was wondering how to include a wave packets
transform in pd.  The wave++ library has a wave packets transform,
where the structure of the transform is provided as a tree, and the
tree is a class.  Could we make an external that does the wave packets
transform and stores the tree, then passes a pointer to the tree to
the next stage in processing.  (I'm not suggesting to do this any time
soon, I just thought of an example)
Do you do similar things in GridFlow?  What kinds of pointers do you
think are necessary to handle?

Chuck

On 6/23/07, Mathieu Bouchard <[EMAIL PROTECTED]> wrote:
>
> GEM, PDP, GridFlow, all use fake pointers. That is, pd defines A_POINTER
> as being a t_gpointer * and all of the above externals disregard this
> completely and send something else. It's because pd provides no other
> sensible means to send a pointer.
>
> Older versions of GridFlow were sending a pointer as 2 floats of integer
> value, and I was going to make it 3 floats when beginning to support
> 64-bit, but thought that although misusing A_POINTER was more evil in my
> book, at least it's being done with passing only one atom around, and I'm
> also happy to be doing the same evil as everybody else too.
>
> (I could have tried a plain reinterpret_cast<> so that it'd be 1 float in
> 32-bit or 2 floats in 64-bit, but I didn't want to think of the special
> float values such as +0, -0, +inf, -inf, NaN, denormals, and anything
> else. It could've been fine, but I didn't want to have to think about it.)
>
>   _ _ __ ___ _  _ _ ...
> | Mathieu Bouchard - tél:+1.514.383.3801, Montréal QC Canada
> ___
> PD-dev mailing list
> PD-dev@iem.at
> http://lists.puredata.info/listinfo/pd-dev
>
>

___
PD-dev mailing list
PD-dev@iem.at
http://lists.puredata.info/listinfo/pd-dev


[PD-dev] Symbol "_setup" not found

2007-03-20 Thread Charles Henry
I finished a first draft of an external, called tabread4a~ (which
gratuitously borrows tabread4~).  It compiles fine, but when loaded by
Pd, it returns an error: "Symbol "tabread4a_tilde_setup" not found"
Can anyone tell me where I've gone wrong here?

Chuck

Enclosed is the text of tabread4a~.c:

/ tabread4a~ ***/
/*  The guts of this external are "borrowed"
from tabread4~.  For normal or slower playback
speeds, this external is intended to simply run
the same as tabread4~. For faster speeds, the
interpolation polynomial is modified to eliminate
aliased frequencies. */

#include <m_pd.h>
#include <math.h>

float interp(float x)
{
    float absx=fabsf(x);
    return ((absx<2.0f)*((absx<1.0f)?(1-absx*(0.5f+absx*(1-0.5f*absx))):(1-absx*(1.83f-absx*(1.0f-0.166f*absx)))));
}

static t_class *tabread4a_tilde_class;

typedef struct _tabread4a_tilde
{
t_object x_obj;
int x_npoints;
float *x_vec;
t_symbol *x_arrayname;
float x_f;
float last_input;
} t_tabread4a_tilde;

static void *tabread4a_tilde_new(t_symbol *s)
{
t_tabread4a_tilde *x = (t_tabread4a_tilde *)pd_new(tabread4a_tilde_class);
x->x_arrayname = s;
x->x_vec = 0;
outlet_new(&x->x_obj, gensym("signal"));
x->x_f = 0;
x->last_input=0;
return (x);
}

static t_int *tabread4a_tilde_perform(t_int *w)
{
t_tabread4a_tilde *x = (t_tabread4a_tilde *)(w[1]);
t_float *in = (t_float *)(w[2]);
t_float *out = (t_float *)(w[3]);
int n = (int)(w[4]);
int maxindex;
float *buf = x->x_vec, *fp;
int i;
float diff, findex;

maxindex = x->x_npoints - 3;

if (!buf) goto zero;

#if 0   /* test for spam -- I'm not ready to deal with this */
for (i = 0,  xmax = 0, xmin = maxindex,  fp = in1; i < n; i++,  fp++)
{
float f = *in1;
if (f < xmin) xmin = f;
else if (f > xmax) xmax = f;
}
if (xmax < xmin + x->c_maxextent) xmax = xmin + x->c_maxextent;
for (i = 0, splitlo = xmin+ x->c_maxextent, splithi = xmax - x->c_maxextent,
fp = in1; i < n; i++,  fp++)
{
float f = *in1;
if (f > splitlo && f < splithi) goto zero;
}
#endif

findex = *in;
diff=fabsf(findex-x->last_input);
if (diff > ((float) (maxindex/2)))
diff=((float) maxindex) - diff;
for (i = 0; i < n; i++)
{
if (diff <= 1.0f)
{
int index = findex;
float frac,  a,  b,  c,  d, cminusb;
static int count;
if (index < 1)
index = 1, frac = 0;
else if (index > maxindex)
index = maxindex, frac = 1;
else frac = findex - index;
fp = buf + index;
a = fp[-1];
b = fp[0];
c = fp[1];
d = fp[2];
/* if (!i && !(count++ & 1023))
post("fp = %lx,  shit = %lx,  b = %f",  fp, buf->b_shit,  b); */
cminusb = c-b;
*out++ = b + frac * (
cminusb - 0.167f * (1.-frac) * (
(d - a - 3.0f * cminusb) * frac + (d + 2.0f*a - 3.0f*b)
)
);
}
else
{
int a,b,c, itemp, itemp2, itemp3;
float sum_left, sum_right;
a=ceilf(findex-2*diff);//lowest value used in interpolation
b=findex;  //floor( findex)
c=findex+2*diff;   // highest value used, floor(findex+2*diff)

if ((a>0)&&(c<=maxindex))
{
fp=buf+a;
sum_left=(*fp++)*interp((findex-a)/diff);
while((a++)itemp)
sum_right+=(*(fp--))*interp((c-findex)/diff);
}
else
{
itemp=a;
while(itemp<1)
itemp+=maxindex;
fp=buf+itemp;
sum_left=(*fp++)*interp((findex-a++)/diff);
while(a<=b)
{
itemp2=b-a+1;
itemp3=maxindex-itemp+1;
itemp=(itemp2maxindex)
itemp-=maxindex;
fp=buf+itemp;
sum_right=(*fp--)*interp(((c--)-findex)/diff);
while(c>b)
{
itemp2=c-b;
itemp3=itemp-1;
itemp=(itemp2 ((float) (maxindex/2)))
diff=((float) maxindex) - diff;
findex = *in;
}
return (w+5);
 zero:
while (n--) *out++ = 0;

return (w+5);
}

void tabread4a_tilde_set(t_tabread4a_tilde *x, t_symbol *s)
{
t_garray *a;

x->x_arrayname = s;
if (!(a = (t_garray *)pd_findbyclass(x->x_arrayname, garray_class)))
{
if (*s->s_name)
pd_error(x, "tabread4a~: %s: no such array",
x->x_arrayname->s_name);
x->x_vec = 0;
}
else if (!garray_getfloatarray(a, &x->x_npoints, &x->x_vec))
{
pd_error(x, "%s