Hey Rob, Chris, David,

I have been looking at dimensional analysis from a string-theory point of
view, which I have adapted to make sense of it geometrically. What I mean is
that I use:


for my $mass (-10..10) {
    for my $length (-10..10) {
        for my $time (-10..10) {
            for my $current (-10..10) {
                # ... one combination of SI exponents per iteration
            }
        }
    }
}

which is the 10 string-theory dimensions plus one space dimension, equalling
11... but as you can see the exponents really run from 10 to -10, which sums
to 0, so there is really only one dimension of [space], or

("[mass]**0 * [length]**0 * [time]**0 * [current]**0") = [space]

with 10 dimensions inside that one space dimension, and that space flowing in
two directions, giving the "10 and the -10": really 20 dimensions, plus the
[space] dimension, = 21 dimensions.
Then the permutations of mass, length, time, and current come to 21*21*21*21
(not including an angle dimension), which equals 194481 SI units, and you
could say each one is its own dimension.
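The counting can be checked directly by enumerating the exponent combinations:
each of the four base quantities ranges over the 21 integer exponents from -10
to 10, so the nested loops visit 21**4 = 194481 combinations. A minimal sketch,
runnable from the shell, mirroring the loop nest above:

```shell
# Count the exponent combinations visited by the nested loops:
# 4 base quantities, each with exponents -10..10 (21 values) => 21**4.
perl -e '
    my $count = 0;
    for my $mass (-10..10) {
        for my $length (-10..10) {
            for my $time (-10..10) {
                for my $current (-10..10) {
                    $count++;
                }
            }
        }
    }
    print "$count\n";
'
# prints 194481, i.e. 21**4
```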

this is the model I'm using for my particle simulator. The thinking is that
photons and phonons are folded from strings, electrons and positrons are
folded from photons and phonons, and protons and neutrons can be described
mathematically using electrons and positrons... and all the elements can then
be described this way, with their spherical harmonics and nucleus descriptions,
mathematically, without constants or dimensional values; those values are
instead rationalized into ratios relating the hardware, the software, and the
experimentally derived constants and values, so that a purely mathematical
model can be matched to known experimentation when calculating the effects
that measurements have on a dimensional value.

meaning that I think you can bypass the uncertainty principle if you have the
right mathematics, and you can compare the outcome of that mathematics to the
outcome of experimentation...

so really you need to work in 194481 dimensions to be able to get the data,
so that you can then work in only 1 dimension, [space].

which is my plan: to derive a purely mathematical model without constants or
values, so that the software and hardware have fewer limitations, realizing
that I can rationalize those values and constants into ratios of the software
and hardware limitations.

and a matrix for the 194481 SI units would be a 21*4 matrix which holds the
rationalized data: the ratios from hardware and software to experimentally
confirmed data...
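One way to picture that 21*4 table is as 21 exponent rows (one per exponent
from -10 to 10) by 4 columns (mass, length, time, current). A minimal sketch;
the placeholder cell value and the row-index mapping e -> e + 10 are my own
assumptions for illustration, not data from the model:

```shell
# Hypothetical 21x4 table: row index = exponent + 10 (so -10..10 -> 0..20),
# column index 0..3 = mass, length, time, current.  Values are placeholders.
perl -e '
    my @table = map { [ (1) x 4 ] } 0 .. 20;   # 21 rows of 4 ones
    my ($e, $col) = (-3, 0);                   # e.g. mass exponent -3
    $table[$e + 10][$col] = 0.99987;           # made-up placeholder ratio
    printf "rows=%d cols=%d row(-3)=%d\n",
        scalar(@table), scalar(@{ $table[0] }), $e + 10;
'
# prints: rows=21 cols=4 row(-3)=7
```

In PDL the same table could be held as a 21x4 piddle instead of nested array
references.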

So the idea is that tens of thousands of dimensions will be needed in order
to eventually do without them and have a real theory, a purely mathematical
model that is confirmed by experimentation... and then to use these
simulations to create real products whose predicted outcomes are more
precisely accurate!





________________________________
From: chm <[email protected]>
To: David Mertens <[email protected]>
Cc: perldl <[email protected]>; Rob Freeman <[email protected]>
Sent: Wednesday, September 28, 2011 7:04 AM
Subject: Re: [Perldl] perl, cuda, and pdl

On 9/28/2011 8:15 AM, David Mertens wrote:
> Hey Rob,
>
> I've CC'd the PDL list in case somebody there can speak more to your
> concerns about PDL. I've never heard of anybody needing 10s of thousands of
> dimensions, and PDL might only support 256. Anybody know? As far as sparse
> matrix support, I never used it and it's not a crowning feature. Can anybody
> else speak more to the matter?

I think the dimensions he is describing are the lengths
of the vectors which would correspond to the size of
dim(0).

PDL doesn't natively support sparse representations
as far as I know.  One can use run-length encoding
and decoding to create a compact representation from
which sparse operations could be constructed.

You might wish to try the non-sparse version with PDL
to develop the approach (reducing the dimensionality
of your space if needed to meet memory requirements).
Once you have more details on the computation and
the memory requirements that could lead to a better
parallelized implementation.

--Chris

> There's no reason CUDA or OpenCL couldn't handle 10s of thousands of
> dimensions, if that's what you need, although you would have to write the C
> code to handle it. I'm not sure how to handle sparse matrices in CUDA,
> though I believe it's possible. However, my module doesn't really help teach
> CUDA, and you'll need to learn that somewhere before you'll see great gains
> in performance.
>
> David
>
> On Sep 26, 2011 11:45 PM, "Rob Freeman"<[email protected]>  wrote:
>>
>> David,
>>
>> GPU parallelization may not be sufficiently advanced for my purposes
>> yet anyway. The dimensions of my vectors are words, so they have
>> thousands, and even 10's of thousands of dimensions.
>>
>> I'll have a look at PDL. If it handles sparse arrays efficiently it
>> might get me over the hump. I can almost get away with speed issues by
>> storing intermediate products in RAM, but my current implementation
>> uses hashes, and Perl hashes seem to get way too big way too fast.
>>
>> The nVidia cross product routine may not matter. I need to define my
>> own basis vectors and their relationships. Nothing complex. It is
>> really just a lot of searching for combinations between vector
>> elements, substituting, and then collating all the substitutions. A
>> lot of small operations, and any can update any other, at any time. A
>> snap in parallel, but serially both really slow, and requiring really
>> enormous storage for intermediate results.
>>
>> -Rob
>>
>> On Mon, Sep 26, 2011 at 8:08 PM, David Mertens<[email protected]>
> wrote:
>>> Hi Rob!
>>>
>>> Thanks for contacting me about the CUDA module. Although I know the
>>> most about CUDA::Minimal, I expect that that PDL folks might also have
>>> something to say, so I've CC'd them on my response. The PDL community
>>> is a great resource for all questions related to numerical computing,
>>> except possibly for the BioPerl modules. Also, PDL might provide a
>>> good place to prototype your idea before moving to CUDA. Depending on
>>> the way in which you perform your cross product, PDL may be able to
>>> parallelize your calculation across multiple CPUs, if your machine has
>>> them.
>>>
>>> A cross-product seems like it would parallelize nicely, and nVidia
>>> even has a cross product available which you can find here:
>>> http://http.developer.nvidia.com/Cg/cross.html. However, the module
>>> that I wrote does not contain any Perl-callable CUDA kernels. The
>>> module was really aimed at my own CUDA work in which I wrote all my
>>> own kernels. If the supplied cross product does not work for you,
>>> you'll have to write your own kernel. How much do you know about CUDA?
>>>
>>> David
>
>
>
> _______________________________________________
> Perldl mailing list
> [email protected]
> http://mailman.jach.hawaii.edu/mailman/listinfo/perldl


_______________________________________________
Perldl mailing list
[email protected]
http://mailman.jach.hawaii.edu/mailman/listinfo/perldl