Are you guys aware of David Pollack's Visi.io/Visi.Pro projects? It seems fairly 
similar to what you are describing. He's creating a simple data-model/process 
language with strong type-system constraints that allows individuals to create 
domain-specific plugins for a cloud-processing/local-GUI platform. It's still 
in the very early stages, but the idea is promising.

Anyway, this discussion has been very interesting to observe.
I do hope that someone solves it someday.

Edward


Sent from my iPad

On 4 Oct 2012, at 00:10, David Barbour <dmbarb...@gmail.com> wrote:

> Distilling what you just said to its essence:
> - humans develop miniature dataflows
> - search algorithm automatically glues flows together
> - search goal is a data type
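> A minimal sketch of that search in Haskell (all names here are hypothetical; 
> this is not Sirea's or Visi's API):
> 
>     import qualified Data.Set as Set
> 
>     -- A "miniature dataflow": a typed edge from one data type to another.
>     data Flow = Flow { from :: String, to :: String, name :: String }
> 
>     -- Breadth-first search for a chain of flows that reaches the goal type.
>     glue :: [Flow] -> String -> String -> Maybe [Flow]
>     glue flows src goal = go Set.empty [(src, [])]
>       where
>         go _ [] = Nothing
>         go seen ((ty, path) : rest)
>           | ty == goal           = Just (reverse path)
>           | ty `Set.member` seen = go seen rest
>           | otherwise            = go (Set.insert ty seen)
>               (rest ++ [ (to f, f : path) | f <- flows, from f == ty ])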
> A potential issue is that humans, both engineers and end-users, will often 
> want a fair amount of control over which translations and data sources are 
> used, options for those translations, etc. You need a good way to handle 
> preferences, policies, and configurations.
> 
> I tend to favor soft constraints in those roles. I'm actually designing a 
> module system around the idea, and an implementation in Haskell (for RDP) 
> using the plugins system and dynamic types. (Related: 
> http://awelonblue.wordpress.com/2011/09/29/modularity-without-a-name/ , 
> http://awelonblue.wordpress.com/2012/04/12/make-for-haskell-values-part-alpha/).
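> 
> A sketch of how soft preferences might then choose among the pipelines that 
> search finds, continuing the hypothetical Flow type above (scores stand in 
> for whatever the module system would actually compute):
> 
>     import Data.List  (maximumBy)
>     import Data.Ord   (comparing)
>     import Data.Maybe (fromMaybe)
> 
>     -- Soft constraints: score candidate pipelines instead of hard-filtering.
>     -- Preferences weight translations by name; unlisted ones score zero.
>     score :: [(String, Double)] -> [Flow] -> Double
>     score prefs = sum . map (\f -> fromMaybe 0 (lookup (name f) prefs))
> 
>     -- Among all the pipelines the search found, keep the preferred one.
>     best :: [(String, Double)] -> [[Flow]] -> Maybe [Flow]
>     best _     []    = Nothing
>     best prefs paths = Just (maximumBy (comparing (score prefs)) paths)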
>  
> 
> Regards,
> 
> Dave
> 
> On Wed, Oct 3, 2012 at 3:33 PM, Paul Homer <paul_ho...@yahoo.ca> wrote:
> I'm in a slightly different head-space with this idea. 
> 
> A URL, for instance, is essentially an encoded set of instructions for 
> navigating to somewhere and then, if it is a GET, grabbing the associated 
> data, let's say an image. If my theoretical user were to create a screen (or 
> perhaps we could call it a visual context), they'd just drag-and-drop an 
> image-type into the position they desired. They'd have to have some way of 
> tying that to 'which image', but for simplicity let's just say that they 
> already created something that allows them to search, and then list all of 
> the images from a known database context, so that the 'which image' is 
> cascaded down from their earlier work.
> 
> Once they 'made the screen live' and searched and selected, the underlying 
> code would essentially get a request for a data flow that specified the 
> context (location), some 'type' information (an image) and a 
> context-specific instance id (as passed in from the search and list). The 
> kernel would then arrange for that data to be moved from wherever it is 
> (local or remote, but let's go with remote) and converted if its base format 
> was something the user's screen couldn't handle, say a custom bitmap. So 
> along the way there might be a translation from one image format to another, 
> and perhaps a 'compress and decompress' if the source is remote.
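> 
> One way to picture the request that arrives at the kernel (a hypothetical 
> shape, not a real API):
> 
>     -- The request the kernel receives when the screen goes live:
>     -- where the data lives, what the screen can display, which instance.
>     data FlowRequest = FlowRequest
>       { context    :: String  -- the location, e.g. a database context
>       , wantedType :: String  -- e.g. "png", what the screen handles
>       , instanceId :: String  -- cascaded from the search-and-list screen
>       }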
> 
> That whole flow wouldn't be constructed by a programmer, just the 
> translations, say bitmap->png, bits->compressed and compressed->bits. The 
> kernel would work backwards, knowing that it needed an image in png format, 
> and knowing that there exists base data stored in another context as a 
> bitmap, and knowing that for large data it is generally cheaper to 
> compress/decompress if the network is involved. The kernel would essentially 
> know the absolute minimum about the flow, and thus could algorithmically 
> decide on the optimal amount of work.
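> 
> A toy version of that decision, with made-up costs (a real kernel would 
> estimate them from data size and network speed):
> 
>     import Data.List (minimumBy)
>     import Data.Ord  (comparing)
> 
>     data Step = Step { stepName :: String, stepCost :: Double }
> 
>     -- Two candidate plans for a remote bitmap wanted as png:
>     direct, viaCompression :: [Step]
>     direct         = [ Step "ship bitmap over network" 100
>                      , Step "bitmap->png"                5 ]
>     viaCompression = [ Step "bits->compressed"           3
>                      , Step "ship compressed"           30
>                      , Step "compressed->bits"           3
>                      , Step "bitmap->png"                5 ]
> 
>     cheapest :: [[Step]] -> [Step]
>     cheapest = minimumBy (comparing (sum . map stepCost))
> 
> With these numbers, cheapest [direct, viaCompression] picks the compressed 
> route; with a local source the network steps would vanish and 'direct' wins.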
> 
> For most basic systems, for most data, once the user navigates into 
> something it's just a matter of shifting the data. I've done an end-run 
> around any of the processing issues by dumping them into the kernel. From 
> your list, scatter-gather, queries and views, etc. are all left up to the 
> translations. Incremental is just having the model in the context handle 
> updates. ACID is a property of the context.
> 
> I haven't given any real thought to issues like pulls or bi-directional 
> flows, but I think that the screen would just send a flow back to the 
> original context in an observer-style pattern associated with the raw, 
> pre-translated data. If any of that changed in the context, the screen would 
> redo any 'dirty' flows, but that might not be a workable approach for 
> millions of users watching the same data.
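> 
> A minimal observer-style sketch of that (hypothetical; the scaling caveat 
> above still applies):
> 
>     import Data.IORef
> 
>     -- A context keeps redraw actions for the screens watching its
>     -- raw, pre-translated data.
>     newtype Context = Context (IORef [IO ()])
> 
>     subscribe :: Context -> IO () -> IO ()
>     subscribe (Context obs) redraw = modifyIORef obs (redraw :)
> 
>     -- When the raw data changes, every watching screen redoes its flows.
>     notifyChanged :: Context -> IO ()
>     notifyChanged (Context obs) = readIORef obs >>= sequence_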
> 
> The crux of this (crazy) idea is really that the full intelligence 
> necessary for moving the data about and playing with it is highly 
> fragmented. Programmers don't have to write massive, intelligent sets of 
> instructions; they just have to know how data goes from one format to 
> another. They can do their thing in small bits and pieces and be as 
> organized or inconsistent as they like. The system comes together from the 
> intelligence embedded in the kernel, but the kernel isn't concerned with 
> what are essentially domain or data issues. It's all just bits that are on 
> their way from one place to another, and translations that are required 
> along the way. Most of the code-specific issues, like security, melt away 
> (you have access to a context or you don't), mostly because the linkage 
> between the user and the data is under the control of just one single 
> (distributed) program.
> 
> 
> Paul.
> 
> From: David Barbour <dmbarb...@gmail.com>
> 
> To: Paul Homer <paul_ho...@yahoo.ca>; Fundamentals of New Computing 
> <fonc@vpri.org> 
> Sent: Wednesday, October 3, 2012 5:27:12 PM
> 
> Subject: Re: [fonc] How it is
> 
> Your idea of "first specifying the model... then adding translations" can be 
> made simpler and more uniform, btw, if you treat acquiring initial data (the 
> model) as a "translation" between, say, a URL or query and the result. 
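> 
> In terms of the hypothetical Flow edges sketched near the top of the thread, 
> acquisition then becomes just one more edge in the same search space:
> 
>     -- Fetching is a translation like any other: its input just
>     -- happens to be a URL rather than an image format.
>     acquireEdge :: Flow
>     acquireEdge = Flow { from = "url", to = "bitmap", name = "http-get" }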
> 
> If you're interested in modeling computation as continuous synchronization of 
> bidirectional views between data models, you would probably be interested in 
> RDP (https://github.com/dmbarbour/Sirea/blob/master/README.md). 
> 
> Though, reuse of data models is necessarily more sophisticated than you are 
> imagining. There are many subtle and challenging issues in any conversion 
> between data models.  I discuss a few such issues here: 
> (http://awelonblue.wordpress.com/2011/06/15/data-model-independence/)
> 
> 
> 
> 
> On Wed, Oct 3, 2012 at 11:34 AM, Paul Homer <paul_ho...@yahoo.ca> wrote:
> A bit long, but ...
> 
> The way most people think about programming is that they are writing 
> 'code'. As a lesser side-effect, that code is slinging around data. It 
> grabs it from the user, throws it into memory, and then, if it is 
> interesting data, writes it to disk so that it can be looked at or edited 
> later. The code is the primary thing they are creating, while the data is 
> just a side-effect of using that code.
> 
> Way back I got introduced to seeing it the other way around: data is 
> everything. It's what the user types in, which is moved into some 
> data-structures in memory and then eventually restructured for persistence, 
> to be stored for later usage. Data sometimes contains 'static linkages', 
> that is, one datum points to another explicitly. Sometimes the linkages are 
> dynamic: a piece of code has to be run to make the connection between the 
> data. In this perspective, code is nothing more than dynamic linkages or 
> transformations between data-structures/formats (one could see the average 
> of a bunch of floats, for example, as a transformation to a more simplified 
> summary of the original data). The system is really just a massive flow of 
> data, while the code is just what helps it get from place to place.
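> 
> That floats example, written as the kind of transformation meant here (a 
> trivial hypothetical sketch):
> 
>     -- Code as dynamic linkage: the average is a transformation from
>     -- one data shape (many floats) to a simpler one (one float).
>     average :: [Double] -> Double
>     average [] = 0  -- what "no data" means is a modeling decision
>     average xs = sum xs / fromIntegral (length xs)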
> 
> In the second perspective, an inventory system allows the data to flow from 
> the users to the persistence medium. Sometimes the users need the data to 
> flow back to them again, possibly summarized, or just for re-editing. The 
> core of the system holds very simple data, basically a series of physical 
> items, each with many associated properties and probably a bunch of 
> cross-relationships. The underlying types, properties and relationships form 
> a model of the data. For our modern systems that model might be implemented 
> as a relational schema, but it could also be something more exotic, like NoSQL.
> 
> In this sort of system, if the model were stored explicitly in the 
> persistence layer, and it were simple enough that the users could do data 
> entry directly on a flat representation of it on the screen, then the whole 
> system would be as simple as flinging the data back and forth between the 
> disks and the screen. However, as we all know, systems are never this 
> trivial in the real world.
> 
> Users need to navigate to specific data, and they often want the computer to 
> fill in any 'global context information' for them as they move around. As 
> well, they generally enter data in a simplified format, store the data in 
> another, and then want a third way to view it. All of this amounts to a 
> series of transformations happening to the data as it flows back and forth. 
> Some transformations are simple, such as displaying a floating-point number 
> as a string truncated to some level of precision (sketched below). Some are 
> very complex, such as displaying a report that cross-checks the inventory to 
> detect data or real-life problems. But all of the things on the screen are 
> either directly data, or algorithmic transformations of the existing data.
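> 
> The floating-point display mentioned above, as one such small transformation 
> (a trivial sketch using the standard Numeric module):
> 
>     import Numeric (showFFloat)
> 
>     -- Display a float as a string with a fixed number of decimal places.
>     displayFloat :: Int -> Double -> String
>     displayFloat places x = showFFloat (Just places) x ""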
> 
> As for programming, this type of system could be built by first specifying 
> the model. To that would be added a series of transformations, each 
> basically a black box that specifies a set of inputs and a set of outputs. 
> With the model and the transformations, someone could lay out a series of 
> screens for the users (or power users could do it themselves). The 
> underlying kernel of the system would then take requests for the screens 
> and use that to work out the flow from or to the database. One could 
> generalize this a bit further by ignoring any difference between the screen 
> and the disks, and just thinking of them as a generalized 'context' of some 
> type.
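> 
> A hypothetical shape for such a black-box transformation (Value is a 
> stand-in for real typed data):
> 
>     type Value = String
> 
>     -- Declared inputs and outputs, plus the code that links them.
>     data Transform = Transform
>       { inputs  :: [String]            -- types it consumes
>       , outputs :: [String]            -- types it produces
>       , run     :: [Value] -> [Value]  -- the actual translation
>       }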
> 
> What I like about this idea is that once someone creates a model, it can be 
> re-used as-is, elsewhere. Gradually, industries will build up common models 
> (with less being secret). And as they add billions of little 
> transformations, these too can be shared. The kernel (if it is possible to 
> actually write one :-) only needs to exist once. Then all that remains is 
> for people to toss screens together as they need them (this part of 
> programming is likely never to be static). As for performance, once a flow 
> has been established, it would be possible to store and reuse any static 
> data or transformation sequences, and that auto-optimization would only 
> exist in the kernel, so it could focus precisely on what provides the best 
> results.
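> 
> A sketch of that reuse as a plan cache, memoizing established flows by 
> source and goal type (hypothetical, building on the glue search near the 
> top of the thread):
> 
>     import qualified Data.Map as Map
> 
>     type PlanCache = Map.Map (String, String) [Flow]
> 
>     -- Look up an established flow, or search once and remember it.
>     plan :: PlanCache -> [Flow] -> String -> String
>          -> (Maybe [Flow], PlanCache)
>     plan cache flows src goal =
>       case Map.lookup (src, goal) cache of
>         Just p  -> (Just p, cache)
>         Nothing -> case glue flows src goal of
>           Just p  -> (Just p, Map.insert (src, goal) p cache)
>           Nothing -> (Nothing, cache)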
> 
> In a grand sense, you can see everything on the screen -- even little rounded 
> corners, images and gadgets -- as just data that has flowed there from the 
> disk somewhere (or network :-). The transformations behind something like a 
> windowing system can appear daunting, but we know that they all started life 
> as data somewhere that moved and bounced through a huge number of different 
> data-structures, until finally ending up as a set of bits toggled in a screen 
> buffer.
> 
> The on-going work to enhance the system would consist of modeling data and 
> creating transformations. In comparison to modern software development, 
> these would be very little pieces, and if they were shared they would be 
> intrinsically reusable (and recombinable).
> 
> So I'd basically go backwards :-) No higher abstractions and bigger pieces, 
> but rather a sea of very little ones. It would be fun to try :-)
> 
> 
> Paul.
> 
> From: Loup Vaillant <l...@loup-vaillant.fr>
> To: Paul Homer <paul_ho...@yahoo.ca>; Fundamentals of New Computing 
> <fonc@vpri.org> 
> Sent: Wednesday, October 3, 2012 11:10:41 AM
> 
> Subject: Re: [fonc] How it is
> 
> De : Paul Homer <paul_ho...@yahoo.ca>
> 
> > If instead, programmers just built little pieces, and it was the
> > computer itself that was responsible for assembling it all together into
> > mega-systems, then we could reach scales that are unimaginable today.
> > […]
> 
> Sounds neat, but I cannot visualize an instantiation of this.  Meaning,
> I have no idea what assembling mechanisms could be used.  Could you
> sketch a trivial example?
> 
> Loup.
> 
> 
> 
> 
> -- 
> bringing s-words to a pen fight
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
