Re: [fonc] photography and programming

2012-12-04 Thread Monty Zukowski
The point of an SLR is that you see through the same lens that is
taking the photograph.  The other cameras have a separate viewfinder,
which introduces parallax and therefore throws the boundaries of the
frame off a bit.
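
To put rough numbers on that (a back-of-the-envelope sketch; the 5 cm
viewfinder offset and the 50mm lens on a 35mm-format body below are just
assumptions for illustration): the framing error is roughly the viewfinder
offset divided by the width of the scene the lens frames at the subject
distance, so it only really bites at close range.

# Back-of-the-envelope parallax estimate; every number here is an
# assumption chosen for illustration, not anything from this thread.
offset = 0.05                    # viewfinder-to-lens offset, metres
sensor_w, focal = 0.036, 0.050   # 35mm-format frame width, 50mm lens
for distance in (0.5, 1.0, 3.0, 10.0):
    field_w = distance * sensor_w / focal    # scene width the lens frames
    error = 100.0 * offset / field_w         # framing error, % of frame width
    print(f"{distance:5.1f} m subject: ~{error:4.1f}% of the frame off")

At half a metre that is well over a tenth of the frame; at ten metres it is
under one percent, which is why parallax mostly matters for close-up work.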

On Tue, Dec 4, 2012 at 6:16 PM, John Carlson  wrote:
> Wouldn't it be best to make programming a bit like single-lens photography 
> instead of dual- (or triple-) lens photography?  It would seem that the fewer 
> lenses you use, the less likely one of them is to be scratched.  Unless 
> somehow there was a compensating factor in the lenses.
>
> My 2 bits.  Metaphor isn't quite right, but perhaps you see my point.
>
> Where's my post-mature optimization?
>
> John "Damn the torpedos, we're going full speed ahead and getting nowhere" 
> Carlson
>
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] photography and programming

2012-12-04 Thread John Carlson
Wouldn't it be best to make programming a bit like single-lens photography 
instead of dual- (or triple-) lens photography?  It would seem that the fewer 
lenses you use, the less likely one of them is to be scratched.  Unless somehow 
there was a compensating factor in the lenses.

My 2 bits.  Metaphor isn't quite right, but perhaps you see my point.

Where's my post-mature optimization?

John "Damn the torpedos, we're going full speed ahead and getting nowhere" 
Carlson

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Obviously Kogge-Stone

2012-12-04 Thread Andre van Delft
Lately I have been wondering whether we could design hardware inspired by 
Program Algebra (PGA) and Maurer computers.

PGA is an algebraic framework for sequential programming. PGA's creator Jan 
Bergstra writes in Why PGA?:

> We have spotted Maurer's 1967 JACM paper on 'A theory of computer 
> instructions' as the best possible basis for a theory of machines (Maurer 
> calls them computers) that can be used in combination with PGA-style program 
> algebra. Maurer's model is particularly suitable to analyze the multitude of 
> conceivable machine models between what we call the split program machine 
> model (in computer architecture loosely indicated as the ISA architecture), 
> and the stored program running on a modern out-of-order pipelined 
> multi-processor chip. Here we are looking for quantitative results concerning 
> the fraction of conceivable functionalities that can actually be programmed 
> and stored on a given Maurer computer.
http://staff.science.uva.nl/~janb/pga/whypga.html

Bergstra also worked on Thread Algebra, which he uses in his papers on Maurer 
computers.

There is a lot to read here: 
http://staff.science.uva.nl/~mbz/Projects/TASI/tasi.html
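
To make the flavour concrete, here is a minimal toy sketch -- mine, not
Bergstra's formalism itself -- of PGA-style finite instruction sequences
(basic instructions 'a', test instructions '+a' / '-a', forward jumps '#k',
termination '!') interpreted against a mutable memory that loosely stands in
for a Maurer computer's store. The class names and the dict-as-memory are
assumptions for illustration only, and PGA's repetition operator is left out.

from dataclasses import dataclass
from typing import Callable, Dict

Mem = Dict[str, int]

@dataclass
class Basic:      # 'a'  : perform the action, proceed to the next instruction
    act: Callable[[Mem], object]

@dataclass
class PosTest:    # '+a' : perform a; on a False reply, skip the next instruction
    act: Callable[[Mem], bool]

@dataclass
class NegTest:    # '-a' : perform a; on a True reply, skip the next instruction
    act: Callable[[Mem], bool]

@dataclass
class Jump:       # '#k' : jump k instructions ahead; '#0' makes no progress
    k: int

class Halt:       # '!'  : terminate
    pass

def run(prog, mem: Mem, fuel: int = 10_000) -> Mem:
    pc = 0
    while 0 <= pc < len(prog) and fuel > 0:
        fuel -= 1
        ins = prog[pc]
        if isinstance(ins, Halt):
            break
        elif isinstance(ins, Basic):
            ins.act(mem); pc += 1
        elif isinstance(ins, PosTest):
            pc += 1 if ins.act(mem) else 2
        elif isinstance(ins, NegTest):
            pc += 2 if ins.act(mem) else 1
        elif isinstance(ins, Jump):
            if ins.k == 0:
                break             # '#0' would diverge; this sketch just stops
            pc += ins.k
    return mem

# 'if x > 0 then y := 1 else y := 0', encoded with forward jumps only:
prog = [
    PosTest(lambda m: m["x"] > 0),          # 0: +('x > 0')
    Jump(2),                                # 1: true  -> instruction 3
    Jump(3),                                # 2: false -> instruction 5
    Basic(lambda m: m.update(y=1)),         # 3: 'y := 1'
    Jump(2),                                # 4: skip over the else branch
    Basic(lambda m: m.update(y=0)),         # 5: 'y := 0'
    Halt(),                                 # 6: '!'
]
print(run(prog, {"x": 7, "y": -1}))         # {'x': 7, 'y': 1}

Branching is expressed with forward jumps only, as in PGA itself; loops would
come from the repetition operator rather than from backward jumps.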

André

On 4 Dec 2012, at 23:55, Casey Ransberger wrote:

> Oh, I was mostly fishing to see if anyone was doing anything fun with 
> hardware description. 
> 
> These days the bigger trade-off is between performance and power 
> consumption, but that's all in the realm of optimization. 
> 
> A mentor of mine in the software world once said something to me to the 
> effect of "Don't look under the ISA, you don't want to see what's down 
> there." Of course, that only encouraged me to look. 
> 
> After having done so, I'm seeing a lot of the same stuff there that motivated 
> FONC. Cruft built up over generations of deadlines and back compat, etc. I'm 
> pretty sure one could really give hardware design the STEPS treatment, but 
> there's a catch. If you wanted to run Frank on a machine designed to be 
> understandable (rather than fast and compatible), and Frank itself had seen 
> no more optimization than the researchers needed in order to complete their 
> research, you might end up with a whole system too slow to experiment with. 
> 
> That's conjecture (obviously) given that I don't have Frank to play out over 
> my FPGA rig, but I tend to trust my gut. Hence the post:)
> 
> So I got to thinking, while I was doing my little armchair exploration of 
> Verilog, what if we could declare/specify behavior with some concrete 
> algebra, and (I know, I know, I added an "and") find a way to do optimization 
> in an automated way. 
> 
> Thought experiment: what if I designed a behavior-specifying language, which 
> could be reduced to S-expressions, and then tried to find a set of fitness 
> functions around performance? Would it be possible to converge on a 
> performance goal by ripping off nature? If it was possible, how much compute 
> would I need to converge on an acceptable solution?
> 
> Probably crazy expensive compute because I'm doing two things that don't want 
> to sleep in the same room... verifying correctness (think a lot of tests in 
> some test language) and optimization (convergence on some set of perf 
> metrics.) 
> 
> Not just genetic algorithms but genetic programming. 
> 
> This popped into my head after thumbing through this thing:
> 
> http://www.amazon.com/gp/aw/d/0262111888
> 
> Of course, I'm skeptical (of both the content of that book and of my own 
> crazy ideas!) While a performance metric (in some continuum of flops and 
> watts) *does* seem like something GP might be able to optimize, correctness 
> *absolutely* does not. Hence the question about language. 
> 
> Has anyone here read that? I'm quite sure I haven't understood all of the 
> content, and, as obscure as it seems to be, it could be full of bunk. Of 
> course, with regard to obscurity, one might say the same thing of other stuff 
> folks around here know, understand well, and love. 
> 
> I could go into specific ideas I've had, but I won't, because I don't really 
> have a framework for vetting them. Instead, I'm going to ship the Wouldn't 
> It Be Cool If and let the good people of the list slaughter my little thought 
> experiment with the raw unrelenting debunking power of a group of people who 
> like science.
> 
> Bonus points if anyone can name the Dylan song title that I spoofed for the 
> thread, but that's way OT so please reply direct!
> 
> Casey
> 
> On Nov 30, 2012, at 3:35 PM, David Barbour  wrote:
> 
>> Could you clarify what you're thinking about? Is your question about 
>> metaprogramming of Verilog (with an implicit assumption that Verilog will 
>> save battery life)?
>> 
>> I've spent much time thinking about language and protocol design to extend 
>> battery resources. I happen to think the real wins are at higher levels - 
>> avoiding unnecessary work, amortizing work over time, linear logics, graceful 
>> degradation of services based on power access. [...]

Re: [fonc] Obviously Kogge-Stone

2012-12-04 Thread David Barbour
>
> Probably crazy expensive compute because I'm doing two things that don't
> want to sleep in the same room... verifying correctness (think a lot of
> tests in some test language) and optimization (convergence on some set of
> perf metrics.)


You might be interested in some work by Juergen Schmidhuber:
http://www.idsia.ch/~juergen/goedelmachine.html

Or by Adam Chlipala: http://plv.csail.mit.edu/bedrock/

Start with the languages that support effective specification of and
analysis for correctness. Automate from there.
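
As a toy illustration of that 'specification first' order (and a nod to the
thread title), here is a sketch of a bit-level Kogge-Stone adder checked
exhaustively against plain integer addition as its specification. It is my
own Python model, not code from Bedrock or the Gödel machine work; once the
spec holds, the prefix network underneath is fair game for whatever automated
optimization you like.

def kogge_stone_add(a: int, b: int, width: int = 8):
    """Add two 'width'-bit numbers using a Kogge-Stone carry network."""
    abits = [(a >> i) & 1 for i in range(width)]
    bbits = [(b >> i) & 1 for i in range(width)]

    # Stage 0: per-bit generate and propagate signals.
    g = [abits[i] & bbits[i] for i in range(width)]
    p = [abits[i] ^ bbits[i] for i in range(width)]
    p0 = p[:]                      # keep raw propagates for the sum bits

    # Parallel prefix stages, doubling the combine distance each time.
    dist = 1
    while dist < width:
        g_new, p_new = g[:], p[:]
        for i in range(dist, width):
            g_new[i] = g[i] | (p[i] & g[i - dist])
            p_new[i] = p[i] & p[i - dist]
        g, p = g_new, p_new
        dist *= 2

    # After the prefix network, g[i] is the carry generated by bits 0..i.
    carry = [0] + g[:-1]           # carry into bit i (carry-in assumed 0)
    s = [p0[i] ^ carry[i] for i in range(width)]
    return sum(bit << i for i, bit in enumerate(s)), g[width - 1]

# The specification: agree with '+' on every pair of 8-bit inputs.
WIDTH = 8
for a in range(2 ** WIDTH):
    for b in range(2 ** WIDTH):
        s, cout = kogge_stone_add(a, b, WIDTH)
        assert (cout << WIDTH) | s == a + b, (a, b)
print("Kogge-Stone adder matches the spec on all 8-bit inputs.")

Exhaustive checking is only feasible at toy widths, of course; for realistic
designs you would reach for property-based or machine-checked proofs, which
is where work like Bedrock comes in.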

Regards,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Obviously Kogge-Stone

2012-12-04 Thread Casey Ransberger
Oh, I was mostly fishing to see if anyone was doing anything fun with hardware 
description. 

These days the bigger trade-off is between performance and power consumption, 
but that's all in the realm of optimization. 

A mentor of mine in the software world once said something to me to the effect 
of "Don't look under the ISA, you don't want to see what's down there." Of 
course, that only encouraged me to look. 

After having done so, I'm seeing a lot of the same stuff there that motivated 
FONC. Cruft built up over generations of deadlines and back compat, etc. I'm 
pretty sure one could really give hardware design the STEPS treatment, but 
there's a catch. If you wanted to run Frank on a machine designed to be 
understandable (rather than fast and compatible), and Frank itself had seen no 
more optimization than the researchers needed in order to complete their 
research, you might end up with a whole system too slow to experiment with. 

That's conjecture (obviously) given that I don't have Frank to play out over my 
FPGA rig, but I tend to trust my gut. Hence the post:)

So I got to thinking, while I was doing my little armchair exploration of 
Verilog, what if we could declare/specify behavior with some concrete algebra, 
and (I know, I know, I added an "and") find a way to do optimization in an 
automated way. 

Thought experiment: what if I designed a behavior-specifying language, which 
could be reduced to S-expressions, and then tried to find a set of fitness 
functions around performance? Would it be possible to converge on a performance 
goal by ripping off nature? If it was possible, how much compute would I need 
to converge on an acceptable solution?

Probably crazy expensive compute because I'm doing two things that don't want 
to sleep in the same room... verifying correctness (think a lot of tests in 
some test language) and optimization (convergence on some set of perf metrics.) 

Not just genetic algorithms but genetic programming. 

This popped into my head after thumbing through this thing:

http://www.amazon.com/gp/aw/d/0262111888

Of course, I'm skeptical (of both the content of that book and of my own crazy 
ideas!) While a performance metric (in some continuum of flops and watts) 
*does* seem like something GP might be able to optimize, correctness 
*absolutely* does not. Hence the question about language. 
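
Concretely, the toy version of the experiment I have in mind looks something
like the sketch below (the XOR target, the AND/OR/NOT vocabulary, and
node-count-as-performance are all just placeholders I picked): candidates are
S-expressions, the "tests" are the target's full truth table, and correctness
acts as a hard gate in the fitness so the performance proxy only matters among
candidates that already pass.

import random

INPUTS = ("a", "b")

def target(a, b):              # the behaviour the tests pin down: XOR
    return a ^ b

def rand_tree(depth=3):
    if depth <= 0 or random.random() < 0.3:
        return random.choice(INPUTS)
    op = random.choice(("and", "or", "not"))
    if op == "not":
        return ("not", rand_tree(depth - 1))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, env):
    if isinstance(tree, str):
        return env[tree]
    if tree[0] == "not":
        return 1 - evaluate(tree[1], env)
    left, right = evaluate(tree[1], env), evaluate(tree[2], env)
    return (left & right) if tree[0] == "and" else (left | right)

def size(tree):                # performance proxy: node count ~ gates ~ watts
    return 1 if isinstance(tree, str) else 1 + sum(size(t) for t in tree[1:])

def tests_passed(tree):        # correctness: truth-table rows that match
    return sum(evaluate(tree, {"a": a, "b": b}) == target(a, b)
               for a in (0, 1) for b in (0, 1))

def fitness(tree):
    # Correctness dominates; size only breaks ties among equally correct trees.
    return (tests_passed(tree), -size(tree))

def mutate(tree, depth=3):
    if random.random() < 0.3:
        return rand_tree(depth)
    if isinstance(tree, str):
        return tree
    return (tree[0],) + tuple(mutate(t, depth - 1) for t in tree[1:])

random.seed(0)
pop = [rand_tree() for _ in range(300)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    if tests_passed(pop[0]) == 4:      # every test passes: stop evolving
        break
    pop = pop[:60] + [mutate(random.choice(pop[:60])) for _ in range(240)]

best = max(pop, key=fitness)
print(tests_passed(best), "of 4 tests pass,", size(best), "nodes:", best)

Even in this toy the two pressures pull apart: the tests are discrete
pass/fail and give the search almost no gradient, while the size term is
smooth and easy to hill-climb, which is exactly the tension above.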

Has anyone here read that? I'm quite sure I haven't understood all of the 
content, and, as obscure as it seems to be, it could be full of bunk. Of 
course, with regard to obscurity, one might say the same thing of other stuff 
folks around here know, understand well, and love. 

I could go into specific ideas I've had, but I won't, because I don't really 
have a framework for vetting them. Instead, I'm going to ship the Wouldn't It Be 
Cool If and let the good people of the list slaughter my little thought 
experiment with the raw unrelenting debunking power of a group of people who 
like science.

Bonus points if anyone can name the Dylan song title that I spoofed for the 
thread, but that's way OT so please reply direct!

Casey

On Nov 30, 2012, at 3:35 PM, David Barbour  wrote:

> Could you clarify what you're thinking about? Is your question about 
> metaprogramming of Verilog (with an implicit assumption that Verilog will 
> save battery life)?
> 
> I've spent much time thinking about language and protocol design to extend 
> battery resources. I happen to think the real wins are at higher levels - 
> avoiding unnecessary work, amortizing work over time, linear logics, graceful 
> degradation of services based on power access.
> 
> (Questions about power and energy were common in survivable networking 
> courses.)
> 
> Low level power saving is a common aspect of mobile computer architecture 
> design. But it's hard to push fundamentally better hardware designs without 
> an existing body of software that easily fits it.  
> On Nov 30, 2012 2:06 PM, "Casey Ransberger"  wrote:
> Since I'm running out of battery, and my adder is starting to go oh so 
> slowly, I thought I might challenge the lovely people of the list to make it 
> stop draining my battery so quickly.
> 
> :D
> 
> My first challenge idea was for someone to make it stop raining in Seattle, 
> but I realized that I was asking a lot with that.
> 
> Verilog would be cool, but better if you're translating whatcha got to 
> Verilog with OMeta, and you've come up with some randomly pretty language for 
> wires!
> 
> Come on, someone else has to be thinking about this;)
> 
> -- 
> Casey Ransberger
> 

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc