[fonc] Call for Papers for Workshops at AOSD 2012: FOAL, VariComp, DSAL, NEMARA, ESCOT and MISS

2011-10-29 Thread Monica Pinto
*** AOSD 2012 ***

March 25-30, 2012
Hasso-Plattner-Institut Potsdam, Germany
http://aosd.net/2012/

Call for Papers -- AOSD 2012 WORKSHOPS


Six workshops on aspect orientation and modularity will be held in
conjunction with MODULARITY: aosd.2012.

Workshops will be held on Monday, March 26th and Tuesday, March 27th, 2012.


FOAL: Foundations Of Aspect-Oriented Languages

  Submissions:  December 23rd, 2011
  Notification: January 13th, 2012
  Camera ready: January 23rd, 2012
  Workshop: March 26th, 2012

  http://www.eecs.ucf.edu/FOAL/index-2012.shtml


VariComp'12: Variability and Composition

  Submissions:  January 5th, 2012
  Notification: January 13th, 2012
  Camera ready: January 23rd, 2012
  Workshop: March 26th, 2012

  http://www.aosd.net/workshops/varicomp/2012/


DSAL: Workshop on Domain-Specific Aspect Languages

  Submissions:  December 30th, 2011
  Notification: January 13th, 2012
  Camera ready: January 23rd, 2012
  Workshop: March 26th, 2012

  http://www.dsal.cl/2012


NEMARA: Next Generation Modularity Approaches for Requirements and Architecture

  Submissions:  January 6th, 2012
  Notification: January 13th, 2012
  Camera ready: January 25th, 2012
  Workshop: March 27th, 2012

  https://sites.google.com/site/nemara2012/


ESCOT: Empirical Evaluation of Software Composition Techniques

  Submissions:  December 22nd, 2011
  Notification: January 17th, 2012
  Camera ready: January 24th, 2012
  Workshop: March 27th, 2012

  http://dawis2.icb.uni-due.de/events/escot2012


MISS: Modularity in Systems Software

  Submissions:  December 23rd, 2011
  Notification: January 13th, 2012
  Camera ready: January 23rd, 2012
  Workshop: March 27th, 2012

  http://www.aosd.net/workshops/miss/2012/

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] IBM eyes brain-like computing

2011-10-29 Thread BGB

On 10/29/2011 6:46 AM, karl ramberg wrote:

On Sat, Oct 29, 2011 at 5:06 AM, BGB  wrote:

On 10/28/2011 2:27 PM, karl ramberg wrote:

On Fri, Oct 28, 2011 at 6:36 PM, BGB wrote:

On 10/28/2011 7:28 AM, K. K. Subramaniam wrote:

On Thursday 27 Oct 2011 11:27:39 PM BGB wrote:

most likely, processing power will stop increasing (WRT density and/or
watts) once the respective physical limits are met (basically, it would
no longer be possible to get more processing power in the same space or
using less power within the confines of the laws of physics).

The adoption of computing machines at large is driven primarily by three
needs: power (portability), space/weight, and speed. The last two are now
largely solved, but the first is still stuck in the "dark ages". I recollect
a joke by Dr. An Wang (founder of Wang Labs) in a keynote during the 80s
that goes something like this:

A man struggled to lug two heavy suitcases into a bogie of a train that was
just about to depart. A fellow passenger helped him in and they started a
conversation. The man turned out to be a salesman from a company that made
portable computers. He showed one that fit in a pocket to his fellow
passenger. "It does everything that a mainframe does and more, and it costs
only $100." "Amazing!" exclaimed the passenger as he held the marvel in his
hands. "Where can I get one?" "You can have this piece," said the gracious
gent, "as a thank-you gift for helping me." "Thank you very much." The
passenger was thrilled beyond words as he gingerly explored the new gadget.
Soon the train reached the next station and the salesman stepped out. As the
train departed, the passenger yelled after him: "Hey! You forgot your
suitcases!" "Not really!" the gent shouted back. "Those are the batteries
for your computer."

;-) .. Subbu

yeah...

this is probably a major issue at this point with "hugely multi-core"
processors:
if built, they would likely use lots of power and produce lots of heat.

this is sort of also an issue with video cards, one gets a new/fancy
nVidia
card, which is then noted to have a few issues:
it takes up two card slots (much of this apparently its heat-sink);
it is long enough that it partially sticks into the hard-drive bays;
it requires a 500W power supply;
it requires 4 plugs from the power-supply;
...

so, then one can joke that they have essentially installed a brick into
their computer.

nevermind it getting high framerates in games...


however, they would have an advantage as well:
people can still write their software in good old C/C++/Java/...

it is likely that existing programming languages and methodologies will
continue to be necessary for new computing technologies.


also, likewise people will continue pushing to gradually drive down the
memory requirements, but for the most part the power use of devices has been
largely dictated by what one can get from plugging a power cord into the
wall (vs either running off batteries, or OTOH, requiring one to plug in a
240V dryer/arc-welder/... style power cord).


elsewhere, I designed a hypothetical ISA, partly combining ideas from ARM
and x86-64, with a few "unique" ways of representing instructions (the idea
being that they are aligned values of 1/2/4/8 bytes, rather than either more
free-form byte patterns or fixed-width instruction words).

or such...




This is also relevant regarding understanding how to make these computers
work:

http://www.infoq.com/presentations/We-Really-Dont-Know-How-To-Compute

seems interesting, but is very much a pain trying to watch as my internet is
slow and the player doesn't really seem to buffer up the video all that far
when paused...


but, yeah, eval and reflection are features I really like, although sadly
one doesn't really have anything like them as standard in C, meaning one has
to put a lot of effort into scripting and VM technology simply to make up
for the lack of things like 'eval' and 'apply'.


this becomes at times a point of contention with many C++ developers, who
often believe that the "greatness of C++ for everything" more than makes up
for its lack of reflection or dynamic features; I hold that plain C has a
lot of merit, if anything because it is more readily amenable to dynamic
features (which can plug into the language from outside), which more or less
makes up for the lack of syntax sugar in many areas...

The notion I get from this presentation is that he is against C and static
languages in general. It seems he thinks the exploration should take the
direction of lambda-calculus-derived languages that are very dynamic and can
self-generate code.


I was not that far into the video at the point I posted, due mostly to slow
internet, and the player not allowing the "pause, let it buffer, and come
back later" strategy generally needed for things like YouTube
