>On Thu, Feb 26, 2009 at 5:47 PM,  <[email protected]> wrote:
>>>On Thu, Feb 26, 2009 at 12:47 PM, Rick R <[email protected]> wrote:
>>>I'm having trouble understanding this proposal. BitC doesn't need to
>>>define a single integral type. We support multiple sizes of machine
>>>integers directly.
>>>
>>
>> Why? My (minimal) familiarity with proving stuff is that you've got
>> enough of a problem with bounded integers, which may overflow (which
>> is a runtime error and you want to avoid these, natch). But instead of
>> one type of bounded integer you have several...
>
>Because BitC is focused on systems programming, and we therefore need
>to deal concretely with things like hardware registers, cache line
>sizes, and things of that sort. Matters of data layout in systems
>programs are prescriptive. The formal inconveniences that you identify
>are real, but they cannot be solved at the cost of failing to satisfy
>pragmatic requirements.

Certainly. The question is where these pragmatic requirements should be
satisfied: is the language the right place, or would the output phase be
more appropriate?

>This is why ML and Haskell will not
>ultimately succeed in this sort of space.
>
>In this domain we need to be able to write applications that are
>provably allocation-free and provably bounded space. In the absence of
>bounded integers that is not possible.

Agreed.

>
>We also want to be within 1% to 1.5% of the performance of
>aggressively hand-optimized C (and privately I'd like to *beat* that
>performance). That definitely cannot be done if we build any
>significant overhead in at the level of integer arithmetic.

Surely.

>
>BitC does provide an unbounded integer arithmetic capability, and for
>many applications that is the right thing to use. It isn't good enough
>to write operating systems.

I didn't know unbounded integers were provided; on your second point, agreed.

>
>> So, why bound integers at all? It should be possible to simply declare
>> that x is an int (meaning unbounded signed) or a domain-convenient
>> subset like -12..45 (like pascal) and map appropriately to the
>> underlying machine...
>
>Range constraints incur a discharge obligation. 

I presume 'discharge obligation' is mathspeak for 'you gotta prove it
stays fenced in'.

>In the general case,
>this is an unsolved problem in the literature.

In general I'm sure it is. But we're not facing generalities; we're
trying to prove restricted things. Kernel code is going to be
restricted - it has to be, to be provable.
I don't expect the compiler to prove range constraints are satisfied
statically, but if you are doing systems programming I'd expect the
compiler mainly to stand aside while the programmer rolls out the big
gun provers (person + software, so definitely not fully automated) and
lays down a trail which the compiler could follow and exploit (or some
other means - I don't know how compiler-external proofs are intended to
work).
(This next bit is probably naive. I realise you are experts, but I'll
say it anyway because I'm sure I'm missing something obvious.)
You will get range constraints regardless: if you index an array arr[y],
then to ensure runtime safety y must lie within the range
arr.MinIndex..arr.MaxIndex (invented syntax), and you are going to have
to prove that it does. The same holds for a vector vec[y], except that
vec.MaxIndex can vary at runtime, so it's harder. For more complex
cases it only gets harder - but someone is going to have to prove it
for high-reliability software. Ranges come in, explicitly or
implicitly. You can't avoid them, so why not let the language specify
them?
  var y: 0..arr.MaxIndex 
states the intent more clearly than
  var y: int32
and explicitly links the array to the value indexing it.
I don't know how you'd specify vector index limits, but they have to be
specified with respect to a vector rather than a machine word. Perhaps
as an assertion, but that too would have to be 'discharged' before it
could be removed.
(I saw something about dependent types, which relate to this; I've read
the wiki pages but not much more. Whatever these are exactly, will they
cover all the possibilities needed? I'll have to read more.)

So it seems to me to make sense that the language deals with integers
generally (that has to be easier on the specification and the prover!),
but if the compiler can determine, or can be fed, a proof that a range
will not be exceeded, then it makes sense for the back end to emit an
int16 - or whatever the smallest/fastest machine word fits - instead of
code to manage an unbounded integer. But that choice is an
optimisation. It's an optimisation that has to be made for acceptable
performance, but it seems that optimisation hints about machine words
should not be specifiable in the language.

To put it another way, machine word size specifications should come out
of the back end, not be fed in at the front.

I guess I've got more BitC reading to do. Lots.

jan


>
>> Actually, maybe another front. IIRC the original DEC Alphas were
>> restricted to 32- and 64-bit integers...
>
>Yes. And this is one of the reasons that the Alpha failed. It was
>*horrible* at handling character data, for example.
>
>shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
