Jan. 19th, 2014


I've taken an at-home coding vacation this week and last week. I've been doing reading on model railroads and sailing, as well as a smattering of other books.

I flipped through the list of papers from POPL 2014; it's kind of frustrating to me - they all appear to be focused on type systems. I'm not sure why this is such a thing in programming language design (perhaps the math is enjoyable to work on!). But I don't think the big problem - the software crisis, if you will - is in data types. Data types in most of the developer's world sit in the grungy province of the C family (C, C++, Java, C#, ObjC) or in the more fluid province of the Perl family (Perl, Ruby, Python, Groovy, etc.). Neither of these families has the type system problem solved even at the level of the ML languages (which are, AFAICT, sort of old hat today). So from a "cool insights useful in fifteen years" level, I'm depressed! These results probably won't ever reach an Ordinary Practitioner.

For me, the bridge into the Best Possible World is founded on Software Contracts, most commonly associated with Eiffel and occasionally supported in obscure extensions for other languages. Suppose I claim not only that some variable quux is an int (solvable in C), but in fact that it is between 0 and 10 (solvable in Ada, somewhat in SBCL, and perhaps in Agda and the family of dependently typed languages), and, beyond that, that quux will always be passed into frobbing_quux and used to generate a result (similarly quantified). Let me call that a "Code Type System". It may be that complete analysis of the code demands a complete type system for the data. Certainly some of this work has already been done in the model checking & static analysis community, which can already detect certain concurrency bugs and malloc/free errors. Then, of course, we find ourselves building a model of the program in our "Code Type System" before we actually build the program, and if we find we need to alter the system, we have to rebuild the model framework. This is well understood to be the classic failure mode of fully verified software development.

Let me instead take this thought experiment by way of an example, using Common Lisp.

(def-checked-fun mondo (quux baz)
  "Some docs"
  (check (and (calls-into frobbing_quux quux)
              (type-of quux integer)
              (range quux 0 10)))
  ;; do stuff
  (frobbing_quux quux)
  ;; do stuff
  )

The theoretical def-checked-fun macro will examine the code, both as-is and macroexpanded, verifying that it does indeed appear that quux satisfies the required judgement. Of course, we can't prove that quux is of a particular type or that it falls into a certain range at this point: in order to make that judgement, either run-time checks need to be added (to constrain possibilities), or intraprocedural analysis needs to be performed. However, the calls-into check can either be demonstrated or an "unproven" result returned. This is simple - some CAR needs to be frobbing_quux with quux in the argument list.
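A minimal sketch of that CAR-scanning idea, assuming an already-macroexpanded body (calls-into-p is a hypothetical helper of my own naming here, not part of any real library):

```lisp
;; Hypothetical helper: return T if some form in BODY has FN-NAME as its
;; CAR and VAR somewhere in its argument list. Walks the tree naively;
;; a real checker would need to handle quoted data, bindings, etc.
(defun calls-into-p (fn-name var body)
  (cond ((atom body) nil)
        ((and (eq (first body) fn-name)
              (member var (rest body)))
         t)
        (t (or (calls-into-p fn-name var (car body))
               (calls-into-p fn-name var (cdr body))))))

;; (calls-into-p 'frobbing-quux 'quux
;;               '(progn (foo) (frobbing-quux quux) (bar)))
;; => T
```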

Some of this is already done, in some measure, in SBCL. I have long thought (at least 8 months now) that a compile-time macro system (def-checked-fun is an example) could provide some very interesting insight, particularly if you begin to share information.
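To illustrate the SBCL point (this snippet is my own, for illustration): given a declared ftype, SBCL's compiler derives types at compile time and warns when a call visibly violates the declaration.

```lisp
;; Declare that frobbing-quux takes an integer in [0, 10].
(declaim (ftype (function ((integer 0 10)) integer) frobbing-quux))

(defun frobbing-quux (quux)
  (* quux 2))

;; Compiling this under SBCL signals a compile-time warning, since the
;; constant 42 conflicts with the declared (integer 0 10) argument type.
(defun mondo ()
  (frobbing-quux 42))
```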

The worst possible result is that the checker returns something to the effect of "No possible errors detected; no data found to make a judgement" - in short, that it's useless. The best possible result is that intraprocedural contracts can be built based not simply on types but on expected semantics, in a succinct system (modelling only what is useful); then, when it comes time to run, all this information is quietly removed.

I propose that this is a useful idea, and practical, too - for an Ordinary Practitioner. It's like lightweight unit tests that require almost no lines of code and almost no tweaking.

It's important to understand, of course, that I'm thinking about "correct" results, not "complete" results - two quite different things! Dawson Engler's talk/paper "A Few Billion Lines of Code Later" really strikes at the heart of what makes a system useful for this kind of work in practice, amid the exigencies of real-world work. I won't summarize it here.

What I don't know is whether Coq or other systems (ACL2, etc.) already implement this sort of fluid procedural static type checking. Possibly they do.
