Freedom From The Box
I gave a version of this talk at Manhattan.js on November 8. The words are smoothed out versions of my speaker notes for the 15-minute slot. I would love to keep iterating on this talk, so if you want to see it in person, let me know! And maybe in 2018 I can publish some deeper looks into each of the points.
I always begin my talks by introducing myself; though that seems a little funny here, just pretend I am talking.
Hi there! I am Sarah Groff Hennigh-Palermo. Which is a lot of names! But if it’s good enough for royalty, it’s good enough for me. You can call me Sarah GHP. I exist on the internet, and you should feel free to tweet nice things at me.
During the day, I work on Kickstarter’s front-end team,
and when I’m not there, I make digital art and digital art tools. This is for a collection I did for Electric Objects (👻RIP💀), and lately, I’ve been working on a tool for live-coding visuals as part of an algorave team with Kate Sicchio — we don’t have a name, I am pushing for Salad.
So I am going to talk about types and errors and miscomputation as a concept — and this all sounds perfectly high-minded — but it’s really, as the placeholder title for this talk maybe gave away, driven by the fact that
I hate types.
I do. I really hate them.
“But types are so cool, Sarah” you say. “How could you hate them?”
I am glad you asked. I am so excited to tell you. But! Before I get into that, I want to give a very important caveat.
This is not a talk about bashing things. I love being hyperbolic and dramatic when I talk about stuff in my life, but I’m not going to tell you types are bad or stupid or that you are bad or stupid for liking them.
A lot of very smart, very wonderful people — many of whom I care for and respect greatly — love type systems.
Rather, I want to give this talk and start this conversation because:
I do see how types can be interesting as a math problem or, as Rich Hickey recently suggested, as a puzzle. Alternately, you may value other parts of programming than I do and so be thrilled you can finally live your typed front-end dreams.
But here is why I do not share that enthusiasm, along with some ideas on what we can do instead. (I have solutions! Or at least suggestions!) But first, problems.
Reason #1: I want to be freeeeeee.
So, this is the total stereotype of the dynamic, LISPy geek but it’s true. I feel tied down by the type system.
Any static type system, even a dependently typed one, will rule out or be incapable of expressing some perfectly good, error-free programs. Or as some of my smart friends say:
Type systems may make it hard or impossible to do things you don’t want to do, but they certainly don’t make it easy to do things you do want to do. That is the nature of placing the challenge of placating a compiler between the programmer and running code.
Reason #2: The effort-to-payoff ratio is skewed.
Even beyond the reduced solution space, using a type system requires a lot of extra overhead — cognitive and otherwise. Not only is there the extra time spent understanding and implementing system-compliant types, but because inference remains Not Great, for a system to be useful, you have to tell the compiler quite a lot, as our friend Will learned.
That might be okay, of course, if the time invested here took away time spent in other locations, but you still need good tests to check for logic and integration issues among other things, and despite what people like to think,
↗️ this is not actually documentation. So we still need both documentation and tests, even with types.
Okay, so onto reason 3:
Reason #3: All that fighting with compilers.
This is a bit of an inversion of what some folks love about type systems: the compiler tells you you have a problem before you even run the program.
However, because of the nature of compilers and code parsing, compilers often locate an error some distance from the code we have to change to fix it. An interpreter or a just-in-time compiler, by contrast, usually breaks closer to the issue.
“undefined is not a function” may be vague, but I’ll take a vague message in a precise context over a precise, displaced error any day.
And now, reason #4.
Reason #4: They don’t fit me or the world.
This big, multivalent grab-bag of a reason is in many ways an extension of reasons one and three.
I would argue that more than just being a preference,
developing with tiny iterations is actually better. It’s a structure and a stance that is more aligned with what the whole point of programming is — to accomplish something.
So I’ve mentioned Rich Hickey a few times — and I always like to when I am talking about programming because if I have programming heroes, he’s pretty high on that list — but in this case, it’s also because he just gave a talk at the most recent Clojure Conj that spends a bunch of time on why Clojure is dynamic, why he made the choices he did.
And he brings this up, particularly in the context of typed research languages, this idea that residing in, reflecting, and changing the world are the purpose of the type of programming he is interested in and arguably all programming that is not mathematics research. In this context,
side effects and imperfection are not enemies to be avoided: they are the whole thing.
Reflecting and changing the imperfect, oscillating world is what programs are for.
I would argue that working in this small, iterative, accretionist way is how you write programs that are flexible, understood, and easy to change as your requirements change.
The problems that flow from such an approach are less likely to be type errors than to be logic errors — and a type system can’t help you with that.
Reason #5: Type systems emphasize our worst impulses.
This reason shades into what my buddy Justin calls type hate as a rhetorical device. And he’s not entirely wrong.
For not only is the pursuit of a systematic perfection unrealistic in terms of what we want to accomplish in the world but it is a goal that is bad for people.
While I think that only zombie Freud can truly explain the programmer fetish for correctness, it is pretty easy to argue that putting the idea of perfection at the core of computing underlies some of our less-humane effects. The idea that we are concretizing and defining — the idea that there is one answer — justifies shearing away complexities and embedding our biases.
And if you want to hear more about this, you should get me drunk — or read my masters thesis which has the same information but less yelling.
Last reason is a short reason:
Reason #6: I just haven’t been convinced.
The talks and examples I’ve seen have never gotten me excited. But I like to think I could still change my mind — maybe.
Till then, let’s talk about other options.
Because at this point, you are probably thinking, “Okay great, Sarah, you think dealing with types is annoying and you don’t care about mathematical provability because you are some liberal-arts savage, but—”
“—type errors are real. Shouldn’t we make sites that are more reliable for our users even if it is at the expense of your personal dev experience? After all, that’s what the money’s for.”
It was in trying to find out what other options for dealing with errors existed in computer science that I came across the concept of miscomputation.
Miscomputation encompasses all the ways in which we can write programs that do not behave as we wish them to.
Researcher Tomas Petricek categorizes them like this:
The most narrow error category is that of slips, “where your idea was right and you tried to encode it in the right way, but you made a syntax error, a typo or, say, a copy-and-paste error.”
I always think of these as errors that don’t ship much because they break right away or because they are surfaced via linters or basic manual testing.
At the other end of the generality spectrum, we have
mistakes, “where you made an error when thinking through the specification.”
This is an extra fuzzy category because what specification can mean is its own can of worms, but I think of a mistake as building the wrong thing.
Mistakes can also include problems of incomplete specifications, changing environments, and maintenance concerns, which, in some cases, can be viewed as the result of imperfect programming.
These are far larger than I can talk about today, but they can be addressed by so many different types of paradigms, processes and architectures.
Finally, in the middle, we have
failures: “where you had the right idea but encoded (some part of) it poorly.”
These are things like logic errors and some type errors: for instance, cases where you pass a function an argument of the wrong type and then try to call a built-in method on it. And they are what I will be focusing on, because they encompass both what good dependent typing seeks to fix and the kinds of errors it seems to miss.
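In JavaScript, that kind of failure might look like this sketch (shout is a made-up example, not from the talk):

```javascript
// A "failure" in Petricek's sense: the idea is right, but the encoding
// quietly assumes that message is always a string.
function shout(message) {
  return message.toUpperCase() + '!';
}

shout('hello'); // 'HELLO!'
// shout(undefined) throws a TypeError — the classic
// "undefined is not a function" family of errors.
```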
So what are some other ways to address failures?
The first alternative approach is pretty much the opposite of proving a program correct:
Approach #1: Just let it crash.
This is most often associated with Erlang — which does have the most fantastic introductory video — but you don’t necessarily need a supervisor model or a concurrent system for this to apply.
When you make a mistake, it still renders a “broken” page, which is often perfectly serviceable, if not pretty.
You can also use .catch() not just to throw or log an error but to retry, recover, or continue without the failed section.
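A quick sketch of that shape of recovery (fetchProfile here is a made-up stand-in, not a real API):

```javascript
// Made-up async source that sometimes fails, standing in for a real fetch.
function fetchProfile(userId) {
  return userId === 'bad'
    ? Promise.reject(new Error('network hiccup'))
    : Promise.resolve({ id: userId, name: 'Sarah' });
}

// Instead of just logging, recover with a usable default
// so the page can still render something sensible.
function loadProfile(userId) {
  return fetchProfile(userId).catch(() => ({ id: userId, name: 'Guest' }));
}
```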
This slides nicely into the next, complementary option: trading the time you would spend on type definition files for time spent on good defaults.
So, I don’t know about you, but the primary failures that I seem to run into (and which typing seems to address) involve passing an argument that is undefined or the wrong scalar type and trying to operate on it or indexing into a property that does not exist.
For instance, trying to call a method on a value that turns out to be undefined.
The first protection — guards on undefined — can be added by using default function arguments. Interestingly, these can fail when null is passed instead of undefined, so you do have to agree that you are going to “let” an argument fail by leaving it undefined.
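For instance, a sketch with a made-up formatAmount helper, showing both the protection and the null caveat:

```javascript
// Default arguments guard against undefined — but note: they do NOT
// fire for null, which is passed through to the function body.
function formatAmount(amount = 0, currency = 'USD') {
  return `${currency} ${amount.toFixed(2)}`;
}

formatAmount();    // 'USD 0.00' — undefined triggers the default
formatAmount(9.5); // 'USD 9.50'
// formatAmount(null) throws a TypeError — null skips the default.
```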
Similarly, you can create a default object for cases where the arguments might be more complex. For instance, in dealing with credit card number formatting functions, we have a DEFAULT card that takes care of a number of guards and checks that would otherwise be required.
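The card-formatting code itself isn’t in this talk, so here is a hypothetical sketch of the idea, with made-up fields:

```javascript
// A hypothetical DEFAULT card: incoming cards are merged over it, so
// downstream code can rely on every field existing, no per-field guards.
const DEFAULT_CARD = {
  number: '',
  gaps: [4, 8, 12], // where to insert spaces when formatting
  lengths: [16],
};

function withDefaults(card = {}) {
  return { ...DEFAULT_CARD, ...card };
}

const amex = withDefaults({ gaps: [4, 10], lengths: [15] });
// amex.number is '' rather than undefined — safe to format blindly.
```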
For the second case, attempting to operate on the wrong scalar, I am a fan of creating pass-through functions, where the scalar type of the argument is checked and, if it is not the type expected by the function, it is either returned unharmed or a default is returned.
(Note: I’ve since learned that another term for this is “creating a total function.”)
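A sketch of what I mean, with a made-up capitalize helper:

```javascript
// A pass-through ("total") function: check the scalar type first; if it
// isn't what we expect, return the input unharmed instead of throwing.
function capitalize(value) {
  if (typeof value !== 'string' || value.length === 0) {
    return value; // pass through anything we can't handle
  }
  return value[0].toUpperCase() + value.slice(1);
}

capitalize('hello');   // 'Hello'
capitalize(42);        // 42 — returned unharmed
capitalize(undefined); // undefined — no crash
```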
If you don’t love the idea of writing type checks for each small function or pipeline you are working with, the Maybe data structure is a good option. (Ok, yes, this is sort of a type, but hey, even questionable paradigms produce good ideas!) Used alone, it lacks most of the drawbacks and provides a nice wrapper that lets you set a default if the operation fails — assuming you don’t have a system where everything becomes a Maybe.
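Here is a minimal, hand-rolled sketch of the idea; a real library’s Maybe will be fancier:

```javascript
// A tiny Maybe: wrap a possibly-missing value, map over it safely,
// and fall back to a default at the end instead of crashing mid-pipeline.
const Maybe = (value) => ({
  map: (fn) => (value == null ? Maybe(value) : Maybe(fn(value))),
  getOrElse: (fallback) => (value == null ? fallback : value),
});

Maybe('3.14159').map(parseFloat).map((n) => n.toFixed(2)).getOrElse('0.00'); // '3.14'
Maybe(undefined).map(parseFloat).map((n) => n.toFixed(2)).getOrElse('0.00'); // '0.00'
```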
Another good way to find out where you might need checks or may have made a mistake in your assumptions is using generative property testing, for instance with TestCheck.js.
Here the idea is to use automatically generated data to test properties of your code, where properties are descriptions one always expects to be true: for instance, sorting a sorted array should return an array with values in the same order.
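To keep this self-contained, here is a dependency-free sketch of the idea; a real library like TestCheck.js gives you proper generators and shrinking of counterexamples:

```javascript
// Hand-rolled generative testing sketch. Property under test:
// sorting an already-sorted array leaves it in the same order.
function randomIntArray(maxLen = 20) {
  const len = Math.floor(Math.random() * maxLen);
  return Array.from({ length: len }, () => Math.floor(Math.random() * 100));
}

function holdsForAll(property, trials = 100) {
  for (let i = 0; i < trials; i++) {
    const input = randomIntArray();
    if (!property(input)) return { ok: false, counterexample: input };
  }
  return { ok: true };
}

const numericSort = (xs) => [...xs].sort((a, b) => a - b);
const sortIsIdempotent = (xs) => {
  const once = numericSort(xs);
  const twice = numericSort(once);
  return once.every((x, i) => x === twice[i]);
};

holdsForAll(sortIsIdempotent); // { ok: true }
```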
One advantage of this type of testing is to uncover the assumptions you have embedded into your functions, both as you select the type of data to generate and as you define a property.
In some ways, we could even think of generative testing as getting some of the same value as using types:
we are making explicit what we expect of our data and its transformations — without the drawbacks of fighting with compilers or having to overspecify where it doesn’t matter.
Another way to deal with the imperfect world is one I came across in the original MapReduce paper from Google: record skipping.
In this case, when a processor failed multiple times on a record, it eventually removed that record from processing, but otherwise continued. In the case of link-processing for page rank, this is a perfectly good solution: there are enough records still to give a good signal, so why fail on an imperfection?
One approach could be to create a shortMap function that takes a transform function that may fail on some elements of the array. If it does fail, the element is removed from the array. Alternatively, looseMap could return undefined in cases where the transformation failed, setting up the collection to be used in a system that makes strong use of default arguments.
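These functions don’t exist in JavaScript today; here is a sketch of what I am imagining:

```javascript
// Hypothetical shortMap / looseMap, sketched by hand.
// shortMap drops elements the transform fails on; looseMap keeps their
// place as undefined, ready for a defaults-heavy pipeline downstream.
function shortMap(fn, xs) {
  const out = [];
  for (const x of xs) {
    try { out.push(fn(x)); } catch (_) { /* skip the record, keep going */ }
  }
  return out;
}

function looseMap(fn, xs) {
  return xs.map((x) => {
    try { return fn(x); } catch (_) { return undefined; }
  });
}

const upper = (s) => s.toUpperCase();
shortMap(upper, ['a', null, 'b']); // ['A', 'B']
looseMap(upper, ['a', null, 'b']); // ['A', undefined, 'B']
```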
Given the current state of the language, this is similar to the good defaults we talked about before, in particular as applied to batch operations: mapping, filtering, etc. Going forward, I like to imagine what a dynamic language with loose batch functions built in would look like, and the kinds of problems we might want to solve with it: things like transforming and displaying streams of content or other works generated by people or machines, or cases in which we know we are working with incomplete or unknown records. More realistic data handling, in other words.
On a language level, another solution might be:
throwing fewer errors. That sounds a little silly at first, but instead we might do something like embracing null, the way Clojure and ClojureScript use nil (an approach often called nil punning).
After all, what purpose does it serve to stop a program entirely when you try to index into a nested object, say, and ask for a deep key that is not there? Why not just return null? It is the value that says this does not exist.
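A sketch of what that could look like, with a made-up getIn helper (Clojure’s real get-in behaves this way, returning nil for missing keys):

```javascript
// Nil-punning-flavored deep lookup: asking for a missing deep key just
// returns null (or a fallback) instead of throwing partway down the path.
function getIn(obj, path, fallback = null) {
  let current = obj;
  for (const key of path) {
    if (current == null) return fallback;
    current = current[key];
  }
  return current == null ? fallback : current;
}

const user = { profile: { name: 'Sarah' } };
getIn(user, ['profile', 'name']);   // 'Sarah'
getIn(user, ['settings', 'theme']); // null — no TypeError
```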
Okay this is getting pretty hypothetical now, and the last one, from the “Miscomputation” paper is the most hypothetical:
We could learn to live with errors. In this case that implies having tools to live-code fixes and surface errors as they happen.
The paper uses Smalltalk as an example of a language that works with errors and locates some of the arguments for embracing mistakes in the practice of live coding and of digital art.
At first I was tempted to dismiss this as being unimportant in terms of practical software, because while errors can help discover good aesthetics, they did not seem to have a place in interface implementations.
But then, if the interface is the membrane between the machine and the world, developing tools that allow us to see maladapted code and address it quickly might be the best way to create code that works.
This is where pushing away types as a solution to the wrong problem is, I think, an important rhetorical move.
What if instead of constructing a tower of tools to defend us from imperfections, we were spending our efforts on tools that lived in harmony with imperfection and therefore with the truth of the world?
I also try to keep in mind what types are useful for. Things like
Anyways, there are about 10 talks hidden inside this one, but today my time is about up. So, I hope you feel encouraged to look elsewhere to solve some of your problems or maybe just question the wave a little.
The paper I pulled from is easy to read and totally worthwhile — his concluding suggestion, which I won’t spoil, does imagine a world where types and freedoms might live happily nearby one another.
All of these are on the credits and links page of this talk, which I popped onto the internet for ya, I’ll tweet out the link. Meanwhile, thanks so much!
Rich Hickey’s 2017 Conj Keynote | Clojure and Types, from LispCast | Nil Punning from LispCast | Zombie Freud | My stupid thesis | Tomas Petricek’s “Miscomputation in software: Learning to live with errors” | React 16 Error Boundaries | TestCheck.js
And that’s it! Please remember, this is a document capturing a particular talk, not everything I think on the subject. But it’s still ok to be mad if you really wanna be. ❤️
Originally published on hackernoon.com