## Saturday, January 25, 2014

### OBT 2014

I just attended the Off the Beaten Track workshop, which is a POPL workshop where people gather to offer their new, untested, and (ideally) radical ideas. Here are some quick and incomplete reactions.

Chris Martens gave a talk, Languages for Actions in Simulated Worlds, in which she described some of her ideas for using linear logic programming to model interactive fiction.

The idea behind this is that you can view the state of a game (say, an interactive fiction) as a set of hypotheses in linear logic, and the valid state transitions as hypotheses of the form $A \multimap B$. So if your game state is $$\Gamma = \{ A, X, Y, Z \}$$ then matching the rule $A \multimap B$ against the context and applying the $\multimap{}L$ rule gives you the modified context $$\Gamma' = \{ \mathbf{B}, X, Y, Z \}$$ A story, then, is a proof that the final state of the story is derivable from the initial one.
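To make this concrete, here is a minimal sketch (my own, not Martens's) of this style of state transition as multiset rewriting in Python, using the atoms and the rule $A \multimap B$ from the example above:

```python
from collections import Counter

# A game state is a multiset of atomic propositions (the linear context Γ).
# A rule (need, give) models A ⊸ B: it consumes the atoms in `need`
# and produces the atoms in `give`.

def applicable(state, rule):
    need, _ = rule
    return all(state[a] >= n for a, n in need.items())

def apply_rule(state, rule):
    need, give = rule
    new = state.copy()
    new.subtract(need)   # consume the hypotheses in A
    new.update(give)     # add the hypotheses in B
    return +new          # unary + drops atoms whose count hit zero

state = Counter({"A": 1, "X": 1, "Y": 1, "Z": 1})
rule = (Counter({"A": 1}), Counter({"B": 1}))  # A ⊸ B

assert applicable(state, rule)
print(sorted(apply_rule(state, rule).elements()))  # ['B', 'X', 'Y', 'Z']
```

Linearity falls out of the multiset discipline: applying the rule consumes $A$, so it cannot be used again unless some other rule re-produces it.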

Here are some thoughts I had about her talk:

• A story is actually a linearization of a proof term.

The proof term just records the dependencies in the state transitions, and there is no intrinsic order beyond that. There are many possible linearizations for each proof term, and different linearizations yield different stories.

So one question is whether various literary effects (like plot structure) can be seen to arise as properties of the linearization. There has been a lot of work on linearizing proof nets, and I wonder if that has some applications to this.
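To illustrate the point about multiple linearizations (with invented story events, not anything from the talk): the dependencies recorded by a proof term form a DAG, and each topological sort of that DAG is a distinct telling of the same underlying story.

```python
from itertools import permutations

# Hypothetical story events; each edge (a, b) means "a must happen before b".
events = ["meet", "quarrel", "duel", "letter"]
deps = {("meet", "quarrel"), ("quarrel", "duel"), ("meet", "letter")}

def linearizations(events, deps):
    """Yield every total order of events consistent with the dependencies."""
    for order in permutations(events):
        pos = {e: i for i, e in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in deps):
            yield order

stories = list(linearizations(events, deps))
# One dependency structure, three distinct "stories": the letter may arrive
# before, between, or after the quarrel and the duel.
print(len(stories))  # 3
```

(A brute-force enumeration like this is only for illustration; for real DAGs you would use a proper topological-sort algorithm.)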

• Can control operators be used to model flashbacks?

That is, not all stories occur in a linear chronological order. Sometimes narratives jump forwards and backwards in time. Can we somehow use control operators to model time travel? Call/cc is often explained intuitively in terms of time travel, and it would be cool if that could be turned into a literal explanation of what was happening.

• In Martens's account, proof search is used by the computer to work out the consequences of user actions. Is there any way we can make the player do some deduction? For example, some stories gradually reveal information to the reader until the reader has a flash of insight about what has really been happening.

So is there any way to "make the player do proof search", in order to achieve game effects related to hidden information? (I'm not even sure what I mean, honestly.)

• Can we use theorem proving to model fate and prophecies in games?

So, if you have a Lincoln simulator in which John Wilkes Booth kills Abraham Lincoln at the end of the game, then you might want to let the player do anything that does not make this impossible. In current games, this might be done by giving Booth rule-breaking defenses or simply forbidding players from aiming at Booth. But this lacks elegance, not to mention literary value!

It would be better for the actions available to the player to be only those that don't rule out this event -- not just immediately, but eventually into the future. We might imagine that the game could use linear theorem proving, and then forbid an action if there is any path along which Booth dies within $n$ steps. It's still removing choice, but perhaps if we are subtle enough about it, it will seem a bit more natural, as if coincidence were subtly arranged to ensure the outcome, rather than a game hack.
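Here is one way that check might be sketched (a hypothetical depth-bounded search over invented atoms like `booth_alive`; nothing like this was in the talk): an action is permitted only if, after taking it, some sequence of rule applications can still reach a state containing the fated event.

```python
from collections import Counter

def apply_rule(state, rule):
    """Apply A ⊸ B to a multiset state; return None if A isn't present."""
    need, give = rule
    if any(state[a] < n for a, n in need.items()):
        return None
    new = state.copy()
    new.subtract(need)
    new.update(give)
    return +new  # drop atoms whose count hit zero

def fate_reachable(state, rules, fated, depth):
    """Can some sequence of at most `depth` rule applications produce `fated`?"""
    if state[fated] > 0:
        return True
    if depth == 0:
        return False
    return any(
        (nxt := apply_rule(state, r)) is not None
        and fate_reachable(nxt, rules, fated, depth - 1)
        for r in rules
    )

def permitted(state, action, rules, fated, depth):
    """Allow an action only if fate is still reachable after taking it."""
    nxt = apply_rule(state, action)
    return nxt is not None and fate_reachable(nxt, rules, fated, depth)

# Invented example: Booth must stay alive for the assassination to stay possible.
assassinate = (Counter(booth_alive=1, lincoln_alive=1), Counter(lincoln_shot=1))
kill_booth = (Counter(booth_alive=1), Counter())
walk = (Counter(), Counter())  # a harmless filler action
rules = [assassinate, kill_booth, walk]

state = Counter(booth_alive=1, lincoln_alive=1)
print(permitted(state, walk, rules, "lincoln_shot", 3))        # True
print(permitted(state, kill_booth, rules, "lincoln_shot", 3))  # False
```

Naive exhaustive search like this blows up quickly, of course; a real game would need a smarter prover, which is exactly where linear theorem proving would earn its keep.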

Note that all of these ideas were spurred by just the very first section of her talk -- she gave me a lot to think about!

Bret Victor gave the keynote talk. On the internet, he has a reputation as a bomb-throwing radical interaction designer, but his in-person persona is very mild. He opened his talk with a paean to the virtues of direct manipulation, and then he moved on to demo some sample pieces of software which worked more like the way he wanted.

His argument, loosely paraphrased, was that people write programs to create interactive behaviors, and that this would be easier to do in a direct-manipulation style. By this, he meant that you want a programming environment in which behaviors and the means for creating and modifying them "live together" -- that is, you use the same conceptual framework for both the relevant behaviors and the operators on them. So if you want to create a visual program, he thought it would be better to offer visual operators for modifying pictures, rather than code.

He illustrated this with a pair of demo programs. The first of these was a program that let you build interactive animations with timelines and some vector graphics primitives, and the second was a data visualization program that worked similarly, but which replaced the timeline with a miniature spreadsheet.

Both of these programs seemed like extremely mild and conservative designs to me. The basic vocabulary he worked with --- grids, vectors, timelines, spreadsheets --- consisted of familiar primitives from drawing, animation, and data-manipulation programs, and he augmented it all by letting you parameterize these things with variables, giving you procedural abstraction. He was careful to make variables visualizable using sliders, à la Conal Elliott's tangible values, but beyond that he really seemed to hook things together in the most direct way possible.

Something I was really curious about was how Victor might design an interface that was not visual --- e.g., how would you apply his design principles when designing for a blind user? What does direct manipulation mean for sounds, rather than pictures?

#### 1 comment:

1. I meant to bring this up at OBT but forgot: When you said "A story is actually a linearization of a proof term", well, that would be the case for a story on paper. But it wouldn't necessarily be the case for, say, a Twine game, would it? Similarly, a choose-your-own-adventure book has to be linearized in order to be printed in the book, but if it's rendered in hypertext then no such linearization is needed.

Also, your comment about interfaces that aren't visual makes me think of some things that Lea Albaugh has said about how our current cultural infatuation with smooth glass screens may be holding us back, and how there's a lot of work to be done on interfaces that give haptic feedback.

I really liked the idea of direct-manipulation programming from Bret Victor's talk, but I'm hung up on something that relates to a comment that someone sitting next to me made during the talk. Their comment boiled down to: this system couldn't be self-hosting, could it? I unfortunately couldn't stick around for the whole talk, because I had my own talk to prepare for, but if I had stayed until the end, my question would have been something like, "When you sit in your Bret Victor Bat Cave full of magical wonders, and you make things -- when you make demos like the ones you just showed us -- do you make them using direct-manipulation programming, or do you make them by writing code like all the rest of us schlubs? If the latter, then do you imagine that such a divide will always be necessary -- that there will always be some programmers that do indirect manipulation, while others do direct manipulation using the tools the others made? If so, then what do you think the cultural implications of such a divide are? If not, then how might you propose that such a divide could be overcome?"