Collected here are all papers for which I was the principal or only author, for which I am proud of the result, for which I have the source, and which I think I can legally redistribute.
Each paper is available in your choice of PDF, Postscript, or gzipped Postscript. The PDF is generated from the Postscript by the ps2pdf tool. I recommend Postscript for better-looking output, but choose PDF if that is easier to read in your browser.
Motivation: You're trying to automate your cyber-defenses because hiring system administrators with security skills has turned out to be too expensive. You're finding that automated defenses can easily be overwhelmed by data from intrusion detectors and you're also worried that an automated defense might be tricked into helping the attacker disable parts of the system. Read this paper for an approach that will make automated cyber-defense feasible.
Motivation: To defend your computer system against cyber-attack, you've decided to use redundant, heterogeneous, distributed components coordinated by a fault-tolerant protocol. Fine, but rather than just hoping that fault tolerance will mask the damage caused by an attack, you'll do better to react, adapt, and try to contain the attacker. Read this paper for ideas on cyber-defense, many of which were later analyzed and implemented on BBN's ITUA and DPASA projects.
Motivation: You're a large organization entrusted with defending the United States. You've come to depend on computers a lot but they seem easily corrupted, and even after a couple decades of trying to build your own incorruptible versions, you've given up and are buying Wintel. (It's cheaper, plus your employees insist on playing Minesweeper at work.) Doesn't that leave the United States dangerously vulnerable? Read this paper to learn how you might be able to have your Wintel cake and still eat it with peace of mind.
Motivation: Think that fault tolerance and security are quite different properties? You're right. But in one way they are quite alike: both constrain the flow of information. Read this paper to see that similarity described mathematically.
The paper suggests that because fault tolerance and security have similar specifications, they might be verified using similar techniques. So far, however, I've been unable to make significant progress toward that goal.
Motivation: You want to be able to verify that your systems are secure but you're having trouble getting the right formal security policy. Access-control policies are too weak because they don't constrain covert channels. Noninterference policies are too strong because they don't allow any covert channels at all. Allowing selective violations of a noninterference policy strikes you as a pathetic hack, and the examples you've seen leave you with a policy that is non-composable. Read this paper for a noninterference policy that limits the rate at which information can flow through covert channels and is composable too.
I think this paper is my best work. One of these days I will return to the problem of formally verifying security and build tools for verifying policies like those described here.
Motivation: You're building a secure distributed system and want to verify its security. Naturally, you'd like to break that verification down into smaller steps. Why not use the recently invented "hook-up security" policy? Then once you've shown that each component is secure, you'll know immediately that the whole system is secure. Unfortunately, "hook-up security" is a difficult property to work with. Read this paper to see how it can be decomposed into simpler properties and verified with the help of the Gypsy formal verification tools.
These papers describe work on specific projects at Key Software, Inc. (Not currently available for download.)
Note that Franklin L. Webber was formerly named Douglas G. Weber. Papers written before 1997 use his older name.