knuth.jpg

I received this Knuth reward check for a remark I sent to Knuth about a particular SAT-solving heuristic concerned with the selection of decision literals. The heuristic has advanced the state of the art for the fairly obscure problem of Aurifeuillean factorization, but it has implications for other problems, such as 'show that some adjacency-configuration of Knights on a chess board always results in a draw' or 'find possible starting board states for a Game of Life that evolves to a given configuration after 216 rounds'.
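For readers who haven't looked inside a SAT solver: a decision literal is the guess the solver makes when unit propagation runs out of forced moves, and the heuristic that chooses it largely determines how much of the search space gets explored. Below is a deliberately tiny DPLL-style solver in Python with that choice isolated in one function; it's a minimal sketch for orientation only, not the heuristic from the remark, and the example formula is made up.

```python
# Minimal DPLL sketch, purely to show where a decision-literal heuristic plugs in.
# Clauses are lists of non-zero ints in the DIMACS convention: 3 means x3, -3 means NOT x3.

def simplify(clauses, literal):
    """Assume `literal` is true: drop satisfied clauses, strip its negation elsewhere."""
    out = []
    for clause in clauses:
        if literal in clause:
            continue                          # clause already satisfied
        reduced = [l for l in clause if l != -literal]
        if not reduced:
            return None                       # produced an empty clause: conflict
        out.append(reduced)
    return out

def choose_decision_literal(clauses):
    """The pluggable heuristic: here, just grab a literal from a shortest clause.
    Real solvers use far more refined scoring (VSIDS, look-ahead, ...)."""
    return min(clauses, key=len)[0]

def dpll(clauses, assignment=()):
    # Unit propagation: clauses of length one force an assignment, no decision needed.
    while clauses and any(len(c) == 1 for c in clauses):
        unit = next(c[0] for c in clauses if len(c) == 1)
        clauses = simplify(clauses, unit)
        if clauses is None:
            return None                       # conflict under this partial assignment
        assignment = assignment + (unit,)
    if not clauses:
        return assignment                     # every clause satisfied
    lit = choose_decision_literal(clauses)
    for branch in (lit, -lit):                # try the decision, then backtrack to its negation
        reduced = simplify(clauses, branch)
        if reduced is not None:
            result = dpll(reduced, assignment + (branch,))
            if result is not None:
                return result
    return None

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))      # e.g. (1, 3, -2)
```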

I don't have any formal background in SMT and SAT solvers, but I've taken a pretty earnest interest in them, steadily filling in background in automated logic and satisfiability. I've written a solver of my own design, as well as an interactive development environment for SAT solvers & SMTLIBv2.

Although they've been applied to an extraordinary number of problems in the design of electronics, automated reasoning tools have long been rarefied air in line-of-business software, and that is perceptibly changing. The extraordinary pace of hardware, along with the dependable churn of speed records at formal SAT & SMT solver competitions, has allowed industrial engines of satisfiability to realistically check properties of real-world software, giving the first intimations of a major shift in static analysis & the study of copy protection.
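To make 'checking properties of real-world software' concrete at the solver level, here's a small sketch using Z3's Python bindings (one industrial engine among several; `pip install z3-solver`). Both the code fragment and the property are invented for illustration: we ask whether a bit-twiddled range check agrees with the obvious one on every 32-bit input, and an unsat answer on the negated property means no disagreeing input exists.

```python
from z3 import BitVec, Solver, And, unsat

x = BitVec('x', 32)

# The "obvious" range check, over signed 32-bit arithmetic: 0 <= x < 64.
obvious = And(x >= 0, x < 64)

# A bit-twiddled version a programmer might substitute: the top 26 bits are zero.
optimized = (x & 0xFFFFFFC0) == 0

s = Solver()
s.add(obvious != optimized)       # ask for any input on which the two checks disagree
if s.check() == unsat:
    print("the two checks agree on every 32-bit input")
else:
    print("counterexample:", s.model()[x])
```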

I've personally used such solvers in the service of checking real-world code; they've assisted me not only in finding, but also in generating, several 0day exploits (as well as plenty of ordinary bugs) in well-respected software (and even a language runtime).
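In my experience that workflow usually bottoms out in something like the following toy: the branch conditions guarding a suspect path are written down as bitvector constraints, and the solver is asked for a concrete input that satisfies all of them at once. The 'parser' in the comment is imagined for the example; real tooling harvests these constraints from the program automatically rather than by hand.

```python
# Hypothetical vulnerable path being targeted (not real code):
#
#     if header == 0x7F454C46 and length > 0x1000 and length * 4 < length:
#         memcpy(buf, payload, length * 4)   # 32-bit wrap -> undersized copy
#
from z3 import BitVec, Solver, UGT, ULT, sat

header = BitVec('header', 32)
length = BitVec('length', 32)

s = Solver()
s.add(header == 0x7F454C46)        # magic bytes the parser checks first
s.add(UGT(length, 0x1000))         # the "big enough" sanity check, unsigned
s.add(ULT(length * 4, length))     # the size computation wraps around

if s.check() == sat:
    m = s.model()
    print("header = %#x, length = %#x" % (m[header].as_long(), m[length].as_long()))
```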

I say this with enough experience to be cynical about automated vulnerability research: academics have been announcing a fundamental shift toward symbolic or even artificially intelligent analysis for more than two decades now. There's an established procedure for these sorts of claims. It begins with an announcement of extraordinary results, prophesying the end of manual audits, usually accompanied by some impressive demonstrations. The next step is hazier, happening outside the channels of formal security discussion, in the seedier realm of blackhat hackers, independent researchers and cyberdefense inquiries: the results just don't seem to stack up against actual software, and when they do, they can't get past the substantial limitations of the underlying method or the state explosion present in actual software, never offering a new approach to some of the trickiest and most mundane problems generated by common program structures like loops.

Still, the problem child of security research is beginning to take its first steps. Tools like angr have been used to automatically find & exploit vulnerabilities in software, winning prizes at events like the DARPA Cyber Grand Challenge and generating considerable interest. The application of such tools to existing industrial software still seems far away, but the adversarial nature of security research demands that researchers not only rely upon their own cunning, but also extend and advance their intelligence with the capabilities of a machine, using whatever tools exist today rather than what may exist tomorrow.
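For the curious, the basic shape of an angr 'find an input that reaches this point' script is roughly the sketch below. The binary name and target address are placeholders, and real targets need considerably more setup (environment modelling, hooks, limits on state explosion) than this suggests.

```python
import angr

# Placeholders: a local test binary and the address of the code we want to reach.
proj = angr.Project("./toy_target", auto_load_libs=False)
state = proj.factory.entry_state()
simgr = proj.factory.simulation_manager(state)

# Explore paths symbolically until one reaches the target block.
simgr.explore(find=0x400f40)

if simgr.found:
    # Concretize the stdin that steers execution to the target.
    print(simgr.found[0].posix.dumps(0))
```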