Andrew W. Keep

Postdoctoral Researcher

Publications

Peer-reviewed publications

Andrew W. Keep and R. Kent Dybvig. A run-time representation of Scheme record types. Journal of Functional Programming, published online (FirstView), 2014. doi:10.1017/S0956796814000203. [BibTeX] [Abstract]
@article{keep-dybvig-jfp-2014,
  author = {Keep, Andrew W. and Dybvig, R. Kent},
  title = {A Run-time Representation of {Scheme} Record Types},
  journal = {Journal of Functional Programming},
  volume = {FirstView},
  month = {October},
  year = {2014},
  issn = {1469-7653},
  pages = {1--42},
  numpages = {42},
  doi = {10.1017/S0956796814000203},
  URL = {http://journals.cambridge.org/article_S0956796814000203},
}

The Revised⁶ Report on the Algorithmic Language Scheme added a mechanism to the Scheme programming language for creating new record types procedurally. While many programming languages support user-defined, structured data types, these are usually handled syntactically, so that the compiler can make choices at compile time about the memory layout of these data types. The procedural record types in Scheme, however, can be constructed at run time, making an efficient run-time representation of record types important for good run-time performance. The run-time representation used in our implementation provides an extended model for record types that allows record types to represent foreign scalar data types, e.g., machine-word integers, and allows the base record type to be extended to create non-R6RS record-type systems. This article describes our run-time representation for record types, explains how the garbage collector handles foreign scalar data types, and presents extended record-type systems both for an object-oriented programming model and for a representation of foreign structured data types.

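As a concrete illustration of the procedural interface this paper targets (the example is mine, not taken from the paper), a record type created entirely at run time with the R6RS procedural operations looks like this:

```scheme
;; Create a record type using the R6RS procedural interface.  Nothing
;; here is known until run time, which is why an efficient run-time
;; representation of record types matters.
(define point-rtd
  (make-record-type-descriptor
    'point #f #f #f #f                      ; name, parent, uid, sealed?, opaque?
    '#((immutable x) (immutable y))))       ; field specifications

(define point-rcd
  (make-record-constructor-descriptor point-rtd #f #f))

(define make-point (record-constructor point-rcd))
(define point-x (record-accessor point-rtd 0))
(define point-y (record-accessor point-rtd 1))

(define p (make-point 3 4))
(point-x p) ;=> 3
```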
Shuying Liang, Weibin Sun, Matthew Might, Andrew W. Keep, and David Van Horn. Pruning, Pushdown Exception-Flow Analysis. In Proceedings of the 2014 14th IEEE Working Conference on Source Code Analysis and Manipulation, SCAM ’14, Washington, DC, USA, 2014. IEEE Computer Society. [BibTeX] [Abstract]
@inproceedings{liang-etal-scam-2014,
  author = {Liang, Shuying and Sun, Weibin and Might, Matthew and
            Keep, Andrew W. and Van Horn, David},
  title = {Pruning, Pushdown Exception-Flow Analysis},
  booktitle = {Proceedings of the 2014 14th IEEE Working Conference
               on Source Code Analysis and Manipulation},
  series = {SCAM '14},
  year = {2014},
  month = {September},
  publisher = {IEEE Computer Society},
  address = {Washington, DC, USA}
}

Statically reasoning about exceptions and their effects is challenging: exception-flows are mutually determined by traditional control-flow and points-to analyses. We tackle the challenge of analyzing exception-flows from two angles. First, from the angle of pruning control-flows (both normal and exceptional), we derive a pushdown framework for an object-oriented language with full-featured exceptions. Unlike traditional analyses, it allows precise matching of throwers to catchers. Second, from the angle of pruning points-to information, we generalize abstract garbage collection to object-oriented programs and enhance it with liveness analysis. We then seamlessly weave the techniques into enhanced reachability computation, yielding highly precise exception-flow analysis that remains tractable even for large applications. We evaluate our pruned, pushdown exception-flow analysis against an established analysis on large-scale standard Java benchmarks. The results show that our analysis significantly improves precision over traditional analysis within a reasonable analysis time.

Shuying Liang, Andrew W. Keep, Matthew Might, David Van Horn, Steven Lyde, Thomas Gilray, and Petey Aldous. Sound and Precise Malware Analysis for Android via Pushdown Reachability and Entry-Point Saturation. In Proceedings of the 3rd Annual ACM CCS Workshop on Security and Privacy in Smartphones and Mobile Devices, SPSM ’13, New York, NY, USA, 2013. ACM. [BibTeX] [Abstract]
@inproceedings{liang-etal-ccs-spsm-2013,
  author = {Liang, Shuying and Keep, Andrew W. and
            Might, Matthew and Van Horn, David and
            Lyde, Steven and Gilray, Thomas and Aldous, Petey},
  title = {Sound and Precise Malware Analysis for Android via
           Pushdown Reachability and Entry-Point Saturation},
  booktitle = {Proceedings 3rd Annual ACM CCS Workshop on
               Security and Privacy in Smartphones and Mobile
               Devices},
  series = {SPSM '13},
  year = {2013},
  address = {New York, NY, USA},
  publisher = {ACM},
  isbn = {978-1-4503-2491-5},
  doi = {10.1145/2516760.2516769}
}

Sound malware analysis of Android applications is challenging. First, object-oriented programs exhibit highly interprocedural, dynamically dispatched control structure. Second, the Android programming paradigm relies heavily on the asynchronous execution of multiple entry points. Existing analysis techniques focus more on the second challenge, while relying on traditional analytic techniques that suffer from inherent imprecision or unsoundness to solve the first.

We present Anadroid, a static malware analysis framework for Android apps. Anadroid exploits two techniques to soundly raise precision: (1) it uses a pushdown system to precisely model dynamically dispatched interprocedural and exception-driven control-flow; (2) it uses Entry-Point Saturation (EPS) to soundly approximate all possible interleavings of asynchronous entry points in Android applications. (It also integrates static taint-flow analysis and least-permissions analysis to expand the class of malicious behaviors that it can catch.) Anadroid provides rich user interface support for human analysts, who must ultimately rule on the “maliciousness” of a behavior.

To demonstrate the effectiveness of Anadroid's malware analysis, we had teams of analysts analyze a challenge suite of 52 Android applications released as part of DARPA's Automated Program Analysis for Cybersecurity (APAC) program. The first team analyzed the apps using a version of Anadroid with the traditional (finite-state-machine-based) control-flow analysis found in existing malware analysis tools; the second team analyzed the apps using a version of Anadroid with our enhanced pushdown-based control-flow analysis. We measured machine analysis time, human analyst time, and accuracy in flagging malicious applications. With pushdown analysis, we found statistically significant (p < 0.05) decreases in time: from 85 minutes per app to 35 minutes per app in combined human and machine analysis time; and statistically significant (p < 0.05) increases in accuracy with the pushdown-driven analyzer: from 71% correct identification to 95% correct identification.

Andrew W. Keep and R. Kent Dybvig. A Nanopass Framework for Commercial Compiler Development. In Proceedings of the 18th ACM SIGPLAN International Conference on Functional Programming, ICFP ’13, New York, NY, USA, 2013. ACM. [BibTeX] [Abstract]
@inproceedings{keep-dybvig-icfp-2013,
  author = {Keep, Andrew W. and Dybvig, R. Kent},
  title = {A Nanopass Framework for Commercial Compiler
           Development},
  booktitle = {Proceedings of the 18th ACM SIGPLAN International
               Conference on Functional Programming},
  series = {ICFP '13},
  year = {2013},
  address = {New York, NY, USA},
  publisher = {ACM},
  isbn = {978-1-4503-2326-0},
  doi = {10.1145/2500365.2500618}
}

Contemporary compilers must typically handle sophisticated high-level source languages, generate efficient code for multiple hardware architectures and operating systems, and support source-level debugging, profiling, and other program development tools. As a result, compilers tend to be among the most complex of software systems. Nanopass frameworks are designed to help manage this complexity. A nanopass compiler comprises many single-task passes with formally defined intermediate languages. The perceived downside of a nanopass compiler is that the extra passes will lead to substantially longer compilation times. To determine whether this is the case, we have created a plug replacement for the commercial Chez Scheme compiler, implemented using an updated nanopass framework, and we have compared the speed of the new compiler and the code it generates against the original compiler for a large set of benchmark programs. This paper describes the updated nanopass framework, the new compiler, and the results of our experiments. The compiler produces faster code than the original, averaging 15–27% depending on architecture and optimization level, due to a more sophisticated but slower register allocator and improvements to several optimizations. Compilation times average well within a factor of two of the original compiler, despite the slower register allocator and the replacement of five passes of the original 10 with over 50 nanopasses.

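For readers unfamiliar with the framework, here is a minimal sketch of a nanopass language definition and pass, modeled loosely on the nanopass-framework documentation (the language and pass names are illustrative, not from the paper):

```scheme
;; An intermediate language with a formally specified grammar; the
;; framework generates record types, pattern matchers, and unparsers
;; from this definition.
(define-language L0
  (terminals
    (symbol (x))
    (fixnum (c)))
  (Expr (e)
    x
    c
    (lambda (x) e)
    (e0 e1)))

;; A pass states only the interesting transformation (here just the
;; lambda case, with ,[e] requesting a recursive transform of the body);
;; the framework fills in the boilerplate traversal for all other forms.
(define-pass walk-lambdas : L0 (e) -> L0 ()
  (Expr : Expr (e) -> Expr ()
    [(lambda (,x) ,[e]) `(lambda (,x) ,e)]))
```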
Andrew W. Keep and R. Kent Dybvig. A sufficiently smart compiler for procedural records. In Proceedings of the 2012 Workshop on Scheme and Functional Programming, Scheme ’12, 2012. [BibTeX] [Abstract]
@inproceedings{keep-scheme-workshop-2012,
  author = {Keep, Andrew W. and Dybvig, R. Kent},
  title = {A Sufficiently Smart Compiler for Procedural
           Records},
  booktitle = {Proceedings of the 2012 Workshop on Scheme and
               Functional Programming},
  series = {Scheme '12},
  year = {2012},
}

Many languages include a syntax for declaring programmer-defined structured data types, i.e., structs or records. R6RS supports syntactic record definitions but also allows records to be defined procedurally, i.e., via a set of run-time operations. Indeed, the procedural interface is considered to be the primitive interface, and the syntactic interface is designed to be macro expandable into code that uses the procedural interface. Run-time creation of record types has a potentially significant impact. In particular, record creation, field access, and field mutation cannot generally be open-coded, as they can be with syntactically specified records. Often, however, the shape of a record type can be determined statically, and in such a case, performance equivalent to that of syntactically specified record types can be attained. This paper describes an efficient run-time implementation of procedural record types, discusses its overhead, and describes a set of compiler optimizations that eliminate the overhead when record-type information can be determined statically. The optimizations improve the performance of a set of representative benchmark programs by over 20% on average.

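To make the distinction concrete (my example, not from the paper): the syntactic and procedural interfaces can define the same record type, but only the first exposes the record's shape directly at compile time:

```scheme
;; Syntactic definition: the shape is apparent at compile time, so
;; creation and field access can be open-coded.
(define-record-type point (fields x y))
(point-x (make-point 1 2)) ;=> 1

;; Procedural definition of the same shape, built via run-time calls;
;; the optimizations described in the paper try to recover the static
;; shape from code like this.
(define point-rtd
  (make-record-type-descriptor 'point #f #f #f #f
                               '#((immutable x) (immutable y))))
```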
Andrew W. Keep, Alex Hearn, and R. Kent Dybvig. Optimizing closures in O(0)-time. In Proceedings of the 2012 Workshop on Scheme and Functional Programming, Scheme ’12, 2012. [BibTeX] [Abstract]
@inproceedings{keep-hearn-scheme-workshop-2012,
  author = {Keep, Andrew W. and Hearn, Alex and
            Dybvig, R. Kent},
  title = {Optimizing Closures in {O}(0)-time},
  booktitle = {Proceedings of the 2012 Workshop on Scheme and
               Functional Programming},
  series = {Scheme '12},
  year = {2012},
}

The flat-closure model for the representation of first-class procedures is simple, safe-for-space, and efficient, allowing the values or locations of free variables to be accessed with a single memory indirection. It is a straightforward model for programmers to understand, allowing them to predict the worst-case behavior of their programs. This paper presents a set of optimizations that improve upon the flat-closure model, along with an algorithm that implements them, and it shows that the optimizations together eliminate over 50% of run-time closure-creation and free-variable access overhead in practice, with insignificant compile-time overhead. The optimizations never add overhead and remain safe-for-space, thus preserving the benefits of the flat-closure model.

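A tiny illustration of the model (mine, not the paper's):

```scheme
;; Under the flat-closure model, the closure returned here is a single
;; object holding a code pointer plus the values of the free variables
;; dx and dy, so each free-variable reference costs one memory
;; indirection.
(define (make-shift dx dy)
  (lambda (x y)
    (values (+ x dx) (+ y dy))))

;; One kind of opportunity the optimizations exploit: a procedure with
;; no free variables needs no closure at all.
(define (square x) (* x x))
```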
Michael D. Adams, Andrew W. Keep, Jan Midtgaard, Matthew Might, Arun Chauhan, and R. Kent Dybvig. Flow-sensitive sub-zero control-flow analysis in linear-log time. In Proceedings of the 2011 ACM International Conference on Object Oriented Programming Systems Languages and Applications, OOPSLA ’11, pages 483-498, New York, NY, USA, 2011. ACM. [BibTeX] [Abstract]
@inproceedings{adams-cfa-2011,
  author = {Adams, Michael D. and Keep, Andrew W. and
            Midtgaard, Jan and Might, Matthew and
            Chauhan, Arun and Dybvig, R. Kent},
  title = {Flow-Sensitive Sub-Zero Control-Flow Analysis in
           Linear-Log Time},
  booktitle = {Proceedings of the 2011 ACM International
               Conference on Object Oriented Programming Systems
               Languages and Applications},
  pages = {483--498},
  year = {2011},
  series = {OOPSLA '11},
  address = {New York, NY, USA},
  publisher = {ACM},
  isbn = {978-1-4503-0940-0},
  doi = {10.1145/2048066.2048105},
}

The flexibility of dynamically typed languages such as JavaScript, Python, Ruby, and Scheme comes at the cost of run-time type checks. Some of these checks can be eliminated via control-flow analysis. However, traditional control-flow analysis (CFA) is not ideal for this task as it ignores flow-sensitive information that can be gained from dynamic type predicates, such as JavaScript’s instanceof and Scheme’s pair?, and from type-restricted operators, such as Scheme’s car. Yet, adding flow-sensitivity to a traditional CFA worsens the already significant compile-time cost of traditional CFA. This makes it unsuitable for use in just-in-time compilers.

In response, we have developed a fast, flow-sensitive type-recovery algorithm based on the linear-time, flow-insensitive sub-0CFA. The algorithm has been implemented as an experimental optimization for the commercial Chez Scheme compiler, where it has proven to be effective, justifying the elimination of about 60% of run-time type checks in a large set of benchmarks. The algorithm processes on average over 100,000 lines of code per second and scales well asymptotically, running in only O(n log n) time. We achieve this compile-time performance and scalability through a novel combination of data structures and algorithms.

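The kind of flow-sensitive fact the analysis exploits can be seen in a few lines of Scheme (an illustration of the idea, not code from the paper):

```scheme
;; In the consequent of (pair? ls), a flow-sensitive analysis knows
;; that ls is a pair, so the implicit pair checks in car and cdr can be
;; eliminated on that path.  A flow-insensitive analysis such as plain
;; sub-0CFA cannot justify this, because it assigns ls one abstract
;; value for the whole procedure.
(define (sum ls)
  (if (pair? ls)
      (+ (car ls) (sum (cdr ls)))
      0))
```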
Andrew W. Keep and R. Kent Dybvig. Ftypes: Structured foreign types. In Proceedings of the 2011 Workshop on Scheme and Functional Programming, Scheme ’11, 2011. [BibTeX] [Abstract]
@inproceedings{keep-scheme-workshop-2011,
  author = {Keep, Andrew W. and Dybvig, R. Kent},
  title = {Ftypes: Structured foreign types},
  booktitle = {Proceedings of the 2011 Workshop on Scheme and
               Functional Programming},
  series = {Scheme '11},
  year = {2011},
}

High-level programming languages, like Scheme, typically represent data in ways that differ from the host platform to support consistent behavior across platforms and automatic storage management, i.e., garbage collection. While crucial to the programming model, differences in data representation can complicate interaction with foreign programs and libraries that employ machine-dependent data structures that do not readily support garbage collection. To bridge this gap, many high-level languages feature foreign function interfaces that include some ability to interact with foreign data, though they often do not provide complete control over the structure of foreign data, are not always fully integrated into the language and run-time system, and are often not as efficient as possible. This paper describes a Scheme syntactic abstraction for describing foreign data, a set of operators for interacting with that data, and an implementation that together provide a complete, well integrated, and efficient solution for handling structured data outside the Scheme heap.

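In Chez Scheme, where this work appears, an ftype declaration and its access operators look roughly like this (a minimal sketch, not an example from the paper):

```scheme
;; Describe a C "struct point { int x; int y; }" to Scheme.
(define-ftype point
  (struct
    [x int]
    [y int]))

;; Allocate one outside the Scheme heap, fill it in, and read it back.
(define p
  (make-ftype-pointer point (foreign-alloc (ftype-sizeof point))))
(ftype-set! point (x) p 3)
(ftype-set! point (y) p 4)
(ftype-ref point (x) p) ;=> 3
(foreign-free (ftype-pointer-address p))
```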
Andrew W. Keep and R. Kent Dybvig. Enabling cross-library optimization and compile-time error checking in the presence of procedural macros. In Proceedings of the 2010 Workshop on Scheme and Functional Programming, Scheme ’10, pages 66-76, 2010. [BibTeX] [Abstract]
@inproceedings{keep-scheme-workshop-2010,
  author = {Keep, Andrew W. and Dybvig, R. Kent},
  title = {Enabling cross-library optimization and compile-time
           error checking in the presence of procedural macros},
  booktitle = {Proceedings of the 2010 Workshop on Scheme and
               Functional Programming},
  series = {Scheme '10},
  year = {2010},
  pages = {66--76},
}

Libraries and top-level programs are the basic units of portable code in the language defined by the Revised⁶ Report on Scheme. As such, they are naturally treated as compilation units, with source optimization and certain forms of compile-time error checking occurring within but not across library and program boundaries. This paper describes a library-group form that can be used to turn a group of libraries and optionally a top-level program into a single compilation unit, allowing whole programs to be constructed from groups of independent pieces and enabling cross-library optimization and compile-time error checking. The paper also describes the implementation, which is challenging partly because of the need to support the use of one library’s run-time exports when another library in the same group is compiled. The implementation does so without expanding any library in the group more than once, since doing so is expensive in some cases and, more importantly, semantically unsound in general. While described in the context of Scheme, the techniques presented in this paper are applicable to any language that supports both procedural macros and libraries, and might be adaptable to dependently typed languages or template meta-programming languages that provide full compile-time access to the source language.

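A sketch of how the library-group form described here might be used (the library contents are invented for illustration):

```scheme
;; The two libraries and the program become a single compilation unit,
;; so the call from (B) into (A) can be inlined and checked at compile
;; time, as can the call from the program into (B).
(library-group
  (library (A)
    (export double)
    (import (rnrs))
    (define (double x) (* 2 x)))
  (library (B)
    (export quadruple)
    (import (rnrs) (A))
    (define (quadruple x) (double (double x))))
  (program
    (import (rnrs) (B))
    (display (quadruple 10))))
```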
Andrew W. Keep, Michael D. Adams, Lindsey Kuper, William E. Byrd, and Daniel P. Friedman. A pattern matcher for miniKanren or how to get into trouble with CPS macros. In Proceedings of the 2009 Scheme and Functional Programming Workshop, Scheme ’09, pages 37-45, 2009. [BibTeX] [Abstract]
@inproceedings{keep-scheme-workshop-2009,
  author = {Keep, Andrew W. and Adams, Michael D. and
            Kuper, Lindsey and Byrd, William E. and
            Friedman, Daniel P.},
  title = {A pattern matcher for {miniKanren} or How to get
           into trouble with {CPS} macros},
  booktitle = {Proceedings of the 2009 Scheme and Functional
               Programming Workshop},
  series = {Scheme '09},
  pages = {37--45},
  year = {2009},
  note = {Also California Polytechnic State University Technical
          Report CPSLO-CSC-09-03},
  url = {http://digitalcommons.calpoly.edu/csse_fac/83/},
}

In this paper we present two implementations of the pattern-matching macros λe and matche for αKanren. The first implementation generates clean code, but our use of CPS macros in its implementation leads to a problem in determining when a new binding needs to be introduced for a pattern variable. The problem stems from our delayed creation of bindings: identifiers are compared first, and binders are created in a later step. As a result, macros that generate λe or matche expressions may appear to break hygiene, because the CPS macros incorrectly identify two identifiers as the same. The second implementation addresses these concerns by using more traditional macros, generating bindings as new variables are encountered.

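A typical use of a matche-style pattern matcher, in the spirit of the paper (this particular relation is my illustration, not an example from it):

```scheme
;; membero succeeds when x unifies with some member of ls.  Each matche
;; clause unifies its pattern against ls; pattern variables introduced
;; with unquote (here head and tail) are bound fresh automatically --
;; exactly the binding-introduction step the paper's CPS-macro pitfall
;; concerns.
(define (membero x ls)
  (matche ls
    [(,x . ,tail)]                        ; x is the first element...
    [(,head . ,tail) (membero x tail)]))  ; ...or x is in the rest
```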
Andrew Keep and Arun Chauhan. Concrete partial evaluation in Ruby. In Proceedings of the 2008 Fourth IEEE International Conference on eScience, ESCIENCE ’08, pages 346-347, Washington, DC, USA, 2008. IEEE Computer Society. [BibTeX] [Abstract] [Poster]
@inproceedings{keep-eScience-2008,
  author = {Keep, Andrew and Chauhan, Arun},
  title = {Concrete Partial Evaluation in {Ruby}},
  booktitle = {Proceedings of the 2008 Fourth IEEE International
               Conference on eScience},
  series = {ESCIENCE '08},
  year = {2008},
  isbn = {978-0-7695-3535-7},
  pages = {346--347},
  doi = {10.1109/eScience.2008.141},
  publisher = {IEEE Computer Society},
  address = {Washington, DC, USA},
}

Software tools have become a central part of the scientific researcher’s toolbox, but developing them can prove a distraction from the central focus of a research team’s investigation. Dynamic languages like Ruby provide an easy platform for rapid development and deployment of software that can be shared through SOAP, REST, or even RPC-style API interfaces with fellow researchers across the globe. In this extended abstract we present progress in addressing one of Ruby’s biggest shortcomings: performance. Our technique uses compiler analysis of Ruby’s C-based interpreter and core libraries to provide a basis for partial evaluation. The partial evaluator uses the results of this analysis, along with a running Ruby session, to evaluate more complex expressions than traditional partial evaluation techniques could normally handle, while ensuring that “unsafe” expressions are left for evaluation at run time.


Technical reports

Arun Chauhan, Andrew Keep, Chun-Yu Shei, and Pushkar Ratnalikar. RubyWrite: An embedded DSL for tree rewriting. Technical Report TR704, Indiana University, Bloomington, IN, February 2013. [BibTeX] [Abstract]
@techreport{chauhan-2013-rubywrite,
  author = {Chauhan, Arun and Keep, Andrew and Shei, Chun-Yu and
            Ratnalikar, Pushkar},
  title = {{RubyWrite}: An Embedded {DSL} for Tree Rewriting},
  number = {TR704},
  institution = {Indiana University},
  address = {Bloomington, IN},
  month = {February},
  year = {2013},
}

RubyWrite is a domain-specific language (DSL), embedded within Ruby, with the goal of providing an extensible, effective, portable, and easy-to-use framework for encoding source-level transformations. Embedding within Ruby provides access to the powerful features of Ruby, including its meta-programming capabilities. Ruby’s multi-paradigm programming model and flexible syntax drove our decision to use it as a host language. Easy integration with C interfaces lets us move performance-critical operations to C or link with external libraries. RubyWrite consists of three components: a tree builder, an unparser, and a tree rewriter. It has been used in multiple compiler research projects and as a teaching aid in a graduate-level compilers course. We expect RubyWrite to be widely applicable and a proof of concept for leveraging a modern language to write a portable compiler infrastructure core.


Ph.D. Thesis

Andrew W. Keep. A Nanopass Framework for Commercial Compiler Development. PhD thesis, Indiana University, February 2013. [BibTeX] [Abstract]
@phdthesis{keep-phdthesis-2013,
  author = {Keep, Andrew W.},
  advisor = {Dybvig, R. Kent},
  title = {A Nanopass Framework for Commercial Compiler
           Development},
  year = {2013},
  month = feb,
  isbn = {978-1-303-07214-7},
  publisher = {Indiana University},
  address = {Indianapolis, IN, USA},
}

Contemporary commercial compilers typically handle sophisticated high-level source languages, generate efficient assembly or machine code for multiple hardware architectures, run under (and generate code to run under) multiple operating systems, and support source-level debugging, profiling, and other program development tools. As a result, commercial compilers tend to be among the most complex of software systems.

Nanopass frameworks are designed to help make this complexity manageable. A nanopass framework is a domain-specific language, embedded in a general-purpose programming language, that aids compiler development. A nanopass compiler comprises many small passes, each of which performs a single task and specifies only the interesting transformations to be performed by the pass. Intermediate languages are formally specified by the compiler writer, which allows the infrastructure both to verify that the output of each pass is well-formed and to fill in the uninteresting boilerplate parts of each pass.

Prior nanopass frameworks were prototype systems aimed at educational use, but we believe that a suitable nanopass framework can be used to support the development of commercial compilers. We have created such a framework and have demonstrated its effectiveness by using the framework to create a new commercial compiler that is a “plug replacement” for an existing commercial compiler. The new compiler uses a more sophisticated, although slower, register allocator and implements nearly all of the optimizations of the original compiler, along with several “new and improved” optimizations. When compared to the original compiler on a set of benchmarks, code generated by the new compiler runs, on average, 21.5% faster. The average compile time for these benchmarks is less than twice as long as with the original compiler. This dissertation provides a description of the new framework, the new compiler, and several experiments that demonstrate the performance and effectiveness of both, as well as a presentation of several optimizations performed by the new compiler and facilitated by the infrastructure.


Published Software

Andrew W. Keep, Dipanwita Sarkar, R. Kent Dybvig, and Oscar Waddell. Nanopass Framework. [ Software ]. [BibTeX]
@misc{keep-etal-nanopass-framework,
  author = {Keep, Andrew W. and Sarkar, Dipanwita and
            Dybvig, R. Kent and Waddell, Oscar},
  title = {Nanopass Framework},
  howpublished =
    {\url{https://github.com/akeep/nanopass-framework}},
  year = {2012--2014},
  note = {software},
}

Projects

The nanopass framework is a domain-specific language (DSL) for writing compilers. The current version is implemented in Chez Scheme and is being used by both the new Chez Scheme compiler and Harlan, a programming language for general purpose GPU programming.

The Tapas project is a quick static analyzer for Dalvik bytecode that identifies procedures that call library routines identified as potentially dangerous. The tool is designed to aid a human analyst in identifying malware in Android applications.

Talks

Previous Talks

CCS-SPSM ’13

I presented work that my colleague Shuying Liang and I wrote up together on her Dalvik bytecode analysis framework on November 8, 2013, at the 3rd Annual ACM CCS Workshop on Security and Privacy in Smartphones and Mobile Devices in Berlin, Germany.

Clojure Conj ’13

I presented on using the nanopass framework to develop compilers at Clojure Conj in Alexandria, VA. (I was also on hand for the Scheme and Functional Languages Workshop the day before the conj started; at the time, the paper my advisor and I wrote on automatic cross-library optimization was still under review.)

ICFP ’13

I presented my experience report on writing a commercial compiler using the nanopass framework on Friday, September 27, 2013, at the 18th ACM SIGPLAN International Conference on Functional Programming. Slides: [PDF] [Keynote]

Personal

In addition to my research work at the University of Utah, I have started working with a couple of colleagues on hacking old 8-bit gaming systems, specifically 6502-based systems like the Atari 2600 and the Nintendo Entertainment System (NES). Our current project is connecting two NES systems to allow multi-player games to be played over a network.