Jay Ligatti
ligatti@cse.usf.edu

Associate Professor
Dept. of Computer Science & Engineering
University of South Florida
Phone:   +1-813-974-0908

Curriculum Vitae

Contents

  1. Research Interests
  2. Publications
  3. Current Projects
  4. Older Projects
  5. Teaching
  6. Current Students
  7. Former Students
  8. Talks

Research Interests

Software security and programming languages

(Wordle of research papers)

Publications, arranged by publication date

2016

  • Induction on Failing Derivations. Jay Ligatti. Technical Report PL-Sep13. University of South Florida, March 2016. (This is a major revision to the September 2013 version.) [BibTeX] A proof technique, called induction on failing derivations, is introduced. We wish to prove properties of judgments in deductive systems. Standard techniques exist for proving such properties on valid judgments; this note defines a technique for proving such properties on invalid (underivable) judgments. [Abstract]
  • On Subtyping-Relation Completeness, with an Application to Iso-Recursive Types. Jay Ligatti, Jeremy Blackburn, and Michael Nachtigal. Technical Report. University of South Florida, March 2016. (This is a major revision to the August 2014 version of the technical report, which itself was a major revision of technical report CSE-071012, "Completely Subtyping Iso-Recursive Types", from July 2012.) [BibTeX] Well-known techniques exist for proving the soundness of subtyping relations with respect to type safety. However, completeness has not been treated with widely applicable techniques, as far as we're aware.

    This paper develops techniques for stating and proving that a subtyping relation is complete with respect to type safety and applies the techniques to the study of iso-recursive subtyping. A new proof technique, induction on failing derivations, is provided that may be useful in other domains as well.

    The common subtyping rules for iso-recursive types---the "Amber rules"---are shown to be incomplete with respect to type safety. That is, there exist iso-recursive types t1 and t2 such that t1 can safely be considered a subtype of t2, but t1<=t2 is not derivable with the Amber rules.

    New, algorithmic rules are defined for subtyping iso-recursive types, and the rules are proved sound and complete with respect to type safety. The fully implemented subtyping algorithm is optimized to run in O(mn) time, where m is the number of mu-terms in the types being considered and n is the size of the types being considered.
    [Abstract]
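To make the Amber rules concrete, here is a minimal sketch (mine, not the paper's O(mn) algorithm) of the classic assumption-set formulation of iso-recursive subtyping that the paper analyzes; the tuple encoding of types and the choice of base types are illustrative assumptions.

```python
# A minimal sketch of Amber-style iso-recursive subtyping with an
# assumption set: to derive mu a.S <= mu b.T, assume a <= b while
# checking S <= T. Types are tuples: ('int',), ('real',), ('top',),
# ('arrow', arg, res), ('mu', var, body), ('var', name).

def subtype(s, t, assumptions=frozenset()):
    if t == ('top',):
        return True
    if s[0] == 'var' and t[0] == 'var':
        return s == t or (s[1], t[1]) in assumptions
    if s == t:
        return True
    if s[0] == 'arrow' and t[0] == 'arrow':
        # contravariant in the argument, covariant in the result
        return (subtype(t[1], s[1], assumptions) and
                subtype(s[2], t[2], assumptions))
    if s[0] == 'mu' and t[0] == 'mu':
        # Amber rule: record the assumption that the bound variables
        # are related, then compare the bodies
        return subtype(s[2], t[2], assumptions | {(s[1], t[1])})
    if s == ('int',) and t == ('real',):
        return True
    return False
```

For example, mu a.(top -> a) <= mu b.(int -> b) is derivable here because arguments are checked contravariantly under the assumption a <= b. The paper's point is that rules in this style, though sound, are incomplete with respect to type safety.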

2015

  • A Theory of Gray Security Policies. Donald Ray and Jay Ligatti. Proceedings of the European Symposium on Research in Computer Security (ESORICS), September 2015. [BibTeX] This paper generalizes traditional models of security policies, from specifications of whether programs are secure, to specifications of how secure programs are. This is a generalization from qualitative, black-and-white policies to quantitative, gray policies. Included are generalizations from traditional definitions of safety and liveness policies to definitions of gray-safety and gray-liveness policies. These generalizations preserve key properties of safety and liveness, including that the intersection of safety and liveness is a unique allow-all policy and that every policy can be written as the conjunction of a single safety and a single liveness policy. It is argued that the generalization provides several benefits, including that it serves as a unifying framework for disparate approaches to security metrics, and that it separates---in a practically useful way---specifications of how secure systems are from specifications of how secure users require their systems to be. [Abstract]
  • Design of Adiabatic Dynamic Differential Logic for DPA-Resistant Secure Integrated Circuits. Matthew Morrison, Nagarajan Ranganathan, and Jay Ligatti. IEEE Transactions on Very Large Scale Integration Systems (TVLSI), Vol 23, No 8, pp 1381-1389. IEEE, August 2015. [BibTeX] Production of cost-effective secure integrated chips, such as smart cards, requires hardware designers to consider tradeoffs in size, security, and power consumption. To design successful security-centric designs, the low-level hardware must contain built-in protection mechanisms that supplement cryptographic algorithms, such as the Advanced Encryption Standard and Triple Data Encryption Standard, by preventing side-channel attacks, such as differential power analysis (DPA). Dynamic logic obfuscates the output waveforms and the circuit operation, reducing the effectiveness of DPA attacks. For stronger mitigation of DPA attacks, we propose the implementation of adiabatic dynamic differential logic (ADDL) for applications in secure integrated circuit (IC) design. Such an approach is effective in reducing power consumption, as demonstrated using HSPICE simulations with 22-nm predictive technology. The benefits of our design are demonstrated by comparing instantaneous power waveforms and observing the magnitude of differential power spikes during switching events. First, simulation results for body biasing on subthreshold adiabatic inverters show an improvement in differential power of up to 43.28% over similar inverters without body biasing. Then, a high-performance ADDL is presented for implementation in high-frequency secure ICs. This method improves the differential power over previous dynamic and differential logic methods by up to 89.65%. Finally, we propose a body-biased ADDL for ultralow power applications. Simulation results show that the differential power was improved by a factor of 199.16. [Abstract]
  • Modeling Runtime Enforcement with Mandatory Results Automata. Egor Dolzhenko, Jay Ligatti, and Srikar Reddy. International Journal of Information Security (IJIS), Vol 14, No 1, pp 47-60. Springer, February 2015. (preliminary version) [BibTeX] This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. The operational semantics of MRAs is simple and enables straightforward definitions of concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are simple and expressive. Putting all of these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies deterministic and nondeterministic MRAs can and cannot enforce. [Abstract]
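The MRA model above can be pictured with a small sketch. This is my hedged approximation of the core idea (a monitor mediating both security-relevant actions and their results), not the paper's formal operational semantics, and all names and the example policy are invented.

```python
# A rough sketch of a mandatory results automaton (MRA): the monitor
# sees each security-relevant action before it executes and sees its
# result afterward, and may transform either one.

class MandatoryResultsAutomaton:
    def __init__(self, transform_action, transform_result):
        self.transform_action = transform_action  # action -> action
        self.transform_result = transform_result  # (action, result) -> result

    def mediate(self, action, execute):
        """Run one monitored step: transform the action, execute it,
        then transform the result before releasing it to the app."""
        safe_action = self.transform_action(action)
        raw_result = execute(safe_action)
        return self.transform_result(safe_action, raw_result)

# Invented example policy: redirect file opens outside /tmp to a
# sandbox path, and truncate any result to 10 characters.
mra = MandatoryResultsAutomaton(
    transform_action=lambda a: a if a.startswith('/tmp/') else '/tmp/sandbox',
    transform_result=lambda a, r: r[:10],
)
```

The point the papers make is that earlier models could not express the result-transforming half of this interface.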

2014

  • Defining Injection Attacks. Donald Ray and Jay Ligatti. Proceedings of the 17th International Information Security Conference (ISC), October 2014. [BibTeX] This paper defines and analyzes injection attacks. The definition is based on the NIE property, which states that an application's untrusted inputs must only produce Noncode Insertions or Expansions in output programs (e.g., SQL queries). That is, when applications generate output programs based on untrusted inputs, the NIE property requires that inputs only affect output programs by inserting or expanding noncode tokens (e.g., string and float literals, lambda values, pointers, etc). This paper calls attacks based on violating the NIE property BroNIEs (i.e., Broken NIEs) and shows that all code-injection attacks are BroNIEs. In addition, BroNIEs contain many malicious injections that do not involve injections of code; we call such attacks noncode-injection attacks. In order to mitigate both code- and noncode-injection attacks, this paper presents an algorithm for detecting and preventing BroNIEs. [Abstract]
  • Fingerprinting Far Proximity from Radio Emissions. Tao Wang, Yao Liu, and Jay Ligatti. Proceedings of the European Symposium on Research in Computer Security (ESORICS), September 2014. [BibTeX] As wireless mobile devices are more and more pervasive and adopted in critical applications, it is becoming increasingly important to measure the physical proximity of these devices in a secure way. Although various techniques have been developed to identify whether a device is close, the problem of identifying the far proximity (i.e., a target is at least a certain distance away) has been neglected by the research community. Meanwhile, verifying the far proximity is desirable and critical to enhance the security of emerging wireless applications. In this paper, we propose a secure far proximity identification approach that determines whether or not a remote device is far away. The key idea of the proposed approach is to estimate the far proximity from the unforgeable "fingerprint" of the proximity. We have validated and evaluated the effectiveness of the proposed far proximity identification method through experiments on real measured channel data. The experiment results show that the proposed approach can detect the far proximity with a successful rate of 0.85 for the non-Line-of-sight (NLoS) scenario, and the successful rate can be further increased to 0.99 for the Line-of-sight (LoS) scenario. [Abstract]
  • Defining Injection Attacks. Donald Ray and Jay Ligatti. Technical Report CSE-TR-081114. University of South Florida, August 2014. [BibTeX] This paper defines and analyzes injection attacks. The definition is based on the NIE property, which states that an application's untrusted inputs must only produce Noncode Insertions or Expansions in output programs (e.g., SQL queries). That is, when applications generate output programs based on untrusted inputs, the NIE property requires that inputs only affect output programs by inserting or expanding noncode tokens (e.g., string and float literals, lambda values, pointers, etc). This paper calls attacks based on violating the NIE property BroNIEs (i.e., Broken NIEs) and shows that all code-injection attacks are BroNIEs. In addition, BroNIEs contain many malicious injections that do not involve injections of code; we call such attacks noncode-injection attacks. In order to mitigate both code- and noncode-injection attacks, this paper presents an algorithm for detecting and preventing BroNIEs. [Abstract]
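The NIE property lends itself to a small illustration. The following sketch is an invented approximation, not the paper's detection algorithm: it tokenizes a SQL-like query naively and flags an input as a BroNIE if the input alters any code (non-literal) token of the output program, rather than only inserting or expanding noncode tokens.

```python
# Naive illustration of the NIE property: build the output query from
# the real untrusted input and from a harmless placeholder, and require
# that both queries have identical *code* tokens (everything except
# string and numeric literals, which are noncode).
import re

TOKEN = re.compile(r"'(?:[^'\\]|\\.)*'|\d+(?:\.\d+)?|\w+|\S")

def code_tokens(query):
    # keep only tokens that are not string or numeric literals
    return [t for t in TOKEN.findall(query)
            if not (t.startswith("'") or re.fullmatch(r"\d+(?:\.\d+)?", t))]

def is_bronie(template, untrusted):
    """True if the untrusted input changes the code tokens of the output."""
    real = template.format(untrusted)
    safe = template.format("x")
    return code_tokens(real) != code_tokens(safe)
```

Under this toy check, "alice" only expands a string literal, while "' OR '1'='1" injects a new OR operator and so violates the NIE property.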

2013

  • Modeling Runtime Enforcement with Mandatory Results Automata. Egor Dolzhenko, Jay Ligatti, and Srikar Reddy. Technical Report USF-CSE-1213, University of South Florida, December 2013. [BibTeX] This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. Because previous work could not model general security monitors transforming results, MRAs capture realistic behaviors outside the scope of previous models. MRAs also have a simple operational semantics that makes it straightforward to define concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are simpler and more expressive than those of previous models. Putting all these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies deterministic and nondeterministic MRAs can and cannot enforce. [Abstract]

2012

  • Enforcing More with Less: Formalizing Target-aware Run-time Monitors. Yannis Mallios, Lujo Bauer, Dilsun Kaynar, and Jay Ligatti. Proceedings of the International Workshop on Security and Trust Management (STM), September 2012. [BibTeX] Run-time monitors ensure that untrusted software and system behavior adheres to a security policy. This paper defines an expressive formal framework, based on I/O automata, for modeling systems, policies, and run-time monitors in more detail than is typical. We explicitly model, for example, the environment, applications, and the interaction between them and monitors. The fidelity afforded by this framework allows us to explicitly formulate and study practical constraints on policy enforcement that were often only implicit in previous models, providing a more accurate view of what can be enforced by monitoring in practice. We introduce two definitions of enforcement, target-specific and generalized, that allow us to reason about practical monitoring scenarios. Finally, we provide some meta-theoretical comparison of these definitions and we apply them to investigate policy enforcement in scenarios where the monitor designer has knowledge of the target application and show how this can be exploited to make more efficient design choices. [Abstract]
  • A Location-based Policy-specification Language for Mobile Devices. Joshua Finnis, Nalin Saigal, Adriana Iamnitchi, and Jay Ligatti. Pervasive and Mobile Computing Journal, Vol 8, No 3, pp 402-414. Elsevier, June 2012. Local version. [BibTeX] The dramatic rise in mobile applications has greatly increased threats to the security and privacy of users. Security mechanisms on mobile devices are currently limited, so users need more expressive ways to ensure that downloaded mobile applications do not act maliciously. Policy-specification languages were created for this purpose; they allow the enforcement of user-defined policies on third-party applications. We have implemented LoPSiL, a location-based policy-specification language for mobile devices. This article describes LoPSiL’s design and implementation, several example policies, and experiments that demonstrate LoPSiL’s viability for enforcing policies on mobile devices. [Abstract]
  • Defining Code-injection Attacks. Donald Ray and Jay Ligatti. Proceedings of the ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), January 2012. [BibTeX] This paper shows that existing definitions of code-injection attacks (e.g., SQL-injection attacks) are flawed. The flaws make it possible for attackers to circumvent existing mechanisms, by supplying code-injecting inputs that are not recognized as such. The flaws also make it possible for benign inputs to be treated as attacks. After describing these flaws in conventional definitions of code-injection attacks, this paper proposes a new definition, which is based on whether the symbols input to an application get used as (normal-form) values in the application’s output. Because values are already fully evaluated, they cannot be considered “code” when injected. This simple new definition of code-injection attacks avoids the problems of existing definitions, improves our understanding of how and when such attacks occur, and enables us to evaluate the effectiveness of mechanisms for mitigating such attacks. [Abstract]

2011

  • PoliSeer: A Tool for Managing Complex Security Policies. Daniel Lomsak and Jay Ligatti. Journal of Information Processing, Vol 19, pp 292-306, Information Processing Society of Japan, July 2011. [BibTeX] Complex software-security policies are difficult to specify, understand, and update. The same is true for complex software in general, but while many tools and techniques exist for decomposing complex general software into simpler reusable modules (packages, classes, functions, aspects, etc.), few tools exist for decomposing complex security policies into simpler reusable modules. The tools that do exist for modularizing policies either encapsulate entire policies as atomic modules that cannot be decomposed or allow fine-grained policy modularization but require expertise to use correctly. This paper presents PoliSeer, a GUI-based tool designed to enable users who are not expert policy engineers to flexibly specify, visualize, modify, and enforce complex runtime policies on untrusted software. PoliSeer users rely on expert policy engineers to specify universally composable policy modules; PoliSeer users then build complex policies by composing those expert-written modules. This paper describes the design and implementation of PoliSeer and a case study in which we have used PoliSeer to specify and enforce a policy on PoliSeer itself. [Abstract]

2010

  • A Theory of Runtime Enforcement, with Results. Jay Ligatti and Srikar Reddy. Proceedings of the European Symposium on Research in Computer Security (ESORICS), September 2010. [BibTeX] This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. Because previous work could not model monitors transforming results, MRAs capture realistic behaviors outside the scope of previous models. MRAs also have a simple but realistic operational semantics that makes it straightforward to define concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are significantly simpler and more expressive than those of previous models. Putting all these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies MRAs can and cannot enforce. [Abstract]
  • A Packet-classification Algorithm for Arbitrary Bitmask Rules, with Automatic Time-space Tradeoffs. Jay Ligatti, Josh Kuhn, and Chris Gage. Proceedings of the IEEE International Conference on Computer Communication Networks (ICCCN), August 2010. [BibTeX] We present an algorithm for classifying packets according to arbitrary (including noncontiguous) bitmask rules. As its principal novelty, the algorithm is parameterized by the amount of memory available and can customize its data structures to optimize classification time without exceeding a given memory bound. The algorithm thus automatically trades time for space efficiency as needed. The two extremes of this time-space tradeoff (linear search through the rules versus a single table that maps every possible packet to its class number) are special cases of the general algorithm we present. Additional features of the algorithm include its simplicity, its open-source prototype implementation, its good performance even with worst-case rule sets, and its extendability to handle range rules and dynamic updates to rule sets. [Abstract]
  • PoliSeer: A Tool for Managing Complex Security Policies. Daniel Lomsak and Jay Ligatti. Proceedings of the International Conference on Trust Management (IFIP-TM), June 2010. [BibTeX] Few tools exist for decomposing complex security policies into simpler modules. The policy-engineering tools that do exist either encapsulate entire policies as atomic, indecomposable modules or allow fine-grained modularization but are complicated and lack policy-visualization capabilities. This paper briefly presents PoliSeer, the first tool we are aware of that allows complex policies to be specified, visualized, modified, and enforced as compositions of simpler policy modules. [Abstract]
  • A Theory of Runtime Enforcement, with Results. Jay Ligatti and Srikar Reddy. Technical Report USF-CSE-SS-102809, University of South Florida, October 2009, revised June 2010. [BibTeX] This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. Because previous work could not model monitors transforming results, MRAs capture realistic behaviors outside the scope of previous models. MRAs also have a simple but realistic operational semantics that makes it straightforward to define concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are significantly simpler and more expressive than those of previous models. Putting all these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies MRAs can and cannot enforce. [Abstract]
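The linear-search extreme of the time-space tradeoff described in the packet-classification paper can be sketched in a few lines. The rule format (mask, value, class) and the example rules below are illustrative assumptions, not taken from the paper or its implementation.

```python
# Toy linear-search classifier for arbitrary (including noncontiguous)
# bitmask rules: a packet matches a rule when packet & mask == value.
# This is the slow, memory-cheap extreme of the time-space tradeoff;
# the other extreme is one giant table mapping every packet to a class.

def classify(packet, rules):
    """Return the class of the first matching rule, else a default."""
    for mask, value, cls in rules:
        if packet & mask == value:
            return cls
    return 'default'

# Example: the first rule tests only the low nibble (a contiguous mask);
# the second tests a single bit, ignoring all others.
rules = [
    (0b1111, 0b0101, 'voip'),
    (0b1000, 0b1000, 'bulk'),
]
```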

2009

  • PoliSeer: A Tool for Managing Complex Security Policies. Daniel Lomsak and Jay Ligatti. Technical Report CSE-SSec-112509, University of South Florida, November 2009. [BibTeX] Complex security policies are difficult to specify, understand, and update. The same is true for complex software in general, but while many software-engineering tools exist for decomposing complex general software into simpler reusable modules (packages, classes, functions, aspects, etc.), few policy-engineering tools exist for decomposing complex security policies into simpler reusable modules. The tools that do exist for modularizing policies either encapsulate entire policies as atomic, indecomposable modules or allow fine-grained modularization but are complicated and lack policy-visualization capabilities.

    This paper presents PoliSeer, the first tool we are aware of that allows engineers to specify, visualize, modify, and enforce complex policies as compositions of simpler policy modules. We describe the design and implementation of PoliSeer, as well as a case study in which we have bootstrapped PoliSeer by using it to specify and enforce a policy on itself.
    [Abstract]
  • IVCon: Inline Visualization of Concerns. Nalin Saigal and Jay Ligatti. Technical Report CSE-110909-SE, University of South Florida, November 2009. [BibTeX] Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. This paper describes IVCon, a tool with a novel approach for completely modularizing CCCs. IVCon enables users to create, examine, and modify their code in two different views: the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon provides an interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views. [Abstract]
  • Control-Flow Integrity: Principles, Implementations, and Applications. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. ACM Transactions on Information and System Security, Vol 13, No 1, pp 1-40. ACM Press, October 2009. Local version. [BibTeX] Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions. [Abstract]
  • Inline Visualization of Concerns. Nalin Saigal and Jay Ligatti. Proceedings of the ACIS International Conference on Software Engineering Research, Management, and Applications (SERA), December 2009. [BibTeX] Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. This paper describes IVCon, a tool with a novel approach for completely modularizing CCCs. IVCon enables users to create, examine, and modify their code in two different views: the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon aims to provide an easy-to-use interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views. [Abstract]
  • LoPSiL: A Location-based Policy-specification Language. Jay Ligatti, Billy Rickey, and Nalin Saigal. Proceedings of the International ICST Conference on Security and Privacy in Mobile Information and Communication Systems (MobiSec), June 2009. [BibTeX] This paper describes the design of LoPSiL, a language for specifying location-dependent security and privacy policies. Policy-specification languages like LoPSiL are domain-specific programming languages intended to simplify the tasks of specifying and enforcing sound security policies on untrusted (i.e., potentially insecure) software. As far as we are aware, LoPSiL is the first imperative policy-specification language to provide abstractions specifically tailored to location-dependent policies for mobile-device applications. We have implemented a proof-of-concept compiler that inputs a LoPSiL policy P and a mobile-device application program A and outputs a new application program A’ equivalent to A, except that A’ contains inlined enforcement code that ensures that A’ satisfies P at runtime. We report our experiences using this compiler to design and implement several policies for mobile-device applications. [Abstract]
  • Composing Expressive Runtime Security Policies. Lujo Bauer, Jay Ligatti, and David Walker. ACM Transactions on Software Engineering and Methodology (TOSEM), Vol 18, No 3, pp 1-43. ACM Press, May 2009. Local version. [BibTeX] Program monitors enforce security policies by interposing themselves into the control flow of untrusted software whenever that software attempts to execute security-relevant actions. At the point of interposition, a monitor has authority to permit or deny (perhaps conditionally) the untrusted software’s attempted action. Program monitors are common security enforcement mechanisms and integral parts of operating systems, virtual machines, firewalls, network auditors, and anti-virus and anti-spyware tools.

    Unfortunately, the run-time policies we require program monitors to enforce grow more complex both as the monitored software is given new capabilities and as policies are refined in response to attacks and user feedback. We propose dealing with policy complexity by organizing policies in such a way as to make them composable, so that complex policies can be specified more simply as compositions of smaller subpolicy modules. We present a fully implemented language and system called Polymer that allows security engineers to specify and enforce composable policies on Java applications. We formalize the central workings of Polymer by defining an unambiguous semantics for our language. Using this formalization, we state and prove an uncircumventability theorem, which guarantees that monitors will intercept all security-relevant actions of untrusted software.
    [Abstract]
  • Run-Time Enforcement of Nonsafety Policies. Jay Ligatti, Lujo Bauer, and David Walker. ACM Transactions on Information and System Security (TISSEC), Vol 12, No 3, pp 1-41. ACM Press, January 2009. Local version. [BibTeX] A common mechanism for ensuring that software behaves securely is to monitor programs at run time and check that they dynamically adhere to constraints specified by a security policy. Whenever a program monitor detects that untrusted software is attempting to execute a dangerous action, it takes remedial steps to ensure that only safe code actually gets executed.

    This article improves our understanding of the space of policies enforceable by monitoring the run-time behaviors of programs. We begin by building a formal framework for analyzing policy enforcement: we precisely define policies, monitors, and enforcement. This framework allows us to prove that monitors enforce an interesting set of policies that we call the infinite renewal properties. We show how to construct a program monitor that provably enforces any reasonable infinite renewal property. We also show that the set of infinite renewal properties includes some nonsafety policies, i.e., that monitors can enforce some nonsafety (including some purely liveness) policies. Finally, we demonstrate concrete examples of nonsafety policies enforceable by practical run-time monitors.
    [Abstract]

2008

  • Defining and Visualizing Many-to-many Relationships between Concerns and Code. Nalin Saigal and Jay Ligatti. Technical Report CSE-090608-SE, University of South Florida, September 2008. [BibTeX] Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. In this paper, we describe IVCon, a GUI-based tool that provides a novel approach to modularization of CCCs. IVCon enables users to create, examine, and modify their code in two different views, the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon aims to provide an easy-to-use interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views. [Abstract]

2007

  • Fault-tolerant Typed Assembly Language. Frances Perry, Lester Mackey, George Reis, Jay Ligatti, David August, and David Walker. Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), June 2007. [BibTeX] A transient hardware fault occurs when an energetic particle strikes a transistor, causing it to change state. Although transient faults do not permanently damage the hardware, they may corrupt computations by altering stored values and signal transfers. In this paper, we propose a new scheme for provably safe and reliable computing in the presence of transient hardware faults. In our scheme, software computations are replicated to provide redundancy while special instructions compare the independently computed results to detect errors before writing critical data. In stark contrast to any previous efforts in this area, we have analyzed our fault tolerance scheme from a formal, theoretical perspective. To be specific, first, we provide an operational semantics for our assembly language, which includes a precise formal definition of our fault model. Second, we develop an assembly-level type system designed to detect reliability problems in compiled code. Third, we provide a formal specification for program fault tolerance under the given fault model and prove that all well-typed programs are indeed fault tolerant. In addition to the formal analysis, we evaluate our detection scheme and show that it only takes 34% longer to execute than the unreliable version. [Abstract]
  • Fault-tolerant Typed Assembly Language. Frances Perry, Lester Mackey, George Reis, Jay Ligatti, David August, and David Walker. Technical Report TR-776-07, Princeton University, April 2007. [BibTeX] A transient hardware fault occurs when an energetic particle strikes a transistor, causing it to change state. Although transient faults do not permanently damage the hardware, they may corrupt computations by altering stored values and signal transfers. In this paper, we propose a new scheme for provably safe and reliable computing in the presence of transient hardware faults. In our scheme, software computations are replicated to provide redundancy while special instructions compare the independently computed results to detect errors before writing critical data. In stark contrast to any previous efforts in this area, we have analyzed our fault tolerance scheme from a formal, theoretical perspective. To be specific, first, we provide an operational semantics for our assembly language, which includes a precise formal definition of our fault model. Second, we develop an assembly-level type system designed to detect reliability problems in compiled code. Third, we provide a formal specification for program fault tolerance under the given fault model and prove that all well-typed programs are indeed fault tolerant. In addition to the formal analysis, we evaluate our detection scheme and show that it only takes 34% longer to execute than the unreliable version. [Abstract]
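The replicate-and-compare idea behind these papers can be sketched at a high level. This is an invented software illustration, not the papers' typed-assembly scheme: redundant copies of a computation are compared before any critical write, so a transient fault that corrupts one copy is detected rather than silently committed.

```python
# Sketch of replicate-and-compare fault detection: run the computation
# twice and commit the result only if the two replicas agree.

class TransientFault(Exception):
    """Raised when redundant replicas of a computation disagree."""

def checked_write(store, key, compute):
    first = compute()
    second = compute()  # redundant replica of the same computation
    if first != second:
        # a transient fault corrupted one replica; refuse to commit
        raise TransientFault(f"replicas disagree: {first!r} vs {second!r}")
    store[key] = first
```

The papers' contribution is doing this at the assembly level with a type system that statically guarantees every critical write is guarded this way under a formal fault model.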

▶ 2006

  • A Type-theoretic Interpretation of Pointcuts and Advice. Jay Ligatti, David Walker, and Steve Zdancewic. Science of Computer Programming: Special Issue on Foundations of Aspect-Oriented Programming, Vol 63, No 3, pp 240-266. Elsevier, December 2006. Local version. [BibTeX] This article defines the semantics of MinAML, an idealized aspect-oriented programming language, by giving a type-directed translation from a user-friendly external language to a compact, well-defined core language. We argue that our framework is an effective way to give semantics to aspect-oriented programming languages in general because the translation eliminates shallow syntactic differences between related constructs and permits definition of an elegant and extensible core language.

    The core language extends the simply-typed lambda calculus with two central new abstractions: explicitly labeled program points and first-class advice. The labels serve both to trigger advice and to mark continuations that the advice may return to. These constructs are defined orthogonally to the other features of the language and we show that our abstractions can be used in both functional and object-oriented contexts. We prove Preservation and Progress lemmas for our core language and show that the translation from MinAML source into core is type-preserving. Together these two results imply that the source language is type safe. We also consider several extensions to our basic framework including a general mechanism for analyzing the current call stack.
    [Abstract]
  • Static Typing for a Faulty Lambda Calculus. David Walker, Lester Mackey, Jay Ligatti, George Reis, and David August. Proceedings of the ACM SIGPLAN International Conference on Functional Programming (ICFP), September 2006. [BibTeX] A transient hardware fault occurs when an energetic particle strikes a transistor, causing it to change state. These faults do not cause permanent damage, but may result in incorrect program execution by altering signal transfers or stored values. While the likelihood that such transient faults will cause any significant damage may seem remote, over the last several years transient faults have caused costly failures in high-end machines at America Online, eBay, and the Los Alamos Neutron Science Center, among others [6, 44, 15]. Because susceptibility to transient faults is proportional to the size and density of transistors, the problem of transient faults will become increasingly important in the coming decades.

    This paper defines the first formal, type-theoretic framework for studying reliable computation in the presence of transient faults. More specifically, it defines λzap, a lambda calculus that exhibits intermittent data faults. In order to detect and recover from these faults, λzap programs replicate intermediate computations and use majority voting, thereby modeling software-based fault tolerance techniques studied extensively, but informally [10, 20, 30, 31, 32, 33, 41].

    To ensure that programs maintain the proper invariants and use λzap primitives correctly, the paper defines a type system for the language. This type system guarantees that well-typed programs can tolerate any single data fault. To demonstrate that λzap can serve as an idealized typed intermediate language, we define a type-preserving translation from a standard simply-typed lambda calculus into λzap.
    [Abstract]
  • Policy Enforcement via Program Monitoring. Jarred Adam Ligatti. PhD thesis, Princeton University, June 2006. [BibTeX] One way to guarantee that software behaves securely is to monitor programs at run time and check that they dynamically adhere to constraints specified by a security policy. Whenever a program monitor detects that untrusted software is attempting to execute a dangerous action, it takes remedial steps to ensure that only safe code actually gets executed. This thesis considers the space of policies enforceable by monitoring the run-time behaviors of programs and develops a practical language for specifying monitors’ policies.

    In order to delineate the space of policies that monitors can enforce, we first have to define exactly what it means for a monitor to enforce a policy. We therefore begin by building a formal framework for analyzing policy enforcement; we precisely define policies, monitors, and enforcement. Having this framework allows us to consider the enforcement powers of program monitors and prove that they enforce an interesting set of policies that we define and call the infinite renewal properties. We show how, when given any reasonable infinite renewal property, to construct a program monitor that provably enforces that policy.

    In practice, the security policies enforced by program monitors grow more complex both as the monitored software is given new capabilities and as policies are refined in response to attacks and user feedback. We propose dealing with policy complexity by organizing policies in such a way as to make them composable, so that complex policies can be specified more simply as compositions of smaller subpolicy modules. We present a fully implemented language and system called Polymer that allows security engineers to specify and enforce composable policies on Java applications. We also formalize the central workings of Polymer by defining an unambiguous semantics for our language.
    [Abstract]

▶ 2005

  • Control-Flow Integrity: Principles, Implementations, and Applications. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS), November 2005. [BibTeX] Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions. [Abstract]
  • A Theory of Secure Control Flow. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Proceedings of the 7th International Conference on Formal Engineering Methods (ICFEM), November 2005. [BibTeX] Control-Flow Integrity (CFI) means that the execution of a program dynamically follows only certain paths, in accordance with a static policy. CFI can prevent attacks that, by exploiting buffer overflows and other vulnerabilities, attempt to control program behavior. This paper develops the basic theory that underlies two practical techniques for CFI enforcement, with precise formulations of hypotheses and guarantees. [Abstract]
  • Enforcing Non-safety Security Policies with Program Monitors. Jay Ligatti, Lujo Bauer, and David Walker. Proceedings of the 10th European Symposium on Research in Computer Security (ESORICS), September 2005. [BibTeX] We consider the enforcement powers of program monitors, which intercept security-sensitive actions of a target application at run time and take remedial steps whenever the target attempts to execute a potentially dangerous action. A common belief in the security community is that program monitors, regardless of the remedial steps available to them when detecting violations, can only enforce safety properties. We formally analyze the properties enforceable by various program monitors and find that although this belief is correct when considering monitors with simple remedial options, it is incorrect for more powerful monitors that can be modeled by edit automata. We define an interesting set of properties called infinite renewal properties and demonstrate how, when given any reasonable infinite renewal property, to construct an edit automaton that provably enforces that property. We analyze the set of infinite renewal properties and show that it includes every safety property, some liveness properties, and some properties that are neither safety nor liveness. [Abstract]
  • Composing Security Policies with Polymer. Lujo Bauer, Jay Ligatti, and David Walker. Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), June 2005. [BibTeX] We introduce a language and system that supports definition and composition of complex run-time security policies for Java applications. Our policies comprise two sorts of methods. The first sort, query methods, are called whenever an untrusted application tries to execute a security-sensitive action; a query method returns a suggestion indicating how the security-sensitive action should be handled. The second sort performs state updates as the policy’s suggestions are followed.

    The structure of our policies facilitates composition, as policies can query other policies for suggestions. In order to give programmers control over policy composition, we have designed the system so that policies, suggestions, and application events are all first-class objects that a higher-order policy may manipulate. We show how to use these programming features by developing a library of policy combinators.

    Our system is fully implemented, and we have defined a formal semantics for an idealized subset of the language containing all of the key features. We demonstrate the effectiveness of our system by implementing a large-scale security policy for an email client.
    [Abstract]
  • Control-Flow Integrity. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Technical Report MSR-TR-2005-18, Microsoft Research, February 2005 (revised June 2005). [BibTeX] Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions. [Abstract]
  • A Theory of Secure Control Flow. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Technical Report MSR-TR-2005-17, Microsoft Research, February 2005 (revised June 2005). [BibTeX] Control-Flow Integrity (CFI) means that the execution of a program dynamically follows only certain paths, in accordance with a static policy. CFI can prevent attacks that, by exploiting buffer overflows and other vulnerabilities, attempt to control program behavior. This paper develops the basic theory that underlies two practical techniques for CFI enforcement, with precise formulations of hypotheses and guarantees. [Abstract]
  • Enforcing Non-safety Security Policies with Program Monitors. Jay Ligatti, Lujo Bauer, and David Walker. Technical Report TR-720-05, Princeton University, January 2005 (revised June 2005). [BibTeX] We consider the enforcement powers of program monitors, which intercept security-sensitive actions of a target application at run time and take remedial steps whenever the target attempts to execute a potentially dangerous action. A common belief in the security community is that program monitors, regardless of the remedial steps available to them when detecting violations, can only enforce safety properties. We formally analyze the properties enforceable by various program monitors and find that although this belief is correct when considering monitors with simple remedial options, it is incorrect for more powerful monitors that can be modeled by edit automata. We define an interesting set of properties called infinite renewal properties and demonstrate how, when given any reasonable infinite renewal property, to construct an edit automaton that provably enforces that property. We analyze the set of infinite renewal properties and show that it includes every safety property, some liveness properties, and some properties that are neither safety nor liveness. [Abstract]
  • Edit Automata: Enforcement Mechanisms for Run-time Security Policies. Jay Ligatti, Lujo Bauer, and David Walker. International Journal of Information Security (IJIS), Vol 4, No 1-2, pp 2-16. Springer-Verlag, February 2005. Local version. [BibTeX] We analyze the space of security policies that can be enforced by monitoring and modifying programs at run time. Our program monitors, called edit automata, are abstract machines that examine the sequence of application program actions and transform the sequence when it deviates from a specified policy. Edit automata have a rich set of transformational powers: They may terminate the application, thereby truncating the program action stream; they may suppress undesired or dangerous actions without necessarily terminating the program; and they may also insert additional actions into the event stream.

    After providing a formal definition of edit automata, we develop a rigorous framework for reasoning about them and their cousins: truncation automata (which can only terminate applications), suppression automata (which can terminate applications and suppress individual actions), and insertion automata (which can terminate and insert). We give a set-theoretic characterization of the policies each sort of automaton can enforce, and we provide examples of policies that can be enforced by one sort of automaton but not another.
    [Abstract]

▶ 2004

  • A Language and System for Composing Security Policies. Lujo Bauer, Jay Ligatti, and David Walker. Technical Report TR-699-04, Princeton University, January 2004. [BibTeX] We introduce a new language and system that allows security architects to develop well-structured and easy-to-maintain security policies for Java applications. In our system, policies are first-class objects. Consequently, programmers can define parameterized meta-policies that act as policy combinators and policy modifiers, so that complex security policies can be implemented by composing simple base policies. We demonstrate the effectiveness of our design by building up a library of powerful policy combinators and showing how they can be used. We also describe some issues we encountered while implementing our system and provide performance results. [Abstract]

▶ 2003

  • Types and Effects for Non-interfering Program Monitors. Lujo Bauer, Jarred Ligatti, and David Walker. In M. Okada, B. Pierce, A. Scedrov, H. Tokuda, and A. Yonezawa, editors, Lecture Notes in Computer Science: Software Security - Theories and Systems (Revised Papers of the 2002 Mext-NSF-JSPS International Symposium), Vol 2609, pp 154-171. Springer-Verlag, November 2003. [BibTeX] A run-time monitor is a program that runs in parallel with an untrusted application and examines actions from the application's instruction stream. If the sequence of program actions deviates from a specified security policy, the monitor transforms the sequence or terminates the program. We present the design and formal specification of a language for defining the policies enforced by program monitors. Our language provides a number of facilities for composing complex policies from simpler ones. We allow policies to be parameterized by values or other policies, and we define operators for forming the conjunction and disjunction of policies. Since the computations that implement these policies modify program behavior, naive composition of computations does not necessarily produce the conjunction (or disjunction) of the policies that the computations implement separately. We use a type and effect system to ensure that computations do not interfere with one another when they are composed. [Abstract]
  • A Theory of Aspects. David Walker, Steve Zdancewic, and Jay Ligatti. Proceedings of the ACM SIGPLAN International Conference on Functional Programming (ICFP), August 2003. [BibTeX] This paper defines the semantics of MinAML, an idealized aspect-oriented programming language, by giving a type-directed translation from its user-friendly external language to its compact, well-defined core language. We argue that our framework is an effective way to give semantics to aspect-oriented programming languages in general because the translation eliminates shallow syntactic differences between related constructs and permits definition of a clean, easy-to-understand, and easy-to-reason-about core language.

    The core language extends the simply-typed lambda calculus with two central new abstractions: explicitly labeled program points and first-class advice. The labels serve both to trigger advice and to mark continuations that the advice may return to. These constructs are defined orthogonally to the other features of the language and we show that our abstractions can be used in both functional and object-oriented contexts. The labels are well-scoped and the language as a whole is well-typed. Consequently, programmers can use lexical scoping in the standard way to prevent aspects from interfering with local program invariants.
    [Abstract]
  • Edit Automata: Enforcement Mechanisms for Run-time Security Policies. Jay Ligatti, Lujo Bauer, and David Walker. Technical Report TR-681-03, Princeton University, May 2003. [BibTeX] We analyze the space of security policies that can be enforced by monitoring and modifying programs at run time. Our program monitors, called edit automata, are abstract machines that examine the sequence of application program actions and transform the sequence when it deviates from a specified policy. Edit automata have a rich set of transformational powers: They may terminate the application, thereby truncating the program action stream; they may suppress undesired or dangerous actions without necessarily terminating the program; and they may also insert additional actions into the event stream.

    After providing a formal definition of edit automata, we develop a rigorous framework for reasoning about them and their cousins: truncation automata (which can only terminate applications), suppression automata (which can terminate applications and suppress individual actions), and insertion automata (which can terminate and insert). We give a set-theoretic characterization of the policies each sort of automaton can enforce and we provide examples of policies that can be enforced by one sort of automaton but not another.
    [Abstract]

▶ 2002

  • A Calculus for Composing Security Policies. Lujo Bauer, Jarred Ligatti, and David Walker. Technical Report TR-655-02, Princeton University, August 2002. [BibTeX] A runtime monitor is a program that runs in parallel with an untrusted application and examines actions from the application's instruction stream. If the sequence of program actions deviates from a specified security policy, the monitor transforms the sequence or terminates the program. We present the design and formal specification of a language for defining the policies enforced by program monitors.

    Our language provides a number of facilities for composing complex policies from simpler ones. We allow policies to be parameterized by values, or other policies. There are also operators for forming the conjunction and disjunction of policies. Since the computations that implement these policies modify program behavior, naive composition of computations does not necessarily produce the conjunction (or disjunction) of the policies that the computations implement separately. We use a type and effect system to ensure that computations do not interfere with one another when they are composed. We also present a preliminary implementation of our language.
    [Abstract]
  • More Enforceable Security Policies. Lujo Bauer, Jarred Ligatti, and David Walker. Foundations of Computer Security Workshop (FCS) '02 (associated with LICS '02), July 2002. [BibTeX] We analyze the space of security policies that can be enforced by monitoring programs at runtime. Our program monitors are automata that examine the sequence of program actions and transform the sequence when it deviates from the specified policy. The simplest such automaton truncates the action sequence by terminating a program. Such automata are commonly known as security automata, and they enforce Schneider's EM class of security policies. We define automata with more powerful transformational abilities, including the ability to insert a sequence of actions into the event stream and to suppress actions in the event stream without terminating the program. We give a set-theoretic characterization of the policies these new automata are able to enforce and show that they are a superset of the EM policies. [Abstract]
  • More Enforceable Security Policies. Lujo Bauer, Jarred Ligatti, and David Walker. Technical Report TR-649-02, Princeton University, July 2002. [BibTeX] We analyze the space of security policies that can be enforced by monitoring programs at runtime. Our program monitors are automata that examine the sequence of program actions and transform the sequence when it deviates from the specified policy. The simplest such automaton truncates the action sequence by terminating a program. Such automata are commonly known as security automata, and they enforce Schneider's EM class of security policies. We define automata with more powerful transformational abilities, including the ability to insert a sequence of actions into the event stream and to suppress actions in the event stream without terminating the program. We give a set-theoretic characterization of the policies these new automata are able to enforce and show that they are a superset of the EM policies. [Abstract]

Venue type: Journals, Conferences, Workshops, Theses, Technical Reports

  • Design of Adiabatic Dynamic Differential Logic for DPA-Resistant Secure Integrated Circuits. Matthew Morrison, Nagarajan Ranganathan, and Jay Ligatti. IEEE Transactions on Very Large Scale Integration Systems (TVLSI), Vol 23, No 8, pp 1381-1389. IEEE, August 2015. [BibTeX] Production of cost-effective secure integrated chips, such as smart cards, requires hardware designers to consider tradeoffs in size, security, and power consumption. To design successful security-centric designs, the low-level hardware must contain built-in protection mechanisms that supplement cryptographic algorithms, such as the Advanced Encryption Standard and Triple Data Encryption Standard, by preventing side-channel attacks such as differential power analysis (DPA). Dynamic logic obfuscates the output waveforms and the circuit operation, reducing the effectiveness of the DPA attack. For stronger mitigation of DPA attacks, we propose the implementation of adiabatic dynamic differential logic (ADDL) for applications in secure integrated circuit (IC) design. Such an approach is effective in reducing power consumption, demonstrated using HSPICE simulations with 22-nm predictive technology. The benefits of our design are demonstrated by comparing instantaneous power waveforms and observing the magnitude of differential power spikes during switching events. First, simulation results for body biasing on subthreshold adiabatic inverters show an improvement in differential power of up to 43.28% over similar inverters without body biasing. Then, a high-performance ADDL is presented for an implementation in high-frequency secure ICs. This method improves the differential power over previous dynamic and differential logic methods by up to 89.65%. Finally, we propose a body-biased ADDL for ultralow power applications. Simulation results show that the differential power was improved by a factor of 199.16. [Abstract]
  • Modeling Runtime Enforcement with Mandatory Results Automata. Egor Dolzhenko, Jay Ligatti, and Srikar Reddy. International Journal of Information Security, Vol 14, No 1, pp 47-60. Springer, February 2015. (preliminary version) [BibTeX] This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. The operational semantics of MRAs is simple and enables straightforward definitions of concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are simple and expressive. Putting all of these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies deterministic and nondeterministic MRAs can and cannot enforce. [Abstract]
  • A Location-based Policy-specification Language for Mobile Devices. Joshua Finnis, Nalin Saigal, Adriana Iamnitchi, and Jay Ligatti. Pervasive and Mobile Computing Journal, Vol 8, No 3, pp 402-414. Elsevier, June 2012. Local version. [BibTeX] The dramatic rise in mobile applications has greatly increased threats to the security and privacy of users. Security mechanisms on mobile devices are currently limited, so users need more expressive ways to ensure that downloaded mobile applications do not act maliciously. Policy-specification languages were created for this purpose; they allow the enforcement of user-defined policies on third-party applications. We have implemented LoPSiL, a location-based policy-specification language for mobile devices. This article describes LoPSiL’s design and implementation, several example policies, and experiments that demonstrate LoPSiL’s viability for enforcing policies on mobile devices. [Abstract]
  • PoliSeer: A Tool for Managing Complex Security Policies. Daniel Lomsak and Jay Ligatti. Journal of Information Processing, Vol 19, pp 292-306, Information Processing Society of Japan, July 2011. [BibTeX] Complex software-security policies are difficult to specify, understand, and update. The same is true for complex software in general, but while many tools and techniques exist for decomposing complex general software into simpler reusable modules (packages, classes, functions, aspects, etc.), few tools exist for decomposing complex security policies into simpler reusable modules. The tools that do exist for modularizing policies either encapsulate entire policies as atomic modules that cannot be decomposed or allow fine-grained policy modularization but require expertise to use correctly. This paper presents PoliSeer, a GUI-based tool designed to enable users who are not expert policy engineers to flexibly specify, visualize, modify, and enforce complex runtime policies on untrusted software. PoliSeer users rely on expert policy engineers to specify universally composable policy modules; PoliSeer users then build complex policies by composing those expert-written modules. This paper describes the design and implementation of PoliSeer and a case study in which we have used PoliSeer to specify and enforce a policy on PoliSeer itself. [Abstract]
  • Control-Flow Integrity: Principles, Implementations, and Applications. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. ACM Transactions on Information and System Security, Vol 13, No 1, pp 1-40. ACM Press, October 2009. Local version. [BibTeX] Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions. [Abstract]
  • Composing Expressive Runtime Security Policies. Lujo Bauer, Jay Ligatti, and David Walker. ACM Transactions on Software Engineering and Methodology, Vol 18, No 3, pp 1-43. ACM Press, May 2009. Local version. [BibTeX] Program monitors enforce security policies by interposing themselves into the control flow of untrusted software whenever that software attempts to execute security-relevant actions. At the point of interposition, a monitor has authority to permit or deny (perhaps conditionally) the untrusted software’s attempted action. Program monitors are common security enforcement mechanisms and integral parts of operating systems, virtual machines, firewalls, network auditors, and anti-virus and anti-spyware tools.

    Unfortunately, the run-time policies we require program monitors to enforce grow more complex both as the monitored software is given new capabilities and as policies are refined in response to attacks and user feedback. We propose dealing with policy complexity by organizing policies in such a way as to make them composable, so that complex policies can be specified more simply as compositions of smaller subpolicy modules. We present a fully implemented language and system called Polymer that allows security engineers to specify and enforce composable policies on Java applications. We formalize the central workings of Polymer by defining an unambiguous semantics for our language. Using this formalization, we state and prove an uncircumventability theorem, which guarantees that monitors will intercept all security-relevant actions of untrusted software.
    [Abstract]
  • Run-Time Enforcement of Nonsafety Policies. Jay Ligatti, Lujo Bauer, and David Walker. ACM Transactions on Information and System Security, Vol 12, No 3, pp 1-41. ACM Press, January 2009. Local version. [BibTeX] A common mechanism for ensuring that software behaves securely is to monitor programs at run time and check that they dynamically adhere to constraints specified by a security policy. Whenever a program monitor detects that untrusted software is attempting to execute a dangerous action, it takes remedial steps to ensure that only safe code actually gets executed.

    This article improves our understanding of the space of policies enforceable by monitoring the run-time behaviors of programs. We begin by building a formal framework for analyzing policy enforcement: we precisely define policies, monitors, and enforcement. This framework allows us to prove that monitors enforce an interesting set of policies that we call the infinite renewal properties. We show how to construct a program monitor that provably enforces any reasonable infinite renewal property. We also show that the set of infinite renewal properties includes some nonsafety policies, i.e., that monitors can enforce some nonsafety (including some purely liveness) policies. Finally, we demonstrate concrete examples of nonsafety policies enforceable by practical run-time monitors.
    [Abstract]
  • A Type-theoretic Interpretation of Pointcuts and Advice. Jay Ligatti, David Walker, and Steve Zdancewic. Science of Computer Programming: Special Issue on Foundations of Aspect-Oriented Programming, Vol 63, No 3, pp 240-266. Elsevier, December 2006. Local version. [BibTeX] This article defines the semantics of MinAML, an idealized aspect-oriented programming language, by giving a type-directed translation from a user-friendly external language to a compact, well-defined core language. We argue that our framework is an effective way to give semantics to aspect-oriented programming languages in general because the translation eliminates shallow syntactic differences between related constructs and permits definition of an elegant and extensible core language.

    The core language extends the simply-typed lambda calculus with two central new abstractions: explicitly labeled program points and first-class advice. The labels serve both to trigger advice and to mark continuations that the advice may return to. These constructs are defined orthogonally to the other features of the language and we show that our abstractions can be used in both functional and object-oriented contexts. We prove Preservation and Progress lemmas for our core language and show that the translation from MinAML source into core is type-preserving. Together these two results imply that the source language is type safe. We also consider several extensions to our basic framework including a general mechanism for analyzing the current call stack.
    [Abstract]
  • Edit Automata: Enforcement Mechanisms for Run-time Security Policies. Jay Ligatti, Lujo Bauer, and David Walker. International Journal of Information Security, Vol 4, No 1-2, pp 2-16. Springer-Verlag, February 2005. Local version. [BibTeX] We analyze the space of security policies that can be enforced by monitoring and modifying programs at run time. Our program monitors, called edit automata, are abstract machines that examine the sequence of application program actions and transform the sequence when it deviates from a specified policy. Edit automata have a rich set of transformational powers: They may terminate the application, thereby truncating the program action stream; they may suppress undesired or dangerous actions without necessarily terminating the program; and they may also insert additional actions into the event stream.

    After providing a formal definition of edit automata, we develop a rigorous framework for reasoning about them and their cousins: truncation automata (which can only terminate applications), suppression automata (which can terminate applications and suppress individual actions), and insertion automata (which can terminate and insert). We give a set-theoretic characterization of the policies each sort of automaton can enforce, and we provide examples of policies that can be enforced by one sort of automaton but not another.
    [Abstract]
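    As an illustrative aside, the transformational powers described in this abstract (truncation, suppression, and insertion) can be sketched in a few lines of Python. This is a minimal sketch, not the paper's formal automaton model; the monitor loop, action names, and example policy are all hypothetical.

```python
# Toy sketch of an edit automaton: a run-time monitor that transforms an
# action stream according to a policy's verdicts. Illustrative only.

def edit_automaton(actions, policy):
    """Consume a stream of actions; pass, suppress, or truncate them,
    or insert remedial actions, per the policy's verdict for each."""
    output = []
    for a in actions:
        verdict = policy(a)
        if verdict == "allow":
            output.append(a)           # pass the action through unchanged
        elif verdict == "suppress":
            continue                   # drop the action, keep running
        elif verdict == "truncate":
            break                      # terminate the application
        else:                          # ("insert", [extra actions])
            output.extend(verdict[1])  # insert remedial actions first
            output.append(a)
    return output

# Hypothetical policy: every "open" must be preceded by a "log" action.
def log_before_open(action):
    if action == "open":
        return ("insert", ["log"])
    return "allow"

print(edit_automaton(["read", "open", "close"], log_before_open))
# -> ['read', 'log', 'open', 'close']
```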
  • A Theory of Gray Security Policies. Donald Ray and Jay Ligatti. Proceedings of the European Symposium on Research in Computer Security (ESORICS), September 2015. [BibTeX] This paper generalizes traditional models of security policies, from specifications of whether programs are secure, to specifications of how secure programs are. This is a generalization from qualitative, black-and-white policies to quantitative, gray policies. Included are generalizations from traditional definitions of safety and liveness policies to definitions of gray-safety and gray-liveness policies. These generalizations preserve key properties of safety and liveness, including that the intersection of safety and liveness is a unique allow-all policy and that every policy can be written as the conjunction of a single safety and a single liveness policy. It is argued that the generalization provides several benefits, including that it serves as a unifying framework for disparate approaches to security metrics, and that it separates---in a practically useful way---specifications of how secure systems are from specifications of how secure users require their systems to be. [Abstract]
  • Defining Injection Attacks. Donald Ray and Jay Ligatti. Proceedings of the 17th International Information Security Conference (ISC), October 2014. [BibTeX] This paper defines and analyzes injection attacks. The definition is based on the NIE property, which states that an application's untrusted inputs must only produce Noncode Insertions or Expansions in output programs (e.g., SQL queries). That is, when applications generate output programs based on untrusted inputs, the NIE property requires that inputs only affect output programs by inserting or expanding noncode tokens (e.g., string and float literals, lambda values, pointers, etc). This paper calls attacks based on violating the NIE property BroNIEs (i.e., Broken NIEs) and shows that all code-injection attacks are BroNIEs. In addition, BroNIEs contain many malicious injections that do not involve injections of code; we call such attacks noncode-injection attacks. In order to mitigate both code- and noncode-injection attacks, this paper presents an algorithm for detecting and preventing BroNIEs. [Abstract]
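    The intuition behind the NIE property can be illustrated with a toy sketch: untrusted input should change the output program only within noncode tokens such as string literals. This is not the paper's detection algorithm; the query builder, tokenizer, and comparison below are simplified, hypothetical stand-ins.

```python
# Toy illustration of the NIE intuition: compare the code (non-literal)
# tokens of output programs built from benign vs. untrusted input. If the
# untrusted input changed anything besides literal contents, flag it.

import re

def build_query(username):
    # Unsafe construction: input is spliced directly into the query text.
    return f"SELECT * FROM users WHERE name = '{username}'"

def tokens(query):
    # Toy tokenizer: single-quoted string literals vs. everything else.
    return re.findall(r"'[^']*'|\S+", query)

def injects_code(benign_input, untrusted_input):
    base = tokens(build_query(benign_input))
    test = tokens(build_query(untrusted_input))
    noncode = lambda ts: [t for t in ts if not t.startswith("'")]
    return noncode(base) != noncode(test)

print(injects_code("alice", "bob"))           # False: only a literal changed
print(injects_code("alice", "x' OR '1'='1"))  # True: new code tokens appear
```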
  • Fingerprinting Far Proximity from Radio Emissions. Tao Wang, Yao Liu, and Jay Ligatti. Proceedings of the European Symposium on Research in Computer Security (ESORICS), September 2014. [BibTeX] As wireless mobile devices become increasingly pervasive and are adopted in critical applications, it is ever more important to measure the physical proximity of these devices in a secure way. Although various techniques have been developed to identify whether a device is close, the problem of identifying far proximity (i.e., that a target is at least a certain distance away) has been neglected by the research community. Meanwhile, verifying far proximity is desirable and critical for enhancing the security of emerging wireless applications. In this paper, we propose a secure far-proximity identification approach that determines whether or not a remote device is far away. The key idea of the proposed approach is to estimate the far proximity from the unforgeable "fingerprint" of the proximity. We have validated and evaluated the effectiveness of the proposed far-proximity identification method through experiments on real measured channel data. The experiment results show that the proposed approach can detect far proximity with a success rate of 0.85 for the non-line-of-sight (NLoS) scenario, and the success rate can be further increased to 0.99 for the line-of-sight (LoS) scenario. [Abstract]
  • Defining Code-injection Attacks. Donald Ray and Jay Ligatti. Proceedings of the ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), January 2012. [BibTeX] This paper shows that existing definitions of code-injection attacks (e.g., SQL-injection attacks) are flawed. The flaws make it possible for attackers to circumvent existing mechanisms, by supplying code-injecting inputs that are not recognized as such. The flaws also make it possible for benign inputs to be treated as attacks. After describing these flaws in conventional definitions of code-injection attacks, this paper proposes a new definition, which is based on whether the symbols input to an application get used as (normal-form) values in the application’s output. Because values are already fully evaluated, they cannot be considered “code” when injected. This simple new definition of code-injection attacks avoids the problems of existing definitions, improves our understanding of how and when such attacks occur, and enables us to evaluate the effectiveness of mechanisms for mitigating such attacks. [Abstract]
  • A Theory of Runtime Enforcement, with Results. Jay Ligatti and Srikar Reddy. Proceedings of the European Symposium on Research in Computer Security (ESORICS), September 2010. [BibTeX] This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. Because previous work could not model monitors transforming results, MRAs capture realistic behaviors outside the scope of previous models. MRAs also have a simple but realistic operational semantics that makes it straightforward to define concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are significantly simpler and more expressive than those of previous models. Putting all these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies MRAs can and cannot enforce. [Abstract]
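    The distinguishing feature of MRAs, mediating results as well as actions, can be sketched conceptually as follows. This is a toy illustration, not the paper's automaton model; the sanitization and redaction steps are made up for the example.

```python
# Toy sketch of the MRA idea: the monitor sits between the application and
# the executing system, transforming both the outgoing action and the
# result returned to the application. Names are illustrative.

def execute(action):
    # Stand-in for the executing system (e.g., the OS).
    return f"result-of({action})"

def mra_monitor(action):
    # Transform the security-relevant action before it executes...
    safe_action = action.replace("../", "")        # toy sanitization
    result = execute(safe_action)
    # ...and transform the result before the application sees it.
    return result.replace("secret", "***")         # toy redaction

print(mra_monitor("open ../passwd"))  # -> result-of(open passwd)
```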
  • A Packet-classification Algorithm for Arbitrary Bitmask Rules, with Automatic Time-space Tradeoffs. Jay Ligatti, Josh Kuhn, and Chris Gage. Proceedings of the IEEE International Conference on Computer Communication Networks (ICCCN), August 2010. [BibTeX] We present an algorithm for classifying packets according to arbitrary (including noncontiguous) bitmask rules. As its principal novelty, the algorithm is parameterized by the amount of memory available and can customize its data structures to optimize classification time without exceeding a given memory bound. The algorithm thus automatically trades time for space efficiency as needed. The two extremes of this time-space tradeoff (linear search through the rules versus a single table that maps every possible packet to its class number) are special cases of the general algorithm we present. Additional features of the algorithm include its simplicity, its open-source prototype implementation, its good performance even with worst-case rule sets, and its extendability to handle range rules and dynamic updates to rule sets. [Abstract]
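    One extreme of the time-space tradeoff mentioned in this abstract, linear search through the rules, can be sketched as follows. The rule set and bit layout are invented for illustration; the paper's algorithm generalizes this and the opposite extreme (a single lookup table) under a memory bound.

```python
# Toy linear-search classifier for arbitrary (noncontiguous) bitmask
# rules: a packet matches a rule when its bits agree with the rule's
# value at every position where the mask is 1. First match wins.

RULES = [  # (mask, value, class) -- all values are hypothetical
    (0b1111_0000, 0b1010_0000, "voip"),
    (0b0000_1111, 0b0000_0011, "web"),
    (0b0000_0000, 0b0000_0000, "default"),  # matches everything
]

def classify(packet_bits):
    for mask, value, cls in RULES:
        if packet_bits & mask == value:
            return cls

print(classify(0b1010_0101))  # -> voip (first rule matches)
```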
  • PoliSeer: A Tool for Managing Complex Security Policies. Daniel Lomsak and Jay Ligatti. Proceedings of the International Conference on Trust Management (IFIP-TM), June 2010. [BibTeX] Few tools exist for decomposing complex security policies into simpler modules. The policy-engineering tools that do exist either encapsulate entire policies as atomic, indecomposable modules or allow fine-grained modularization but are complicated and lack policy-visualization capabilities. This paper briefly presents PoliSeer, the first tool we are aware of that allows complex policies to be specified, visualized, modified, and enforced as compositions of simpler policy modules. [Abstract]
  • Inline Visualization of Concerns. Nalin Saigal and Jay Ligatti. Proceedings of the ACIS International Conference on Software Engineering Research, Management, and Applications (SERA), December 2009. [BibTeX] Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. This paper describes IVCon, a tool with a novel approach for completely modularizing CCCs. IVCon enables users to create, examine, and modify their code in two different views: the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon aims to provide an easy-to-use interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views. [Abstract]
  • LoPSiL: A Location-based Policy-specification Language. Jay Ligatti, Billy Rickey, and Nalin Saigal. Proceedings of the International ICST Conference on Security and Privacy in Mobile Information and Communication Systems (MobiSec), June 2009. [BibTeX] This paper describes the design of LoPSiL, a language for specifying location-dependent security and privacy policies. Policy-specification languages like LoPSiL are domain-specific programming languages intended to simplify the tasks of specifying and enforcing sound security policies on untrusted (i.e., potentially insecure) software. As far as we are aware, LoPSiL is the first imperative policy-specification language to provide abstractions specifically tailored to location-dependent policies for mobile-device applications. We have implemented a proof-of-concept compiler that inputs a LoPSiL policy P and a mobile-device application program A and outputs a new application program A’ equivalent to A, except that A’ contains inlined enforcement code that ensures that A’ satisfies P at runtime. We report our experiences using this compiler to design and implement several policies for mobile-device applications. [Abstract]
  • Fault-tolerant Typed Assembly Language. Frances Perry, Lester Mackey, George Reis, Jay Ligatti, David August, and David Walker. Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), June 2007. [BibTeX] A transient hardware fault occurs when an energetic particle strikes a transistor, causing it to change state. Although transient faults do not permanently damage the hardware, they may corrupt computations by altering stored values and signal transfers. In this paper, we propose a new scheme for provably safe and reliable computing in the presence of transient hardware faults. In our scheme, software computations are replicated to provide redundancy while special instructions compare the independently computed results to detect errors before writing critical data. In stark contrast to any previous efforts in this area, we have analyzed our fault tolerance scheme from a formal, theoretical perspective. To be specific, first, we provide an operational semantics for our assembly language, which includes a precise formal definition of our fault model. Second, we develop an assembly-level type system designed to detect reliability problems in compiled code. Third, we provide a formal specification for program fault tolerance under the given fault model and prove that all well-typed programs are indeed fault tolerant. In addition to the formal analysis, we evaluate our detection scheme and show that it only takes 34% longer to execute than the unreliable version. [Abstract]
  • Static Typing for a Faulty Lambda Calculus. David Walker, Lester Mackey, Jay Ligatti, George Reis, and David August. Proceedings of the ACM SIGPLAN International Conference on Functional Programming (ICFP), September 2006. [BibTeX] A transient hardware fault occurs when an energetic particle strikes a transistor, causing it to change state. These faults do not cause permanent damage, but may result in incorrect program execution by altering signal transfers or stored values. While the likelihood that such transient faults will cause any significant damage may seem remote, over the last several years transient faults have caused costly failures in high-end machines at America Online, eBay, and the Los Alamos Neutron Science Center, among others [6, 44, 15]. Because susceptibility to transient faults is proportional to the size and density of transistors, the problem of transient faults will become increasingly important in the coming decades.

    This paper defines the first formal, type-theoretic framework for studying reliable computation in the presence of transient faults. More specifically, it defines λzap, a lambda calculus that exhibits intermittent data faults. In order to detect and recover from these faults, λzap programs replicate intermediate computations and use majority voting, thereby modeling software-based fault tolerance techniques studied extensively, but informally [10, 20, 30, 31, 32, 33, 41].

    To ensure that programs maintain the proper invariants and use λzap primitives correctly, the paper defines a type system for the language. This type system guarantees that well-typed programs can tolerate any single data fault. To demonstrate that λzap can serve as an idealized typed intermediate language, we define a type-preserving translation from a standard simply-typed lambda calculus into λzap.
    [Abstract]
  • Control-Flow Integrity: Principles, Implementations, and Applications. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS), November 2005. [BibTeX] Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions. [Abstract]
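    The protected shadow call stack mentioned in this abstract can be illustrated with a purely conceptual sketch; the paper's actual mechanism operates on machine code via software rewriting, not at this level.

```python
# Toy sketch of the shadow-call-stack idea: return addresses are pushed
# to a separate, protected stack on call and checked on return, so a
# corrupted return address is detected as a CFI violation.

shadow = []

def on_call(ret_addr):
    shadow.append(ret_addr)

def on_return(ret_addr):
    return shadow.pop() == ret_addr

on_call(0x4242)
print("ok" if on_return(0x4242) else "CFI violation")  # -> ok

on_call(0x4242)
print("ok" if on_return(0xBEEF) else "CFI violation")  # -> CFI violation
```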
  • A Theory of Secure Control Flow. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Proceedings of the 7th International Conference on Formal Engineering Methods (ICFEM), November 2005. [BibTeX] Control-Flow Integrity (CFI) means that the execution of a program dynamically follows only certain paths, in accordance with a static policy. CFI can prevent attacks that, by exploiting buffer overflows and other vulnerabilities, attempt to control program behavior. This paper develops the basic theory that underlies two practical techniques for CFI enforcement, with precise formulations of hypotheses and guarantees. [Abstract]
  • Enforcing Non-safety Security Policies with Program Monitors. Jay Ligatti, Lujo Bauer, and David Walker. Proceedings of the 10th European Symposium on Research in Computer Security (ESORICS), September 2005. [BibTeX] We consider the enforcement powers of program monitors, which intercept security-sensitive actions of a target application at run time and take remedial steps whenever the target attempts to execute a potentially dangerous action. A common belief in the security community is that program monitors, regardless of the remedial steps available to them when detecting violations, can only enforce safety properties. We formally analyze the properties enforceable by various program monitors and find that although this belief is correct when considering monitors with simple remedial options, it is incorrect for more powerful monitors that can be modeled by edit automata. We define an interesting set of properties called infinite renewal properties and demonstrate how, when given any reasonable infinite renewal property, to construct an edit automaton that provably enforces that property. We analyze the set of infinite renewal properties and show that it includes every safety property, some liveness properties, and some properties that are neither safety nor liveness. [Abstract]
  • Composing Security Policies with Polymer. Lujo Bauer, Jay Ligatti, and David Walker. Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), June 2005. [BibTeX] We introduce a language and system that supports definition and composition of complex run-time security policies for Java applications. Our policies are composed of two sorts of methods. Query methods are called whenever an untrusted application tries to execute a security-sensitive action; a query method returns a suggestion indicating how the security-sensitive action should be handled. The second sort of methods perform state updates as the policy’s suggestions are followed.

    The structure of our policies facilitates composition, as policies can query other policies for suggestions. In order to give programmers control over policy composition, we have designed the system so that policies, suggestions, and application events are all first-class objects that a higher-order policy may manipulate. We show how to use these programming features by developing a library of policy combinators.

    Our system is fully implemented, and we have defined a formal semantics for an idealized subset of the language containing all of the key features. We demonstrate the effectiveness of our system by implementing a large-scale security policy for an email client.
    [Abstract]
  • Types and Effects for Non-interfering Program Monitors. Lujo Bauer, Jarred Ligatti, and David Walker. In M. Okada, B. Pierce, A. Scedrov, H. Tokuda, and A. Yonezawa, editors, Lecture Notes in Computer Science: Software Security - Theories and Systems (Revised Papers of the 2002 Mext-NSF-JSPS International Symposium), Vol 2609, pp 154-171. Springer-Verlag, November 2003. [BibTeX] A run-time monitor is a program that runs in parallel with an untrusted application and examines actions from the application's instruction stream. If the sequence of program actions deviates from a specified security policy, the monitor transforms the sequence or terminates the program. We present the design and formal specification of a language for defining the policies enforced by program monitors. Our language provides a number of facilities for composing complex policies from simpler ones. We allow policies to be parameterized by values or other policies, and we define operators for forming the conjunction and disjunction of policies. Since the computations that implement these policies modify program behavior, naive composition of computations does not necessarily produce the conjunction (or disjunction) of the policies that the computations implement separately. We use a type and effect system to ensure that computations do not interfere with one another when they are composed. [Abstract]
  • A Theory of Aspects. David Walker, Steve Zdancewic, and Jay Ligatti. Proceedings of the ACM SIGPLAN International Conference on Functional Programming (ICFP), August 2003. [BibTeX] This paper defines the semantics of MinAML, an idealized aspect-oriented programming language, by giving a type-directed translation from its user-friendly external language to its compact, well-defined core language. We argue that our framework is an effective way to give semantics to aspect-oriented programming languages in general because the translation eliminates shallow syntactic differences between related constructs and permits definition of a clean, easy-to-understand, and easy-to-reason-about core language.

    The core language extends the simply-typed lambda calculus with two central new abstractions: explicitly labeled program points and first-class advice. The labels serve both to trigger advice and to mark continuations that the advice may return to. These constructs are defined orthogonally to the other features of the language and we show that our abstractions can be used in both functional and object-oriented contexts. The labels are well-scoped and the language as a whole is well-typed. Consequently, programmers can use lexical scoping in the standard way to prevent aspects from interfering with local program invariants.
    [Abstract]
  • Enforcing More with Less: Formalizing Target-aware Run-time Monitors. Yannis Mallios, Lujo Bauer, Dilsun Kaynar, and Jay Ligatti. Proceedings of the International Workshop on Security and Trust Management (STM), September 2012. [BibTeX] Run-time monitors ensure that untrusted software and system behavior adheres to a security policy. This paper defines an expressive formal framework, based on I/O automata, for modeling systems, policies, and run-time monitors in more detail than is typical. We explicitly model, for example, the environment, applications, and the interaction between them and monitors. The fidelity afforded by this framework allows us to explicitly formulate and study practical constraints on policy enforcement that were often only implicit in previous models, providing a more accurate view of what can be enforced by monitoring in practice. We introduce two definitions of enforcement, target-specific and generalized, that allow us to reason about practical monitoring scenarios. Finally, we provide some meta-theoretical comparison of these definitions and we apply them to investigate policy enforcement in scenarios where the monitor designer has knowledge of the target application and show how this can be exploited to make more efficient design choices. [Abstract]
  • More Enforceable Security Policies. Lujo Bauer, Jarred Ligatti, and David Walker. Foundations of Computer Security Workshop (FCS) '02 (associated with LICS '02), July 2002. [BibTeX] We analyze the space of security policies that can be enforced by monitoring programs at runtime. Our program monitors are automata that examine the sequence of program actions and transform the sequence when it deviates from the specified policy. The simplest such automaton truncates the action sequence by terminating a program. Such automata are commonly known as security automata, and they enforce Schneider's EM class of security policies. We define automata with more powerful transformational abilities, including the ability to insert a sequence of actions into the event stream and to suppress actions in the event stream without terminating the program. We give a set-theoretic characterization of the policies these new automata are able to enforce and show that they are a superset of the EM policies. [Abstract]
  • Policy Enforcement via Program Monitoring. Jarred Adam Ligatti. PhD thesis, Princeton University, June 2006. [BibTeX] One way to guarantee that software behaves securely is to monitor programs at run time and check that they dynamically adhere to constraints specified by a security policy. Whenever a program monitor detects that untrusted software is attempting to execute a dangerous action, it takes remedial steps to ensure that only safe code actually gets executed. This thesis considers the space of policies enforceable by monitoring the run-time behaviors of programs and develops a practical language for specifying monitors’ policies.

    In order to delineate the space of policies that monitors can enforce, we first have to define exactly what it means for a monitor to enforce a policy. We therefore begin by building a formal framework for analyzing policy enforcement; we precisely define policies, monitors, and enforcement. Having this framework allows us to consider the enforcement powers of program monitors and prove that they enforce an interesting set of policies that we define and call the infinite renewal properties. We show how, when given any reasonable infinite renewal property, to construct a program monitor that provably enforces that policy.

    In practice, the security policies enforced by program monitors grow more complex both as the monitored software is given new capabilities and as policies are refined in response to attacks and user feedback. We propose dealing with policy complexity by organizing policies in such a way as to make them composeable, so that complex policies can be specified more simply as compositions of smaller subpolicy modules. We present a fully implemented language and system called Polymer that allows security engineers to specify and enforce composeable policies on Java applications. We also formalize the central workings of Polymer by defining an unambiguous semantics for our language.
    [Abstract]
  • Induction on Failing Derivations. Jay Ligatti. Technical Report PL-Sep13. University of South Florida, March 2016. (This is a major revision to the September 2013 version.) [BibTeX] A proof technique, called induction on failing derivations, is introduced. We wish to prove properties of judgments in deductive systems. Standard techniques exist for proving such properties on valid judgments; this note defines a technique for proving such properties on invalid (underivable) judgments. [Abstract]
  • On Subtyping-Relation Completeness, with an Application to Iso-Recursive Types. Jay Ligatti, Jeremy Blackburn, and Michael Nachtigal. Technical Report. University of South Florida, March 2016. (This is a major revision to the August 2014 version of the technical report, which itself was a major revision of technical report CSE-071012, "Completely Subtyping Iso-Recursive Types", from July 2012.) [BibTeX] Well-known techniques exist for proving the soundness of subtyping relations with respect to type safety. However, completeness has not been treated with widely applicable techniques, as far as we're aware.

    This paper develops techniques for stating and proving that a subtyping relation is complete with respect to type safety and applies the techniques to the study of iso-recursive subtyping. A new proof technique, induction on failing derivations, is provided that may be useful in other domains as well.

    The common subtyping rules for iso-recursive types---the "Amber rules"---are shown to be incomplete with respect to type safety. That is, there exist iso-recursive types t1 and t2 such that t1 can safely be considered a subtype of t2, but t1<=t2 is not derivable with the Amber rules.

    New, algorithmic rules are defined for subtyping iso-recursive types, and the rules are proved sound and complete with respect to type safety. The fully implemented subtyping algorithm is optimized to run in O(mn) time, where m is the number of mu-terms in the types being considered and n is the size of the types being considered.
    [Abstract]
  • Defining Injection Attacks. Donald Ray and Jay Ligatti. Technical Report CSE-TR-081114. University of South Florida, August 2014. [BibTeX] This paper defines and analyzes injection attacks. The definition is based on the NIE property, which states that an application's untrusted inputs must only produce Noncode Insertions or Expansions in output programs (e.g., SQL queries). That is, when applications generate output programs based on untrusted inputs, the NIE property requires that inputs only affect output programs by inserting or expanding noncode tokens (e.g., string and float literals, lambda values, pointers, etc). This paper calls attacks based on violating the NIE property BroNIEs (i.e., Broken NIEs) and shows that all code-injection attacks are BroNIEs. In addition, BroNIEs contain many malicious injections that do not involve injections of code; we call such attacks noncode-injection attacks. In order to mitigate both code- and noncode-injection attacks, this paper presents an algorithm for detecting and preventing BroNIEs. [Abstract]
  • Modeling Runtime Enforcement with Mandatory Results Automata. Egor Dolzhenko, Jay Ligatti, and Srikar Reddy. Technical Report USF-CSE-1213, University of South Florida, December 2013. [BibTeX] This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. Because previous work could not model general security monitors transforming results, MRAs capture realistic behaviors outside the scope of previous models. MRAs also have a simple operational semantics that makes it straightforward to define concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are simpler and more expressive than those of previous models. Putting all these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies deterministic and nondeterministic MRAs can and cannot enforce. [Abstract]
  • A Theory of Runtime Enforcement, with Results. Jay Ligatti and Srikar Reddy. Technical Report USF-CSE-SS-102809, University of South Florida, October 2009, revised June 2010. [BibTeX] This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. Because previous work could not model monitors transforming results, MRAs capture realistic behaviors outside the scope of previous models. MRAs also have a simple but realistic operational semantics that makes it straightforward to define concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are significantly simpler and more expressive than those of previous models. Putting all these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies MRAs can and cannot enforce. [Abstract]
  • PoliSeer: A Tool for Managing Complex Security Policies. Daniel Lomsak and Jay Ligatti. Technical Report CSE-SSec-112509, University of South Florida, November 2009. [BibTeX] Complex security policies are difficult to specify, understand, and update. The same is true for complex software in general, but while many software-engineering tools exist for decomposing complex general software into simpler reusable modules (packages, classes, functions, aspects, etc.), few policy-engineering tools exist for decomposing complex security policies into simpler reusable modules. The tools that do exist for modularizing policies either encapsulate entire policies as atomic, indecomposable modules or allow fine-grained modularization but are complicated and lack policy-visualization capabilities.

    This paper presents PoliSeer, the first tool we are aware of that allows engineers to specify, visualize, modify, and enforce complex policies as compositions of simpler policy modules. We describe the design and implementation of PoliSeer, as well as a case study in which we have bootstrapped PoliSeer by using it to specify and enforce a policy on itself.
    [Abstract]
  • IVCon: Inline Visualization of Concerns. Nalin Saigal and Jay Ligatti. Technical Report CSE-110909-SE, University of South Florida, November 2009. [BibTeX] Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. This paper describes IVCon, a tool with a novel approach for completely modularizing CCCs. IVCon enables users to create, examine, and modify their code in two different views: the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon provides an interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views. [Abstract]
  • Defining and Visualizing Many-to-many Relationships between Concerns and Code. Nalin Saigal and Jay Ligatti. Technical Report CSE-090608-SE, University of South Florida, September 2008. [BibTeX] Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. In this paper, we describe IVCon, a GUI-based tool that provides a novel approach to modularization of CCCs. IVCon enables users to create, examine, and modify their code in two different views, the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon aims to provide an easy-to-use interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views. [Abstract]
  • Fault-tolerant Typed Assembly Language. Frances Perry, Lester Mackey, George Reis, Jay Ligatti, David August, and David Walker. Technical Report TR-776-07, Princeton University, April 2007. [BibTeX] A transient hardware fault occurs when an energetic particle strikes a transistor, causing it to change state. Although transient faults do not permanently damage the hardware, they may corrupt computations by altering stored values and signal transfers. In this paper, we propose a new scheme for provably safe and reliable computing in the presence of transient hardware faults. In our scheme, software computations are replicated to provide redundancy while special instructions compare the independently computed results to detect errors before writing critical data. In stark contrast to any previous efforts in this area, we have analyzed our fault tolerance scheme from a formal, theoretical perspective. To be specific, first, we provide an operational semantics for our assembly language, which includes a precise formal definition of our fault model. Second, we develop an assembly-level type system designed to detect reliability problems in compiled code. Third, we provide a formal specification for program fault tolerance under the given fault model and prove that all well-typed programs are indeed fault tolerant. In addition to the formal analysis, we evaluate our detection scheme and show that it only takes 34% longer to execute than the unreliable version. [Abstract]
  • Control-Flow Integrity. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Technical Report MSR-TR-2005-18, Microsoft Research, February 2005 (revised June 2005). [BibTeX] Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions. [Abstract]
  • A Theory of Secure Control Flow. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Technical Report MSR-TR-2005-17, Microsoft Research, February 2005 (revised June 2005). [BibTeX] Control-Flow Integrity (CFI) means that the execution of a program dynamically follows only certain paths, in accordance with a static policy. CFI can prevent attacks that, by exploiting buffer overflows and other vulnerabilities, attempt to control program behavior. This paper develops the basic theory that underlies two practical techniques for CFI enforcement, with precise formulations of hypotheses and guarantees. [Abstract]
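    The two CFI abstracts above describe restricting dynamic control transfers to paths permitted by a static policy. As a rough illustration only (the actual enforcement instruments machine code), here is a hypothetical label-check sketch: every computed branch is guarded so it can only reach destinations carrying the label the static control-flow graph expects.

    ```python
    # Hypothetical sketch of a CFI-style label check. In real CFI enforcement
    # the labels are embedded at machine-code destinations and the check is an
    # inlined instruction sequence; here both are simulated with a dict.

    LABELS = {0x1000: "entry_A", 0x2000: "entry_B"}  # labels per the static CFG

    def checked_indirect_jump(target, expected_label):
        """Guard inserted before a computed branch: abort on a bad target."""
        if LABELS.get(target) != expected_label:
            raise RuntimeError("CFI violation: bad jump target %#x" % target)
        return target

    print(hex(checked_indirect_jump(0x1000, "entry_A")))  # -> 0x1000
    ```

    A corrupted function pointer (say, `0x3000`, or a valid address with the wrong label) fails the check before the transfer happens, which is what blocks the exploit classes the abstracts mention.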
  • Enforcing Non-safety Security Policies with Program Monitors. Jay Ligatti, Lujo Bauer, and David Walker. Technical Report TR-720-05, Princeton University, January 2005 (revised June 2005). [BibTeX] We consider the enforcement powers of program monitors, which intercept security-sensitive actions of a target application at run time and take remedial steps whenever the target attempts to execute a potentially dangerous action. A common belief in the security community is that program monitors, regardless of the remedial steps available to them when detecting violations, can only enforce safety properties. We formally analyze the properties enforceable by various program monitors and find that although this belief is correct when considering monitors with simple remedial options, it is incorrect for more powerful monitors that can be modeled by edit automata. We define an interesting set of properties called infinite renewal properties and demonstrate how, when given any reasonable infinite renewal property, to construct an edit automaton that provably enforces that property. We analyze the set of infinite renewal properties and show that it includes every safety property, some liveness properties, and some properties that are neither safety nor liveness. [Abstract]
  • A Language and System for Composing Security Policies. Lujo Bauer, Jay Ligatti, and David Walker. Technical Report TR-699-04, Princeton University, January 2004. [BibTeX] We introduce a new language and system that allows security architects to develop well-structured and easy-to-maintain security policies for Java applications. In our system, policies are first-class objects. Consequently, programmers can define parameterized meta-policies that act as policy combinators and policy modifiers, so that complex security policies can be implemented by composing simple base policies. We demonstrate the effectiveness of our design by building up a library of powerful policy combinators and showing how they can be used. We also describe some issues we encountered while implementing our system and provide performance results. [Abstract]
  • Edit Automata: Enforcement Mechanisms for Run-time Security Policies. Jay Ligatti, Lujo Bauer, and David Walker. Princeton University Technical Report TR-681-03, May 2003. [BibTeX] We analyze the space of security policies that can be enforced by monitoring and modifying programs at run time. Our program monitors, called edit automata, are abstract machines that examine the sequence of application program actions and transform the sequence when it deviates from a specified policy. Edit automata have a rich set of transformational powers: They may terminate the application, thereby truncating the program action stream; they may suppress undesired or dangerous actions without necessarily terminating the program; and they may also insert additional actions into the event stream.

    After providing a formal definition of edit automata, we develop a rigorous framework for reasoning about them and their cousins: truncation automata (which can only terminate applications), suppression automata (which can terminate applications and suppress individual actions), and insertion automata (which can terminate and insert). We give a set-theoretic characterization of the policies each sort of automaton can enforce and we provide examples of policies that can be enforced by one sort of automaton but not another.
    [Abstract]
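    The edit-automata abstract above lists three transformational powers: truncating the action stream, suppressing individual actions, and inserting actions. A minimal sketch of that interface follows; the driver loop and the example policy are hypothetical illustrations, not the paper's formal definitions.

    ```python
    # Hypothetical sketch of an edit automaton: a transition function maps
    # (state, input action) to (new state, actions to emit, halt flag), so it
    # can suppress (emit nothing), insert (emit extra actions), or truncate
    # (halt) the application's action stream.

    def run_edit_automaton(actions, step, state):
        """step(state, a) -> (new_state, out_actions, halt)."""
        output = []
        for a in actions:
            state, out, halt = step(state, a)
            output.extend(out)
            if halt:
                break  # truncation: terminate the application
        return output

    # Example policy: suppress "send" actions until an "audit" occurs, then
    # re-insert the suppressed sends immediately after the audit.
    def step(state, a):
        seen_audit, pending = state
        if a == "audit":
            return (True, []), [a] + pending, False      # insertion
        if a == "send" and not seen_audit:
            return (seen_audit, pending + [a]), [], False  # suppression
        return state, [a], False                          # pass through

    print(run_edit_automaton(["send", "work", "audit", "send"],
                             step, (False, [])))
    # -> ['work', 'audit', 'send', 'send']
    ```

    Restricting `step` recovers the weaker automata the abstract names: always emitting the input or halting gives a truncation automaton, and dropping the insertion case gives a suppression automaton.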
  • A Calculus for Composing Security Policies. Lujo Bauer, Jarred Ligatti, and David Walker. Technical Report TR-655-02, Princeton University, August 2002. [BibTeX] A runtime monitor is a program that runs in parallel with an untrusted application and examines actions from the application's instruction stream. If the sequence of program actions deviates from a specified security policy, the monitor transforms the sequence or terminates the program. We present the design and formal specification of a language for defining the policies enforced by program monitors.

    Our language provides a number of facilities for composing complex policies from simpler ones. We allow policies to be parameterized by values, or other policies. There are also operators for forming the conjunction and disjunction of policies. Since the computations that implement these policies modify program behavior, naive composition of computations does not necessarily produce the conjunction (or disjunction) of the policies that the computations implement separately. We use a type and effect system to ensure that computations do not interfere with one another when they are composed. We also present a preliminary implementation of our language.
    [Abstract]
  • More Enforceable Security Policies. Lujo Bauer, Jarred Ligatti, and David Walker. Technical Report TR-649-02, Princeton University, July 2002. [BibTeX] We analyze the space of security policies that can be enforced by monitoring programs at runtime. Our program monitors are automata that examine the sequence of program actions and transform the sequence when it deviates from the specified policy. The simplest such automaton truncates the action sequence by terminating a program. Such automata are commonly known as security automata, and they enforce Schneider's EM class of security policies. We define automata with more powerful transformational abilities, including the ability to insert a sequence of actions into the event stream and to suppress actions in the event stream without terminating the program. We give a set-theoretic characterization of the policies these new automata are able to enforce and show that they are a superset of the EM policies. [Abstract]

Current research projects

Completeness: Establishing the completeness of subtyping relations
Ciao: Principled definition and analysis of injection attacks
RunTime: Theory and practice of monitoring software at runtime
Poco: Theory and practice of security-policy composition

Older research projects

Grouper: A packet-classification algorithm
PoliSeer: Specifying and visualizing complex security policies
IVCon: Inline visualization of concerns
Polymer: Software monitoring in theory and practice
Project Zap: Trustworthy computing in the presence of transient faults
AspectML: Foundations of aspect-oriented programming languages
Gleipnir: Enforcing control-flow policies on software

Teaching

Compilers: Fall 2015 Fall 2013 Fall 2011 Fall 2009 Fall 2007
Foundations of Software Security: Spring 2016 Spring 2015 Spring 2014 Spring 2013 Spring 2012 Spring 2010 Spring 2008 Spring 2007
Programming Languages: Spring 2016 Fall 2014 Fall 2012 Fall 2010 Fall 2008
Advanced Programming Languages: Spring 2015 Spring 2011
Operating Systems: Fall 2006

Current graduate students

Danielle Ferguson (PhD student 2012-present)
Hernan Palombo (PhD student 2013-present, co-advised with Hao Zheng)
Yan Albright (PhD student 2014-present)
Cagri Cetin (PhD student 2014-present)
Jean-Baptiste Subils (PhD student 2014-present)
Shamaria Engram (PhD student 2015-present)
Ivory Hernandez (MS student 2015-present)
Joshua Winfrey (MS student 2015-present)
Michael Quintero (MS student 2015-present)
Kimberly Bursum (MS student 2015-present)

Former graduate students

▶ Donald Ray

▶ Bader AlBassam

▶ Clayton Whitelaw

▶ Cory Juhlin

▶ Grant Smith

▶ Daniel Lomsak

▶ Zach Carter

▶ Stan Naspinski

▶ Matt Spaulding

▶ Nalin Saigal

▶ Josh Kuhn

▶ Brandy Eyers

▶ Srikar Reddy

Talks
