Jay Ligatti


ligatti@usf.edu
Professor and Director of Graduate Admissions
Dept. of Computer Science & Engineering
University of South Florida

Curriculum Vitae

Contents

  1. Research Interests
  2. Publications
  3. Current Projects
  4. Older Projects
  5. Teaching
  6. Current Students
  7. Former Students
  8. Talks

Research interests

Software security and programming languages (CV)


Publications, arranged by publication date

▶ 2024

  • A Preliminary Study on Using Large Language Models in Software Pentesting. Kumar Shashwat, Francis Hahn, Xinming Ou, Dmitry Goldgof, Lawrence Hall, Jay Ligatti, S. Raj Rajgopalan, Armin Ziaie Tabari. Proceedings of the Workshop on SOC Operations and Construction (WOSOC), February 2024. Large language models (LLM) are perceived to offer promising potentials for automating security tasks, such as those found in security operation centers (SOCs). As a first step towards evaluating this perceived potential, we investigate the use of LLMs in software pentesting, where the main task is to automatically identify software security vulnerabilities in source code. We hypothesize that an LLM-based AI agent can be improved over time for a specific security task as human operators interact with it. Such improvement can be made, as a first step, by engineering prompts fed to the LLM based on the responses produced, to include relevant contexts and structures so that the model provides more accurate results. Such engineering efforts become sustainable if the prompts that are engineered to produce better results on current tasks, also produce better results on future unknown tasks. To examine this hypothesis, we utilize the OWASP Benchmark Project 1.2 which contains 2,740 hand-crafted source code test cases containing various types of vulnerabilities. We divide the test cases into training and testing data, where we engineer the prompts based on the training data (only), and evaluate the final system on the testing data. We compare the AI agent's performance on the testing data against the performance of the agent without the prompt engineering. We also compare the AI agent's results against those from SonarQube, a widely used static code analyzer for security testing. We built and tested multiple versions of the AI agent using different off-the-shelf LLMs -- Google's Gemini-pro, as well as OpenAI's GPT-3.5-Turbo and GPT-4-Turbo (with both chat completion and assistant APIs). 
The results show that using LLMs is a viable approach to build an AI agent for software pentesting that can improve through repeated use and prompt engineering. [Abstract]

▶ 2023

  • Co-Creation in Secure Software Development: Applied Ethnography and the Interface of Software and Development. Daniel Lende, Alexis Monkhouse, Jay Ligatti, and Xinming Ou. Human Organization: Special Issue on Intersection of Software and Human Systems, April 2023. Long-term ethnographic research conducted at a software company examined how security concerns and practices became part of software development. Participant observation over a two-year period was done by researchers with cybersecurity backgrounds and training in both computer science and qualitative research, with ongoing analysis done by a larger interdisciplinary team. In situ researchers joined as software engineers and participated in daily work activities while observing development practices and analyzing software (in)security. The first year of research found that improving security during software development can be helped by a co-creation model, whereby security experts work directly with software developers to provide security tools applicable to the specific software within the workflow. Researchers-as-developers fostered conversations, concerns, and considerations of how to implement security within the process of development. The second year used a situated learning approach to understand the interface between software development, security, and the development team. Through an interactive learning process, software engineers gathered knowledge and applied it, helping to foster greater concerns for security as part of the overall “culture” of development within the company. This locally situated co-creation approach has resonances with participatory approaches in business anthropology and implications for how to promote the co-creation of knowledge and expertise more broadly. [Abstract]

▶ 2022

  • ProProv: A Language and Graphical Tool for Specifying Data Provenance Policies. Kevin Dennis, Shamaria Engram, Tyler Kaczmarek, and Jay Ligatti. Proceedings of the IEEE International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS), December 2022. Local version. The Function-as-a-Service cloud computing paradigm has made large-scale application development convenient and efficient as developers no longer need to deploy or manage the necessary infrastructure themselves. However, as a consequence of this abstraction, developers lose insight into how their code is executed and data is processed. Cloud providers currently offer little to no assurance of the integrity of customer data. One approach to robust data integrity verification is the analysis of data provenance: logs that describe the causal history of data, applications, users, and non-person entities. This paper introduces ProProv, a new domain-specific language and graphical user interface for specifying policies over provenance metadata to automate provenance analyses.

    To evaluate the convenience and usability of the new ProProv interface, 61 individuals were recruited to construct provenance policies using both ProProv and the popular, general-purpose policy specification language Rego—used as a baseline for comparison. We found that, compared to Rego, the ProProv interface greatly increased the number of policies successfully constructed, improved the time taken to construct those policies, and reduced the failed-attempt rate. Participants successfully constructed 73% of the requested policies using ProProv, compared to 41% using Rego. To further evaluate the usability of the tools, participants were given a 10-question questionnaire measured using the System Usability Scale (SUS). The median SUS score for the graphical ProProv interface was above average and fell into the “excellent” category, compared to below average and “OK” for Rego. These results highlight the impacts that graphical domain-specific tools can have on the accuracy and speed of policy construction.
    [Abstract]
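As a rough intuition for what a provenance policy automates, the sketch below checks one simple property over a toy provenance log. The record fields, process names, and the policy itself are invented for illustration; this is Python, not ProProv or Rego syntax.

```python
# Hypothetical provenance records: which process generated each artifact,
# and from what inputs. A provenance policy ranges over metadata like this.
provenance_log = [
    {"artifact": "report.pdf", "generated_by": "render-service", "derived_from": ["data.csv"]},
    {"artifact": "data.csv", "generated_by": "ingest-service", "derived_from": []},
]

TRUSTED_PROCESSES = {"render-service", "ingest-service"}

def policy_holds(log):
    """Example policy: every artifact was generated by a trusted process."""
    return all(rec["generated_by"] in TRUSTED_PROCESSES for rec in log)

print(policy_holds(provenance_log))  # True for this log
```

A domain-specific language like ProProv lets analysts state such conditions declaratively instead of hand-coding checks like this one.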

▶ 2021

  • Far Proximity Identification in Wireless Systems. Tao Wang, Jian Weng, Jay Ligatti, and Yao Liu. IEEE Transactions on Dependable and Secure Computing (TDSC), Vol 18, No 5, pp 2403-2418. IEEE, September/October 2021. (preliminary version) As wireless mobile devices become increasingly pervasive and adopted in critical applications, it is becoming increasingly important to measure the physical proximity of these devices in a secure way. Although various techniques have been developed to identify whether a device is close, the problem of identifying the far proximity (i.e., a target is at least a certain distance away) has been neglected by the research community. Meanwhile, verifying the far proximity is desirable and critical to enhance the security of emerging wireless applications. In this paper, we propose a secure far proximity identification approach that determines whether or not a remote device is far away. The key idea of the proposed approach is to estimate the far proximity from the unforgeable fingerprint of the proximity. We have validated and evaluated the effectiveness of the proposed far proximity identification method through experiments on real measured channel data. The experiment results show that the proposed approach can detect the far proximity with a success rate of 0.85 for the non-line-of-sight (NLoS) scenario, and the success rate can be further increased to 0.99 for the line-of-sight (LoS) scenario. [Abstract]
  • An Analysis of the Role of Situated Learning in Starting a Security Culture in a Software Company. Anwesh Tuladhar, Daniel Lende, Jay Ligatti, and Xinming Ou. Proceedings of the Usenix Symposium on Usable Privacy and Security (SOUPS), August 2021. Local version. We conducted an ethnographic study of a software development company to explore if and how a development team adopts security practices into the development lifecycle. A PhD student in computer science with prior training in qualitative research methods was embedded in the company for eight months. The researcher joined the company as a software engineer and participated in all development activities as a new hire would, while also making observations on the development practices. During the fieldwork, we observed a positive shift in the development team's practices regarding secure development. Our analysis of data indicates that the shift can be attributed to enabling all software engineers to see how security knowledge could be applied to the specific software products they worked on. We also observed that by working with other developers to apply security knowledge under the concrete context where the software products were built, developers who possessed security expertise and wanted to push for more secure development practices (security advocates) could be effective in achieving this goal. Our data point to an interactive learning process where software engineers in a development team acquire knowledge, apply it in practice, and contribute to the team, leading to the creation of a set of preferred practices, or "culture" of the team. This learning process can be understood through the lens of the situated learning framework, where it is recognized that knowledge transfer happens within a community of practice, and applying the knowledge is the key in individuals (software engineers) acquiring it and the community (development team) embodying such knowledge in its practice. 
Our data show that enabling a situated learning environment for security gives rise to security-aware software engineers. We discuss the roles of management and security advocates in driving the learning process to start a security culture in a software company. [Abstract]

▶ 2020

  • Through the Lens of Code Granularity: A Unified Approach to Security Policy Enforcement. Shamaria Engram and Jay Ligatti. Proceedings of the IEEE Conference on Applications, Information and Network Security (AINS), November 2020. A common way to characterize security enforcement mechanisms is based on the time at which they operate. Mechanisms operating before a program's execution are static mechanisms, and mechanisms operating during a program's execution are dynamic mechanisms. This paper introduces a different perspective and classifies mechanisms based on the granularity of program code that they monitor. Classifying mechanisms in this way provides a unified view of security mechanisms and shows that all security mechanisms can be encoded as dynamic mechanisms that operate at different levels of program code granularity. The practicality of the approach is demonstrated through a prototype implementation of a framework for enforcing security policies at various levels of code granularity on Java bytecode applications. [Abstract]
  • Cybersecurity Vulnerabilities in Mobile Fare Payment Applications: A Case Study. Kevin Dennis, Maxat Alibayev, Sean J. Barbeau, and Jay Ligatti. Transportation Research Record: Journal of the Transportation Research Board (TRR), Vol 2674, No 11, pp 616-624. Sage Publishing, November 2020. Mobile fare payment applications are becoming increasingly commonplace in the public transportation industry as both a customer convenience and an effort to reduce fare management costs and improve operations for agencies. However, there is relatively little literature on vulnerabilities and liabilities in mobile fare payment applications. Furthermore, few public agencies or supporting vendors have policies or established processes in place to receive vulnerability reports or patch vulnerabilities discovered in their technologies. Given the rapidly increasing number of data breaches in general industry IT systems, as well as the fact that mobile fare payment apps are a nexus between customer and agency financial information, the security of these mobile applications deserves further scrutiny. This paper presents a vulnerability discovered in a mobile fare payment application deployed at a transit agency in Florida that, due to the system architecture, may have affected customers in as many as 40 cities across the United States, an estimated 1,554,000 users. Lessons learned from the vulnerability disclosure process followed by the research team as well as recommendations for public agencies seeking to improve the security of these types of applications are also discussed. [Abstract]
  • An Ethnographic Understanding of Software (In)Security and a Co-Creation Model to Improve Secure Software Development. Hernan Palombo, Armin Tabari, Daniel Lende, Jay Ligatti, and Xinming Ou. Proceedings of the Usenix Symposium on Usable Privacy and Security (SOUPS), August 2020. We present an ethnographic study of secure software development processes in a software company using the anthropological research method of participant observation. Two PhD students in computer science trained in qualitative methods were embedded in a software company for one and a half years of total research time. The researchers participated in everyday work activities such as coding and meetings, and observed software (in)security phenomena both through investigating historical data (code repositories and ticketing system records), and through pen-testing the developed software and observing developers' and management's reactions to the discovered vulnerabilities. Our study found that 1) security vulnerabilities are sometimes intentionally introduced and/or overlooked due to the difficulty in managing the various stakeholders' responsibilities in an economic ecosystem, and cannot be simply blamed on developers' lack of knowledge or skills; 2) accidental vulnerabilities discovered in the pen-testing process produce different reactions in the development team, often times contrary to what a security researcher would predict. These findings highlight the nuanced nature of the root causes of software vulnerabilities, and indicate the need to take into account a significant amount of contextual information to understand how and why software vulnerabilities emerge during software development. Rather than simply addressing deficits in developer knowledge or practice, this research sheds light onto at times forgotten human factors that significantly impact the security of software developed by actual companies. 
Our analysis also shows that improving software security in the development process can benefit from a co-creation model, where security experts work side by side with software developers to better identify security concerns and provide tools that are readily applicable within the specific context of the software development workflow. [Abstract]
  • An Evaluation of the Power Consumption of Coauthentication as a Continuous User Authentication Method in Mobile Systems. Brandon Corn, Ashley Ruiz, Alfredo Perez, Cagri Cetin, and Jay Ligatti. Proceedings of the Annual ACM Southeast Conference (ACMSE), April 2020. Methods for continuous user authentication have become important with the proliferation of mobile devices in m-Health and human-centered systems. These methods must guarantee user identity with high assurance, authenticate without explicit intervention, and be power-aware. We present an evaluation of the power consumption of collaborative authentication (coauthentication) as a continuous authentication method. Coauthentication is a single-factor method in which multiple registered devices work together to authenticate a user, minimizing obtrusiveness while providing high user authentication assurance. To evaluate coauthentication's power consumption, we conducted experiments using two Bluetooth-enabled mobile devices and a stand-alone server in a local area network and running coauthentication continuously for eight hours. We found that the protocol uses between approximately 1.19% and 4.0% of the total power used by the devices. These results give evidence of the feasibility of using coauthentication as a continuous authentication method in mobile devices from the power consumption perspective. [Abstract]
  • Stream-Monitoring Automata. Hernan Palombo, Egor Dolzhenko, Jay Ligatti, and Hao Zheng. Proceedings of the 9th International Conference on Software and Computer Applications (ICSCA), February 2020. Over the past nearly twenty years, numerous formal models of enforcement and runtime monitors have been investigated. This paper takes the lessons learned from earlier models and proffers a new general model of runtime enforcement that is more suitable for modeling security mechanisms that operate over infinite event streams. The new model, called Stream-Monitoring Automata (SMAs), enables the constraints and analyses of interest in previous models to be encoded, and overcomes several shortcomings of existing models with respect to expressiveness. SMAs capture the practical abilities of mechanisms to monitor infinite event streams, execute even in the absence of event inputs, enforce non-safety policies, and operate an enforcement model in which extraneous constraints such as transparency and uncontrollable events may be specified as meta-policies. [Abstract]
  • PoCo: A Language for Specifying Obligation-Based Policy Compositions. Danielle Ferguson, Yan Albright, Daniel Lomsak, Tyler Hanks, Kevin Orr, and Jay Ligatti. Proceedings of the 9th International Conference on Software and Computer Applications (ICSCA), February 2020. Existing security-policy-specification languages allow users to specify obligations, but challenges remain in the composition of complex obligations, including effective approaches for resolving conflicts between policies and obligations and allowing policies to react to other obligations. This paper presents PoCo, a policy-specification language and enforcement system for the principled composition of atomic-obligation policies. PoCo enables policies to interact meaningfully with other policies' obligations, thus preventing unexpected and insecure behaviors that can arise from partially executed obligations or obligations that execute actions in violation of other policies. [Abstract]

▶ 2019

  • SQL-Identifier Injection Attacks. Cagri Cetin, Dmitry Goldgof, and Jay Ligatti. Proceedings of the IEEE Conference on Communications and Network Security (CNS), June 2019. This paper defines a class of SQL-injection attacks that are based on injecting identifiers, such as table and column names, into SQL statements. An automated analysis of GitHub shows that 15.7% of 120,412 posted Java source files contain code vulnerable to SQL-Identifier Injection Attacks (SQL-IDIAs). We have manually verified that some of the 18,939 Java files identified during the automated analysis are indeed vulnerable to SQL-IDIAs, including deployed Electronic Medical Record software for which SQL-IDIAs enable discovery of confidential patient information. Although prepared statements are the standard defense against SQL injection attacks, existing prepared-statement APIs do not protect against SQL-IDIAs. This paper therefore proposes and evaluates an extended prepared-statement API to protect against SQL-IDIAs. [Abstract]
  • A Dual-Task Interference Game-Based Experimental Framework for Comparing the Usability of Authentication Methods. Jean-Baptiste Subils, Joseph Perez, Peiwei Liu, Shamaria Engram, Cagri Cetin, Dmitry Goldgof, Natalie Ebner, Daniela Oliveira, and Jay Ligatti. Proceedings of the IEEE International Conference on Human System Interaction (HSI), June 2019. This paper introduces a game-based framework to compare the usability of authentication methods. The framework uses a dual-task interference technique to determine the usability of authentication methods. In the experiment, subjects participate in a multi-tasking game that simulates a conversation being interrupted by authentication requirements. By simulating a conversation, the goal is to reproduce a real use of authentication, and collect ecologically sound data. Participants also perform each authentication method in a standalone manner, which allows for comparison of the usability under two different cognitive loads. The authentication techniques evaluated represent each of the three main authentication factors, specifically password, fingerprint, and coauthentication. The three aspects of usability used to compare authentication techniques in this framework are efficiency, effectiveness, and satisfaction. An experiment with 43 participants enrolled was conducted to collect data pertaining to these aspects. The results show that fingerprint and coauthentication (both laptop and phone) are the more usable techniques evaluated. [Abstract]
  • Coauthentication. Jay Ligatti, Cagri Cetin, Shamaria Engram, Jean-Baptiste Subils, and Dmitry Goldgof. Proceedings of the ACM Symposium on Applied Computing (SAC), April, 2019. This paper introduces and evaluates collaborative authentication, or coauthentication, a single-factor technique in which multiple registered devices work together to authenticate a user. Coauthentication provides security benefits similar to those of multi-factor techniques, such as mitigating theft of any one authentication secret, without some of the inconveniences of multi-factor techniques, such as having to enter passwords or biometrics. Coauthentication provides additional security benefits, including: preventing phishing, replay, and man-in-the-middle attacks; basing authentications on high-entropy secrets that can be generated and updated automatically; and availability protections against, for example, device misplacement and denial-of-service attacks. Coauthentication is amenable to many applications, including m-out-of-n, continuous, group, shared-device, and anonymous authentications. The principal security properties of coauthentication have been formally verified in ProVerif, and implementations have performed efficiently compared to password-based authentication. [Abstract]
  • Composition of Atomic-Obligation Security Policies. Danielle Ferguson, Yan Albright, Daniel Lomsak, Tyler Hanks, Kevin Orr, and Jay Ligatti. USF Technical Report CSE-SEC-021719. February 2019. Revised September 2019. Existing security-policy-specification languages allow users to specify obligations, but open challenges remain in the composition of complex obligations, including effective approaches for resolving conflicts between policies and obligations and allowing policies to react to the obligations of other policies. This paper presents PoCo, a policy-specification language and enforcement system for the principled composition of atomic-obligation policies. PoCo enables policies to interact meaningfully with other policies' obligations, thus preventing the unexpected and insecure behaviors that can arise due to partially executed obligations or obligations that execute actions in violation of other policies. This paper also presents and analyzes the PoCo language's formal semantics and implementation. [Abstract]
  • Cybersecurity in Public Transportation: A Literature Review. Kevin Dennis, Maxat Alibayev, Sean Barbeau, and Jay Ligatti. Proceedings of the 98th Transportation Research Board Annual Meeting (TRB), January, 2019. Transportation information technologies (IT) have significantly developed in recent years from individual nodes to large, interconnected networks of devices, similar to those seen in modern IT systems. With this rapid development comes security concerns that have typically been constrained to classical computer systems. This paper reviews the existing literature regarding the state of cybersecurity in public transportation, focusing on the technical aspects of security previously published in technical venues. In particular, the paper examines transit technologies, equipment, and protocols for known vulnerabilities and defenses. Existing attack and vulnerabilities were identified for the following technologies: connected vehicles (CVs), autonomous vehicles (AVs), electronic ticketing systems, traffic signal controllers, traffic signal priority/preemption (TSP), and dynamic message signs (DMS). No known vulnerabilities were found in the literature for AVL/CAD systems, online trip planners, mobile fare payment, onboard Wi-Fi, CCTV, and APCs, but given their complexity, their wide attack surfaces, and the known vulnerabilities in related technologies, the authors believe that it is reasonable to expect that security vulnerabilities do exist in these technologies as well. Several directions for future work are discussed, including better employee training, architecture of on-board Wi-Fi systems used for critical operational purposes, and data encryption and sharing policies at the agency, especially as related to customer data. [Abstract]
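The SQL-identifier injection class described in the CNS 2019 entry above can be illustrated with a short sketch. Placeholders in prepared statements can only stand in for values, not identifiers, so splicing an attacker-influenced table or column name into a query remains dangerous even with prepared statements. The allow-list defense shown here is a standard mitigation, not the extended prepared-statement API the paper proposes; the schema and data are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO patients VALUES ('Alice', '123-45-6789')")

# Vulnerable pattern: the *identifier* (column name) is concatenated into
# the statement. An attacker-chosen "column" such as "ssn" (or "*") leaks
# data the query was never meant to expose.
def vulnerable_query(column):
    return conn.execute(f"SELECT {column} FROM patients").fetchall()

# Mitigation: accept only identifiers from an explicit allow-list before
# splicing them into the statement text.
ALLOWED_COLUMNS = {"name"}

def safe_query(column):
    if column not in ALLOWED_COLUMNS:
        raise ValueError(f"disallowed identifier: {column!r}")
    return conn.execute(f"SELECT {column} FROM patients").fetchall()

print(vulnerable_query("ssn"))  # leaks SSNs: [('123-45-6789',)]
print(safe_query("name"))       # [('Alice',)]
```

Note that binding the column name as a parameter (`"SELECT ? FROM patients"`) would not work: the driver would treat it as a string value, which is exactly why identifier positions need their own defense.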

▶ 2018

  • Coauthentication. Jay Ligatti, Cagri Cetin, Shamaria Engram, Jean-Baptiste Subils, Dmitry Goldgof. USF Technical Report CSE-SEC-092418. September 2018. This paper introduces and evaluates collaborative authentication, or coauthentication, a single-factor technique in which multiple registered devices work together to authenticate a user. Coauthentication provides security benefits similar to those of multi-factor techniques, such as mitigating theft of any one authentication secret, without some of the inconveniences of multi-factor techniques, such as having to enter passwords or biometrics. Coauthentication provides additional security benefits, including: preventing phishing, replay, and man-in-the-middle attacks; basing authentications on high-entropy secrets that can be generated and updated automatically; and availability protections against, for example, device misplacement and denial-of-service attacks. Coauthentication is amenable to many applications, including m-out-of-n, continuous, group, shared-device, and anonymous authentications. The principal security properties of coauthentication have been formally verified in ProVerif, and implementations have performed efficiently compared to password-based authentication. [Abstract]
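As a rough intuition for the coauthentication idea described above, the toy sketch below has every registered device answer a fresh challenge with its own high-entropy secret, and the verifier accepts only when all responses check out. This shows the shape of the idea only; the actual protocols, formally verified in ProVerif, are specified in the papers, and all names here are invented.

```python
import hashlib
import hmac
import os

# Server-side registry of per-device secrets (high-entropy, auto-generated).
DEVICE_KEYS = {
    "phone":  os.urandom(32),
    "laptop": os.urandom(32),
}

def device_response(key, challenge):
    """A device proves possession of its secret by keying a MAC over the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def authenticate(responses, challenge):
    """Accept only if every registered device answered the fresh challenge correctly."""
    return all(
        hmac.compare_digest(responses.get(name, b""), device_response(key, challenge))
        for name, key in DEVICE_KEYS.items()
    )

challenge = os.urandom(16)  # fresh per attempt, so old responses cannot be replayed
responses = {name: device_response(key, challenge) for name, key in DEVICE_KEYS.items()}
print(authenticate(responses, challenge))                       # True: both devices answered
print(authenticate({"phone": responses["phone"]}, challenge))   # False: laptop missing
```

A single stolen device (or stolen secret) is insufficient here, which is the multi-factor-like benefit the abstract describes, obtained without the user typing a password or scanning a biometric.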

▶ 2017

  • Coauthentication. Jay Ligatti, Cagri Cetin, Shamaria Engram, Jean-Baptiste Subils, Dmitry Goldgof. USF Technical Report Auth-7-17-17. July 2017. Collaborative authentication, or coauthentication, is a single-factor technique in which multiple registered devices work together to authenticate a user. Coauthentication aims to provide security benefits similar to those of multi-factor techniques, such as mitigating theft of any one authentication device, without the inconveniences of multi-factor techniques, such as having to enter passwords or scan biometrics. Coauthentication can provide additional security benefits, including: preventing phishing and man-in-the-middle attacks, basing authentications on high-entropy secrets that can be generated and updated automatically, and availability protections against, for example, device misplacement or denial-of-service (DoS) attacks. This paper introduces coauthentication and discusses and evaluates applications, example protocols, and implementations. [Abstract]
  • On Subtyping-Relation Completeness, with an Application to Iso-Recursive Types. Jay Ligatti, Jeremy Blackburn, and Michael Nachtigal. ACM Transactions on Programming Languages and Systems (TOPLAS), Vol 39, No 1, Article 4, Pages 1-36. ACM Press, March 2017. Local version. Well-known techniques exist for proving the soundness of subtyping relations with respect to type safety. However, completeness has not been treated with widely applicable techniques, as far as we are aware.

    This paper develops techniques for stating and proving that a subtyping relation is complete with respect to type safety and applies the techniques to the study of iso-recursive subtyping. A new proof technique, induction on failing derivations, is provided that may be useful in other domains as well.

    The common subtyping rules for iso-recursive types---the "Amber rules"---are shown to be incomplete with respect to type safety. That is, there exist iso-recursive types t1 and t2 such that t1 can safely be considered a subtype of t2, but t1<=t2 is not derivable with the Amber rules.

    New, algorithmic rules are defined for subtyping iso-recursive types, and the rules are proved sound and complete with respect to type safety. The fully implemented subtyping algorithm is optimized to run in O(mn) time, where m is the number of mu-terms in the types being considered and n is the size of the types being considered.
    [Abstract]
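For context, the "Amber rules" referenced in the abstract above center on the following standard rule for subtyping recursive types, reproduced here in its commonly stated form (the paper's notation may differ slightly): to relate two mu-types, assume the bound variables are related and compare the bodies.

```latex
% The Amber rule for iso-recursive subtyping.
\[
\frac{\Gamma,\; X \le Y \;\vdash\; S \le T}
     {\Gamma \;\vdash\; \mu X.\, S \;\le\; \mu Y.\, T}
\quad (\textsc{Amber})
\]
```

The paper's incompleteness result says there are safely substitutable pairs of iso-recursive types that this rule (with the other standard rules) cannot derive, motivating the new sound-and-complete algorithmic rules.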

▶ 2016

  • Induction on Failing Derivations. Jay Ligatti. Technical Report PL-Sep13. University of South Florida, March 2016. (This is a major revision to the September 2013 version.) A proof technique, called induction on failing derivations, is introduced. We wish to prove properties of judgments in deductive systems. Standard techniques exist for proving such properties on valid judgments; this note defines a technique for proving such properties on invalid (underivable) judgments. [Abstract]
  • On Subtyping-Relation Completeness, with an Application to Iso-Recursive Types. Jay Ligatti, Jeremy Blackburn, and Michael Nachtigal. Technical Report. University of South Florida, March 2016. (This is a major revision to the August 2014 version of the technical report, which itself was a major revision of technical report CSE-071012, "Completely Subtyping Iso-Recursive Types", from July 2012.) Well-known techniques exist for proving the soundness of subtyping relations with respect to type safety. However, completeness has not been treated with widely applicable techniques, as far as we are aware.

    This paper develops techniques for stating and proving that a subtyping relation is complete with respect to type safety and applies the techniques to the study of iso-recursive subtyping. A new proof technique, induction on failing derivations, is provided that may be useful in other domains as well.

    The common subtyping rules for iso-recursive types---the "Amber rules"---are shown to be incomplete with respect to type safety. That is, there exist iso-recursive types t1 and t2 such that t1 can safely be considered a subtype of t2, but t1<=t2 is not derivable with the Amber rules.

    New, algorithmic rules are defined for subtyping iso-recursive types, and the rules are proved sound and complete with respect to type safety. The fully implemented subtyping algorithm is optimized to run in O(mn) time, where m is the number of mu-terms in the types being considered and n is the size of the types being considered.
    [Abstract]

▶ 2015

  • A Theory of Gray Security Policies. Donald Ray and Jay Ligatti. Proceedings of the European Symposium on Research in Computer Security (ESORICS), September 2015. This paper generalizes traditional models of security policies, from specifications of whether programs are secure, to specifications of how secure programs are. This is a generalization from qualitative, black-and-white policies to quantitative, gray policies. Included are generalizations from traditional definitions of safety and liveness policies to definitions of gray-safety and gray-liveness policies. These generalizations preserve key properties of safety and liveness, including that the intersection of safety and liveness is a unique allow-all policy and that every policy can be written as the conjunction of a single safety and a single liveness policy. It is argued that the generalization provides several benefits, including that it serves as a unifying framework for disparate approaches to security metrics, and that it separates---in a practically useful way---specifications of how secure systems are from specifications of how secure users require their systems to be. [Abstract]
  • Design of Adiabatic Dynamic Differential Logic for DPA-Resistant Secure Integrated Circuits. Matthew Morrison, Nagarajan Ranganathan, and Jay Ligatti. IEEE Transactions on Very Large Scale Integration Systems (TVLSI), Vol 23, No 8, pp 1381-1389. IEEE, August 2015. Production of cost-effective secure integrated chips, such as smart cards, requires hardware designers to consider tradeoffs in size, security, and power consumption. To design successful security-centric designs, the low-level hardware must contain built-in protection mechanisms to supplement cryptographic algorithms, such as advanced encryption standard and triple data encryption standard, by preventing side-channel attacks, such as differential power analysis (DPA). Dynamic logic obfuscates the output waveforms and the circuit operation, reducing the effectiveness of the DPA attack. For stronger mitigation of DPA attacks, we propose the implementation of adiabatic dynamic differential logic (ADDL) for applications in secure integrated circuit (IC) design. Such an approach is effective in reducing power consumption, demonstrated using HSPICE simulations with 22-nm predictive technology. The benefits of our design are demonstrated by comparing instantaneous power waveforms and observing the magnitude of differential power spikes during switching events. First, simulation results for body biasing on subthreshold adiabatic inverters show an improvement in differential power of up to 43.28% over similar inverters without body biasing. Then, a high-performance ADDL is presented for an implementation in high-frequency secure ICs. This method improves the differential power over previous dynamic and differential logic methods by up to 89.65%. Finally, we propose a body-biased ADDL for ultralow power applications. Simulation results show that the differential power improved by a factor of 199.16. [Abstract]
  • Modeling Runtime Enforcement with Mandatory Results Automata. Egor Dolzhenko, Jay Ligatti, and Srikar Reddy. International Journal of Information Security (IJIS), Vol 14, No 1, pp 47-60. Springer, February 2015. (preliminary version) This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. The operational semantics of MRAs is simple and enables straightforward definitions of concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are simple and expressive. Putting all of these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies deterministic and nondeterministic MRAs can and cannot enforce. [Abstract]
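The distinguishing feature of MRAs, as the abstract notes, is that the monitor mediates both the actions an application attempts and the results it receives. A minimal sketch of that interposition pattern, with an invented action encoding and sandbox policy purely for illustration:

```python
# Toy sketch of a Mandatory Results Automaton: the monitor intercepts
# each security-relevant action, may suppress or transform it, and
# mandates the result returned to the untrusted application.
# (The action encoding and policy are illustrative, not from the paper.)

def execute(action):
    """Stand-in for the system actually executing an action."""
    kind, arg = action
    return f"handle:{arg}"

class SandboxMRA:
    """Allow file opens only under /sandbox/; otherwise mandate a
    benign result (None) without executing the action at all."""
    ROOT = "/sandbox/"

    def step(self, action):
        kind, path = action
        if kind == "open" and not path.startswith(self.ROOT):
            return None               # suppress: mandate a result directly
        result = execute(action)      # deemed safe: let it run
        return result                 # (a real MRA could transform this too)
```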

▶ 2014

  • Defining Injection Attacks. Donald Ray and Jay Ligatti. Proceedings of the 17th International Information Security Conference (ISC), October 2014. This paper defines and analyzes injection attacks. The definition is based on the NIE property, which states that an application's untrusted inputs must only produce Noncode Insertions or Expansions in output programs (e.g., SQL queries). That is, when applications generate output programs based on untrusted inputs, the NIE property requires that inputs only affect output programs by inserting or expanding noncode tokens (e.g., string and float literals, lambda values, pointers, etc). This paper calls attacks based on violating the NIE property BroNIEs (i.e., Broken NIEs) and shows that all code-injection attacks are BroNIEs. In addition, BroNIEs contain many malicious injections that do not involve injections of code; we call such attacks noncode-injection attacks. In order to mitigate both code- and noncode-injection attacks, this paper presents an algorithm for detecting and preventing BroNIEs. [Abstract]
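The NIE property in this abstract can be illustrated with a small checker: build the output program, track which characters came from untrusted input, tokenize, and flag any input-influenced token that is not a noncode token. The tokenizer and single-placeholder template below are deliberate simplifications, not the paper's algorithm:

```python
import re

# Toy sketch of checking the NIE property: untrusted input may only
# insert or expand noncode tokens (here: string and number literals)
# in the generated SQL query.

TOKEN = re.compile(r"'[^']*'|\d+|\w+|[^\s\w]")

def nie_violation(template, inp):
    """Substitute inp for {} in template, then report whether any
    token influenced by inp is a code token (a BroNIE)."""
    start = template.index("{}")
    query = template.replace("{}", inp)
    end = start + len(inp)
    for m in TOKEN.finditer(query):
        overlaps = m.start() < end and m.end() > start
        is_literal = m.group().startswith("'") or m.group().isdigit()
        if overlaps and not is_literal:
            return True
    return False
```

A well-quoted input contributes only a string literal, while a classic injection contributes keywords and operators, which this check flags.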
  • Fingerprinting Far Proximity from Radio Emissions. Tao Wang, Yao Liu, and Jay Ligatti. Proceedings of the European Symposium on Research in Computer Security (ESORICS), September 2014. As wireless mobile devices become increasingly pervasive and adopted in critical applications, it is becoming increasingly important to measure the physical proximity of these devices in a secure way. Although various techniques have been developed to identify whether a device is close, the problem of identifying far proximity (i.e., that a target is at least a certain distance away) has been neglected by the research community. Meanwhile, verifying far proximity is desirable and critical to enhance the security of emerging wireless applications. In this paper, we propose a secure far proximity identification approach that determines whether or not a remote device is far away. The key idea of the proposed approach is to estimate the far proximity from the unforgeable "fingerprint" of the proximity. We have validated and evaluated the effectiveness of the proposed far proximity identification method through experiments on real measured channel data. The experiment results show that the proposed approach can detect far proximity with a success rate of 0.85 for the non-line-of-sight (NLoS) scenario, and the success rate can be further increased to 0.99 for the line-of-sight (LoS) scenario. [Abstract]
  • Defining Injection Attacks. Donald Ray and Jay Ligatti. Technical Report CSE-TR-081114. University of South Florida, August 2014. This paper defines and analyzes injection attacks. The definition is based on the NIE property, which states that an application's untrusted inputs must only produce Noncode Insertions or Expansions in output programs (e.g., SQL queries). That is, when applications generate output programs based on untrusted inputs, the NIE property requires that inputs only affect output programs by inserting or expanding noncode tokens (e.g., string and float literals, lambda values, pointers, etc). This paper calls attacks based on violating the NIE property BroNIEs (i.e., Broken NIEs) and shows that all code-injection attacks are BroNIEs. In addition, BroNIEs contain many malicious injections that do not involve injections of code; we call such attacks noncode-injection attacks. In order to mitigate both code- and noncode-injection attacks, this paper presents an algorithm for detecting and preventing BroNIEs. [Abstract]

▶ 2013

  • Modeling Runtime Enforcement with Mandatory Results Automata. Egor Dolzhenko, Jay Ligatti, and Srikar Reddy. Technical Report USF-CSE-1213, University of South Florida, December 2013. This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. Because previous work could not model general security monitors transforming results, MRAs capture realistic behaviors outside the scope of previous models. MRAs also have a simple operational semantics that makes it straightforward to define concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are simpler and more expressive than those of previous models. Putting all these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies deterministic and nondeterministic MRAs can and cannot enforce. [Abstract]

▶ 2012

  • Enforcing More with Less: Formalizing Target-aware Run-time Monitors. Yannis Mallios, Lujo Bauer, Dilsun Kaynar, and Jay Ligatti. Proceedings of the International Workshop on Security and Trust Management (STM), September 2012. Run-time monitors ensure that untrusted software and system behavior adheres to a security policy. This paper defines an expressive formal framework, based on I/O automata, for modeling systems, policies, and run-time monitors in more detail than is typical. We explicitly model, for example, the environment, applications, and the interaction between them and monitors. The fidelity afforded by this framework allows us to explicitly formulate and study practical constraints on policy enforcement that were often only implicit in previous models, providing a more accurate view of what can be enforced by monitoring in practice. We introduce two definitions of enforcement, target-specific and generalized, that allow us to reason about practical monitoring scenarios. Finally, we provide some meta-theoretical comparison of these definitions and we apply them to investigate policy enforcement in scenarios where the monitor designer has knowledge of the target application and show how this can be exploited to make more efficient design choices. [Abstract]
  • A Location-based Policy-specification Language for Mobile Devices. Joshua Finnis, Nalin Saigal, Adriana Iamnitchi, and Jay Ligatti. Pervasive and Mobile Computing Journal, Vol 8, No 3, pp 402-414. Elsevier, June 2012. Local version. The dramatic rise in mobile applications has greatly increased threats to the security and privacy of users. Security mechanisms on mobile devices are currently limited, so users need more expressive ways to ensure that downloaded mobile applications do not act maliciously. Policy-specification languages were created for this purpose; they allow the enforcement of user-defined policies on third-party applications. We have implemented LoPSiL, a location-based policy-specification language for mobile devices. This article describes LoPSiL’s design and implementation, several example policies, and experiments that demonstrate LoPSiL’s viability for enforcing policies on mobile devices. [Abstract]
  • Defining Code-injection Attacks. Donald Ray and Jay Ligatti. Proceedings of the ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), January 2012. This paper shows that existing definitions of code-injection attacks (e.g., SQL-injection attacks) are flawed. The flaws make it possible for attackers to circumvent existing mechanisms, by supplying code-injecting inputs that are not recognized as such. The flaws also make it possible for benign inputs to be treated as attacks. After describing these flaws in conventional definitions of code-injection attacks, this paper proposes a new definition, which is based on whether the symbols input to an application get used as (normal-form) values in the application’s output. Because values are already fully evaluated, they cannot be considered “code” when injected. This simple new definition of code-injection attacks avoids the problems of existing definitions, improves our understanding of how and when such attacks occur, and enables us to evaluate the effectiveness of mechanisms for mitigating such attacks. [Abstract]

▶ 2011

  • PoliSeer: A Tool for Managing Complex Security Policies. Daniel Lomsak and Jay Ligatti. Journal of Information Processing, Vol 19, pp 292-306, Information Processing Society of Japan, July 2011. Complex software-security policies are difficult to specify, understand, and update. The same is true for complex software in general, but while many tools and techniques exist for decomposing complex general software into simpler reusable modules (packages, classes, functions, aspects, etc.), few tools exist for decomposing complex security policies into simpler reusable modules. The tools that do exist for modularizing policies either encapsulate entire policies as atomic modules that cannot be decomposed or allow fine-grained policy modularization but require expertise to use correctly. This paper presents PoliSeer, a GUI-based tool designed to enable users who are not expert policy engineers to flexibly specify, visualize, modify, and enforce complex runtime policies on untrusted software. PoliSeer users rely on expert policy engineers to specify universally composable policy modules; PoliSeer users then build complex policies by composing those expert-written modules. This paper describes the design and implementation of PoliSeer and a case study in which we have used PoliSeer to specify and enforce a policy on PoliSeer itself. [Abstract]

▶ 2010

  • A Theory of Runtime Enforcement, with Results. Jay Ligatti and Srikar Reddy. Proceedings of the European Symposium on Research in Computer Security (ESORICS), September 2010. This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. Because previous work could not model monitors transforming results, MRAs capture realistic behaviors outside the scope of previous models. MRAs also have a simple but realistic operational semantics that makes it straightforward to define concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are significantly simpler and more expressive than those of previous models. Putting all these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies MRAs can and cannot enforce. [Abstract]
  • A Packet-classification Algorithm for Arbitrary Bitmask Rules, with Automatic Time-space Tradeoffs. Jay Ligatti, Josh Kuhn, and Chris Gage. Proceedings of the IEEE International Conference on Computer Communication Networks (ICCCN), August 2010. We present an algorithm for classifying packets according to arbitrary (including noncontiguous) bitmask rules. As its principal novelty, the algorithm is parameterized by the amount of memory available and can customize its data structures to optimize classification time without exceeding a given memory bound. The algorithm thus automatically trades time for space efficiency as needed. The two extremes of this time-space tradeoff (linear search through the rules versus a single table that maps every possible packet to its class number) are special cases of the general algorithm we present. Additional features of the algorithm include its simplicity, its open-source prototype implementation, its good performance even with worst-case rule sets, and its extendability to handle range rules and dynamic updates to rule sets. [Abstract]
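The linear-search extreme of the time-space tradeoff described in this abstract is easy to sketch: each rule is a (mask, value) pair, and a packet matches when masking it yields the rule's value, which is what lets masks be noncontiguous. The rule set below is a made-up example:

```python
# Toy sketch of arbitrary-bitmask rule matching: the linear-search
# extreme of the time-space tradeoff (the other extreme precomputes
# one table entry per possible packet).

def classify(packet, rules):
    """Return the class of the first matching rule, or None."""
    for mask, value, cls in rules:
        if packet & mask == value:
            return cls
    return None

rules = [
    (0b11110000, 0b10100000, "A"),   # match on the high nibble only
    (0b00001001, 0b00001001, "B"),   # match two noncontiguous bits
]
```

The single-table extreme, for 8-bit packets, would be `table = [classify(p, rules) for p in range(256)]`, trading memory for constant-time lookup.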
  • PoliSeer: A Tool for Managing Complex Security Policies. Daniel Lomsak and Jay Ligatti. Proceedings of the International Conference on Trust Management (IFIP-TM), June 2010. Few tools exist for decomposing complex security policies into simpler modules. The policy-engineering tools that do exist either encapsulate entire policies as atomic, indecomposable modules or allow fine-grained modularization but are complicated and lack policy-visualization capabilities. This paper briefly presents PoliSeer, the first tool we are aware of that allows complex policies to be specified, visualized, modified, and enforced as compositions of simpler policy modules. [Abstract]
  • A Theory of Runtime Enforcement, with Results. Jay Ligatti and Srikar Reddy. Technical Report USF-CSE-SS-102809, University of South Florida, October 2009, revised June 2010. This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. Because previous work could not model monitors transforming results, MRAs capture realistic behaviors outside the scope of previous models. MRAs also have a simple but realistic operational semantics that makes it straightforward to define concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are significantly simpler and more expressive than those of previous models. Putting all these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies MRAs can and cannot enforce. [Abstract]

▶ 2009

  • PoliSeer: A Tool for Managing Complex Security Policies. Daniel Lomsak and Jay Ligatti. Technical Report CSE-SSec-112509, University of South Florida, November 2009. Complex security policies are difficult to specify, understand, and update. The same is true for complex software in general, but while many software-engineering tools exist for decomposing complex general software into simpler reusable modules (packages, classes, functions, aspects, etc.), few policy-engineering tools exist for decomposing complex security policies into simpler reusable modules. The tools that do exist for modularizing policies either encapsulate entire policies as atomic, indecomposable modules or allow fine-grained modularization but are complicated and lack policy-visualization capabilities.

    This paper presents PoliSeer, the first tool we are aware of that allows engineers to specify, visualize, modify, and enforce complex policies as compositions of simpler policy modules. We describe the design and implementation of PoliSeer, as well as a case study in which we have bootstrapped PoliSeer by using it to specify and enforce a policy on itself.
    [Abstract]
  • IVCon: Inline Visualization of Concerns. Nalin Saigal and Jay Ligatti. Technical Report CSE-110909-SE, University of South Florida, November 2009. Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. This paper describes IVCon, a tool with a novel approach for completely modularizing CCCs. IVCon enables users to create, examine, and modify their code in two different views: the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon provides an interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views. [Abstract]
  • Control-Flow Integrity: Principles, Implementations, and Applications. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. ACM Transactions on Information and System Security, Vol 13, No 1, pp 1-40. ACM Press, October 2009. Local version. Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions. [Abstract]
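One of the mechanisms this abstract mentions building on CFI, the protected shadow call stack, has a simple core idea that can be sketched as follows (the addresses and class shape are illustrative):

```python
# Toy sketch of a shadow call stack: every call records its return
# address on a separate, protected stack, and every return is checked
# against it before control is allowed to transfer.

class ShadowStack:
    def __init__(self):
        self._stack = []

    def on_call(self, return_addr):
        self._stack.append(return_addr)

    def on_return(self, target_addr):
        """Permit the return only if it goes back where the call came from."""
        expected = self._stack.pop()
        if target_addr != expected:
            raise RuntimeError("CFI violation: corrupted return address")
        return target_addr
```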
  • Inline Visualization of Concerns. Nalin Saigal and Jay Ligatti. Proceedings of the ACIS International Conference on Software Engineering Research, Management, and Applications (SERA), December 2009. Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. This paper describes IVCon, a tool with a novel approach for completely modularizing CCCs. IVCon enables users to create, examine, and modify their code in two different views: the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon aims to provide an easy-to-use interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views. [Abstract]
  • LoPSiL: A Location-based Policy-specification Language. Jay Ligatti, Billy Rickey, and Nalin Saigal. Proceedings of the International ICST Conference on Security and Privacy in Mobile Information and Communication Systems (MobiSec), June 2009. This paper describes the design of LoPSiL, a language for specifying location-dependent security and privacy policies. Policy-specification languages like LoPSiL are domain-specific programming languages intended to simplify the tasks of specifying and enforcing sound security policies on untrusted (i.e., potentially insecure) software. As far as we are aware, LoPSiL is the first imperative policy-specification language to provide abstractions specifically tailored to location-dependent policies for mobile-device applications. We have implemented a proof-of-concept compiler that inputs a LoPSiL policy P and a mobile-device application program A and outputs a new application program A’ equivalent to A, except that A’ contains inlined enforcement code that ensures that A’ satisfies P at runtime. We report our experiences using this compiler to design and implement several policies for mobile-device applications. [Abstract]
  • Composing Expressive Runtime Security Policies. Lujo Bauer, Jay Ligatti, and David Walker. ACM Transactions on Software Engineering and Methodology (TOSEM), Vol 18, No 3, pp 1-43. ACM Press, May 2009. Local version. Program monitors enforce security policies by interposing themselves into the control flow of untrusted software whenever that software attempts to execute security-relevant actions. At the point of interposition, a monitor has authority to permit or deny (perhaps conditionally) the untrusted software’s attempted action. Program monitors are common security enforcement mechanisms and integral parts of operating systems, virtual machines, firewalls, network auditors, and anti-virus and anti-spyware tools.

    Unfortunately, the run-time policies we require program monitors to enforce grow more complex both as the monitored software is given new capabilities and as policies are refined in response to attacks and user feedback. We propose dealing with policy complexity by organizing policies in such a way as to make them composable, so that complex policies can be specified more simply as compositions of smaller subpolicy modules. We present a fully implemented language and system called Polymer that allows security engineers to specify and enforce composable policies on Java applications. We formalize the central workings of Polymer by defining an unambiguous semantics for our language. Using this formalization, we state and prove an uncircumventability theorem, which guarantees that monitors will intercept all security-relevant actions of untrusted software.
    [Abstract]
  • Run-Time Enforcement of Nonsafety Policies. Jay Ligatti, Lujo Bauer, and David Walker. ACM Transactions on Information and System Security (TISSEC), Vol 12, No 3, pp 1-41. ACM Press, January 2009. Local version. A common mechanism for ensuring that software behaves securely is to monitor programs at run time and check that they dynamically adhere to constraints specified by a security policy. Whenever a program monitor detects that untrusted software is attempting to execute a dangerous action, it takes remedial steps to ensure that only safe code actually gets executed.

    This article improves our understanding of the space of policies enforceable by monitoring the run-time behaviors of programs. We begin by building a formal framework for analyzing policy enforcement: we precisely define policies, monitors, and enforcement. This framework allows us to prove that monitors enforce an interesting set of policies that we call the infinite renewal properties. We show how to construct a program monitor that provably enforces any reasonable infinite renewal property. We also show that the set of infinite renewal properties includes some nonsafety policies, i.e., that monitors can enforce some nonsafety (including some purely liveness) policies. Finally, we demonstrate concrete examples of nonsafety policies enforceable by practical run-time monitors.
    [Abstract]

▶ 2008

  • Defining and Visualizing Many-to-many Relationships between Concerns and Code. Nalin Saigal and Jay Ligatti. Technical Report CSE-090608-SE, University of South Florida, September 2008. Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. In this paper, we describe IVCon, a GUI-based tool that provides a novel approach to modularization of CCCs. IVCon enables users to create, examine, and modify their code in two different views, the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon aims to provide an easy-to-use interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views. [Abstract]

▶ 2007

  • Fault-tolerant Typed Assembly Language. Frances Perry, Lester Mackey, George Reis, Jay Ligatti, David August, and David Walker. Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), June 2007. A transient hardware fault occurs when an energetic particle strikes a transistor, causing it to change state. Although transient faults do not permanently damage the hardware, they may corrupt computations by altering stored values and signal transfers. In this paper, we propose a new scheme for provably safe and reliable computing in the presence of transient hardware faults. In our scheme, software computations are replicated to provide redundancy while special instructions compare the independently computed results to detect errors before writing critical data. In stark contrast to any previous efforts in this area, we have analyzed our fault tolerance scheme from a formal, theoretical perspective. To be specific, first, we provide an operational semantics for our assembly language, which includes a precise formal definition of our fault model. Second, we develop an assembly-level type system designed to detect reliability problems in compiled code. Third, we provide a formal specification for program fault tolerance under the given fault model and prove that all well-typed programs are indeed fault tolerant. In addition to the formal analysis, we evaluate our detection scheme and show that it only takes 34% longer to execute than the unreliable version. [Abstract]
  • Fault-tolerant Typed Assembly Language. Frances Perry, Lester Mackey, George Reis, Jay Ligatti, David August, and David Walker. Technical Report TR-776-07, Princeton University, April 2007. A transient hardware fault occurs when an energetic particle strikes a transistor, causing it to change state. Although transient faults do not permanently damage the hardware, they may corrupt computations by altering stored values and signal transfers. In this paper, we propose a new scheme for provably safe and reliable computing in the presence of transient hardware faults. In our scheme, software computations are replicated to provide redundancy while special instructions compare the independently computed results to detect errors before writing critical data. In stark contrast to any previous efforts in this area, we have analyzed our fault tolerance scheme from a formal, theoretical perspective. To be specific, first, we provide an operational semantics for our assembly language, which includes a precise formal definition of our fault model. Second, we develop an assembly-level type system designed to detect reliability problems in compiled code. Third, we provide a formal specification for program fault tolerance under the given fault model and prove that all well-typed programs are indeed fault tolerant. In addition to the formal analysis, we evaluate our detection scheme and show that it only takes 34% longer to execute than the unreliable version. [Abstract]

▶ 2006

  • A Type-theoretic Interpretation of Pointcuts and Advice. Jay Ligatti, David Walker, and Steve Zdancewic. Science of Computer Programming: Special Issue on Foundations of Aspect-Oriented Programming, Vol 63, No 3, pp 240-266. Elsevier, December 2006. Local version. This article defines the semantics of MinAML, an idealized aspect-oriented programming language, by giving a type-directed translation from a user-friendly external language to a compact, well-defined core language. We argue that our framework is an effective way to give semantics to aspect-oriented programming languages in general because the translation eliminates shallow syntactic differences between related constructs and permits definition of an elegant and extensible core language.

    The core language extends the simply-typed lambda calculus with two central new abstractions: explicitly labeled program points and first-class advice. The labels serve both to trigger advice and to mark continuations that the advice may return to. These constructs are defined orthogonally to the other features of the language and we show that our abstractions can be used in both functional and object-oriented contexts. We prove Preservation and Progress lemmas for our core language and show that the translation from MinAML source into core is type-preserving. Together these two results imply that the source language is type safe. We also consider several extensions to our basic framework including a general mechanism for analyzing the current call stack.
    [Abstract]
  • Static Typing for a Faulty Lambda Calculus. David Walker, Lester Mackey, Jay Ligatti, George Reis, and David August. Proceedings of the ACM SIGPLAN International Conference on Functional Programming (ICFP), September 2006. A transient hardware fault occurs when an energetic particle strikes a transistor, causing it to change state. These faults do not cause permanent damage, but may result in incorrect program execution by altering signal transfers or stored values. While the likelihood that such transient faults will cause any significant damage may seem remote, over the last several years transient faults have caused costly failures in high-end machines at America Online, eBay, and the Los Alamos Neutron Science Center, among others [6, 44, 15]. Because susceptibility to transient faults is proportional to the size and density of transistors, the problem of transient faults will become increasingly important in the coming decades.

    This paper defines the first formal, type-theoretic framework for studying reliable computation in the presence of transient faults. More specifically, it defines λzap, a lambda calculus that exhibits intermittent data faults. In order to detect and recover from these faults, λzap programs replicate intermediate computations and use majority voting, thereby modeling software-based fault tolerance techniques studied extensively, but informally [10, 20, 30, 31, 32, 33, 41].

    To ensure that programs maintain the proper invariants and use λzap primitives correctly, the paper defines a type system for the language. This type system guarantees that well-typed programs can tolerate any single data fault. To demonstrate that λzap can serve as an idealized typed intermediate language, we define a type-preserving translation from a standard simply-typed lambda calculus into λzap.
    [Abstract]
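The replicate-and-vote pattern that lambda_zap models can be sketched in a few lines; this is the informal software technique the calculus formalizes, not the calculus itself:

```python
# Toy sketch of the fault-tolerance pattern behind lambda_zap:
# replicate an intermediate computation three times and take a
# majority vote, so any single transient data fault is masked.

def vote(a, b, c):
    """Majority vote over three replicas of one value."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("unrecoverable: all three replicas disagree")

def reliable(f, x):
    """Run f(x) in triplicate and vote on the result."""
    return vote(f(x), f(x), f(x))
```

If one replica's value is corrupted between computation and voting, the other two still agree and the fault is masked.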
  • Policy Enforcement via Program Monitoring. Jarred Adam Ligatti. PhD thesis, Princeton University, June 2006. One way to guarantee that software behaves securely is to monitor programs at run time and check that they dynamically adhere to constraints specified by a security policy. Whenever a program monitor detects that untrusted software is attempting to execute a dangerous action, it takes remedial steps to ensure that only safe code actually gets executed. This thesis considers the space of policies enforceable by monitoring the run-time behaviors of programs and develops a practical language for specifying monitors’ policies.

    In order to delineate the space of policies that monitors can enforce, we first have to define exactly what it means for a monitor to enforce a policy. We therefore begin by building a formal framework for analyzing policy enforcement; we precisely define policies, monitors, and enforcement. Having this framework allows us to consider the enforcement powers of program monitors and prove that they enforce an interesting set of policies that we define and call the infinite renewal properties. We show how, when given any reasonable infinite renewal property, to construct a program monitor that provably enforces that policy.

    In practice, the security policies enforced by program monitors grow more complex both as the monitored software is given new capabilities and as policies are refined in response to attacks and user feedback. We propose dealing with policy complexity by organizing policies in such a way as to make them composable, so that complex policies can be specified more simply as compositions of smaller subpolicy modules. We present a fully implemented language and system called Polymer that allows security engineers to specify and enforce composable policies on Java applications. We also formalize the central workings of Polymer by defining an unambiguous semantics for our language.
    [Abstract]

▶ 2005

  • Control-Flow Integrity: Principles, Implementations, and Applications. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS), November 2005. Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions. [Abstract]
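    The basic CFI check can be sketched in Python (labels and names here are illustrative, not the paper's machine-code instrumentation): each valid indirect-call target carries a label, and the call site verifies the label before transferring control.

    ```python
    # Illustrative analogue of a CFI label check: machine-code CFI tags valid
    # indirect-branch destinations and checks the tag before each transfer.
    CFI_ATTR = "cfi_label"

    def label(name):
        """Tag a function as a valid indirect-call target with the given label."""
        def tag(fn):
            setattr(fn, CFI_ATTR, name)
            return fn
        return tag

    def checked_call(fn, expected_label, *args):
        """Indirect call that first verifies the destination's label."""
        if getattr(fn, CFI_ATTR, None) != expected_label:
            raise RuntimeError("CFI violation: unexpected indirect-call target")
        return fn(*args)

    @label("callback")
    def safe_handler(x):
        return x * 2

    def hijacked(x):  # not a labeled target; a checked call to it fails
        return x
    ```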
  • A Theory of Secure Control Flow. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Proceedings of the 7th International Conference on Formal Engineering Methods (ICFEM), November 2005. Control-Flow Integrity (CFI) means that the execution of a program dynamically follows only certain paths, in accordance with a static policy. CFI can prevent attacks that, by exploiting buffer overflows and other vulnerabilities, attempt to control program behavior. This paper develops the basic theory that underlies two practical techniques for CFI enforcement, with precise formulations of hypotheses and guarantees. [Abstract]
  • Enforcing Non-safety Security Policies with Program Monitors. Jay Ligatti, Lujo Bauer, and David Walker. Proceedings of the 10th European Symposium on Research in Computer Security (ESORICS), September 2005. We consider the enforcement powers of program monitors, which intercept security-sensitive actions of a target application at run time and take remedial steps whenever the target attempts to execute a potentially dangerous action. A common belief in the security community is that program monitors, regardless of the remedial steps available to them when detecting violations, can only enforce safety properties. We formally analyze the properties enforceable by various program monitors and find that although this belief is correct when considering monitors with simple remedial options, it is incorrect for more powerful monitors that can be modeled by edit automata. We define an interesting set of properties called infinite renewal properties and demonstrate how, when given any reasonable infinite renewal property, to construct an edit automaton that provably enforces that property. We analyze the set of infinite renewal properties and show that it includes every safety property, some liveness properties, and some properties that are neither safety nor liveness. [Abstract]
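    The edit-automaton idea can be sketched in Python with a classic non-safety example (illustrative, not taken from the paper): an atomicity policy that suppresses a transaction's actions until it commits, then inserts them into the output, and discards them entirely if the transaction aborts.

    ```python
    def edit_automaton(actions):
        """Sketch of an edit-automaton-style monitor enforcing atomicity:
        actions between 'begin' and 'commit' are suppressed (buffered) and
        inserted into the output only on commit; an aborted transaction's
        actions never reach the output."""
        out, buffer, in_txn = [], [], False
        for a in actions:
            if a == "begin":
                in_txn, buffer = True, []
            elif a == "commit" and in_txn:
                out.extend(buffer)   # insert the suppressed actions
                in_txn = False
            elif a == "abort" and in_txn:
                in_txn, buffer = False, []
            elif in_txn:
                buffer.append(a)     # suppress for now
            else:
                out.append(a)
        return out
    ```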
  • Composing Security Policies with Polymer. Lujo Bauer, Jay Ligatti, and David Walker. Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), June 2005. We introduce a language and system that supports definition and composition of complex run-time security policies for Java applications. Our policies are comprised of two sorts of methods. The first is query methods that are called whenever an untrusted application tries to execute a security-sensitive action. A query method returns a suggestion indicating how the security-sensitive action should be handled. The second sort of methods are those that perform state updates as the policy’s suggestions are followed.

    The structure of our policies facilitates composition, as policies can query other policies for suggestions. In order to give programmers control over policy composition, we have designed the system so that policies, suggestions, and application events are all first-class objects that a higher-order policy may manipulate. We show how to use these programming features by developing a library of policy combinators.

    Our system is fully implemented, and we have defined a formal semantics for an idealized subset of the language containing all of the key features. We demonstrate the effectiveness of our system by implementing a large-scale security policy for an email client.
    [Abstract]
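    The query/suggestion structure of these policies can be sketched with a small Python analogue of Polymer's Java policies (the class and method names here are hypothetical): each policy's query method returns a suggestion for a security-sensitive action, and a combinator composes subpolicies by combining their suggestions.

    ```python
    # Hypothetical miniature of Polymer-style composable policies.
    class Policy:
        def query(self, action):
            return "allow"

    class NoNetwork(Policy):
        def query(self, action):
            return "deny" if action.startswith("net.") else "allow"

    class NoFileWrite(Policy):
        def query(self, action):
            return "deny" if action == "file.write" else "allow"

    class Conjunction(Policy):
        """Combinator: suggests allowing an action only if every subpolicy does."""
        def __init__(self, *subpolicies):
            self.subpolicies = subpolicies
        def query(self, action):
            suggestions = [p.query(action) for p in self.subpolicies]
            return "deny" if "deny" in suggestions else "allow"
    ```

    Because policies, like suggestions, are ordinary first-class objects here, further combinators (disjunction, precedence, auditing wrappers) follow the same pattern.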
  • Control-Flow Integrity. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Technical Report MSR-TR-2005-18, Microsoft Research, February 2005 (revised June 2005). Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions. [Abstract]
  • A Theory of Secure Control Flow. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Technical Report MSR-TR-2005-17, Microsoft Research, February 2005 (revised June 2005). Control-Flow Integrity (CFI) means that the execution of a program dynamically follows only certain paths, in accordance with a static policy. CFI can prevent attacks that, by exploiting buffer overflows and other vulnerabilities, attempt to control program behavior. This paper develops the basic theory that underlies two practical techniques for CFI enforcement, with precise formulations of hypotheses and guarantees. [Abstract]
  • Enforcing Non-safety Security Policies with Program Monitors. Jay Ligatti, Lujo Bauer, and David Walker. Technical Report TR-720-05, Princeton University, January 2005 (revised June 2005). We consider the enforcement powers of program monitors, which intercept security-sensitive actions of a target application at run time and take remedial steps whenever the target attempts to execute a potentially dangerous action. A common belief in the security community is that program monitors, regardless of the remedial steps available to them when detecting violations, can only enforce safety properties. We formally analyze the properties enforceable by various program monitors and find that although this belief is correct when considering monitors with simple remedial options, it is incorrect for more powerful monitors that can be modeled by edit automata. We define an interesting set of properties called infinite renewal properties and demonstrate how, when given any reasonable infinite renewal property, to construct an edit automaton that provably enforces that property. We analyze the set of infinite renewal properties and show that it includes every safety property, some liveness properties, and some properties that are neither safety nor liveness. [Abstract]
  • Edit Automata: Enforcement Mechanisms for Run-time Security Policies. Jay Ligatti, Lujo Bauer, and David Walker. International Journal of Information Security (IJIS), Vol 4, No 1-2, pp 2-16. Springer-Verlag, Feb 2005. Local Version. We analyze the space of security policies that can be enforced by monitoring and modifying programs at run time. Our program monitors, called edit automata, are abstract machines that examine the sequence of application program actions and transform the sequence when it deviates from a specified policy. Edit automata have a rich set of transformational powers: They may terminate the application, thereby truncating the program action stream; they may suppress undesired or dangerous actions without necessarily terminating the program; and they may also insert additional actions into the event stream.

    After providing a formal definition of edit automata, we develop a rigorous framework for reasoning about them and their cousins: truncation automata (which can only terminate applications), suppression automata (which can terminate applications and suppress individual actions), and insertion automata (which can terminate and insert). We give a set-theoretic characterization of the policies each sort of automaton can enforce, and we provide examples of policies that can be enforced by one sort of automaton but not another.
    [Abstract]

▶ 2004

  • A Language and System for Composing Security Policies. Lujo Bauer, Jay Ligatti, and David Walker. Technical Report TR-699-04, Princeton University, January 2004. We introduce a new language and system that allows security architects to develop well-structured and easy-to-maintain security policies for Java applications. In our system, policies are first-class objects. Consequently, programmers can define parameterized meta-policies that act as policy combinators and policy modifiers, so that complex security policies can be implemented by composing simple base policies. We demonstrate the effectiveness of our design by building up a library of powerful policy combinators and showing how they can be used. We also describe some issues we encountered while implementing our system and provide performance results. [Abstract]

▶ 2003

  • Types and Effects for Non-interfering Program Monitors. Lujo Bauer, Jarred Ligatti, and David Walker. In M. Okada, B. Pierce, A. Scedrov, H. Tokuda, and A. Yonezawa, editors, Lecture Notes in Computer Science: Software Security - Theories and Systems (Revised Papers of the 2002 Mext-NSF-JSPS International Symposium), Vol 2609, pp 154-171. Springer-Verlag, November 2003. A run-time monitor is a program that runs in parallel with an untrusted application and examines actions from the application's instruction stream. If the sequence of program actions deviates from a specified security policy, the monitor transforms the sequence or terminates the program. We present the design and formal specification of a language for defining the policies enforced by program monitors. Our language provides a number of facilities for composing complex policies from simpler ones. We allow policies to be parameterized by values or other policies, and we define operators for forming the conjunction and disjunction of policies. Since the computations that implement these policies modify program behavior, naive composition of computations does not necessarily produce the conjunction (or disjunction) of the policies that the computations implement separately. We use a type and effect system to ensure that computations do not interfere with one another when they are composed. [Abstract]
  • A Theory of Aspects. David Walker, Steve Zdancewic, and Jay Ligatti. Proceedings of the ACM SIGPLAN International Conference on Functional Programming (ICFP), August 2003. This paper defines the semantics of MinAML, an idealized aspect-oriented programming language, by giving a type-directed translation from its user-friendly external language to its compact, well-defined core language. We argue that our framework is an effective way to give semantics to aspect-oriented programming languages in general because the translation eliminates shallow syntactic differences between related constructs and permits definition of a clean, easy-to-understand, and easy-to-reason-about core language.

    The core language extends the simply-typed lambda calculus with two central new abstractions: explicitly labeled program points and first-class advice. The labels serve both to trigger advice and to mark continuations that the advice may return to. These constructs are defined orthogonally to the other features of the language and we show that our abstractions can be used in both functional and object-oriented contexts. The labels are well-scoped and the language as a whole is well-typed. Consequently, programmers can use lexical scoping in the standard way to prevent aspects from interfering with local program invariants.
    [Abstract]
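    The labeled-program-point idea can be sketched in Python (an illustrative analogue of MinAML's constructs, not its formal semantics): advice registered on a label runs whenever execution reaches a program point carrying that label, transforming the value flowing through it.

    ```python
    # Illustrative sketch: explicitly labeled program points plus first-class
    # advice, the two core abstractions of the paper's core language.
    advice_table = {}

    def advise(label_name, fn):
        """Register first-class advice on a label."""
        advice_table.setdefault(label_name, []).append(fn)

    def point(label_name, value):
        """A labeled program point: all advice on the label transforms value."""
        for fn in advice_table.get(label_name, []):
            value = fn(value)
        return value

    def compute(x):
        x = point("before_square", x)  # advice may observe/alter x here
        return x * x
    ```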
  • Edit Automata: Enforcement Mechanisms for Run-time Security Policies. Jay Ligatti, Lujo Bauer, and David Walker. Princeton University Technical Report TR-681-03, May 2003. We analyze the space of security policies that can be enforced by monitoring and modifying programs at run time. Our program monitors, called edit automata, are abstract machines that examine the sequence of application program actions and transform the sequence when it deviates from a specified policy. Edit automata have a rich set of transformational powers: They may terminate the application, thereby truncating the program action stream; they may suppress undesired or dangerous actions without necessarily terminating the program; and they may also insert additional actions into the event stream.

    After providing a formal definition of edit automata, we develop a rigorous framework for reasoning about them and their cousins: truncation automata (which can only terminate applications), suppression automata (which can terminate applications and suppress individual actions), and insertion automata (which can terminate and insert). We give a set-theoretic characterization of the policies each sort of automaton can enforce and we provide examples of policies that can be enforced by one sort of automaton but not another.
    [Abstract]

▶ 2002

  • A Calculus for Composing Security Policies. Lujo Bauer, Jarred Ligatti, and David Walker. Technical Report TR-655-02, Princeton University, August 2002. A runtime monitor is a program that runs in parallel with an untrusted application and examines actions from the application's instruction stream. If the sequence of program actions deviates from a specified security policy, the monitor transforms the sequence or terminates the program. We present the design and formal specification of a language for defining the policies enforced by program monitors.

    Our language provides a number of facilities for composing complex policies from simpler ones. We allow policies to be parameterized by values, or other policies. There are also operators for forming the conjunction and disjunction of policies. Since the computations that implement these policies modify program behavior, naive composition of computations does not necessarily produce the conjunction (or disjunction) of the policies that the computations implement separately. We use a type and effect system to ensure that computations do not interfere with one another when they are composed. We also present a preliminary implementation of our language.
    [Abstract]
  • More Enforceable Security Policies. Lujo Bauer, Jarred Ligatti, and David Walker. Foundations of Computer Security Workshop (FCS) '02 (associated with LICS '02), July 2002. We analyze the space of security policies that can be enforced by monitoring programs at runtime. Our program monitors are automata that examine the sequence of program actions and transform the sequence when it deviates from the specified policy. The simplest such automaton truncates the action sequence by terminating a program. Such automata are commonly known as security automata, and they enforce Schneider's EM class of security policies. We define automata with more powerful transformational abilities, including the ability to insert a sequence of actions into the event stream and to suppress actions in the event stream without terminating the program. We give a set-theoretic characterization of the policies these new automata are able to enforce and show that they are a superset of the EM policies. [Abstract]
  • More Enforceable Security Policies. Lujo Bauer, Jarred Ligatti, and David Walker. Technical Report TR-649-02, Princeton University, July 2002. We analyze the space of security policies that can be enforced by monitoring programs at runtime. Our program monitors are automata that examine the sequence of program actions and transform the sequence when it deviates from the specified policy. The simplest such automaton truncates the action sequence by terminating a program. Such automata are commonly known as security automata, and they enforce Schneider's EM class of security policies. We define automata with more powerful transformational abilities, including the ability to insert a sequence of actions into the event stream and to suppress actions in the event stream without terminating the program. We give a set-theoretic characterization of the policies these new automata are able to enforce and show that they are a superset of the EM policies. [Abstract]

Venue type: Journals, Conferences, Workshops, Thesis, Technical Reports

  • Co-Creation in Secure Software Development: Applied Ethnography and the Interface of Software and Development. Daniel Lende, Alexis Monkhouse, Jay Ligatti, and Xinming Ou. Human Organization: Special Issue on Intersection of Software and Human Systems, April 2023. Long-term ethnographic research conducted at a software company examined how security concerns and practices became part of software development. Participant observation over a two-year period was done by researchers with cybersecurity backgrounds and training in both computer science and qualitative research, with ongoing analysis done by a larger interdisciplinary team. In situ researchers joined as software engineers and participated in daily work activities while observing development practices and analyzing software (in)security. The first year of research found that improving security during software development can be helped by a co-creation model, whereby security experts work directly with software developers to provide security tools applicable to the specific software within the workflow. Researchers-as-developers fostered conversations, concerns, and considerations of how to implement security within the process of development. The second year used a situated learning approach to understand the interface between software development, security, and the development team. Through an interactive learning process, software engineers gathered knowledge and applied it, helping to foster greater concerns for security as part of the overall “culture” of development within the company. This locally situated co-creation approach has resonances with participatory approaches in business anthropology and implications for how to promote the co-creation of knowledge and expertise more broadly. [Abstract]
  • Far Proximity Identification in Wireless Systems. Tao Wang, Jian Weng, Jay Ligatti, and Yao Liu. IEEE Transactions on Dependable and Secure Computing (TDSC), Vol 18, No 5, pp 2403-2418. IEEE, September/October 2021. (preliminary version) As wireless mobile devices are more and more pervasive and adopted in critical applications, it is becoming increasingly important to measure the physical proximity of these devices in a secure way. Although various techniques have been developed to identify whether a device is close, the problem of identifying the far proximity (i.e., a target is at least a certain distance away) has been neglected by the research community. Meanwhile, verifying the far proximity is desirable and critical to enhance the security of emerging wireless applications. In this paper, we propose a secure far proximity identification approach that determines whether or not a remote device is far away. The key idea of the proposed approach is to estimate the far proximity from the unforgeable fingerprint of the proximity. We have validated and evaluated the effectiveness of the proposed far proximity identification method through experiments on real measured channel data. The experiment results show that the proposed approach can detect the far proximity with a success rate of 0.85 for the non-Line-of-sight (NLoS) scenario, and the success rate can be further increased to 0.99 for the Line-of-sight (LoS) scenario. [Abstract]
  • Cybersecurity Vulnerabilities in Mobile Fare Payment Applications: A Case Study. Kevin Dennis, Maxat Alibayev, Sean J. Barbeau, and Jay Ligatti. Transportation Research Record: Journal of the Transportation Research Board (TRR), Vol 2674, No 11, pp 616-624. Sage Publishing, November 2020. Mobile fare payment applications are becoming increasingly commonplace in the public transportation industry as both a customer convenience and an effort to reduce fare management costs and improve operations for agencies. However, there is relatively little literature on vulnerabilities and liabilities in mobile fare payment applications. Furthermore, few public agencies or supporting vendors have policies or established processes in place to receive vulnerability reports or patch vulnerabilities discovered in their technologies. Given the rapidly increasing number of data breaches in general industry IT systems, as well as the fact that mobile fare payment apps are a nexus between customer and agency financial information, the security of these mobile applications deserve further scrutiny. This paper presents a vulnerability discovered in a mobile fare payment application deployed at a transit agency in Florida that, due to the system architecture, may have affected customers in as many as 40 cities across the United States - an estimated 1,554,000 users. Lessons learned from the vulnerability disclosure process followed by the research team as well as recommendations for public agencies seeking to improve the security of these types of applications are also discussed. [Abstract]
  • On Subtyping-Relation Completeness, with an Application to Iso-Recursive Types. Jay Ligatti, Jeremy Blackburn, and Michael Nachtigal. ACM Transactions on Programming Languages and Systems (TOPLAS), Vol 39, No 1, Article 4, Pages 1-36. ACM Press, March 2017. Local version. Well-known techniques exist for proving the soundness of subtyping relations with respect to type safety. However, completeness has not been treated with widely applicable techniques, as far as we're aware.

    This paper develops techniques for stating and proving that a subtyping relation is complete with respect to type safety and applies the techniques to the study of iso-recursive subtyping. A new proof technique, induction on failing derivations, is provided that may be useful in other domains as well.

    The common subtyping rules for iso-recursive types---the "Amber rules"---are shown to be incomplete with respect to type safety. That is, there exist iso-recursive types t1 and t2 such that t1 can safely be considered a subtype of t2, but t1<=t2 is not derivable with the Amber rules.

    New, algorithmic rules are defined for subtyping iso-recursive types, and the rules are proved sound and complete with respect to type safety. The fully implemented subtyping algorithm is optimized to run in O(mn) time, where m is the number of mu-terms in the types being considered and n is the size of the types being considered.
    [Abstract]
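    The Amber rules discussed above can be sketched in Python over a tiny type language (the encoding is illustrative): two mu-types are compared by assuming their bound variables related and comparing their bodies, with the usual contravariant/covariant rule for arrows and, here, int <= real as the only base-case subtyping.

    ```python
    # Types: 'int', 'real', type variables (strings), ('->', arg, res),
    # and ('mu', var, body). An illustrative Amber-rule checker.
    def subtype(s, t, assumptions=frozenset()):
        if (s, t) in assumptions:   # bound variables assumed related
            return True
        if s == t:
            return True
        if s == "int" and t == "real":
            return True
        if isinstance(s, tuple) and isinstance(t, tuple):
            if s[0] == "->" and t[0] == "->":
                # contravariant in the argument, covariant in the result
                return (subtype(t[1], s[1], assumptions)
                        and subtype(s[2], t[2], assumptions))
            if s[0] == "mu" and t[0] == "mu":
                # Amber rule: assume the bound variables are related
                return subtype(s[2], t[2], assumptions | {(s[1], t[1])})
        return False
    ```

    As the article shows, rules of this shape are sound but incomplete with respect to type safety; the sketch only illustrates how the Amber-style assumption set works.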
  • Design of Adiabatic Dynamic Differential Logic for DPA-Resistant Secure Integrated Circuits. Matthew Morrison, Nagarajan Ranganathan, and Jay Ligatti. IEEE Transactions on Very Large Scale Integration Systems (TVLSI), Vol 23, No 8, pp 1381-1389. IEEE, August 2015. Production of cost-effective secure integrated chips, such as smart cards, requires hardware designers to consider tradeoffs in size, security, and power consumption. To design successful security-centric designs, the low-level hardware must contain built-in protection mechanisms to supplement cryptographic algorithms, such as advanced encryption standard and triple data encryption standard by preventing side-channel attacks, such as differential power analysis (DPA). Dynamic logic obfuscates the output waveforms and the circuit operation, reducing the effectiveness of the DPA attack. For stronger mitigation of DPA attacks, we propose the implementation of adiabatic dynamic differential logic (ADDL) for applications in secure integrated circuit (IC) design. Such an approach is effective in reducing power consumption, demonstrated using HSPICE simulations with 22-nm predictive technology. The benefits of our design are demonstrated by comparing instantaneous power waveforms and observing the magnitude of differential power spikes during switching events. First, simulation results for body biasing on subthreshold adiabatic inverters show an improvement in differential power up to 43.28% for similar inverters without body biasing. Then, a high-performance ADDL is presented for an implementation in high-frequency secure ICs. This method improves the differential power over previous dynamic and differential logic methods by up to 89.65%. Finally, we propose a body-biased ADDL for ultralow power applications. Simulation results show that the differential power was improved upon by a factor of 199.16. [Abstract]
  • Modeling Runtime Enforcement with Mandatory Results Automata. Egor Dolzhenko, Jay Ligatti, and Srikar Reddy. International Journal of Information Security, Vol 14, No 1, pp 47-60. Springer, February 2015. (preliminary version) This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. The operational semantics of MRAs is simple and enables straightforward definitions of concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are simple and expressive. Putting all of these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies deterministic and nondeterministic MRAs can and cannot enforce. [Abstract]
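    The distinguishing feature of MRAs, monitoring results as well as actions, can be sketched in Python (the action names and policy are illustrative): the monitor may short-circuit an action with a mandatory safe result, or transform the genuine result before releasing it to the application.

    ```python
    def mra_monitor(action, args, execute):
        """Sketch of an MRA-style monitor: it sees each security-relevant
        action before execution and each result before it returns."""
        # Transform/suppress the action itself, supplying a mandatory result.
        if action == "read_file" and args[0].startswith("/etc/"):
            return ""  # safe default result; the real action never runs
        result = execute(action, args)
        # Transform the result before it reaches the application.
        if action == "read_file":
            result = result.replace("secret", "[redacted]")
        return result
    ```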
  • A Location-based Policy-specification Language for Mobile Devices. Joshua Finnis, Nalin Saigal, Adriana Iamnitchi, and Jay Ligatti. Pervasive and Mobile Computing Journal, Vol 8, No 3, pp 402-414. Elsevier, June 2012. Local version. The dramatic rise in mobile applications has greatly increased threats to the security and privacy of users. Security mechanisms on mobile devices are currently limited, so users need more expressive ways to ensure that downloaded mobile applications do not act maliciously. Policy-specification languages were created for this purpose; they allow the enforcement of user-defined policies on third-party applications. We have implemented LoPSiL, a location-based policy-specification language for mobile devices. This article describes LoPSiL’s design and implementation, several example policies, and experiments that demonstrate LoPSiL’s viability for enforcing policies on mobile devices. [Abstract]
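    A location-dependent policy of the kind LoPSiL expresses can be sketched in Python (the region, coordinates, and policy below are hypothetical examples): whether a security-sensitive action is permitted depends on the device's current coordinates.

    ```python
    def within(region, location):
        """True iff location falls inside a (lat1, lon1, lat2, lon2) box."""
        lat1, lon1, lat2, lon2 = region
        lat, lon = location
        return lat1 <= lat <= lat2 and lon1 <= lon <= lon2

    OFFICE = (28.05, -82.42, 28.07, -82.40)  # illustrative bounding box

    def allow_camera(location):
        """Example location-based policy: no camera use inside the office."""
        return not within(OFFICE, location)
    ```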
  • PoliSeer: A Tool for Managing Complex Security Policies. Daniel Lomsak and Jay Ligatti. Journal of Information Processing, Vol 19, pp 292-306, Information Processing Society of Japan, July 2011. Complex software-security policies are difficult to specify, understand, and update. The same is true for complex software in general, but while many tools and techniques exist for decomposing complex general software into simpler reusable modules (packages, classes, functions, aspects, etc.), few tools exist for decomposing complex security policies into simpler reusable modules. The tools that do exist for modularizing policies either encapsulate entire policies as atomic modules that cannot be decomposed or allow fine-grained policy modularization but require expertise to use correctly. This paper presents PoliSeer, a GUI-based tool designed to enable users who are not expert policy engineers to flexibly specify, visualize, modify, and enforce complex runtime policies on untrusted software. PoliSeer users rely on expert policy engineers to specify universally composable policy modules; PoliSeer users then build complex policies by composing those expert-written modules. This paper describes the design and implementation of PoliSeer and a case study in which we have used PoliSeer to specify and enforce a policy on PoliSeer itself. [Abstract]
  • Control-Flow Integrity: Principles, Implementations, and Applications. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. ACM Transactions on Information and System Security, Vol 13, No 1, pp 1-40. ACM Press, October 2009. Local version. Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions. [Abstract]
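    One of the CFI-based mechanisms mentioned, the protected shadow call stack, can be illustrated with a Python sketch (an analogue of the machine-level mechanism, not its implementation): return addresses are saved on a separate protected stack at call time and checked at return time.

    ```python
    # Illustrative shadow call stack: a mismatch between the actual return
    # address and the saved one indicates a corrupted control stack.
    shadow_stack = []

    def on_call(return_address):
        shadow_stack.append(return_address)

    def on_return(return_address):
        expected = shadow_stack.pop()
        if return_address != expected:
            raise RuntimeError("return-address mismatch: possible stack smashing")
        return return_address
    ```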
  • Composing Expressive Runtime Security Policies. Lujo Bauer, Jay Ligatti, and David Walker. ACM Transactions on Software Engineering and Methodology, Vol 18, No 3, pp 1-43. ACM Press, May 2009. Local version. Program monitors enforce security policies by interposing themselves into the control flow of untrusted software whenever that software attempts to execute security-relevant actions. At the point of interposition, a monitor has authority to permit or deny (perhaps conditionally) the untrusted software’s attempted action. Program monitors are common security enforcement mechanisms and integral parts of operating systems, virtual machines, firewalls, network auditors, and anti-virus and anti-spyware tools.

    Unfortunately, the run-time policies we require program monitors to enforce grow more complex both as the monitored software is given new capabilities and as policies are refined in response to attacks and user feedback. We propose dealing with policy complexity by organizing policies in such a way as to make them composable, so that complex policies can be specified more simply as compositions of smaller subpolicy modules. We present a fully implemented language and system called Polymer that allows security engineers to specify and enforce composable policies on Java applications. We formalize the central workings of Polymer by defining an unambiguous semantics for our language. Using this formalization, we state and prove an uncircumventability theorem, which guarantees that monitors will intercept all security-relevant actions of untrusted software.
    [Abstract]
  • Run-Time Enforcement of Nonsafety Policies. Jay Ligatti, Lujo Bauer, and David Walker. ACM Transactions on Information and System Security, Vol 12, No 3, pp 1-41. ACM Press, January 2009. Local version. A common mechanism for ensuring that software behaves securely is to monitor programs at run time and check that they dynamically adhere to constraints specified by a security policy. Whenever a program monitor detects that untrusted software is attempting to execute a dangerous action, it takes remedial steps to ensure that only safe code actually gets executed.

    This article improves our understanding of the space of policies enforceable by monitoring the run-time behaviors of programs. We begin by building a formal framework for analyzing policy enforcement: we precisely define policies, monitors, and enforcement. This framework allows us to prove that monitors enforce an interesting set of policies that we call the infinite renewal properties. We show how to construct a program monitor that provably enforces any reasonable infinite renewal property. We also show that the set of infinite renewal properties includes some nonsafety policies, i.e., that monitors can enforce some nonsafety (including some purely liveness) policies. Finally, we demonstrate concrete examples of nonsafety policies enforceable by practical run-time monitors.
    [Abstract]
  • A Type-theoretic Interpretation of Pointcuts and Advice. Jay Ligatti, David Walker, and Steve Zdancewic. Science of Computer Programming: Special Issue on Foundations of Aspect-Oriented Programming, Vol 63, No 3, pp 240-266. Elsevier, December 2006. Local version. This article defines the semantics of MinAML, an idealized aspect-oriented programming language, by giving a type-directed translation from a user-friendly external language to a compact, well-defined core language. We argue that our framework is an effective way to give semantics to aspect-oriented programming languages in general because the translation eliminates shallow syntactic differences between related constructs and permits definition of an elegant and extensible core language.

    The core language extends the simply-typed lambda calculus with two central new abstractions: explicitly labeled program points and first-class advice. The labels serve both to trigger advice and to mark continuations that the advice may return to. These constructs are defined orthogonally to the other features of the language and we show that our abstractions can be used in both functional and object-oriented contexts. We prove Preservation and Progress lemmas for our core language and show that the translation from MinAML source into core is type-preserving. Together these two results imply that the source language is type safe. We also consider several extensions to our basic framework including a general mechanism for analyzing the current call stack.
    [Abstract]
  • Edit Automata: Enforcement Mechanisms for Run-time Security Policies. Jay Ligatti, Lujo Bauer, and David Walker. International Journal of Information Security, Vol 4, No 1-2, pp 2-16. Springer-Verlag, Feb 2005. Local Version. We analyze the space of security policies that can be enforced by monitoring and modifying programs at run time. Our program monitors, called edit automata, are abstract machines that examine the sequence of application program actions and transform the sequence when it deviates from a specified policy. Edit automata have a rich set of transformational powers: They may terminate the application, thereby truncating the program action stream; they may suppress undesired or dangerous actions without necessarily terminating the program; and they may also insert additional actions into the event stream.

    After providing a formal definition of edit automata, we develop a rigorous framework for reasoning about them and their cousins: truncation automata (which can only terminate applications), suppression automata (which can terminate applications and suppress individual actions), and insertion automata (which can terminate and insert). We give a set-theoretic characterization of the policies each sort of automaton can enforce, and we provide examples of policies that can be enforced by one sort of automaton but not another.
    [Abstract]
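    The transformational powers described above (truncation, suppression, and insertion) can be sketched as a small interpreter over action streams (a hypothetical illustration, not the paper's formalism):

```python
# Minimal edit-automaton sketch: each policy step may emit zero or more
# actions (suppression/insertion) or halt the stream (truncation).
def edit_automaton(actions, policy):
    output = []
    for a in actions:
        step = policy(a)
        if step is None:          # truncate: terminate the application
            break
        output.extend(step)       # [] suppresses a; extra items are insertions
    return output

# A made-up example policy over symbolic actions:
def demo_policy(action):
    if action == "delete-logs":   # dangerous: suppress silently
        return []
    if action == "open-file":     # insert an audit action first
        return ["audit", action]
    if action == "halt-now":      # violation severe enough to truncate
        return None
    return [action]               # otherwise pass through unchanged
```

    For instance, `edit_automaton(["open-file", "delete-logs", "read"], demo_policy)` yields `["audit", "open-file", "read"]`; a truncation automaton is the special case whose policy only ever returns `[action]` or `None`.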
  • ProProv: A Language and Graphical Tool for Specifying Data Provenance Policies. Kevin Dennis, Shamaria Engram, Tyler Kaczmarek, and Jay Ligatti. Proceedings of the IEEE International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS), December 2022. Local version. The Function-as-a-Service cloud computing paradigm has made large-scale application development convenient and efficient as developers no longer need to deploy or manage the necessary infrastructure themselves. However, as a consequence of this abstraction, developers lose insight into how their code is executed and data is processed. Cloud providers currently offer little to no assurance of the integrity of customer data. One approach to robust data integrity verification is the analysis of data provenance: logs that describe the causal history of data, applications, users, and non-person entities. This paper introduces ProProv, a new domain-specific language and graphical user interface for specifying policies over provenance metadata to automate provenance analyses.

    To evaluate the convenience and usability of the new ProProv interface, 61 individuals were recruited to construct provenance policies using both ProProv and the popular, general-purpose policy specification language Rego, used as a baseline for comparison. We found that, compared to Rego, the ProProv interface greatly increased the number of policies successfully constructed, improved the time taken to construct those policies, and reduced the failed-attempt rate. Participants successfully constructed 73% of the requested policies using ProProv, compared to 41% using Rego. To further evaluate the usability of the tools, participants were given a 10-question questionnaire measured using the System Usability Scale (SUS). The median SUS score for the graphical ProProv interface was above average and fell into the “excellent” category, compared to below average and “OK” for Rego. These results highlight the impacts that graphical domain-specific tools can have on the accuracy and speed of policy construction.
    [Abstract]
  • An Analysis of the Role of Situated Learning in Starting a Security Culture in a Software Company. Anwesh Tuladhar, Daniel Lende, Jay Ligatti, and Xinming Ou. Proceedings of the Usenix Symposium on Usable Privacy and Security (SOUPS), August 2021. Local version. We conducted an ethnographic study of a software development company to explore if and how a development team adopts security practices into the development lifecycle. A PhD student in computer science with prior training in qualitative research methods was embedded in the company for eight months. The researcher joined the company as a software engineer and participated in all development activities as a new hire would, while also making observations on the development practices. During the fieldwork, we observed a positive shift in the development team's practices regarding secure development. Our analysis of data indicates that the shift can be attributed to enabling all software engineers to see how security knowledge could be applied to the specific software products they worked on. We also observed that by working with other developers to apply security knowledge under the concrete context where the software products were built, developers who possessed security expertise and wanted to push for more secure development practices (security advocates) could be effective in achieving this goal. Our data point to an interactive learning process where software engineers in a development team acquire knowledge, apply it in practice, and contribute to the team, leading to the creation of a set of preferred practices, or "culture" of the team. This learning process can be understood through the lens of the situated learning framework, where it is recognized that knowledge transfer happens within a community of practice, and applying the knowledge is the key in individuals (software engineers) acquiring it and the community (development team) embodying such knowledge in its practice. 
Our data show that enabling a situated learning environment for security gives rise to security-aware software engineers. We discuss the roles of management and security advocates in driving the learning process to start a security culture in a software company. [Abstract]
  • Through the Lens of Code Granularity: A Unified Approach to Security Policy Enforcement. Shamaria Engram and Jay Ligatti. Proceedings of the IEEE Conference on Applications, Information and Network Security (AINS), November 2020. A common way to characterize security enforcement mechanisms is based on the time at which they operate. Mechanisms operating before a program's execution are static mechanisms, and mechanisms operating during a program's execution are dynamic mechanisms. This paper introduces a different perspective and classifies mechanisms based on the granularity of program code that they monitor. Classifying mechanisms in this way provides a unified view of security mechanisms and shows that all security mechanisms can be encoded as dynamic mechanisms that operate at different levels of program code granularity. The practicality of the approach is demonstrated through a prototype implementation of a framework for enforcing security policies at various levels of code granularity on Java bytecode applications. [Abstract]
  • An Ethnographic Understanding of Software (In)Security and a Co-Creation Model to Improve Secure Software Development. Hernan Palombo, Armin Tabari, Daniel Lende, Jay Ligatti, and Xinming Ou. Proceedings of the Usenix Symposium on Usable Privacy and Security (SOUPS), August 2020. We present an ethnographic study of secure software development processes in a software company using the anthropological research method of participant observation. Two PhD students in computer science trained in qualitative methods were embedded in a software company for one and a half years of total research time. The researchers participated in everyday work activities such as coding and meetings, and observed software (in)security phenomena both through investigating historical data (code repositories and ticketing system records), and through pen-testing the developed software and observing developers' and management's reactions to the discovered vulnerabilities. Our study found that 1) security vulnerabilities are sometimes intentionally introduced and/or overlooked due to the difficulty in managing the various stakeholders' responsibilities in an economic ecosystem, and cannot be simply blamed on developers' lack of knowledge or skills; 2) accidental vulnerabilities discovered in the pen-testing process produce different reactions in the development team, often contrary to what a security researcher would predict. These findings highlight the nuanced nature of the root causes of software vulnerabilities, and indicate the need to take into account a significant amount of contextual information to understand how and why software vulnerabilities emerge during software development. Rather than simply addressing deficits in developer knowledge or practice, this research sheds light on sometimes-forgotten human factors that significantly impact the security of software developed by actual companies. 
Our analysis also shows that improving software security in the development process can benefit from a co-creation model, where security experts work side by side with software developers to better identify security concerns and provide tools that are readily applicable within the specific context of the software development workflow. [Abstract]
  • An Evaluation of the Power Consumption of Coauthentication as a Continuous User Authentication Method in Mobile Systems. Brandon Corn, Ashley Ruiz, Alfredo Perez, Cagri Cetin, and Jay Ligatti. Proceedings of the Annual ACM Southeast Conference (ACMSE), April 2020. Methods for continuous user authentication have become important with the proliferation of mobile devices in m-Health and human-centered systems. These methods must guarantee user identity with high assurance, authenticate without explicit intervention, and be power-aware. We present an evaluation of the power consumption of collaborative authentication (coauthentication) as a continuous authentication method. Coauthentication is a single-factor method in which multiple registered devices work together to authenticate a user, minimizing obtrusiveness while providing high user authentication assurance. To evaluate coauthentication's power consumption, we conducted experiments using two Bluetooth-enabled mobile devices and a stand-alone server in a local area network, running coauthentication continuously for eight hours. We found that the protocol uses between approximately 1.19% and 4.0% of the total power used by the devices. These results give evidence of the feasibility of using coauthentication as a continuous authentication method in mobile devices from the power consumption perspective. [Abstract]
  • Stream-Monitoring Automata. Hernan Palombo, Egor Dolzhenko, Jay Ligatti, and Hao Zheng. Proceedings of the 9th International Conference on Software and Computer Applications (ICSCA), February 2020. Over the past two decades, numerous formal models of enforcement and runtime monitors have been investigated. This paper takes the lessons learned from earlier models and proffers a new general model of runtime enforcement that is more suitable for modeling security mechanisms that operate over infinite event streams. The new model, called Stream-Monitoring Automata (SMAs), enables the constraints and analyses of interest in previous models to be encoded, and overcomes several shortcomings of existing models with respect to expressiveness. SMAs capture the practical abilities of mechanisms to monitor infinite event streams, execute even in the absence of event inputs, enforce non-safety policies, and operate an enforcement model in which extraneous constraints such as transparency and uncontrollable events may be specified as meta-policies. [Abstract]
  • PoCo: A Language for Specifying Obligation-Based Policy Compositions. Danielle Ferguson, Yan Albright, Daniel Lomsak, Tyler Hanks, Kevin Orr, and Jay Ligatti. Proceedings of the 9th International Conference on Software and Computer Applications (ICSCA), February 2020. Existing security-policy-specification languages allow users to specify obligations, but challenges remain in the composition of complex obligations, including effective approaches for resolving conflicts between policies and obligations and allowing policies to react to other obligations. This paper presents PoCo, a policy-specification language and enforcement system for the principled composition of atomic-obligation policies. PoCo enables policies to interact meaningfully with other policies' obligations, thus preventing unexpected and insecure behaviors that can arise from partially executed obligations or obligations that execute actions in violation of other policies. [Abstract]
  • SQL-Identifier Injection Attacks. Cagri Cetin, Dmitry Goldgof, and Jay Ligatti. Proceedings of the IEEE Conference on Communications and Network Security (CNS), June 2019. This paper defines a class of SQL-injection attacks that are based on injecting identifiers, such as table and column names, into SQL statements. An automated analysis of GitHub shows that 15.7% of 120,412 posted Java source files contain code vulnerable to SQL-Identifier Injection Attacks (SQL-IDIAs). We have manually verified that some of the 18,939 Java files identified during the automated analysis are indeed vulnerable to SQL-IDIAs, including deployed Electronic Medical Record software for which SQL-IDIAs enable discovery of confidential patient information. Although prepared statements are the standard defense against SQL injection attacks, existing prepared-statement APIs do not protect against SQL-IDIAs. This paper therefore proposes and evaluates an extended prepared-statement API to protect against SQL-IDIAs. [Abstract]
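    The gap the paper identifies, that placeholders bind values but not identifiers, can be illustrated with Python's `sqlite3` (the attack string and allow-list defense below are hypothetical examples, not the paper's proposed extended API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO patients VALUES ('Alice', '123-45-6789')")

# Values bind safely through placeholders...
rows = conn.execute("SELECT ssn FROM patients WHERE name = ?", ("Alice",))

# ...but identifiers (table/column names) cannot be placeholders, so code
# often concatenates them -- exactly where a SQL-IDIA slips in:
column = "ssn FROM patients; --"            # attacker-supplied "column name"
query = f"SELECT {column} FROM audit_log"   # identifier injected verbatim

# One common mitigation: validate identifiers against an allow-list.
def safe_identifier(name, allowed=("name", "ssn")):
    if name not in allowed:
        raise ValueError(f"illegal identifier: {name!r}")
    return name
```

    The injected "column name" silently redirects the query from `audit_log` to the `patients` table, which is why validating identifiers (or an identifier-aware prepared-statement API, as the paper proposes) is needed on top of ordinary parameter binding.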
  • A Dual-Task Interference Game-Based Experimental Framework for Comparing the Usability of Authentication Methods. Jean-Baptiste Subils, Joseph Perez, Peiwei Liu, Shamaria Engram, Cagri Cetin, Dmitry Goldgof, Natalie Ebner, Daniela Oliveira, and Jay Ligatti. Proceedings of the IEEE International Conference on Human System Interaction (HSI), June 2019. This paper introduces a game-based framework to compare the usability of authentication methods. The framework uses a dual-task interference technique to measure usability. In the experiment, subjects participate in a multi-tasking game that simulates a conversation being interrupted by authentication requirements. By simulating a conversation, the goal is to reproduce a real use of authentication, and collect ecologically sound data. Participants also perform each authentication method in a standalone manner, which allows for comparison of the usability under two different cognitive loads. The authentication techniques evaluated represent each of the three main authentication factors, specifically password, fingerprint, and coauthentication. The three aspects of usability used to compare authentication techniques in this framework are efficiency, effectiveness, and satisfaction. An experiment with 43 enrolled participants was conducted to collect data pertaining to these aspects. The results show that fingerprint and coauthentication (both laptop and phone) are the more usable techniques evaluated. [Abstract]
  • Coauthentication. Jay Ligatti, Cagri Cetin, Shamaria Engram, Jean-Baptiste Subils, and Dmitry Goldgof. Proceedings of the ACM Symposium on Applied Computing (SAC), April, 2019. This paper introduces and evaluates collaborative authentication, or coauthentication, a single-factor technique in which multiple registered devices work together to authenticate a user. Coauthentication provides security benefits similar to those of multi-factor techniques, such as mitigating theft of any one authentication secret, without some of the inconveniences of multi-factor techniques, such as having to enter passwords or biometrics. Coauthentication provides additional security benefits, including: preventing phishing, replay, and man-in-the-middle attacks; basing authentications on high-entropy secrets that can be generated and updated automatically; and availability protections against, for example, device misplacement and denial-of-service attacks. Coauthentication is amenable to many applications, including m-out-of-n, continuous, group, shared-device, and anonymous authentications. The principal security properties of coauthentication have been formally verified in ProVerif, and implementations have performed efficiently compared to password-based authentication. [Abstract]
  • Cybersecurity in Public Transportation: A Literature Review. Kevin Dennis, Maxat Alibayev, Sean Barbeau, and Jay Ligatti. Proceedings of the 98th Transportation Research Board Annual Meeting (TRB), January 2019. Transportation information technologies (IT) have developed significantly in recent years from individual nodes to large, interconnected networks of devices, similar to those seen in modern IT systems. With this rapid development comes security concerns that have typically been constrained to classical computer systems. This paper reviews the existing literature regarding the state of cybersecurity in public transportation, focusing on the technical aspects of security previously published in technical venues. In particular, the paper examines transit technologies, equipment, and protocols for known vulnerabilities and defenses. Existing attacks and vulnerabilities were identified for the following technologies: connected vehicles (CVs), autonomous vehicles (AVs), electronic ticketing systems, traffic signal controllers, traffic signal priority/preemption (TSP), and dynamic message signs (DMS). No known vulnerabilities were found in the literature for AVL/CAD systems, online trip planners, mobile fare payment, onboard Wi-Fi, CCTV, and APCs, but given their complexity, their wide attack surfaces, and the known vulnerabilities in related technologies, the authors believe that it is reasonable to expect that security vulnerabilities do exist in these technologies as well. Several directions for future work are discussed, including better employee training, architecture of on-board Wi-Fi systems used for critical operational purposes, and data encryption and sharing policies at the agency, especially as related to customer data. [Abstract]
  • A Theory of Gray Security Policies. Donald Ray and Jay Ligatti. Proceedings of the European Symposium on Research in Computer Security (ESORICS), September 2015. This paper generalizes traditional models of security policies, from specifications of whether programs are secure, to specifications of how secure programs are. This is a generalization from qualitative, black-and-white policies to quantitative, gray policies. Included are generalizations from traditional definitions of safety and liveness policies to definitions of gray-safety and gray-liveness policies. These generalizations preserve key properties of safety and liveness, including that the intersection of safety and liveness is a unique allow-all policy and that every policy can be written as the conjunction of a single safety and a single liveness policy. It is argued that the generalization provides several benefits, including that it serves as a unifying framework for disparate approaches to security metrics, and that it separates---in a practically useful way---specifications of how secure systems are from specifications of how secure users require their systems to be. [Abstract]
  • Defining Injection Attacks. Donald Ray and Jay Ligatti. Proceedings of the 17th International Information Security Conference (ISC), October 2014. This paper defines and analyzes injection attacks. The definition is based on the NIE property, which states that an application's untrusted inputs must only produce Noncode Insertions or Expansions in output programs (e.g., SQL queries). That is, when applications generate output programs based on untrusted inputs, the NIE property requires that inputs only affect output programs by inserting or expanding noncode tokens (e.g., string and float literals, lambda values, pointers, etc). This paper calls attacks based on violating the NIE property BroNIEs (i.e., Broken NIEs) and shows that all code-injection attacks are BroNIEs. In addition, BroNIEs contain many malicious injections that do not involve injections of code; we call such attacks noncode-injection attacks. In order to mitigate both code- and noncode-injection attacks, this paper presents an algorithm for detecting and preventing BroNIEs. [Abstract]
  • Fingerprinting Far Proximity from Radio Emissions. Tao Wang, Yao Liu, and Jay Ligatti. Proceedings of the European Symposium on Research in Computer Security (ESORICS), September 2014. As wireless mobile devices become increasingly pervasive and are adopted in critical applications, it is becoming increasingly important to measure the physical proximity of these devices in a secure way. Although various techniques have been developed to identify whether a device is close, the problem of identifying the far proximity (i.e., a target is at least a certain distance away) has been neglected by the research community. Meanwhile, verifying the far proximity is desirable and critical to enhance the security of emerging wireless applications. In this paper, we propose a secure far proximity identification approach that determines whether or not a remote device is far away. The key idea of the proposed approach is to estimate the far proximity from the unforgeable "fingerprint" of the proximity. We have validated and evaluated the effectiveness of the proposed far proximity identification method through experiments on real measured channel data. The experiment results show that the proposed approach can detect the far proximity with a success rate of 0.85 in the non-line-of-sight (NLoS) scenario, rising to 0.99 in the line-of-sight (LoS) scenario. [Abstract]
  • Defining Code-injection Attacks. Donald Ray and Jay Ligatti. Proceedings of the ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), January 2012. This paper shows that existing definitions of code-injection attacks (e.g., SQL-injection attacks) are flawed. The flaws make it possible for attackers to circumvent existing mechanisms, by supplying code-injecting inputs that are not recognized as such. The flaws also make it possible for benign inputs to be treated as attacks. After describing these flaws in conventional definitions of code-injection attacks, this paper proposes a new definition, which is based on whether the symbols input to an application get used as (normal-form) values in the application’s output. Because values are already fully evaluated, they cannot be considered “code” when injected. This simple new definition of code-injection attacks avoids the problems of existing definitions, improves our understanding of how and when such attacks occur, and enables us to evaluate the effectiveness of mechanisms for mitigating such attacks. [Abstract]
  • A Theory of Runtime Enforcement, with Results. Jay Ligatti and Srikar Reddy. Proceedings of the European Symposium on Research in Computer Security (ESORICS), September 2010. This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. Because previous work could not model monitors transforming results, MRAs capture realistic behaviors outside the scope of previous models. MRAs also have a simple but realistic operational semantics that makes it straightforward to define concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are significantly simpler and more expressive than those of previous models. Putting all these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies MRAs can and cannot enforce. [Abstract]
  • A Packet-classification Algorithm for Arbitrary Bitmask Rules, with Automatic Time-space Tradeoffs. Jay Ligatti, Josh Kuhn, and Chris Gage. Proceedings of the IEEE International Conference on Computer Communication Networks (ICCCN), August 2010. We present an algorithm for classifying packets according to arbitrary (including noncontiguous) bitmask rules. As its principal novelty, the algorithm is parameterized by the amount of memory available and can customize its data structures to optimize classification time without exceeding a given memory bound. The algorithm thus automatically trades time for space efficiency as needed. The two extremes of this time-space tradeoff (linear search through the rules versus a single table that maps every possible packet to its class number) are special cases of the general algorithm we present. Additional features of the algorithm include its simplicity, its open-source prototype implementation, its good performance even with worst-case rule sets, and its extendability to handle range rules and dynamic updates to rule sets. [Abstract]
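    The linear-search extreme of the algorithm's time-space tradeoff is easy to sketch (the rules below are made up for illustration):

```python
# First-match classification under arbitrary (possibly noncontiguous)
# bitmask rules: a packet matches (mask, value) when packet & mask == value.
def classify(packet, rules, default=0):
    for mask, value, class_id in rules:
        if packet & mask == value:
            return class_id
    return default

rules = [
    (0xFF000000, 0x0A000000, 1),  # contiguous: first byte must be 10
    (0xF000000F, 0x20000003, 2),  # noncontiguous: constrains the first
                                  # and last nibbles only
]
```

    The other extreme of the tradeoff replaces the loop with a table mapping every possible packet to its class number; the paper's algorithm interpolates between the two under a given memory bound.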
  • PoliSeer: A Tool for Managing Complex Security Policies. Daniel Lomsak and Jay Ligatti. Proceedings of the International Conference on Trust Management (IFIP-TM), June 2010. Few tools exist for decomposing complex security policies into simpler modules. The policy-engineering tools that do exist either encapsulate entire policies as atomic, indecomposable modules or allow fine-grained modularization but are complicated and lack policy-visualization capabilities. This paper briefly presents PoliSeer, the first tool we are aware of that allows complex policies to be specified, visualized, modified, and enforced as compositions of simpler policy modules. [Abstract]
  • Inline Visualization of Concerns. Nalin Saigal and Jay Ligatti. Proceedings of the ACIS International Conference on Software Engineering Research, Management, and Applications (SERA), December 2009. Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. This paper describes IVCon, a tool with a novel approach for completely modularizing CCCs. IVCon enables users to create, examine, and modify their code in two different views: the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon aims to provide an easy-to-use interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views. [Abstract]
  • LoPSiL: A Location-based Policy-specification Language. Jay Ligatti, Billy Rickey, and Nalin Saigal. Proceedings of the International ICST Conference on Security and Privacy in Mobile Information and Communication Systems (MobiSec), June 2009. This paper describes the design of LoPSiL, a language for specifying location-dependent security and privacy policies. Policy-specification languages like LoPSiL are domain-specific programming languages intended to simplify the tasks of specifying and enforcing sound security policies on untrusted (i.e., potentially insecure) software. As far as we are aware, LoPSiL is the first imperative policy-specification language to provide abstractions specifically tailored to location-dependent policies for mobile-device applications. We have implemented a proof-of-concept compiler that inputs a LoPSiL policy P and a mobile-device application program A and outputs a new application program A’ equivalent to A, except that A’ contains inlined enforcement code that ensures that A’ satisfies P at runtime. We report our experiences using this compiler to design and implement several policies for mobile-device applications. [Abstract]
  • Fault-tolerant Typed Assembly Language. Frances Perry, Lester Mackey, George Reis, Jay Ligatti, David August, and David Walker. Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), June 2007. A transient hardware fault occurs when an energetic particle strikes a transistor, causing it to change state. Although transient faults do not permanently damage the hardware, they may corrupt computations by altering stored values and signal transfers. In this paper, we propose a new scheme for provably safe and reliable computing in the presence of transient hardware faults. In our scheme, software computations are replicated to provide redundancy while special instructions compare the independently computed results to detect errors before writing critical data. In stark contrast to any previous efforts in this area, we have analyzed our fault tolerance scheme from a formal, theoretical perspective. To be specific, first, we provide an operational semantics for our assembly language, which includes a precise formal definition of our fault model. Second, we develop an assembly-level type system designed to detect reliability problems in compiled code. Third, we provide a formal specification for program fault tolerance under the given fault model and prove that all well-typed programs are indeed fault tolerant. In addition to the formal analysis, we evaluate our detection scheme and show that it only takes 34% longer to execute than the unreliable version. [Abstract]
  • Static Typing for a Faulty Lambda Calculus. David Walker, Lester Mackey, Jay Ligatti, George Reis, and David August. Proceedings of the ACM SIGPLAN International Conference on Functional Programming (ICFP), September 2006. A transient hardware fault occurs when an energetic particle strikes a transistor, causing it to change state. These faults do not cause permanent damage, but may result in incorrect program execution by altering signal transfers or stored values. While the likelihood that such transient faults will cause any significant damage may seem remote, over the last several years transient faults have caused costly failures in high-end machines at America Online, eBay, and the Los Alamos Neutron Science Center, among others [6, 44, 15]. Because susceptibility to transient faults is proportional to the size and density of transistors, the problem of transient faults will become increasingly important in the coming decades.

    This paper defines the first formal, type-theoretic framework for studying reliable computation in the presence of transient faults. More specifically, it defines λzap, a lambda calculus that exhibits intermittent data faults. In order to detect and recover from these faults, λzap programs replicate intermediate computations and use majority voting, thereby modeling software-based fault tolerance techniques studied extensively, but informally [10, 20, 30, 31, 32, 33, 41].

    To ensure that programs maintain the proper invariants and use λzap primitives correctly, the paper defines a type system for the language. This type system guarantees that well-typed programs can tolerate any single data fault. To demonstrate that λzap can serve as an idealized typed intermediate language, we define a type-preserving translation from a standard simply-typed lambda calculus into λzap.
    [Abstract]
  • Control-Flow Integrity: Principles, Implementations, and Applications. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS), November 2005. Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions. [Abstract]
  • A Theory of Secure Control Flow. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Proceedings of the 7th International Conference on Formal Engineering Methods (ICFEM), November 2005. Control-Flow Integrity (CFI) means that the execution of a program dynamically follows only certain paths, in accordance with a static policy. CFI can prevent attacks that, by exploiting buffer overflows and other vulnerabilities, attempt to control program behavior. This paper develops the basic theory that underlies two practical techniques for CFI enforcement, with precise formulations of hypotheses and guarantees. [Abstract]
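The check at the heart of the two CFI entries above can be illustrated with a toy, interpreter-level sketch (an illustration only, not the papers' machine-code instrumentation; the site names, labels, and function table below are hypothetical):

```python
# A toy illustration of the CFI check (not the papers' binary rewriting):
# before an indirect transfer, the destination must carry a label that the
# static policy allows for that call site. All names here are made up.

ALLOWED = {"site_A": {"sort_asc", "sort_desc"}}   # static control-flow policy

FUNCS = {
    "sort_asc":  lambda: "ascending",
    "sort_desc": lambda: "descending",
    "attack":    lambda: "shellcode",             # never a permitted target
}

def checked_call(site, target):
    """Follow an indirect call only if the static policy permits it."""
    if target not in ALLOWED.get(site, set()):
        raise RuntimeError("CFI violation")       # fail-stop on a bad transfer
    return FUNCS[target]()
```

For example, `checked_call("site_A", "sort_asc")` returns "ascending", while `checked_call("site_A", "attack")` halts with a CFI violation even though `attack` exists in the program.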
  • Enforcing Non-safety Security Policies with Program Monitors. Jay Ligatti, Lujo Bauer, and David Walker. Proceedings of the 10th European Symposium on Research in Computer Security (ESORICS), September 2005. We consider the enforcement powers of program monitors, which intercept security-sensitive actions of a target application at run time and take remedial steps whenever the target attempts to execute a potentially dangerous action. A common belief in the security community is that program monitors, regardless of the remedial steps available to them when detecting violations, can only enforce safety properties. We formally analyze the properties enforceable by various program monitors and find that although this belief is correct when considering monitors with simple remedial options, it is incorrect for more powerful monitors that can be modeled by edit automata. We define an interesting set of properties called infinite renewal properties and demonstrate how, when given any reasonable infinite renewal property, to construct an edit automaton that provably enforces that property. We analyze the set of infinite renewal properties and show that it includes every safety property, some liveness properties, and some properties that are neither safety nor liveness. [Abstract]
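As an informal sketch of the edit-automaton idea (not code from the paper; the action names are hypothetical), the monitor below enforces a renewal-style "complete transaction" property by suppressing a `begin` and the actions after it, re-inserting the whole block only once the matching `commit` arrives:

```python
# Minimal edit-automaton sketch: suppress an in-progress transaction and
# insert it back into the output stream only when it completes. Incomplete
# transactions are suppressed forever, so every emitted block is complete.

def edit_automaton(actions):
    out, buffer, in_txn = [], [], False
    for a in actions:
        if a == "begin":
            in_txn, buffer = True, ["begin"]      # start suppressing
        elif in_txn:
            buffer.append(a)
            if a == "commit":                     # re-insert the whole block
                out.extend(buffer)
                in_txn, buffer = False, []
        else:
            out.append(a)                         # pass safe actions through
    return out
```

For instance, `edit_automaton(["log", "begin", "write", "commit", "begin", "write"])` emits `["log", "begin", "write", "commit"]`: the completed transaction is inserted, the dangling one never appears.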
  • Composing Security Policies with Polymer. Lujo Bauer, Jay Ligatti, and David Walker. Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), June 2005. We introduce a language and system that supports definition and composition of complex run-time security policies for Java applications. Our policies are comprised of two sorts of methods. The first is query methods that are called whenever an untrusted application tries to execute a security-sensitive action. A query method returns a suggestion indicating how the security-sensitive action should be handled. The second sort of methods are those that perform state updates as the policy’s suggestions are followed.

    The structure of our policies facilitates composition, as policies can query other policies for suggestions. In order to give programmers control over policy composition, we have designed the system so that policies, suggestions, and application events are all first-class objects that a higher-order policy may manipulate. We show how to use these programming features by developing a library of policy combinators.

    Our system is fully implemented, and we have defined a formal semantics for an idealized subset of the language containing all of the key features. We demonstrate the effectiveness of our system by implementing a large-scale security policy for an email client.
    [Abstract]
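The policies-as-first-class-objects design can be sketched roughly as follows (in Python for brevity, though Polymer itself targets Java; the suggestion names and precedence ordering here are invented for illustration):

```python
# Rough sketch of Polymer-style composition: policies are objects whose
# query methods return suggestions, and a combinator is itself a policy
# that queries its subpolicies. Suggestion vocabulary is hypothetical.

ORDER = {"allow": 0, "suppress": 1, "halt": 2}    # least to most restrictive

class Policy:
    def query(self, action):                      # default: permit everything
        return "allow"

class Blocklist(Policy):
    def __init__(self, banned):
        self.banned = set(banned)
    def query(self, action):
        return "halt" if action in self.banned else "allow"

class Conjunction(Policy):
    """Combinator: adopt the most restrictive subpolicy suggestion."""
    def __init__(self, *subpolicies):
        self.subs = subpolicies
    def query(self, action):
        return max((p.query(action) for p in self.subs),
                   key=ORDER.__getitem__)
```

Because `Conjunction` is itself a `Policy`, combinators nest: `Conjunction(Blocklist(["openSocket"]), Policy()).query("openSocket")` yields "halt", while unlisted actions remain allowed.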
  • Types and Effects for Non-interfering Program Monitors. Lujo Bauer, Jarred Ligatti, and David Walker. In M. Okada, B. Pierce, A. Scedrov, H. Tokuda, and A. Yonezawa, editors, Lecture Notes in Computer Science: Software Security - Theories and Systems (Revised Papers of the 2002 Mext-NSF-JSPS International Symposium), Vol 2609, pp 154-171. Springer-Verlag, November 2003. A run-time monitor is a program that runs in parallel with an untrusted application and examines actions from the application's instruction stream. If the sequence of program actions deviates from a specified security policy, the monitor transforms the sequence or terminates the program. We present the design and formal specification of a language for defining the policies enforced by program monitors. Our language provides a number of facilities for composing complex policies from simpler ones. We allow policies to be parameterized by values or other policies, and we define operators for forming the conjunction and disjunction of policies. Since the computations that implement these policies modify program behavior, naive composition of computations does not necessarily produce the conjunction (or disjunction) of the policies that the computations implement separately. We use a type and effect system to ensure that computations do not interfere with one another when they are composed. [Abstract]
  • A Theory of Aspects. David Walker, Steve Zdancewic, and Jay Ligatti. Proceedings of the ACM SIGPLAN International Conference on Functional Programming (ICFP), August 2003. This paper defines the semantics of MinAML, an idealized aspect-oriented programming language, by giving a type-directed translation from its user-friendly external language to its compact, well-defined core language. We argue that our framework is an effective way to give semantics to aspect-oriented programming languages in general because the translation eliminates shallow syntactic differences between related constructs and permits definition of a clean, easy-to-understand, and easy-to-reason-about core language.

    The core language extends the simply-typed lambda calculus with two central new abstractions: explicitly labeled program points and first-class advice. The labels serve both to trigger advice and to mark continuations that the advice may return to. These constructs are defined orthogonally to the other features of the language and we show that our abstractions can be used in both functional and object-oriented contexts. The labels are well-scoped and the language as a whole is well-typed. Consequently, programmers can use lexical scoping in the standard way to prevent aspects from interfering with local program invariants.
    [Abstract]
  • A Preliminary Study on Using Large Language Models in Software Pentesting. Kumar Shashwat, Francis Hahn, Xinming Ou, Dmitry Goldgof, Lawrence Hall, Jay Ligatti, S. Raj Rajgopalan, Armin Ziaie Tabari. Proceedings of the Workshop on SOC Operations and Construction (WOSOC), February 2024. Large language models (LLM) are perceived to offer promising potentials for automating security tasks, such as those found in security operation centers (SOCs). As a first step towards evaluating this perceived potential, we investigate the use of LLMs in software pentesting, where the main task is to automatically identify software security vulnerabilities in source code. We hypothesize that an LLM-based AI agent can be improved over time for a specific security task as human operators interact with it. Such improvement can be made, as a first step, by engineering prompts fed to the LLM based on the responses produced, to include relevant contexts and structures so that the model provides more accurate results. Such engineering efforts become sustainable if the prompts that are engineered to produce better results on current tasks, also produce better results on future unknown tasks. To examine this hypothesis, we utilize the OWASP Benchmark Project 1.2 which contains 2,740 hand-crafted source code test cases containing various types of vulnerabilities. We divide the test cases into training and testing data, where we engineer the prompts based on the training data (only), and evaluate the final system on the testing data. We compare the AI agent's performance on the testing data against the performance of the agent without the prompt engineering. We also compare the AI agent's results against those from SonarQube, a widely used static code analyzer for security testing. We built and tested multiple versions of the AI agent using different off-the-shelf LLMs -- Google's Gemini-pro, as well as OpenAI's GPT-3.5-Turbo and GPT-4-Turbo (with both chat completion and assistant APIs). 
The results show that using LLMs is a viable approach to build an AI agent for software pentesting that can improve through repeated use and prompt engineering. [Abstract]
  • Enforcing More with Less: Formalizing Target-aware Run-time Monitors. Yannis Mallios, Lujo Bauer, Dilsun Kaynar, and Jay Ligatti. Proceedings of the International Workshop on Security and Trust Management (STM), September 2012. Run-time monitors ensure that untrusted software and system behavior adheres to a security policy. This paper defines an expressive formal framework, based on I/O automata, for modeling systems, policies, and run-time monitors in more detail than is typical. We explicitly model, for example, the environment, applications, and the interaction between them and monitors. The fidelity afforded by this framework allows us to explicitly formulate and study practical constraints on policy enforcement that were often only implicit in previous models, providing a more accurate view of what can be enforced by monitoring in practice. We introduce two definitions of enforcement, target-specific and generalized, that allow us to reason about practical monitoring scenarios. Finally, we provide some meta-theoretical comparison of these definitions and we apply them to investigate policy enforcement in scenarios where the monitor designer has knowledge of the target application and show how this can be exploited to make more efficient design choices. [Abstract]
  • More Enforceable Security Policies. Lujo Bauer, Jarred Ligatti, and David Walker. Foundations of Computer Security Workshop (FCS) '02 (associated with LICS '02), July 2002. We analyze the space of security policies that can be enforced by monitoring programs at runtime. Our program monitors are automata that examine the sequence of program actions and transform the sequence when it deviates from the specified policy. The simplest such automaton truncates the action sequence by terminating a program. Such automata are commonly known as security automata, and they enforce Schneider's EM class of security policies. We define automata with more powerful transformational abilities, including the ability to insert a sequence of actions into the event stream and to suppress actions in the event stream without terminating the program. We give a set-theoretic characterization of the policies these new automata are able to enforce and show that they are a superset of the EM policies. [Abstract]
  • Policy Enforcement via Program Monitoring. Jarred Adam Ligatti. PhD thesis, Princeton University, June 2006. One way to guarantee that software behaves securely is to monitor programs at run time and check that they dynamically adhere to constraints specified by a security policy. Whenever a program monitor detects that untrusted software is attempting to execute a dangerous action, it takes remedial steps to ensure that only safe code actually gets executed. This thesis considers the space of policies enforceable by monitoring the run-time behaviors of programs and develops a practical language for specifying monitors’ policies.

    In order to delineate the space of policies that monitors can enforce, we first have to define exactly what it means for a monitor to enforce a policy. We therefore begin by building a formal framework for analyzing policy enforcement; we precisely define policies, monitors, and enforcement. Having this framework allows us to consider the enforcement powers of program monitors and prove that they enforce an interesting set of policies that we define and call the infinite renewal properties. We show how, when given any reasonable infinite renewal property, to construct a program monitor that provably enforces that policy.

    In practice, the security policies enforced by program monitors grow more complex both as the monitored software is given new capabilities and as policies are refined in response to attacks and user feedback. We propose dealing with policy complexity by organizing policies in such a way as to make them composeable, so that complex policies can be specified more simply as compositions of smaller subpolicy modules. We present a fully implemented language and system called Polymer that allows security engineers to specify and enforce composeable policies on Java applications. We also formalize the central workings of Polymer by defining an unambiguous semantics for our language.
    [Abstract]
  • Composition of Atomic-Obligation Security Policies. Danielle Ferguson, Yan Albright, Daniel Lomsak, Tyler Hanks, Kevin Orr, and Jay Ligatti. USF Technical Report CSE-SEC-021719. February 2019. Revised September 2019. Existing security-policy-specification languages allow users to specify obligations, but open challenges remain in the composition of complex obligations, including effective approaches for resolving conflicts between policies and obligations and allowing policies to react to the obligations of other policies. This paper presents PoCo, a policy-specification language and enforcement system for the principled composition of atomic-obligation policies. PoCo enables policies to interact meaningfully with other policies' obligations, thus preventing the unexpected and insecure behaviors that can arise due to partially executed obligations or obligations that execute actions in violation of other policies. This paper also presents and analyzes the PoCo language's formal semantics and implementation. [Abstract]
  • Coauthentication. Jay Ligatti, Cagri Cetin, Shamaria Engram, Jean-Baptiste Subils, Dmitry Goldgof. USF Technical Report CSE-SEC-092418. September 2018. This paper introduces and evaluates collaborative authentication, or coauthentication, a single-factor technique in which multiple registered devices work together to authenticate a user. Coauthentication provides security benefits similar to those of multi-factor techniques, such as mitigating theft of any one authentication secret, without some of the inconveniences of multi-factor techniques, such as having to enter passwords or biometrics. Coauthentication provides additional security benefits, including: preventing phishing, replay, and man-in-the-middle attacks; basing authentications on high-entropy secrets that can be generated and updated automatically; and availability protections against, for example, device misplacement and denial-of-service attacks. Coauthentication is amenable to many applications, including m-out-of-n, continuous, group, shared-device, and anonymous authentications. The principal security properties of coauthentication have been formally verified in ProVerif, and implementations have performed efficiently compared to password-based authentication. [Abstract]
  • Coauthentication. Jay Ligatti, Cagri Cetin, Shamaria Engram, Jean-Baptiste Subils, Dmitry Goldgof. USF Technical Report Auth-7-17-17. July 2017. Collaborative authentication, or coauthentication, is a single-factor technique in which multiple registered devices work together to authenticate a user. Coauthentication aims to provide security benefits similar to those of multi-factor techniques, such as mitigating theft of any one authentication device, without the inconveniences of multi-factor techniques, such as having to enter passwords or scan biometrics. Coauthentication can provide additional security benefits, including: preventing phishing and man-in-the-middle attacks, basing authentications on high-entropy secrets that can be generated and updated automatically, and availability protections against, for example, device misplacement or denial-of-service (DoS) attacks. This paper introduces coauthentication and discusses and evaluates applications, example protocols, and implementations. [Abstract]
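A generic m-out-of-n check in the spirit of coauthentication can be sketched as follows (this is not the papers' ProVerif-verified protocol; the device names, keys, and MAC-based responses are illustrative assumptions):

```python
# Sketch: the verifier accepts only if at least m of the n registered
# devices return a valid MAC over a fresh challenge. Keys and device
# names are hypothetical placeholders.
import hmac, hashlib

DEVICE_KEYS = {"phone": b"k1", "watch": b"k2", "laptop": b"k3"}

def respond(key, challenge):
    """What an honest registered device computes for a challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def authenticate(responses, m, challenge):
    """responses: {device_name: mac}; accept iff at least m MACs verify."""
    valid = sum(
        1 for name, mac in responses.items()
        if name in DEVICE_KEYS
        and hmac.compare_digest(mac, respond(DEVICE_KEYS[name], challenge))
    )
    return valid >= m
```

With a fresh random challenge per attempt, replaying an old response fails verification, which gestures at (but does not prove) the replay resistance the papers establish formally.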
  • Induction on Failing Derivations. Jay Ligatti. Technical Report PL-Sep13. University of South Florida, March 2016. (This is a major revision to the September 2013 version.) A proof technique, called induction on failing derivations, is introduced. We wish to prove properties of judgments in deductive systems. Standard techniques exist for proving such properties on valid judgments; this note defines a technique for proving such properties on invalid (underivable) judgments. [Abstract]
  • On Subtyping-Relation Completeness, with an Application to Iso-Recursive Types. Jay Ligatti, Jeremy Blackburn, and Michael Nachtigal. Technical Report. University of South Florida, March 2016. (This is a major revision to the August 2014 version of the technical report, which itself was a major revision of technical report CSE-071012, "Completely Subtyping Iso-Recursive Types", from July 2012.) Well-known techniques exist for proving the soundness of subtyping relations with respect to type safety. However, completeness has not been treated with widely applicable techniques, as far as we're aware.

    This paper develops techniques for stating and proving that a subtyping relation is complete with respect to type safety and applies the techniques to the study of iso-recursive subtyping. A new proof technique, induction on failing derivations, is provided that may be useful in other domains as well.

    The common subtyping rules for iso-recursive types---the "Amber rules"---are shown to be incomplete with respect to type safety. That is, there exist iso-recursive types t1 and t2 such that t1 can safely be considered a subtype of t2, but t1<=t2 is not derivable with the Amber rules.

    New, algorithmic rules are defined for subtyping iso-recursive types, and the rules are proved sound and complete with respect to type safety. The fully implemented subtyping algorithm is optimized to run in O(mn) time, where m is the number of mu-terms in the types being considered and n is the size of the types being considered.
    [Abstract]
  • Defining Injection Attacks. Donald Ray and Jay Ligatti. Technical Report CSE-TR-081114. University of South Florida, August 2014. This paper defines and analyzes injection attacks. The definition is based on the NIE property, which states that an application's untrusted inputs must only produce Noncode Insertions or Expansions in output programs (e.g., SQL queries). That is, when applications generate output programs based on untrusted inputs, the NIE property requires that inputs only affect output programs by inserting or expanding noncode tokens (e.g., string and float literals, lambda values, pointers, etc.). This paper calls attacks based on violating the NIE property BroNIEs (i.e., Broken NIEs) and shows that all code-injection attacks are BroNIEs. In addition, BroNIEs contain many malicious injections that do not involve injections of code; we call such attacks noncode-injection attacks. In order to mitigate both code- and noncode-injection attacks, this paper presents an algorithm for detecting and preventing BroNIEs. [Abstract]
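A toy approximation of the NIE idea (not the paper's algorithm, which tracks how untrusted input propagates into the output program) builds the output query twice, once with the untrusted input and once with a benign placeholder, and flags an injection if the input changed the query's code tokens rather than just a literal's contents:

```python
# Crude NIE-style check for SQL built by string templating: untrusted input
# may only alter noncode tokens (here, string literals). The tokenizer and
# template below are simplified stand-ins, not the paper's machinery.
import re

TOKEN = re.compile(r"'(?:[^'\\]|\\.)*'|\w+|[^\s\w]")  # literal | word | punct

def code_tokens(query):
    # string literals count as noncode; everything else is code here
    return [t for t in TOKEN.findall(query) if not t.startswith("'")]

def violates_nie(template, untrusted):
    with_input = code_tokens(template.format(untrusted))
    with_dummy = code_tokens(template.format("x"))
    return with_input != with_dummy   # input inserted or changed code tokens
```

For example, with template `"SELECT * FROM users WHERE name = '{}'"`, the input `alice` leaves the code tokens unchanged, while `a' OR '1'='1` introduces an `OR` and an extra `=` and is flagged.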
  • Modeling Runtime Enforcement with Mandatory Results Automata. Egor Dolzhenko, Jay Ligatti, and Srikar Reddy. Technical Report USF-CSE-1213, University of South Florida, December 2013. This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. Because previous work could not model general security monitors transforming results, MRAs capture realistic behaviors outside the scope of previous models. MRAs also have a simple operational semantics that makes it straightforward to define concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are simpler and more expressive than those of previous models. Putting all these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies deterministic and nondeterministic MRAs can and cannot enforce. [Abstract]
  • A Theory of Runtime Enforcement, with Results. Jay Ligatti and Srikar Reddy. Technical Report USF-CSE-SS-102809, University of South Florida, October 2009, revised June 2010. This paper presents a theory of runtime enforcement based on mechanism models called MRAs (Mandatory Results Automata). MRAs can monitor and transform security-relevant actions and their results. Because previous work could not model monitors transforming results, MRAs capture realistic behaviors outside the scope of previous models. MRAs also have a simple but realistic operational semantics that makes it straightforward to define concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are significantly simpler and more expressive than those of previous models. Putting all these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies MRAs can and cannot enforce. [Abstract]
  • PoliSeer: A Tool for Managing Complex Security Policies. Daniel Lomsak and Jay Ligatti. Technical Report CSE-SSec-112509, University of South Florida, November 2009. Complex security policies are difficult to specify, understand, and update. The same is true for complex software in general, but while many software-engineering tools exist for decomposing complex general software into simpler reusable modules (packages, classes, functions, aspects, etc.), few policy-engineering tools exist for decomposing complex security policies into simpler reusable modules. The tools that do exist for modularizing policies either encapsulate entire policies as atomic, indecomposable modules or allow fine-grained modularization but are complicated and lack policy-visualization capabilities.

    This paper presents PoliSeer, the first tool we are aware of that allows engineers to specify, visualize, modify, and enforce complex policies as compositions of simpler policy modules. We describe the design and implementation of PoliSeer, as well as a case study in which we have bootstrapped PoliSeer by using it to specify and enforce a policy on itself.
    [Abstract]
  • IVCon: Inline Visualization of Concerns. Nalin Saigal and Jay Ligatti. Technical Report CSE-110909-SE, University of South Florida, November 2009. Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. This paper describes IVCon, a tool with a novel approach for completely modularizing CCCs. IVCon enables users to create, examine, and modify their code in two different views: the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon provides an interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views. [Abstract]
  • Defining and Visualizing Many-to-many Relationships between Concerns and Code. Nalin Saigal and Jay Ligatti. Technical Report CSE-090608-SE, University of South Florida, September 2008. Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. In this paper, we describe IVCon, a GUI-based tool that provides a novel approach to modularization of CCCs. IVCon enables users to create, examine, and modify their code in two different views, the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon aims to provide an easy-to-use interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views. [Abstract]
  • Fault-tolerant Typed Assembly Language. Frances Perry, Lester Mackey, George Reis, Jay Ligatti, David August, and David Walker. Technical Report TR-776-07, Princeton University, April 2007. A transient hardware fault occurs when an energetic particle strikes a transistor, causing it to change state. Although transient faults do not permanently damage the hardware, they may corrupt computations by altering stored values and signal transfers. In this paper, we propose a new scheme for provably safe and reliable computing in the presence of transient hardware faults. In our scheme, software computations are replicated to provide redundancy while special instructions compare the independently computed results to detect errors before writing critical data. In stark contrast to any previous efforts in this area, we have analyzed our fault tolerance scheme from a formal, theoretical perspective. To be specific, first, we provide an operational semantics for our assembly language, which includes a precise formal definition of our fault model. Second, we develop an assembly-level type system designed to detect reliability problems in compiled code. Third, we provide a formal specification for program fault tolerance under the given fault model and prove that all well-typed programs are indeed fault tolerant. In addition to the formal analysis, we evaluate our detection scheme and show that it only takes 34% longer to execute than the unreliable version. [Abstract]
  • Control-Flow Integrity. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Technical Report MSR-TR-2005-18, Microsoft Research, February 2005 (revised June 2005). Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions. [Abstract]
  • A Theory of Secure Control Flow. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. Technical Report MSR-TR-2005-17, Microsoft Research, February 2005 (revised June 2005). Control-Flow Integrity (CFI) means that the execution of a program dynamically follows only certain paths, in accordance with a static policy. CFI can prevent attacks that, by exploiting buffer overflows and other vulnerabilities, attempt to control program behavior. This paper develops the basic theory that underlies two practical techniques for CFI enforcement, with precise formulations of hypotheses and guarantees. [Abstract]
  • Enforcing Non-safety Security Policies with Program Monitors. Jay Ligatti, Lujo Bauer, and David Walker. Technical Report TR-720-05, Princeton University, January 2005 (revised June 2005). We consider the enforcement powers of program monitors, which intercept security-sensitive actions of a target application at run time and take remedial steps whenever the target attempts to execute a potentially dangerous action. A common belief in the security community is that program monitors, regardless of the remedial steps available to them when detecting violations, can only enforce safety properties. We formally analyze the properties enforceable by various program monitors and find that although this belief is correct when considering monitors with simple remedial options, it is incorrect for more powerful monitors that can be modeled by edit automata. We define an interesting set of properties called infinite renewal properties and demonstrate how, when given any reasonable infinite renewal property, to construct an edit automaton that provably enforces that property. We analyze the set of infinite renewal properties and show that it includes every safety property, some liveness properties, and some properties that are neither safety nor liveness. [Abstract]
  • A Language and System for Composing Security Policies. Lujo Bauer, Jay Ligatti, and David Walker. Technical Report TR-699-04, Princeton University, January 2004. We introduce a new language and system that allows security architects to develop well-structured and easy-to-maintain security policies for Java applications. In our system, policies are first-class objects. Consequently, programmers can define parameterized meta-policies that act as policy combinators and policy modifiers, so that complex security policies can be implemented by composing simple base policies. We demonstrate the effectiveness of our design by building up a library of powerful policy combinators and showing how they can be used. We also describe some issues we encountered while implementing our system and provide performance results. [Abstract]
  • Edit Automata: Enforcement Mechanisms for Run-time Security Policies. Jay Ligatti, Lujo Bauer, and David Walker. Technical Report TR-681-03, Princeton University, May 2003. We analyze the space of security policies that can be enforced by monitoring and modifying programs at run time. Our program monitors, called edit automata, are abstract machines that examine the sequence of application program actions and transform the sequence when it deviates from a specified policy. Edit automata have a rich set of transformational powers: They may terminate the application, thereby truncating the program action stream; they may suppress undesired or dangerous actions without necessarily terminating the program; and they may also insert additional actions into the event stream.

    After providing a formal definition of edit automata, we develop a rigorous framework for reasoning about them and their cousins: truncation automata (which can only terminate applications), suppression automata (which can terminate applications and suppress individual actions), and insertion automata (which can terminate and insert). We give a set-theoretic characterization of the policies each sort of automaton can enforce and we provide examples of policies that can be enforced by one sort of automaton but not another.
    [Abstract]
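The three transformational powers described above (truncation, suppression, and insertion) can be sketched as a small monitor interpreter. The verdict names and the no-send-after-read example policy below are illustrative assumptions, not definitions from the report:

```python
# Sketch of an edit automaton: a step function inspects the monitor state
# and the next action, and may ALLOW it, SUPPRESS it, INSERT substitute
# actions, or HALT (truncate) the remaining action stream.
ALLOW, SUPPRESS, INSERT, HALT = "allow", "suppress", "insert", "halt"

def run_monitor(step, state, actions):
    output = []
    for a in actions:
        verdict, payload, state = step(state, a)
        if verdict == ALLOW:
            output.append(a)
        elif verdict == SUPPRESS:
            pass                      # drop the action, keep running
        elif verdict == INSERT:
            output.extend(payload)    # emit substitute actions instead
        elif verdict == HALT:
            break                     # truncate: terminate the application
    return output

# Hypothetical policy: after secret data is read, replace network sends
# with an audit action.  State is a flag recording whether a read occurred.
def no_send_after_read(seen_read, action):
    if action == "read_secret":
        return ALLOW, None, True
    if action == "send" and seen_read:
        return INSERT, ["log_blocked_send"], seen_read
    return ALLOW, None, seen_read

trace = ["send", "read_secret", "send", "compute"]
print(run_monitor(no_send_after_read, False, trace))
# ["send", "read_secret", "log_blocked_send", "compute"]
```

A truncation automaton is the special case whose step function only ever returns ALLOW or HALT; restricting the verdicts recovers the other automata the report compares.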
  • A Calculus for Composing Security Policies. Lujo Bauer, Jarred Ligatti, and David Walker. Technical Report TR-655-02, Princeton University, August 2002. A runtime monitor is a program that runs in parallel with an untrusted application and examines actions from the application's instruction stream. If the sequence of program actions deviates from a specified security policy, the monitor transforms the sequence or terminates the program. We present the design and formal specification of a language for defining the policies enforced by program monitors.

    Our language provides a number of facilities for composing complex policies from simpler ones. We allow policies to be parameterized by values or by other policies. There are also operators for forming the conjunction and disjunction of policies. Since the computations that implement these policies modify program behavior, naive composition of computations does not necessarily produce the conjunction (or disjunction) of the policies that the computations implement separately. We use a type and effect system to ensure that computations do not interfere with one another when they are composed. We also present a preliminary implementation of our language.
    [Abstract]
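The combinator idea, policies as first-class values composed by parameterization, conjunction, and disjunction, can be sketched as follows. The predicate-based representation is a deliberate simplification (the report's policies are effectful computations, which is what makes naive composition unsound there), and all names below are hypothetical:

```python
# Simplified model: a policy is a predicate on (action history, next action)
# saying whether the next action is acceptable.
def conj(p, q):
    """Conjunction combinator: the action must satisfy both policies."""
    return lambda history, action: p(history, action) and q(history, action)

def disj(p, q):
    """Disjunction combinator: the action must satisfy either policy."""
    return lambda history, action: p(history, action) or q(history, action)

def bounded(limit):
    """Policy parameterized by a value: allow at most `limit` actions."""
    return lambda history, action: len(history) < limit

no_exec = lambda history, action: action != "exec"

policy = conj(bounded(3), no_exec)

def monitor(policy, actions):
    history = []
    for a in actions:
        if not policy(history, a):
            break            # terminate the program on violation
        history.append(a)
    return history

print(monitor(policy, ["read", "write", "exec", "read"]))
# ["read", "write"] -- "exec" violates no_exec, so the stream is truncated
```

Because these simplified policies are pure predicates, `conj` really is logical conjunction; the report's type and effect system exists precisely to recover that guarantee when policies perform effects.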
  • More Enforceable Security Policies. Lujo Bauer, Jarred Ligatti, and David Walker. Technical Report TR-649-02, Princeton University, July 2002. We analyze the space of security policies that can be enforced by monitoring programs at runtime. Our program monitors are automata that examine the sequence of program actions and transform the sequence when it deviates from the specified policy. The simplest such automaton truncates the action sequence by terminating a program. Such automata are commonly known as security automata, and they enforce Schneider's EM class of security policies. We define automata with more powerful transformational abilities, including the ability to insert a sequence of actions into the event stream and to suppress actions in the event stream without terminating the program. We give a set-theoretic characterization of the policies these new automata are able to enforce and show that they are a superset of the EM policies. [Abstract]

Teaching

Compilers: Spring'24, Spring'23, Spring'22, Spring'21, Spring'20, Fall'17, Fall'16, Fall'15, Fall'13, Fall'11, Fall'09, Fall'07
Foundations of Software Security: Spring'24, Spring'23, Spring'22, Spring'21, Fall'19, Spring'19, Spring'18, Spring'17, Spring'16, Spring'15, Spring'14, Spring'13, Spring'12, Spring'10, Spring'08, Spring'07
Programming Languages: Spring'19, Spring'18, Spring'17, Spring'16, Fall'14, Fall'12, Fall'10, Fall'08
Advanced Programming Languages: Spring'15, Spring'11
Operating Systems: Fall'06
Secure Coding: Fall'23, Fall'22, Fall'21, Fall'20, Fall'19, Fall'18

Current graduate students

Kevin Dennis (PhD student 2018-present)
Parisa Momeni (PhD student 2021-present)
Gabriel Laverghetta (Master's student 2023-present)

Former graduate students

▶ Jennifer Adorno

▶ Brennan Ward

▶ Shamaria Engram

▶ Danielle Ferguson

▶ Hernan Palombo

▶ Yan Albright

▶ Jean-Baptiste Subils

▶ Cagri Cetin

▶ Michael Quintero

▶ Ivory Hernandez

▶ Kimberly Bursum

▶ Jacob Venne

▶ Donald Ray

▶ Bader AlBassam

▶ Clayton Whitelaw

▶ Cory Juhlin

▶ Grant Smith

▶ Daniel Lomsak

▶ Zach Carter

▶ Stan Naspinski

▶ Matt Spaulding

▶ Nalin Saigal

▶ Josh Kuhn

▶ Brandy Eyers

▶ Srikar Reddy

Talks
