Automatically crafting test scenarios for REST APIs helps deliver more reliable and trustworthy web-oriented systems. However, current black-box testing approaches rely heavily on the information available in the API's formal documentation, i.e., the OpenAPI Specification (OAS for short). While useful, the OAS mostly covers syntactic aspects of the API (e.g., producer-consumer relations between operations, input value properties, and additional constraints in natural language), and it lacks a deeper understanding of the API's business logic. Missing semantics include implicit ordering (logic dependencies) between operations and implicit input-value constraints. These limitations hinder the ability of black-box testing tools to generate truly effective test cases automatically. The goal of this project is to develop a novel black-box approach for automatically testing REST APIs that leverages deep reinforcement learning to uncover implicit API constraints, that is, constraints hidden from the API documentation. Curiosity-driven learning guides an agent in exploring the API and learning an effective order in which to test its operations. This helps identify which operations to test first to bring the API into a testable state and avoid failing API interactions later. At the same time, the experience gained from successful API interactions is leveraged to drive accurate input data generation (i.e., what parameters to use and how to pick their values).
This project is in collaboration with the Software Engineering Lab of the Georgia Institute of Technology, USA, and IBM.
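A minimal Python sketch of the idea, assuming a toy API with four hypothetical operations and a simulated HTTP layer: a count-based curiosity bonus steers which operation the agent tries next, and rewards from successful interactions gradually reveal that createUser must come first. This is an illustrative stand-in, not the project's actual deep reinforcement learning algorithm.

    import random
    from collections import defaultdict

    OPERATIONS = ["createUser", "getUser", "updateUser", "deleteUser"]  # illustrative API

    q_value = defaultdict(float)   # learned value of testing each operation
    attempts = defaultdict(int)    # how often each operation has been tried
    successes = defaultdict(int)   # how often it returned a successful status
    ALPHA, BONUS = 0.5, 1.0        # learning rate and curiosity weight

    def curiosity_score(op):
        # Less-explored operations receive a higher exploration bonus.
        return q_value[op] + BONUS / (1 + attempts[op])

    def call_api(op):
        # Placeholder for a real HTTP call: operations other than createUser
        # only succeed once a user resource exists (an implicit dependency).
        if op == "createUser":
            return random.random() < 0.9
        return successes["createUser"] > 0 and random.random() < 0.9

    for _ in range(200):
        op = max(OPERATIONS, key=curiosity_score)
        ok = call_api(op)
        attempts[op] += 1
        successes[op] += int(ok)
        reward = 1.0 if ok else -0.1          # successful interactions are rewarded
        q_value[op] += ALPHA * (reward - q_value[op])

    # Operations sorted by learned value approximate an effective testing order.
    print(sorted(OPERATIONS, key=lambda op: q_value[op], reverse=True))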
Web APIs are commonly documented using OpenAPI specifications. Although numerous automated testing techniques have been proposed that leverage the machine-readable part of these specifications to guide test generation, their human-readable part has been mostly neglected. This is a missed opportunity, as natural language descriptions in the specifications often contain relevant information, including example values and inter-parameter dependencies, that can be used to improve test generation. The goal of this project is to develop a novel approach that applies natural language processing techniques to assist web API testing by extracting additional OpenAPI rules from the human-readable part of the specification.
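As an illustration, a minimal Python sketch (with made-up parameter descriptions) shows the kind of rules that could be mined from the human-readable text; here plain regular expressions stand in for the actual natural language processing pipeline.

    import re

    # Hypothetical description fields taken from an OpenAPI specification.
    descriptions = {
        "limit": "Maximum number of results to return, for example 20.",
        "radius": "Search radius in meters. Required if 'latitude' is provided.",
    }

    for param, text in descriptions.items():
        example = re.search(r"(?:for example|e\.g\.,?)\s+(\S+?)[.,]?\s*$", text, re.I)
        depends = re.search(r"required if '?(\w+)'? is provided", text, re.I)
        if example:
            print(f"{param}: example value -> {example.group(1)}")      # e.g. limit -> 20
        if depends:
            print(f"{param}: requires parameter -> {depends.group(1)}")  # radius -> latitude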
Mass assignment is one of the most prominent vulnerabilities in web APIs, originating from a misconfiguration of common web frameworks. It allows attackers to exploit naming conventions and automatic binding to craft malicious requests that (massively) override data that is supposed to be read-only. The goal of this project is to develop a fully automated approach to detect mass assignment vulnerabilities in web APIs.
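A minimal sketch of such a check, assuming a hypothetical /users endpoint running locally and a read-only "role" attribute: the probe replays the documented payload with an extra, undocumented field and inspects whether the server blindly binds it.

    import requests

    documented_payload = {"name": "alice", "email": "alice@example.com"}
    injected_payload = {**documented_payload, "role": "admin"}   # undocumented, read-only field

    # Hypothetical endpoint; in a real run the URL comes from the API documentation.
    resp = requests.post("http://localhost:8080/users", json=injected_payload, timeout=5)
    created = resp.json()

    # If the injected field is reflected in the created resource, the framework
    # likely auto-binds request fields to the model (mass assignment).
    if created.get("role") == "admin":
        print("Potential mass assignment vulnerability on 'role'")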
In the literature, we can find research tools that automatically generate test cases for web APIs, addressing the specific challenges of this programming domain. However, no direct comparison of these tools is available to guide developers in deciding which tool best fits their web API project. The goal of this project is to conduct a thorough empirical comparison of automated black-box test case generation approaches for web APIs.
REST APIs represent a mainstream approach to designing and developing web APIs according to the REpresentational State Transfer (REST) architectural style. Black-box testing, which assumes only access to the system under test through a specific interface, is the only viable option when white-box testing is impracticable. This is the case for REST APIs: their source code is usually not (or only partially) available, or white-box analysis across many dynamically allocated, distributed components (typical of a microservices architecture) is computationally challenging. The goal of this project is to develop a novel black-box approach to automatically generate test cases for REST APIs, based on their interface definition (an OpenAPI specification), that addresses the challenges peculiar to this domain: operation ordering, input generation, and the oracle problem.
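A minimal Python sketch, with a toy in-memory specification excerpt and a stubbed HTTP layer, of how these three challenges fit together: producers are ordered before consumers, inputs are generated from the declared parameter schemas, and response status codes serve as an implicit oracle.

    import random

    # Toy OpenAPI-like excerpt: (method, path) -> parameter schemas (illustrative).
    spec = {
        ("post", "/pets"): {"name": {"type": "string"}},
        ("get", "/pets/{id}"): {"id": {"type": "integer"}},
    }

    def order_operations(operations):
        # Operation ordering: producers (POST) before consumers (GET) of the same resource.
        return sorted(operations, key=lambda op: 0 if op[0] == "post" else 1)

    def generate_input(schemas):
        # Input generation: derive concrete values from the declared parameter types.
        return {name: random.randint(1, 100) if s["type"] == "integer" else f"pet-{random.randint(0, 9)}"
                for name, s in schemas.items()}

    def stub_http_call(method, path, payload):
        # Stand-in for a real HTTP request against the API under test.
        return 201 if method == "post" else 200

    for method, path in order_operations(spec):
        payload = generate_input(spec[(method, path)])
        status = stub_http_call(method, path, payload)
        # Oracle: 5xx responses (and schema violations) are treated as failures.
        assert status < 500, f"{method.upper()} {path} failed with status {status}"
        print(method.upper(), path, payload, "->", status)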
The ecosystem in which mobile applications run is highly heterogeneous and configurable. All the layers upon which mobile apps are built offer a wide range of possible variations, from the device and hardware, to the operating system and middleware, up to the user preferences and settings. Testing all possible configurations exhaustively before releasing the app is unaffordable. As a consequence, the app may exhibit different, possibly faulty, behaviours when executed in the field under specific configurations. The goal of the project is to develop a framework that can be instantiated to support in-vivo testing of a mobile app. The framework will monitor the configuration in the field and trigger in-vivo testing when an untested configuration is recognized.
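A minimal Python sketch of the triggering logic, with hypothetical function names and a toy configuration: each field configuration is fingerprinted, and in-vivo tests run only when the fingerprint has not been seen before.

    import hashlib
    import json

    tested_configurations = set()   # fingerprints of configurations already tested

    def fingerprint(config: dict) -> str:
        # Stable hash of the observed configuration (OS, locale, settings, ...).
        return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

    def run_in_vivo_tests(config: dict) -> None:
        # Placeholder for executing the test suite in the field under this configuration.
        print("Running in-vivo tests for untested configuration:", config)

    def on_configuration_change(config: dict) -> None:
        fp = fingerprint(config)
        if fp not in tested_configurations:
            run_in_vivo_tests(config)
            tested_configurations.add(fp)

    # The second call is a no-op because the configuration was already covered.
    on_configuration_change({"os": "Android 13", "locale": "en_US", "dark_mode": True})
    on_configuration_change({"os": "Android 13", "locale": "en_US", "dark_mode": True})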