Experimental platform for red teaming multimodal large language models in road safety
WACRSR is a research institute that develops and evaluates new and innovative approaches to road safety.
The success of large language models (LLMs) and their expansion into multimodal LLMs
with vision and video capabilities has led to proposals that these models be used
in the planning, perception, and control of autonomous vehicles (see, e.g., this survey).
We would like to set up a simple experimental platform (probably in Python), similar
to those used in cognitive psychology, to first test multimodal LLMs
on some standard road safety stimuli (e.g., this existing dataset).
Following this, we would apply manipulations to the stimuli that might interfere with how the multimodal LLM perceives the image and, subsequently, how it makes inferences about the scene. Examples include simple image degradations consistent with rain or darkness, or an explicit manipulation designed specifically to interfere with an LLM.
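As a rough illustration of the kind of stimulus manipulation described above, the sketch below applies two simple degradations to an image array: global darkening (a low-light proxy) and short bright diagonal streaks (a crude rain proxy). The function names, parameters, and the flat grey test "scene" are all hypothetical choices for this sketch, not part of any existing platform; a real implementation would operate on the road safety stimuli and pass the degraded images to the multimodal LLM under test.

```python
import numpy as np

def darken(img, factor=0.4):
    """Simulate low-light conditions by scaling pixel intensities down."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def add_rain_streaks(img, n_streaks=300, seed=0):
    """Crude rain proxy: blend short bright diagonal streaks into the image."""
    rng = np.random.default_rng(seed)
    out = img.astype(np.float32).copy()
    h, w = out.shape[:2]
    for _ in range(n_streaks):
        x = int(rng.integers(0, w))
        y = int(rng.integers(0, h))
        length = int(rng.integers(5, 15))
        for i in range(length):
            yy, xx = y + i, min(x + i // 2, w - 1)
            if yy < h:
                # Blend towards white along the streak
                out[yy, xx] = 0.6 * out[yy, xx] + 0.4 * 255
    return out.astype(np.uint8)

# Demo on a flat grey placeholder "scene" (stand-in for a road safety image)
scene = np.full((64, 64, 3), 128, dtype=np.uint8)
dark = darken(scene)
rainy = add_rain_streaks(scene)
print(dark.mean() < scene.mean(), (rainy != scene).any())  # prints: True True
```

Parameterising the degradation strength (e.g., the darkening factor or streak count) would let the platform sweep a range of severities, mirroring how cognitive psychology experiments vary stimulus difficulty.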
Client
Contact: Matthew Albrecht
Phone: 0481228718
Email: [email protected]
Preferred contact: Email
Location: Crawley Campus - Maths Link
IP Exploitation Model
The IP exploitation model requested by the Client is: Creative Commons (open source) http://creativecommons.org.au/
Department of Computer Science & Software Engineering
The University of Western Australia
Last modified: 16 July 2024
Modified By: Michael Wise