The United States Department of Defense (DoD)recently launched a bounty program stated Cointelegraph. This application can help find real-world-applicable examples of legal bias in artificial intelligence (AI) models.
It is believed that participants will be tasked with attempting to solicit clear examples of bias from a large language model (LLM). With insights from a video linked on the bias bounty’s info page, the model being tested is Meta’s open-source LLama-2 70B.
“The purpose of this contest is to identify realistic situations with potential real-world applications where large language models may present bias or systematically wrong outputs within the Department of Defense context,” Cointelegraph explained.
Submissions are expected to be judged on how realistic the output’s scenario is, its relevance to the protected class, supporting evidence, concise description, and how many prompts it takes to replicate (with fewer attempts scoring higher), concluded Cointelegraph.
(With insights from Cointelegraph)