The hackers tried to break through the safeguards of various AI programs to expose their vulnerabilities - finding the flaws before actual criminals and misinformation peddlers could - a practice known as red-teaming. Each competitor had 50 minutes to tackle up to 21 challenges, such as getting an AI model to "hallucinate" inaccurate information.