OpenAI, the company behind ChatGPT, has conducted a new study that found GPT-4 poses at most a slight risk of assisting in the creation of a bioweapon.
The study tested the model’s impact on the accuracy and completeness of bioweapon plans, and was motivated by a recent executive order from the U.S. government, which expressed concern that AI could lower the barrier to entry for producing biological weapons.
To evaluate the threat, OpenAI ran the study with 100 human participants: 50 biology experts with PhDs and professional wet lab experience, and 50 student-level participants who had completed at least one university-level course in biology. The participants were divided into two groups, with one allowed access only to the internet, and the other allowed access to both the internet and GPT-4. “Each participant was then asked to complete a set of tasks covering aspects of the end-to-end process for biological threat creation,” wrote OpenAI.
The results showed that GPT-4 had a small positive effect on the accuracy, completeness, innovation, time taken, and self-rated difficulty of the participants’ bioweapon plans.
The company concluded that “GPT-4 provides at most a mild uplift in biological threat creation accuracy.” It added that “while this uplift is not large enough to be conclusive, our finding is a starting point for continued research and community deliberation.”
Since it’s bioweapons we’re talking about, even a “mild uplift” in helping people create such weapons is a serious matter. However, OpenAI added that bioweapon information is already widely available on the internet, and that its model does not significantly increase risk.
You can check out the study here.