GPT-4 developer tool can be exploited for misuse with no easy fix


[Image: ChatGPT on a phone screen] AI chatbots can be fine-tuned to provide information that could help terrorists plan attacks (Credit: salarko/Alamy)

It is surprisingly easy to strip out the safety measures intended to stop AI chatbots from giving harmful responses that could aid would-be terrorists or mass shooters. The discovery has prompted companies including OpenAI to develop strategies to address the problem, but research suggests their efforts have so far met with only limited success.


OpenAI worked with academic researchers on a so-called…
