Aim policies at ‘hardware’ to ensure AI safety, say experts
14 February 2024

Chips and datacentres – the “compute” driving the AI revolution – may be the most effective targets for risk-reducing AI policies, according to a new report.