by Malcolm Murray at 3 Quarks Daily: As AI evolves, so do the risks it poses to society. The risks AI poses today already bear little resemblance to those of a few years ago. As a dual-use technology, similar to nuclear power, AI brings great benefits as well as great risks. Unlike nuclear power, however, the risks from AI are borne by society as a whole, even as the benefits may accrue only to parts of it. This means we need to start focusing on societal AI risk management.
AI capabilities are set to continue evolving in the years to come. Naturally, there is considerable uncertainty about exactly how they will evolve. However, massive investments have already been made and continue to be made. By some estimates, hundreds of billions of dollars have already been invested in AI, and individual future projects are now in the $100 billion range, such as the rumored Stargate project from Microsoft and OpenAI. The scaling laws observed over the past several years mean that money alone (translated into compute and data) has inexorably driven capability advances. The large capability gap between GPT-3 and GPT-4 was not due to scale alone, but scale played a large role.
Therefore, even if no further conceptual breakthroughs were to occur (which is unlikely), capabilities will continue to advance. Given what we already know about the risks involved, it is incumbent on us as a society to manage them so that society as a whole can reap the benefits of AI without suffering the harms. This necessitates broad, interdisciplinary efforts under the banner of societal AI risk management.
The first step in risk management is understanding the risk, i.e. risk assessment. To begin managing the risks, we need to understand what they are, how likely they are, how large they are, and what factors drive them. Some AI risk assessment efforts are underway. Most of this work, however, has focused on model capability evals – the systematic prompting of models to see whether dangerous capabilities can be elicited. Examples of this can be seen in, e.g., the system card OpenAI released for GPT-4.
This is good and necessary work, but it does not suffice as risk assessment, since the capabilities of the AI models are only one part of the puzzle. A risk scenario starts with a hazard – in this case a model possessing dangerous capabilities – but it also involves actors, incentives, resources and harms. For example, a full risk assessment would analyze the opportunities and incentives an actor has to turn the hazard into a risk event. It would also analyze the type of harm that would ensue from the risk event (e.g. fatalities or economic damage).
Over the past year, I have been leading a study conducting more comprehensive assessments of AI risks. The results are not ready yet, but we are starting to see interesting patterns. The bad news emerging from the study is that most risks do increase when AI is used compared to when it is not. The good news is that the risks are not yet changing in nature.
More here.