
A new model for oversight of technology companies?

Working with DeepMind Health: a review

Dr Julian Huppert, Intellectual Forum, Jesus College

People are becoming far more aware of the dangers posed by the overwhelming power of the big technology companies. The Cambridge Analytica scandal, among much else, has highlighted the huge societal harms that can result.

I think the case for stronger legislation is compelling. Any overly powerful organisation can cause immense harm, either deliberately or inadvertently – and even if you are sure that an organisation’s current leadership is benevolent, how sure can you be that this will remain true in the future?

But stronger legislation can only take us so far. It is a blunt instrument: no matter how hard you try to tweak it, it is almost impossible to eliminate bad outcomes without preventing good ones or creating other problems. Indeed, the more arcane and byzantine laws become, the easier it can be for large organisations to find ways to game the system.

There is also the problem that legislation can only ever set a minimum standard. I would like companies to have a reason to aspire to do more than the legal minimum, whatever that may be. For that reason I am very committed to driving improvements in technology ethics, not as a way of avoiding legal regulation, but as a way of going above and beyond mere legal compliance.

One specific example of this has been the work I did with an organisation called DeepMind Health (DMH). DeepMind, now owned by Alphabet, which also owns Google, is possibly the world’s leading deep-learning artificial intelligence company. They knew that when they went into healthcare, this would attract a lot of attention and criticism – aside from the normal sensitivities around health information, the idea that Google could get even more data quite rightly concerned many people.

As a result, they decided to try an ambitious new approach to oversight and governance, bringing in a panel of Independent Reviewers to keep an eye on them and act as a sort of watchdog, giving advice and drawing public attention to any concerns or failings. I and eight others, all with some public prominence, were brought in as reviewers, and I was asked to chair the group.

A core underpinning idea was that if you want trust, the best way to earn it is to demonstrate trustworthiness. So, rather than relying on press coverage to argue that you should be trusted, you find ways to be appropriately open and transparent, and hence demonstrate why you deserve to be trusted – which should then lead to the trust deserved. Or, if trust isn’t deserved, that too will be highlighted.

There were a number of features of this process that went well beyond the often-seen advisory groups, and meant that it was more than just ethics-washing. We were under no confidentiality requirements, but had access to any information we wanted (other than confidential patient data, for obvious reasons). We were explicitly free to share anything we wanted with the press and public if we felt that was appropriate. Indeed, our only real obligation was to publish an annual report – and DMH had no say over what we wrote.

Additionally, we had completely free rein in what we chose to look at – nothing was off limits, and we had a budget to commission our own work on whatever we felt was worth investigating.

Two examples from our first year perhaps illustrate how remarkable our freedom was. DMH were using an app called Streams to help clinicians at the NHS Royal Free Hospital identify acute kidney injury faster, potentially saving many lives. We wanted to see how securely the data was held, how secure the app and its coding environment were, and anything else of relevance. We therefore commissioned an external security firm to go through everything from the code to the physical security of the data centre, and we then published their report – in full, including the handful of failings that were identified, none of which were serious.

How many companies would agree to have their code audited in this way, with the results published openly? This is normally unique to open source projects. I did ask Microsoft Health if they would consider it, and they said they didn’t need to, because they knew it was secure. I know that if I were commissioning a major piece of software, I’d trust the people who openly admitted to some minor failings over those who asserted, without proof, that they had none.

Another example of our freedom concerned the legal position of the data-sharing agreement that DMH had with the Royal Free Hospital. There were complaints that, among other things, DMH were operating beyond the role of a Data Processor and had more control over the data than was legal. This led to an investigation by the Information Commissioner’s Office that lasted over a year.

In the meantime, we commissioned our own independent legal advice, paid for by DMH, although I don’t think they knew whom we had commissioned until afterwards. That advice concluded that DMH had not broken the law – a view later reached by the ICO and others, who did note serious failings at the Royal Free. We would have published the conclusions whatever they had said – again, few companies would voluntarily take that risk.

Our work developed in many other areas, such as looking at the clinical evidence base and the nature of public and patient involvement, both of which were transformed as a result. We also set out ten ethical principles that we felt ought to apply to any technology company working in healthcare – and many of them would apply much more widely.

 

After we had been going for two and a half years, our work was brought to an end by the end of DeepMind Health itself. An internal reorganisation meant that the research part of DMH’s work reverted to core DeepMind, while the applied work became part of Google Health. Neither group has used an equivalent approach.

Did we succeed? I think it was a mixed bag. We definitely caused a number of improvements in the way DeepMind Health operated, and some of that has carried on in its new incarnations.

We didn’t succeed in demonstrating trustworthiness, though I think we did get some way along that road. One problem is the press – many similar approaches are just ethics-washing, and many supposedly independent reports are far from independent, being sanitised before release. As a result, journalists sometimes over-emphasised any criticisms we did make, seeing them as hints of a bigger iceberg underneath, whereas we kept to a warts-and-all approach.

I also think we didn’t have long enough, nor a sufficient profile, for people to get used to looking at our work. There are also challenges around our own structure – why should people have trusted us as appropriate proxies? We also discovered the limits of having no confidentiality clauses: it meant that while we had the right to be told things within DMH, we couldn’t be told about things happening at the Alphabet level, or with some other parties, where NDAs were needed for other reasons.

Overall, I think it was an excellent experiment. Like many experiments, it had successes and failings – and it points to how this approach could be improved next time.

 

2017 DMH Independent Review, Annual Report 

2018 DMH Independent Review, Annual Report
