By Marie Thirlwall, VP Digital and Data Commercialisation, Ploughshare
Now that the dust has settled a little after the inaugural AI Safety Summit, we can take stock and look at what we know – via the media – was discussed. Credit where credit is due – the UK Government got out in front and started this off when it could have been a languishing idea, and made a cracking effort of getting a large international delegation to attend and add their expertise. Important topics were discussed, as you’d expect, and a commitment was made to keep the conversation going. Sifted has a great write-up that I would encourage everyone to read.
Safety with AI tools is an ongoing discussion, and not one that is going to go away anytime soon – officially it was already being discussed in the 3rd Pillar of the National AI Strategy back in 2021, which now feels a long time ago. More generally, it started in sci-fi, and it is going to continue now that we are in the future – albeit potentially not as far ahead as Asimov might have envisaged.
There is a lot still to do, and that was acknowledged by all parties, public and private. Certainly when it comes to concrete measures that will actually impact the AI industry as a whole, there is a lot still to work out. Each country will move at its own pace – whether or not it signed the Bletchley Declaration – and each will have its own priorities when weighing investment in safety against investment in the business side.
What I would like to see more discussion on in public forums is the need to have more people involved in AI-driven decision-making, and the impact that AI has on any decision that needs to be made – and, furthermore, which types of high-impact, high-consequence decisions need to be audited or assured to maintain public trust. The Centre for Data Ethics and Innovation’s work is a good starting point for this, for sure, but it does need to trickle out to the private sector too – and be at the centre of everything data and AI.
At Ploughshare we work on making MOD inventions – by their very nature destined for defence and security – a commercial reality. The ethics of AI on the battlefield is a known problem. Ethics is a very human thing, and AI is not human – artificial is in the name, after all.
Which raises a very specific, complex question. The question isn’t whether AI can ever be safe – that has been asked many times by many people. The question I have in mind is more: can fully AI-augmented decision-making ever be safe? Can those decisions ever carry the accountability or moral weight that a purely human decision chain has? Do we need to focus more on blending human cognition with machine-led intelligence, as in the Human-Centred AI approach? (Dstl has a great biscuit book on HCAI which is worth a read.)
Where does accountability sit when you use AI to help you decide? If it goes wrong, who gets their proportional share of the blame? Is it squarely on the human? What about the person who wrote the software that ingests the data and weighs up the probability of an action from the input it was given? What about the data labeller who gave a bunch of things the same label – labels that perhaps weren’t particularly accurate but did the job? It’s a difficult question to ask and a difficult one to answer – the rate at which AI and Machine Learning (ML) technology is advancing is, frankly, nuts.
Getting as many people as possible working on these tools gives the best chance of making them as accessible and transparent as possible. More expertise also makes things as safe as possible, for as many people as possible. Open source has its limitations, but so does keeping things privately held. Collective research, collective effort and collective expertise are the way forward here, to ensure that we have, at the very least, a set of safety measures that come with AI-augmented decision-making.
Given we really don’t know how AI/ML will play out, this is perhaps one time when innovation needs a safety net.