Artificial Intelligence (AI) has the potential to improve human rights, from improving medical diagnoses to helping tackle climate change. At BT, we already use AI for things like tackling nuisance calls and detecting cyber threat patterns. And we’re exploring uses like modelling our energy consumption to help reduce our carbon footprint.

Like any new technology, AI brings opportunities and raises new questions. How do we make sure decisions made by AI aren’t biased against some people? How do we prevent our networks from being used for unethical AI applications? And how will AI affect jobs in the future?

We take these questions seriously. We’ve set up a Responsible AI Working Group to explore the risks and opportunities AI presents for our business and to develop a responsible, human rights-respecting approach to AI throughout our value chain.

As with other issues, we take a collaborative approach, working with partners in government, industry and civil society to develop best practice and make sure any new rules are fit for purpose and encourage responsible innovation.

We’re actively engaged in the debate on AI and ethics at a national, European and global level. We regularly respond to policy and regulatory proposals and participate in industry working groups that support a responsible approach to AI innovation, including via techUK, Digital Catapult and the Digital Leadership Forum.