6 things funders can do to support the responsible use of data

13 October 2021

Using data is a crucial part of the way we make decisions as a partner and funder to other organisations. Our Interim Data and Analytics Director, Claire Bénard, reflects on how to use data responsibly.

Claire Bénard
Interim Data and Analytics Director

Whether Musk’s claims about Zuckerberg’s “limited understanding of AI” are justified is debatable. But two tech leaders disagreeing about the merits and risks of artificial intelligence is at least a sign that this technology is the double-edged sword of the 21st century.

AI and, more broadly, data can be a dangerous accelerator of discrimination and bias: AI-powered software has been found to diagnose skin diseases less effectively on black skin, and search engines have advertised lower-paid jobs to women. Staying away from these technologies might sound like the safer choice, but it would mean missing out on their huge potential: New York City is using models to spot landlords who discriminate against tenants using housing vouchers, and satellite imagery has shown that industries pollute more when the regulator is not watching.

At Impact on Urban Health, we are not just trying to avoid contributing to the problem; we want to be part of the solution. This blog outlines what we are doing, and what we have learned so far, in our work to make cities and other urban areas healthier places to live – and invites others to join us.

1: Ensure data ethics is embedded at all stages   

Ethics approval is often a single step in a project’s development. We want it to be part of every stage of our work. That’s why we have created principles for responsible data use, to ensure that all our analysts, consultants and partners are aligned on how we work with data.

We are also creating tools to support conversations about the limitations of our work, both internally and with partners. Ultimately, it is our responsibility as an organisation to ensure the projects we fund and the partnerships we are part of are safe for the people in our place. 

2: Seek external feedback 

To draw on specific expertise and get diverse perspectives, we created a Data Advisory Group. The group is composed of people with experience of data and ethics, from a range of organisations and varying levels of seniority. Our aim is not for this group to bring solutions to particular projects, but to help us develop our ethics principles and tools so we can design better partnerships in the future. 

3: It’s not just about when things go wrong 

Conversations about ethics are often centred on a project’s unintended consequences or potential risks. An even harder question to ask is: ‘who might be worse off if we are successful?’

For example, a debt advice charity could consider using case management notes and Natural Language Processing (NLP) to identify beneficiaries who are struggling with their mental health and automatically refer them to an appropriate service. But what if beneficiaries without settled immigration status were actively avoiding being flagged to statutory services because they feared their status might be disclosed?

This hypothetical example illustrates why our real-life projects need to place the beneficiaries of the model at the heart of decision-making.
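
To make the hypothetical concrete, here is a minimal sketch of the kind of model the charity in our example might build. It is purely illustrative: the library choice (scikit-learn), the invented case notes and the labels are all assumptions, not a description of any real system.

```python
# Purely illustrative sketch of the hypothetical NLP model above:
# a text classifier over case notes that estimates whether a note
# suggests a mental health need. All data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: case notes paired with whether a
# caseworker recorded a mental health need (1) or not (0).
notes = [
    "client reports feeling anxious about rent arrears",
    "client asked about budgeting support",
    "client describes low mood and trouble sleeping",
    "client needs help reading a council tax letter",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, labels)

# Scores like these are exactly what could harm beneficiaries if
# they triggered automatic referrals without consent or review.
new_notes = ["client mentioned feeling overwhelmed and isolated"]
for note, p in zip(new_notes, model.predict_proba(new_notes)[:, 1]):
    print(f"{p:.2f} estimated probability of mental health need: {note}")
```

Even at this scale, the risk is visible: the output is a score about a person, produced without their knowledge or consent.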

4: Focus on the problem to pick the best tool, not the other way around 

If all you have is a hammer, everything looks like a nail. And some technologies, like Machine Learning (ML), are powerful, versatile, and – to many – fashionable hammers. Inspiring examples bring the temptation to fit the problem to the tool, rather than to select the best tool for the problem at hand.

Our Data Advisory Group has helped us to focus on the problem rather than the solution. Defining the challenge and the desired impact before thinking of tools is a good way to avoid hammering a screw. 

In our hypothetical example: is NLP the best way to identify mental health needs? If the organisation wants to design a mental health intervention, is it possible – and better – to simply ask beneficiaries directly about their needs?

5: When testing and scaling, the bar is different 

Building a model that flags mental health needs in a test environment has no impact on people’s lives. When scaling, the real-life implications of the solution – and of its failures – need to be considered, involving the people whose data is used. This may lead to re-thinking the design of the solution: instead of automating decisions, the model’s insights can be used to influence strategy, or aggregated predictions can help assess changes in estimated mental health needs.
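
As a hedged illustration of that last point, the sketch below shows what ‘aggregated predictions’ could mean in practice, assuming per-beneficiary scores already exist. The area names, quarters and numbers are invented for illustration.

```python
# Illustrative only: aggregate invented per-beneficiary scores to
# area level, so the output describes estimated need over time
# rather than flagging any individual.
import pandas as pd

predictions = pd.DataFrame({
    "area": ["Lambeth"] * 4 + ["Southwark"] * 4,
    "quarter": ["2021Q2", "2021Q2", "2021Q3", "2021Q3"] * 2,
    "predicted_need": [0.31, 0.25, 0.38, 0.41, 0.27, 0.22, 0.26, 0.24],
})

# No individual is flagged or referred; only the trend is reported.
trend = predictions.groupby(["area", "quarter"])["predicted_need"].mean()
print(trend)
```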

As a funder, we want to build our capacity for the future by constantly innovating, exploring and testing. We should keep testing, while acknowledging that sometimes a project could be scaled but shouldn’t be.

Changing a scaling strategy is not a failure of the feasibility study; it is the success of a responsible decision-making framework for data use.

6: Share learning with others

Our approach to data and analytics has been shaped by learning from others, so we are committed to sharing what we learn to support others on their data ethics journey. In addition to generating important conversations, being public about our work on ethics holds us accountable to our commitments and actions.