Digital monitoring is growing in South Africa’s public service – regulation needs to catch up

Government departments across South Africa are increasingly relying on digital tools to evaluate public programmes and monitor performance. This is part of broader public-sector reforms aimed at improving accountability, responding to audit pressure and managing large-scale programmes with limited staff and budgets.

Here’s an example. National departments tracking housing delivery, social grants or infrastructure rollout rely on digital performance systems rather than periodic paper-based reports. Dashboards – a way of displaying data visually in one place – provide near real-time updates on service delivery.

Another is the use of platforms that collect data via mobile devices. These allow frontline officials and contractors to upload information directly from the field.

Both examples lend themselves to the use of artificial intelligence (AI) to process large datasets and generate insights that would previously have taken months to analyse.

This shift is often portrayed as a step forward for accountability and efficiency in the public sector.

I am a public policy scholar with a special interest in the monitoring and evaluation of government programmes. My recent research shows a worrying trend: the turn to technology is unfolding much faster than the ethical and governance frameworks meant to regulate it.

Across the cases I’ve examined, digital tools were already embedded in routine monitoring and evaluation processes. But there weren’t clear standards guiding their use.

This presents risks around surveillance, exclusion, data misuse and poor professional judgement. These risks are not abstract. They shape how citizens experience the state, how their data is handled and whose voices ultimately count in policy decisions.

When technology outruns policy

Public-sector evaluation involves assessing government programmes and policies. It determines whether:

  • public resources are used effectively

  • programmes achieve their intended outcomes

  • citizens can hold the state accountable for performance.

Traditionally, these evaluations relied on face-to-face engagement between communities, evaluators, government and others. They included qualitative methods that allowed for nuance, explanation and trust-building.

Digital tools have changed this.

In my research, I interviewed evaluators across government, NGOs, academia, professional associations and private consultancies. I found a consistent concern across the board. Digital systems are often introduced without ethical guidance tailored to evaluation practice.

Ethical guidance would provide clear, practical rules for how digital tools are used in evaluations. For example, when using dashboards or automated data analytics, guidance should require evaluators to explain how data are generated, who has access to them and how findings may affect communities being evaluated. It should also prevent the use of digital systems to monitor individuals without consent or to rank programmes in ways that ignore context.

South Africa’s Protection of Personal Information Act provides a general legal framework for data protection. But it doesn’t address the specific ethical dilemmas that arise when evaluation becomes automated, cloud-based and algorithmically mediated.

The result is that evaluators are often left navigating complex ethical terrain without clear standards, forcing institutions to fall back on precedent, informal habits and software defaults.

Surveillance creep and data misuse

Digital platforms make it possible to collect large volumes of data. Once data is uploaded to cloud-based systems or third-party platforms, control over its storage, reuse and sharing frequently shifts from the evaluators to others.

Several evaluators described situations where data they’d collected on behalf of government departments was later reused by those departments or other state agencies for further analysis, reporting or institutional monitoring. This was done without participants’ explicit awareness. Consent processes in digital environments are often reduced to a single click.

One of the ethical risks that emerged from the research was surveillance: the use of this data to monitor individuals, communities or frontline workers.

Digital exclusion and invisible voices

Digital evaluation tools are often presented as expanding reach and participation. But in practice, they can exclude already marginalised groups. Communities with limited internet access, low digital literacy, language barriers or unreliable infrastructure are less likely to participate fully in digital evaluations.

Automated tools have limitations. For example, they may struggle to process multilingual data, local accents or culturally specific forms of expression. This leads to partial or distorted representations of lived experience. Evaluators in my study saw this happening in practice.

This exclusion has serious consequences, especially in a country as unequal as South Africa. Evaluations that rely heavily on digital tools may capture the experiences of urban, connected populations while rendering rural or informal communities statistically invisible.

This is not merely a technical limitation. It shapes which needs are recognised and whose experiences inform policy decisions. If evaluation data underrepresents the most vulnerable, public programmes may appear more effective than they are. This masks structural failures rather than addressing them.

In my study, some evaluations reported positive performance trends despite evaluators noting gaps in data collection.

Algorithms are not neutral

Evaluators also raised concerns about the growing authority granted to algorithmic outputs. Dashboards, automated reports and AI-driven analytics are often treated as the true picture. This happens even when they conflict with field-based knowledge or contextual understanding.

For example, a dashboard may show a target as on track, yet on a site visit evaluators may find flaws or dissatisfaction.

Several participants reported pressure from funders or institutions to rely on the numbers.

Yet algorithms reflect the assumptions, datasets and priorities embedded in their design. When applied uncritically, they can reproduce bias, oversimplify social dynamics and disregard qualitative insight.

If digital systems dictate how data must be collected, analysed and reported, evaluators risk becoming technicians and not independent professionals exercising judgement.

Why Africa needs context-sensitive ethics

Across Africa, national strategies and policies on digital technologies often borrow heavily from international frameworks. These are developed in very different contexts. Global principles on AI ethics and data governance provide useful reference points. But they don’t adequately address the realities of inequality, historical mistrust and uneven digital access across much of Africa’s public sector.

My research argues that ethical governance for digital evaluation must be context-sensitive. Standards must address:

  • how consent is obtained

  • who owns evaluation data

  • how algorithmic tools are selected and audited

  • how evaluator independence is protected.

Ethical frameworks must be embedded at the design stage of digital systems.

Lesedi Senamele Matlala is affiliated with the South African Monitoring and Evaluation Association (SAMEA) and serves as its chairperson.


