
Data-driven tools in recruitment guidance

Practical guides

The use of data-driven tools is rising across the recruitment sector*. This presents important opportunities for the sector, including improved efficiency of time-intensive processes. However, there are also significant risks: high-profile case studies have shown that these tools can produce discriminatory and unfair results if they are not designed and used with care.

The CDEI’s review into bias in algorithmic decision-making highlighted these issues and recommended that guidance be developed to help recruiters make informed decisions on their use of data-driven tools. This guidance aims to give recruiters who are looking to buy data-driven tools the mechanisms to effectively evaluate and responsibly deploy them, ensuring that appropriate steps have been taken to mitigate risks and maximise opportunities.

This guidance has been developed jointly by the Recruitment and Employment Confederation (REC) and the Centre for Data Ethics and Innovation (CDEI). The CDEI is an expert government body enabling the trustworthy use of data and AI. More information about the CDEI can be found on the CDEI web page or by contacting cdei@cdei.gov.uk.

Section A is an introduction to the data-driven tools used in recruitment processes, covering existing types of tools available in the recruitment funnel - across sourcing, sifting and interviewing - and the kinds of risk they may bring.

Section B provides guidance on incorporating and using data-driven tools, including:

  • evaluating effectiveness and value-add for your recruitment process
  • key considerations for data-driven tools
  • communicating your approach to promote trust and transparency.

We use the term “data-driven tools” to refer to technologies and techniques that help to generate useful insights from data. This could range from the use of software for big data analysis, to process automation systems, and on to machine learning or other forms of artificial intelligence (AI).

This is predominantly a buyer’s guide: it assumes that recruiters are deploying tools bought from external technology suppliers. However, much of the guidance will still be applicable if the tool is built in-house.

The focus is on AI and data-driven tools: this guidance should be used in addition to existing compliance practice in recruitment.

* CDEI, Review into Bias in Algorithmic Decision-Making, Chapter 3: Recruitment.


Summary of recommendations

This table summarises the guidance provided in the rest of this document and is particularly applicable for tools procured for the sifting and interviewing stages of the recruitment process. For more details and explanations, please see the full guidance below.

At a glance:

  • Before purchasing: articulate requirements.
  • During purchasing: seek information from the vendor.
  • Before use: verify legal compliance; consider running a pilot; put transparency processes in place.
  • During and after use: monitor and assess the tool’s performance.

Section 1: Evaluation

  • Before purchasing: set out your objective for the tool in a procurement document.
  • During purchasing: seek information from the vendor about how the tool works and articulate any concerns.
  • Before use: ensure the tool works correctly, using a pilot to test this where appropriate.
  • During and after use: monitor against the original business case.

Section 2: Key considerations (non-discrimination and accessibility)

  • Before purchasing: develop internal baselines for protected groups and agree acceptable limits for disparity between groups.
  • During purchasing: seek information from the vendor about the underlying assumptions and testing for bias and accessibility; ask for copies of auditing documentation and check how often audits take place.
  • Before use: consider completing an Equality Impact Assessment; assess the equalities outcomes of the tool in a pilot test.
  • During and after use: continue to test the equalities outcomes of the tool in practice; assess any disparity between different groups.

Section 2: Key considerations (data protection, transparency and due process)

  • Before purchasing: begin completing a Data Protection Impact Assessment (DPIA).
  • Before use: check for compliance with relevant legislation (UK GDPR and the Equality Act 2010); give candidates appropriate notice of data collection; provide a means for human review.
  • During and after use: ensure ongoing legality (e.g. by maintaining the DPIA); reflect on any issues and discuss them with the supplier.

Section 3: Communicating and promoting trust

  • Before purchasing: engage with vacancy holders or clients to understand where automation will fit into existing processes.
  • During purchasing: consider engaging with protected groups and their representatives to understand potential risks and respond to them.
  • Before use: provide information to candidates, making them aware of the use of a data-driven tool and its purpose.
  • During and after use: seek continual feedback from candidates and vacancy holders.


Section A: data-driven tools used in recruitment processes

In this section, we introduce existing types of tools available in the recruitment funnel and the kinds of risk they may bring. Guidance on how to approach and mitigate these risks can be found below.

Sourcing ➔ Screening ➔ Interview ➔ Selection

  • Sourcing: job description review software; targeted advertising; recruiting chatbots; headhunting software.
  • Screening: qualifying screening tools; CV matching; psychometric tests and games; ranking algorithms.
  • Interview: voice and face recognition in video interviewing.
  • Selection: background check software; offer-predicting software.

 

Stage 1: sourcing tools

Sourcing tools are used across the recruitment sector to attract a wide pool of high-quality candidates and begin to engage individuals with employment opportunities. For example, the audience for a job advert has a substantial impact on both the quality of the candidate pool and the individual rights of those who may be interested in applying.

On social media, for example, targeted advertising can reach suitable or new candidates using specific criteria.

These technologies present opportunities to make the recruitment process more effective, reach a wider pool of candidates, and provide a more bespoke service to potential applicants. In some cases, these technologies can be used to improve the diversity of an applicant pool and some recruitment professionals have highlighted the underutilisation of these tools as a barrier to diverse recruitment.

However, there are also a number of risks. The sourcing stage dictates which candidates are aware of opportunities and, without proper usage, these tools may present unforeseen barriers - in particular to protected groups - and may have discriminatory or otherwise unfair consequences.

Example tools in the sourcing stage

Recommendation systems on platforms and job boards which prioritise job vacancies for candidates based on their criteria.

Chatbots or conversational AI to engage with candidates: these are often used to guide applicants through the stages of the process and can improve efficiency.

Multi-database candidate sourcing: enabling recruiters to search for the most relevant candidates either through a simple keyword search or more complex identification of active and passive candidates.

 

Deep dive: targeted online advertising

Social media companies collect information on the users of their platform, such as name, education, interests, as well as patterns of location and usage. Platforms can then use this data to train machine learning models, which are able to make inferences about users and group similar types of accounts together.

Based on the information provided and the inferences made, a recruiter can use online platforms to target advertisements at users with relevant characteristics. They can use demographic qualities to target users, or use “lookalike” features where an ad is targeted at profiles similar to one identified by the client.

These features can be used to positive effect: for example, a company with low levels of diversity could consider using targeting to ensure job ads are seen by a wider or more diverse audience than their typical applicant pool. However, there are also examples of these targeting tools replicating historical biases and limiting some groups’ access to opportunities.

Stages 2 and 3: sifting and interviewing tools

The sifting and interviewing stages are the key decision-making moments in a recruitment process and data-driven tools to support those choices are available. In most cases these tools are not yet sophisticated enough to make decisions on recruitment without any human intervention at all. It is likely to be more appropriate to see these tools as providing useful assistance in making recruitment decisions.

Human bias is a known issue in the sifting and interviewing processes. Automated tools have the potential to reduce this human bias by providing a standardised approach to evaluating applications. However, badly designed or implemented tools - for example those trained on biased historical datasets or which do not adequately take into account the needs of protected groups like disabled people - risk perpetuating existing biases and creating new barriers.

The introduction of automated systems can lead to concerns around transparency, fairness, and accuracy as candidates are not clear on how they have been assessed. Ensuring compliance with data protection and equalities law is key to maintaining trust and legal compliance when deploying data-driven tools.

 

  • The sifting stage: CV screening and evaluation software; structured question assessments based on experience; game-based assessments, including psychometric testing; task-based assessments.
  • The interviewing stage: video screening software, including facial, voice and emotion expression recognition (these tools are very high risk); audio transcription; screening and evaluation of transcription material.

Case study: Amazon’s hiring algorithm and gender bias

In 2018, it was reported that Amazon had been using an algorithm that favoured male candidates in its internal recruitment process. Their recruitment algorithm was trained on 10 years of historical employment data, and the majority of the candidates had been male. From this pattern, the algorithm “learnt” to favour men at the expense of women.

As reported by Reuters, the algorithm downgraded applicants who attended women-only universities, as well as those with CVs that included the word “women’s”, as in “women’s sports team”. Upon discovering this unintentional bias against women, Amazon stopped using the algorithm. This incident illustrates the broader issue of algorithmic bias: when historical data is used to inform future decisions, patterns of exclusion can be repeated or even exacerbated.

https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G


Section B: guidance on deploying tools responsibly

Implementation of the guidance

The level of assurance you apply to the procurement and use of a data-driven tool must be proportionate to the risk involved. Risk exists on a spectrum: it is framed by the law but assessed through your professional judgement, and this guidance should be implemented to a degree proportionate to that risk.

You need to make sure that you know who in your organisation is responsible for taking each of the steps laid out below, as well as who has overall responsibility for the process. This is likely to include a mixture of procurement teams, legal or compliance teams, data protection officers, and others. It might be that you identify someone internally to upskill and give them overall responsibility for ensuring compliance with the guidance.

Where there are any questions or steps you do not feel qualified to answer or undertake, and you do not have the expertise in-house to support you, you should seek external expertise to ensure your procurement decision-making process is robust.

Before purchasing the tool:
Identify who will have overall responsibility for the purchase and use of the tool.
Identify who will be responsible for each of the steps in the process outlined in the guidance.

1.    Evaluation: will the tool be valuable and effective in your business context?

1.1. Tool use within the wider recruitment process

Set a clear vision of what you’re hoping to achieve in using the tool, and where it will sit within the recruitment process overall. This should include deciding whether you want the tool to make fully automated recruitment decisions, or whether you intend to use the tool to support recruitment decision-making. If it is the former, specific data protection legislation (Article 22 UK GDPR) is triggered and additional provisions need to be made (see 2.4).

Some tools may also help standardise assessments and address any bias in existing recruitment processes if used strategically - for example, to standardise the bar for entry at the first round of testing. They may also widen access to talent pipelines, help source candidates with particular skill sets and reduce operational costs. Think carefully about what the intended benefit of using the tool is and therefore how it should be incorporated into the recruitment funnel.

These are important exercises to undertake before starting to procure a tool to ensure that the product will meet your specific business needs and provide value for money.

Before purchasing the tool:

Write down your proposed use case clearly before starting procurement. This should include:

If and how the automated element will interact with manual steps and integrate with the wider process.

What the business need and objective is for procuring the tool (such as addressing biases in current recruitment practices; widening access to talent pipelines; sourcing candidates with a particular skill set; reducing operational costs).

Use this information to assess the tool when deciding whether it meets your specifications during procurement.

1.2 Information about functionality

If a tool or its interface does not function as intended, the benefits of using the tool will be limited, and it may unfairly hinder applicants or put them off. Vendors should be able to provide you with information about the tool’s functionality and take you through this in a demo.

Before purchasing the tool:

Seek information from the vendor on the functionality and processes behind the tool. Ask for a demo of the tool, particularly if the tool will have a substantial impact on decisions made in the recruitment process or candidates will interact directly with the tool. Examples of questions you could ask are:

  • What testing has been done to ensure the interface is accessible and functional for end users?
  • What additional features or modifications are they planning to make in future versions of the tool?
  • Can they detail examples of other customers who have used the tool effectively?
Ensure you have appropriate technical support built into the contract so there is the option for troubleshooting support from the vendor, should any technical problems arise.
Explore the option of running a pilot of the tool to understand how it operates in the context of your organisation.

 


2.   Key considerations for data-driven tools

Data-driven tools provide opportunities to make hiring processes more transparent and equitable. However, they are not without risks and legal requirements; these must be carefully considered and addressed alongside existing methods to guard against discrimination, promote transparency and meet legal requirements.

2.1 Accuracy and legitimate underlying science

It is important to understand what decision-making process the tool is automating (e.g. CV sifts, assessments, or interviews) and how the tool makes decisions, in order to judge whether your organisation is comfortable with the underlying assumptions made by the technology. For instance, if the tool is using game-based assessments to evaluate candidates, ensure that the underlying science of those games is sound.

The method of candidate evaluation should not be based on arbitrary factors or irrelevant aspects of human behaviour. Above all, assessments must be linked to the skills needed for the job.

Check for a clear scientific evidence base before purchasing the tool:

Seek information from the vendor about the underlying science behind the given decision-making process.

What is being evaluated, and is there a clear evidence base? You could ask the vendor to cite the scientific studies that the tool is based upon.

Be wary of any methods of assessment that are not based on high-quality scientific methods, in particular tools that claim to infer human emotions from video interviews, as evidence suggests these have substantial inaccuracies and are potentially discriminatory.

Ask for copies of auditing documentation to understand whether the outcomes produced by the tool (for example, who is recommended for recruitment) have been evaluated for accuracy.


2.2 Bias and discrimination

In data-driven tools, bias may emerge in two separate but related ways which can compound to create cumulative bias.

a) Learnt bias: where data-driven tools “learn” how to make hiring decisions based on existing employment data. If there are patterns of bias in that data, then the data-driven tool is likely to repeat those patterns, which can lead to unfairness. 

b) Inaccuracy: tools that analyse face, speech, or voice may be less accurate for particular demographic groups and introduce discrimination through simply working less well for those groups. Research has shown that facial recognition technology can be less accurate for people with darker skin, and particularly for women with darker skin. Tools that analyse aspects of voice or tone or that transcribe audio to text may be less accurate for certain accents and manners of speech. 

Good vendors will test their technology, for example by auditing and validating the underlying algorithms to protect against potential bias. Recruiters should demand this testing when looking to procure such tools. However, be aware that this testing may have been conducted in a different country, such as the US, so it is important to understand differences in demographics and legal standards, such as the US four-fifths rule (see below).

Relevant frameworks and principles 

  • The Equality Act 2010 prohibits discrimination on the basis of nine protected characteristics (age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation); this applies to the use of data-driven tools. More information is available on the Equality and Human Rights Commission’s website.

  • Direct discrimination occurs when someone is treated worse than another person because they have - or are perceived to have - a protected characteristic, or because of an association they have with a person with a protected characteristic. For example, if a data-driven sifting tool ranked women lower than men based on their recorded gender.

  • Indirect discrimination occurs where an apparently neutral practice, policy or rule puts a group with a protected characteristic at a particular disadvantage, unless it can be objectively justified. An objective justification exists if the employer or recruiter can show that the practice is a proportionate means of achieving a legitimate aim. [The legitimate aim must first be clearly identified, followed by consideration of proportionality. Legitimate aims are assessed on a case-by-case basis and can include protecting health and safety, running an efficient service, or economic and operational requirements. Proportionality requires striking a balance between the importance of the employer’s aims and the seriousness of the impact on the employee.] For example, a tool that prioritised candidates solely on the number of years they had spent in paid employment would be likely to act as a proxy for age (and could also have ramifications for groups with other protected characteristics).
  • Under the Public Sector Equality Duty in the Equality Act 2010, public sector bodies are required to have due regard to the need to eliminate discrimination. This is likely to include running their own tests to detect and mitigate bias - which remains best practice for all organisations (details under “Statistical testing for bias” below).
  • The duty not to discriminate is on the recruiter or employer, not the vendor.

How to test for bias

Test for bias before purchasing the tool:

Seek information from the vendor: vendors will likely have tested tools for bias in their own relevant testing contexts. Use this as a basis for assessing whether you are comfortable with the tool. 

Be aware that UK populations may be different, and that UK legal standards may not have been used. In particular, many vendors use the US-relevant “four-fifths” rule (see the box below) to determine how much adverse impact is permissible. Decide if your team is comfortable with the vendor’s results, keeping in mind that the context may be different.

Seek information on the diversity and composition of the vendor’s engineering and product development team. Check whether the team has been trained on unconscious bias.

Ask for copies of auditing documentation and check how often audits take place, to understand whether the outcomes produced by the tool (for example, who is recommended for recruitment) have been evaluated to confirm they are non-discriminatory.

For an additional level of assurance, you may wish to commission independent evaluation of the tool.

 

During purchasing the tool:
If the analysis (using the method set out in the box below) suggests that the tool produces biased outcomes and you want to continue with procurement, you should provide an objective justification for this decision (a proportionate means of achieving a legitimate aim). This standard is unlikely to be met unless you can evidence that the assessment tests for required attributes and that there are no alternative measures for testing candidates on these requirements.

 

Before, during and after using the tool:

Consider running tests for bias yourself, in pilots of the tool and throughout its use: to assess any potential bias the tool may introduce into your actual hiring process, consider running tests of the system using existing applications on file from candidates with a diverse range of backgrounds. Note that:

Collecting and analysing demographic data for the purpose of testing fairness is permitted within UK data protection law where explicit consent has been provided (see 2.3).

Statistical analysis may not reveal all forms of discrimination. Please see the box below for more information. 

An Equality Impact Assessment may be a useful tool for considering the important issues relating to discrimination and is strongly recommended for tools that are likely to have a significant impact on decision making, particularly where a large number of applicants are involved. 

The Four Fifths Rule

This rule is used to evidence the presence of “adverse impact” in the United States. Adverse impact is present where the selection rate for a protected group is less than ⅘ (80%) of that of the group with the highest selection rate.

While this may be helpful context, the rule is not applied in law in the UK, where a smaller but persistent degree of adverse impact may also need to be justified. A tool might therefore be lawful in the US but still lead to discrimination in the UK.
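To make the arithmetic concrete, the sketch below computes per-group selection rates and the adverse impact ratio in Python. The dataset, column names ("group", "selected") and threshold check are hypothetical illustrations, not part of the guidance itself.

```python
# Minimal sketch of the four-fifths calculation. The data and column names
# are hypothetical; rates would come from your own pilot or live records.
import pandas as pd

pilot = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   1,   1,   0,   0,   1,   0],
})

# Selection rate per group: the share of candidates the tool recommends.
rates = pilot.groupby("group")["selected"].mean()

# Adverse impact ratio: each group's rate relative to the highest rate.
ratios = rates / rates.max()

for group in rates.index:
    # Under the US rule, a ratio below 0.8 evidences adverse impact. UK law
    # applies no fixed threshold: smaller persistent gaps may need justifying.
    flag = "adverse impact" if ratios[group] < 0.8 else "ok"
    print(f"Group {group}: rate {rates[group]:.0%}, ratio {ratios[group]:.2f} ({flag})")
```

In this invented data, group A’s selection rate is 80% and group B’s is 40%, giving group B a ratio of 0.5, well below the four-fifths threshold.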

Positive action

Section 159 of the Equality Act 2010 allows an employer to treat an applicant with a protected characteristic more favourably in a hiring decision than someone without that characteristic, as long as they are equally qualified for the role.

This mechanism can be used to improve the diversity of an organisation and remedy underrepresentation in senior roles.

Setting an organisational benchmark: statistical testing for bias

Statistical testing is a recommended part of ensuring data-driven tools are non-discriminatory. This testing is designed to highlight disparity between groups, and therefore concerns about the tool; it is not designed to guide decision-making about individual applicants.

The testing works by setting a baseline for your organisation and then testing the tool against this baseline to assure you that the tool is operating as it should, or to alert you of any issues. This testing should be conducted at the pilot stage and monitored throughout the lifecycle of the tool (note that this will require sustainable internal resource).

1.    Articulate your organisation’s baseline: before introducing a data-driven tool, you should produce a baseline of progression for applicants from groups with different characteristics. [If there is enough data to draw meaningful conclusions, use intersectional categories, such as “Asian women” or “women above age 50”.] To start this process, you will need to identify the range of characteristics that are relevant in your context (you will likely be drawing from the nine protected characteristics). The information you could consider in developing your baseline includes:

a.    The progression of applicants in previous recruitment rounds, where you are not using the tool.

b.    If data on previous recruitment rounds is unavailable, consider ways to identify a baseline such as running data collection on a new recruitment round.

c.    In addition to your baseline from previous recruitment rounds, you may wish to consider a number of factors which set a higher bar for combating bias in recruitment; this is particularly important if your organisation is not demographically diverse. Factors include:

●    The demography of the population you are recruiting from (e.g. the London workforce).

●    The industry baseline.

Note: you will need to consider which of these data sets are accessible to you and adjust your baseline accordingly.

2.    Measure the impact of the tool in new recruitment rounds: conduct the same analysis (sift, interview, offer) to understand the trends for the same protected characteristics whilst using the tool.

3.    Evaluate: compare how different demographic groups progress when the recruitment tool is used with how they progressed previously. If the analysis shows worrying trends, for example fewer women now progressing, this should alert you to a problem with the new tool which requires assessment to ensure that no discrimination is taking place. Alternatively, if the analysis shows parity of outcomes for different demographic groups, this is evidence that the tool is functioning appropriately. A sketch of this comparison follows.
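As an illustration of steps 2 and 3, the sketch below compares progression rates for one characteristic in a pre-tool baseline round and a round using the tool, with a chi-squared test as one possible check on whether a disparity is statistically meaningful. The counts are invented for illustration, and the choice of test is our assumption rather than a method prescribed by this guidance.

```python
# Hypothetical progression counts at one stage (e.g. the sift) for one
# protected characteristic, before and after introducing the tool.
from scipy.stats import chi2_contingency

rounds = {
    # rows: [progressed, not progressed] for women, then for men
    "baseline (no tool)":    [[45, 55], [50, 50]],
    "new round (with tool)": [[30, 70], [52, 48]],
}

for label, table in rounds.items():
    chi2, p_value, dof, _ = chi2_contingency(table)
    women_rate = table[0][0] / sum(table[0])
    men_rate = table[1][0] / sum(table[1])
    print(f"{label}: women {women_rate:.0%}, men {men_rate:.0%}, p = {p_value:.3f}")

# A marked drop for one group relative to the baseline (as for women in this
# invented data) should prompt investigation of the tool before relying on it.
```

Remember that, as noted above, statistical analysis may not reveal all forms of discrimination, and small sample sizes will limit what any such test can detect.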

How to check accessibility

Check the accessibility of the tool during purchasing:

Seek information from the vendor: 

Recruiters and employers should seek to understand and anticipate the effect of these tools on candidates with disabilities and older people. Vendors may not have quantitative information or statistics about the effect of tools on people with disabilities given the wide variety of disabilities (see box on engagement with representative groups below).

 

Before using the tool:

Conduct analysis and directly engage with groups that represent the interests of people with disabilities and older people. This could include discussion with trade union equalities officers.

It may be difficult to collect enough data on accessibility for effective statistical analysis so discussing the tool with groups that represent the interests of people with disabilities or older people may give greater insight on potential risks. 

 

During and after using the tool:
Reasonable adjustments and alternatives: be prepared to offer candidates reasonable adjustments or an alternative evaluation process. If offering an alternative process, make sure that the process is as robust as the data-driven approach, and that there is no bias against candidates who use the alternative process. Explain the reasonable adjustments clearly and transparently. 
Test impact: once a tool is in use, monitor its impact in practice. Create a channel for feedback from individuals about their experiences from an accessibility perspective in both the data-driven and alternative processes. 
Reflect on results: note that tools which require people to request adjustments can introduce unfairness simply because candidates with disabilities may not want to declare those disabilities for fear of stigma or disadvantage, or may not understand enough about the system to know whether it places them at a disadvantage that requires adjustments. Consider whether this will apply to your workplace context (or engage with your client to understand).

 

Accessibility

Data-driven tools can introduce accessibility issues for disabled candidates, which may lead to unlawful discrimination. If a tool is not accessible or has functionality which makes it hard to use, this may also create discrimination. Addressing the accessibility of tools is as important as ensuring they don’t make biased decisions. For example:

  • game-based assessments may be inaccessible to people with visual impairments;
  • interview-based assessments that use speech-to-text transcription or other audio processing may be inaccessible to people with certain speech impediments;
  • online recruitment tools might be difficult for older people to use if they have not used technology extensively in their working lives.

Relevant frameworks and principles

  • The Equality Act 2010 prohibits discrimination on the basis of disability and age.
  • There is a duty to make reasonable adjustments in relation to disabled people, which means that organisations using recruitment tools need to ensure that disabled people are not unreasonably disadvantaged. For example, if people with a speech impediment on average score lower in a video-based assessment which assesses how someone speaks, there will be a duty to make reasonable adjustments to that tool.
  • Recruiters or employers should not use tools which place older people at a disadvantage unless there is an objective justification.

2.3 Data Protection

Data-driven tools in recruitment are very likely to require personal data and therefore engage data protection law. In addition, as noted above in the discrimination section, in order to monitor the hiring system’s effect on protected groups, it will be necessary for the employer (or the vendor) to collect and analyse protected characteristics data on candidates, which is lawful where appropriate safeguards are in place. 

Relevant frameworks and principles

  • The general data protection regime is covered by the UK General Data Protection Regulation (UK GDPR) and tailored by the Data Protection Act 2018 (DPA). The Information Commissioner’s Office (ICO) regulates data protection in the UK and also provides practical guidance on interpreting the law and good practice online.
  • Under UK data protection law, you must have a lawful basis for processing personal data. The legal responsibility for complying with data protection regulation falls onto the data controller. Who the controller is may vary depending on the setup, but it is important to confirm with the vendor how they intend to use and process candidate data to maintain fairness, privacy, and ensure compliance with data protection law throughout the pipeline. In most cases, the recruiter is likely to be the data controller and the vendor will be a processor, as the recruiter will determine the purposes for which the data are processed and the means of processing (the controller instructs the processor how they will use and process the data). If the vendor also uses the employer’s data to train their algorithm, they would also become a controller, as they are determining the means of processing and so acquire the relevant responsibilities.
  • Collecting data for diversity monitoring: this data is very likely to be “special category data”, as it is particularly sensitive and therefore requires additional protection. [This requires identification of a lawful basis under Article 6 and a condition for processing under Article 9 UK GDPR.] In practice, this means explicit consent from the applicant is required, unless the processing is necessary for reasons of substantial public interest and meets the equality of opportunity or treatment condition set out in Schedule 1(8) of the DPA. You may wish to highlight this collection and processing when seeking consent as part of a larger consent request. The ICO has a guide on Data Protection and AI.

How to check for compliance with data protection law

Check for compliance with data protection law before using the tool:
Agree who has the role of data controller (vendor or recruiter or both) before a procurement contract is signed - controllers will be responsible for complying and demonstrating compliance with UK GDPR. 
With the vendor, clarify how candidates’ data will be used and that you have a lawful basis for collecting their data.

Ensure that the vendor’s practices comply with UK GDPR requirements, paying particular attention to the guidance that applies to special category data. This includes:

Understanding how your candidates’ data will be used in the future. For example, if it is used to train the vendor’s model, consent will be needed from candidates before they interact with the tool. 

Consider whether data minimisation (collecting only the personal data needed for your purpose), de-identification techniques (removing identifying data) and privacy-enhancing technologies would be appropriate. Data collection should be adequate, relevant, and limited to what is necessary. See the ICO’s anonymisation Code of Practice.

Ensuring robust processes are in place for any data storage and mechanisms are in place to delete data.

Complete a Data Protection Impact Assessment (DPIA) - you may want to complete this with the vendor. It is advisable to begin the DPIA as early as possible, to assess and mitigate the data protection risks at an early stage, and to keep it updated throughout. A DPIA must be done before any type of processing that is “likely to result in a high risk”. More information on conducting DPIAs can be found on the ICO website.

It is good practice to publish the DPIA so it is accessible to applicants and workers.

You may also consider completing an Equality Impact Assessment (see 2.2.) alongside or combined with a DPIA.

Where protected characteristic data is being collected for equalities and fairness monitoring, ensure that:

Candidates are given appropriate notice, are provided with a clear explanation of how data will be used and are also given an option to opt out; and

This data is not accidentally “leaking” into the decision-making process; it is kept ring-fenced and separate (see the sketch below).
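As one way of making the ring-fencing above concrete, the sketch below separates decision inputs from monitoring data at the point of intake. The record types and field names are hypothetical; the point is that the scoring pipeline never receives protected characteristic data, which sits in a separate, access-controlled store keyed only by candidate ID.

```python
# Minimal sketch of ring-fencing monitoring data away from decision data.
# All field names are hypothetical; adapt them to your own records.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssessmentRecord:
    """The only record the scoring tool ever sees."""
    candidate_id: str
    test_score: float
    cv_text: str

@dataclass
class MonitoringRecord:
    """Held separately and access-controlled; used only for equalities analysis."""
    candidate_id: str
    ethnicity: Optional[str]   # None where the candidate opted out
    sex: Optional[str]
    disability: Optional[str]

def split_application(raw: dict) -> tuple[AssessmentRecord, MonitoringRecord]:
    """Separate decision inputs from monitoring data at the point of intake."""
    assessment = AssessmentRecord(raw["candidate_id"], raw["test_score"], raw["cv_text"])
    monitoring = MonitoringRecord(raw["candidate_id"], raw.get("ethnicity"),
                                  raw.get("sex"), raw.get("disability"))
    return assessment, monitoring
```

Keeping the join key opaque and restricting access to the monitoring store helps demonstrate that protected data cannot influence decisions.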

Key terms

Personal data is information that relates to an identified or identifiable individual. This could be as simple as a name or a number, or could include other identifiers such as an IP address or cookie identifiers.

Special category data is personal data that needs more protection because it is sensitive. It includes personal data revealing racial or ethnic origin, political opinions, and religious or philosophical beliefs, as well as data concerning health, a person’s sex life and their sexual orientation, amongst other things. [Article 9 UK GDPR]


2.4 Transparency and due process

How to build transparency into the process

Build transparency into the process before using the tool:

Understand whether the tool is making decisions based solely on automated processing (whether Article 22 applies): have a good understanding of how the data-driven system works internally (within your organisation), on what basis decisions are made and the level of human oversight. 

If this provision does apply, ensure you have a lawful basis, and provide routes for human intervention, for the individual to express their point of view, and for human review where requested.

Take appropriate measures to ensure that UK GDPR duties are met, including conducting a Data Protection Impact Assessment (DPIA). It is good practice to check that the vendor has also undertaken a DPIA of the tool. More information on fulfilling Article 22 duties can be found on the ICO website.

During and after using the tool:

Regardless of whether the process is solely automated, good practice includes steps such as the following:

Explain that an automated tool is being used and create a clear appeals process, including a point of contact for human intervention and review. 

Prepare to explain the factors that went into data-driven decisions, and to provide documentation from the vendor about the process.

Consider providing a non-automated alternative to the tool with human review. Where feasible, run a dip sample of applications examined by the two systems to ensure equivalence (see the sketch after this list).

Communicate the broad criteria for success and basis of decisions clearly and meaningfully to candidates. Ensure that vacancy holders are prepared to communicate decisions if need be.
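As a hedged illustration of the dip-sample check above, the sketch below scores a random sample of applications through both the automated route and the human route and reports how often the outcomes agree. The callables tool_decision and human_decision are hypothetical stand-ins for your two processes.

```python
# Minimal sketch of a dip-sample equivalence check between an automated
# route and a human-review route. Both decision functions are hypothetical.
import random

def dip_sample_agreement(applications, tool_decision, human_decision,
                         sample_size=50, seed=0):
    """Return the share of sampled applications where both routes agree."""
    rng = random.Random(seed)
    sample = rng.sample(applications, min(sample_size, len(applications)))
    agreed = sum(tool_decision(app) == human_decision(app) for app in sample)
    return agreed / len(sample)

# Illustrative use: a low agreement rate suggests the two routes are not
# equivalent, and the automated decisions need closer review.
# rate = dip_sample_agreement(apps, tool_decision, human_decision)
# print(f"Agreement on dip sample: {rate:.0%}")
```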

Relevant frameworks and principles

Data protection law sets out legal requirements around transparency in the use of AI in decision-making (as well as in the wider use of candidate data), which must be complied with. In addition, both candidates and recruiters may be nervous of the “dehumanisation” of the process: clear communication and transparency may help to mitigate some of these concerns (explored further in Building Trust below).


  • Transparency is a core part of the UK’s data protection regime and there are several provisions that are relevant to transparency which should be complied with in all recruiting practices. Article 22 may be particularly relevant when using AI.
  • Article 22 of the UK GDPR requires suitable safeguards where recruitment decisions are solely based on “automated processing” [With respect to Article 22, recruitment decisions fall under decisions that have legal or similarly significant effects. More information is on the Information Commissioner’s Office website]. Understanding whether this provision is triggered in your use-case must be linked back to the considerations in the evaluation section: is the purpose of the tool to provide fully automated decisions, or to provide decision-making support? If it is the former, Article 22 is engaged. See the ICO guidance on AI and UK GDPR.

 

  • Firstly, recruitment decisions based solely on automated processing are prohibited unless the decision is either necessary for entering into or the performance of a contract with the applicant, or is based on their explicit consent. 
    • Consent: there are limitations as to how far consent can be relied on in the recruitment and employment context as consent must be "freely given", which may not be the case due to the imbalance of power between candidate and prospective employer. 
    • Contractual: decisions made in the recruitment process could be considered “necessary for entering into or performance of a contract”, providing a lawful basis for fully automating the decision. For example, if an organisation receives thousands of applications for a role, making it practically very difficult for the process to rely on human reviewers, it will likely be appropriate to rely on this exception.
  • However, there are limits to relying on this contractual basis. For example, if there is a less intrusive way of processing candidate data to provide the same service then you cannot rely on this lawful basis for the processing of data with AI.
  • Secondly, where Article 22 is triggered, suitable measures must be in place to safeguard the applicant. These include - at a minimum - the right to have human intervention, to express their point of view, and to contest any automated decision. In addition, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for candidates, must be provided.

3.    Building trust and communicating effectively

When using a data-driven tool in hiring, it can be difficult for vacancy holders to understand why decisions are made and to communicate this process to candidates. As in traditional hiring processes, data-driven systems may use factors that are unknown to vacancy holders or candidates. Communicating effectively and proactively about how data-driven tools are used can improve trust in these systems and the organisations that use them.

Even if, in practice, a data-driven tool is more standardised and better tested than traditional methods, technological tools can elicit public concerns about fairness, transparency and privacy, as well as about navigating online systems.
 

Before using the tool:
Discuss these issues with your clients: if you are a recruitment agency, raise the above risks and mitigations with your clients. 
Consult with groups that represent marginalised individuals in deciding whether to adopt tools. For example, run challenge panels with diversity and inclusion networks, trade union representatives, or interested staff. 
If you are an REC member, ensure the tool complies with the REC Code of Practice.

 

At the point of application:
Communicate relevant aspects of how the system works in plain language to candidates, explaining how they can prepare for success.
Provide contact information for candidates should they face technical issues: where the tool is making a decision about the applicant, ensure candidates have access to contact details in case the automated system does not work as intended. 
Provide an appeals mechanism with a human reviewing the decisions made. 

 

Following the decision-making process:
Explain how decisions about candidates have been made to them.

Create channels to solicit feedback from candidates and vacancy holders about their experience.

You may also consider having systems for redress that allow candidates to challenge decisions (see transparency and due process above).

 

During and after use of the tool: 

You could consider publishing documentation around the use of the tool to proactively communicate its purpose and the risk mitigations in place, and to build trust. This might include:

 
