Recruitment & Employment Confederation
News

AI in recruitment: Avoiding data privacy risks

News from our business partners

This is a guest blog by REC business partner Curo Services.

As artificial intelligence continues to reshape the recruitment landscape, many agencies are exploring the use of Large Language Models (LLMs) like ChatGPT to streamline CV assessments, candidate matching, and client communications. While the potential for efficiency is undeniable, there’s a growing concern that must not be overlooked: the risk of exposing personal data to external AI systems. 

The hidden risk: Data leakage through LLMs 

When recruitment consultants input candidate CVs, client briefs, or other sensitive information into publicly available LLMs, they may unknowingly share personal data with platforms that don’t guarantee data privacy. Some LLMs use interactions to further train their models, meaning your candidates’ and clients’ personal information could be retained and reused: a serious breach of GDPR and other data protection regulations.

This risk is particularly acute in recruitment, where CVs often contain detailed personal identifiers, employment history, and contact information. Even anonymised data can be vulnerable if context clues allow re-identification. 
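One practical mitigation is to strip obvious direct identifiers before any text reaches an external tool. The sketch below is purely illustrative Python (the pattern names and coverage are assumptions, not a complete anonymisation solution): it redacts emails, phone numbers, and UK-style postcodes with regular expressions, but, as the paragraph above notes, it cannot remove the contextual clues that allow re-identification.

```python
import re

# Illustrative patterns only; real CVs contain many more identifier types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
    "POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def redact(text: str) -> str:
    """Replace obvious direct identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

cv = "Contact Jane on jane.doe@example.com or +44 7700 900123, London SW1A 1AA."
print(redact(cv))
# → Contact Jane on [EMAIL] or [PHONE], London [POSTCODE].
```

Even with every pattern above applied, a line like "former Head of Sales at a Leeds-based drinks wholesaler" can still identify someone, which is why redaction alone does not make a public LLM safe for CV data.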

Microsoft Copilot: A safer alternative for recruitment agencies 

Unlike many public LLMs, Microsoft Copilot is built with enterprise-grade security and compliance in mind. Integrated within Microsoft 365, Copilot ensures that: 

  • User prompts and data aren’t used to train the model. 

  • All interactions remain within your organisation’s Microsoft 365 tenant. 

  • Data is governed by the same security, compliance, and privacy controls already in place. 

This makes Copilot a far safer choice for recruitment agencies looking to harness AI without compromising data integrity or breaching privacy laws, especially if you already use Microsoft 365 in your business. See Copilot in action in this video podcast.

Why agencies need clear AI policies now

The adoption of AI tools in recruitment is accelerating, but many agencies lack formal policies governing their use. Without clear guidelines, consultants may turn to free or consumer-grade tools out of convenience, inadvertently putting the business at risk. 

To mitigate this, agencies should: 

  • Develop and enforce a comprehensive AI usage policy. 

  • Educate staff on the risks of using public LLMs with sensitive data. 

  • Promote secure, approved tools like Microsoft Copilot. 

  • Regularly audit AI usage across the organisation. 

AI offers transformative potential for recruitment, but it must be adopted responsibly. By choosing secure platforms like Microsoft Copilot and implementing robust internal policies, recruitment agencies can innovate confidently, protecting the trust of their candidates and clients while staying compliant with data protection laws. 

Learn more about Microsoft Copilot for recruitment agencies in our free webinar for REC members. Sign up here.


Learn more about Curo Services