
DORDA Rechtsanwälte GmbH


The EU Pay Transparency Directive: How companies can prepare for new rules on equal pay

The EU Pay Transparency Directive (Directive (EU) 2023/970) entered into force on 7 June 2023 and must be transposed by all Member States by 7 June 2026. The Directive aims to close the gender pay gap throughout the EU by introducing detailed transparency and reporting requirements, as well as stronger enforcement mechanisms.

A. Key Objectives and Scope

The Directive aims to eliminate gender-based pay discrimination by requiring greater transparency in remuneration structures and strengthening employee rights to access pay information. It applies to all employers in both the public and private sectors and covers all individuals in an employment relationship under national law, including part-time, fixed-term, temporary agency and platform workers. The Directive also strengthens the rights of job applicants during the recruitment process.

B. Core Transparency Measures

Pay transparency before employment. Employers will need to inform job applicants of the initial pay or pay range for a position prior to interviews or employment offers. Job titles and job advertisements must be gender-neutral, and during the application process employers may no longer ask candidates about their current or past remuneration.

Transparency of pay structures. Employers must make available to employees the criteria used to determine pay levels and pay progression. These criteria must be objective, gender-neutral and accessible.

Individual right to information. Employees will have the right to request information about their individual pay level and the average pay levels, broken down by sex, for categories of workers performing the same work or work of equal value. Employers must provide this information within two months of a request at the latest and may not prevent employees from sharing their pay information for the purpose of enforcing equal pay rights.

C. Reporting and Joint Pay Assessments

Employers with 100 or more employees will be required to publish data on gender pay gaps:

- Employers with 250 or more employees: annual reporting from 2027.
- Employers with 150–249 employees: every three years from 2027.
- Employers with 100–149 employees: every three years from 2031.

Reports must include, amongst other things, data on the overall (mean) and median pay gap, variable remuneration, the proportion of women and men in each pay quartile, and the gender pay gap within each category of workers. A category of workers comprises individuals performing the same work or work of equal value, where the value of work is determined on the basis of skills, effort, responsibility and working conditions and, if appropriate, other job-specific factors. Where reporting reveals an unjustified gender pay gap of 5% or more within any category of workers that is not remedied within six months, employers must conduct a joint pay assessment with employee representatives to identify the causes and implement corrective measures.

D. Enforcement and Sanctions

Member States must ensure effective remedies for workers, including full compensation for discrimination-related losses and damages. The burden of proof will shift to employers in cases where pay systems lack transparency or required reporting has not been carried out. National authorities will be empowered to impose effective, proportionate and dissuasive penalties, including fines and, potentially, exclusion from public procurement procedures.

E. Preparing for Implementation

Although Austria and several other Member States have not yet transposed the Directive, companies can already take proactive steps to prepare. Employers should:

- Develop or review internal remuneration systems that ensure equal pay for equal work or work of equal value.
- Integrate transparency obligations into recruitment and HR processes, ensuring compliance with the restrictions on pay history questions and the requirement of gender-neutral job advertising.
- Assess data readiness for future pay gap reporting obligations and establish internal mechanisms for data collection and validation.
- Conduct an internal trial review to identify potential pay gaps of 5% or more and take corrective action early to avoid future compliance risks.

F. Outlook

In Austria, employers are already subject to certain transparency requirements. Companies that permanently employ at least 150 employees must prepare an income report every two years. The report must show (i) the number of women and men in each classification group under the applicable collective bargaining agreement or company-internal salary scheme (if applicable), (ii) those numbers broken down by classification-group years, where available, and (iii) the average or median remuneration of women and men in those groups. The report must be anonymised and submitted to the employee representation bodies or, in their absence, made available to employees.

The reporting and transparency requirements under the Directive will be significantly more granular than those currently in force in Austria. The Directive signals a paradigm shift in pay transparency and equality. With the transposition deadline approaching in June 2026, early preparation will be essential for companies to meet the new transparency obligations.

Author: Florina Thenmayr
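An internal trial review of the kind recommended above can start very simply. The Python sketch below is a minimal illustration with hypothetical figures: it computes the median gender pay gap for one category of workers and checks the 5% threshold that can trigger a joint pay assessment. The Directive's actual calculation rules (e.g. the treatment of variable remuneration and complementary pay components) are considerably more detailed.

```python
from statistics import median

def median_pay_gap(male_pay, female_pay):
    """Median gender pay gap as a share of the male median.

    Illustrative only: the Directive's reporting metrics are
    more detailed than this simplified calculation.
    """
    m, f = median(male_pay), median(female_pay)
    return (m - f) / m

# Hypothetical monthly pay data for one category of workers
# (same work or work of equal value).
male = [3200, 3400, 3600, 4100]
female = [3000, 3100, 3500, 3900]

gap = median_pay_gap(male, female)
print(f"median gap: {gap:.1%}")

# 5% is the threshold that can trigger a joint pay assessment
# if the gap is unjustified and not remedied within six months.
if gap >= 0.05:
    print("gap >= 5% in this category -> review / joint pay assessment")
```

Running such a check per category of workers, and per quartile, gives an early indication of where corrective action may be needed before the reporting obligations take effect.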
14 November 2025
Data privacy and Data protection

Compliance with AI language models from third countries

Challenges when using ChatGPT, DeepSeek & Co: how to strike a balance between compliance and concentrated technology offerings in the global supply chain.

The AI arms race is in full swing and is reaching new geopolitical dimensions. While the USA and China have been fighting for the leading position in the technology sector for years, the EU announced its own entry into the AI race at the AI Action Summit in Paris following the DeepSeek shock: it intends to mobilise a total of EUR 200 billion in AI investments. Until a comparable selection of AI models ‘made in Europe’ exists, however, many companies will continue to face the compliance challenges of AI components from third countries. The AI Regulation is frequently criticised as yet another layer of regulation; in practice, however, the use of language models such as DeepSeek, ChatGPT or Gemini is restricted primarily by data protection and copyright law.

AI purpose determines compliance obligations

The GDPR lays down strict rules for the processing of personal data. Alongside the AI Regulation, these rules expressly apply to AI systems and models as well. The first step is therefore always to check whether personal data is involved when integrating language models into your own systems. This depends, on the one hand, on the likelihood that the model will output personal data as results and, on the other hand, on the intention of the user. For example, personal data is processed when an intelligent duty roster is used. If, by contrast, an employee drafts AI-supported advertising brochures, the likelihood of traceability to a natural person is low and the data protection issues are manageable. The purpose for which AI is used therefore determines which obligations must be observed. The same applies, for example, to the risk classification of AI systems under the AI Regulation.

Data flows in the supply chain

Once the data flows have been clarified, the next step is to determine the roles of controller and processor under data protection law. The provider is the party that has developed the language model and brought it to market; the company then uses it for its own business purposes. The provider therefore carries out the computing processes on behalf of the operating company and becomes the processor, while the company acts as the controller under data protection law. There is, however, an exception for processes that are clearly carried out in the provider's own interest. This is the case, for example, with the ongoing optimisation of the entire language model through data training for a new release (e.g. the improvement of OpenAI's GPT 3.5 version to 4.0). In that case, the provider itself is the controller. The party responsible for each processing step must subsequently also justify the data flow and, for example, obtain any necessary consent.

AI training often a knock-out criterion

Providers generally have an interest in continuously optimising their AI models and in accessing user input for training purposes. When the tools are used, however, the transfer of data to the provider for this new training purpose must be justified separately. Such transfers also increase the risk that the training data is fed into the AI corpus and causes data breaches, and they facilitate the outflow of know-how and the loss of trade and business secrets. To rule out these risks, some providers now offer to exclude data training where a paid enterprise solution is purchased. Companies should therefore take a close look at the different licence models; free and open-source offerings in particular should be critically scrutinised.

Data transfer to third countries

Furthermore, due to their global integration, AI tools always raise the question of the (in)admissibility of data transfers to third countries such as the USA or China. Any transfer outside the EEA must ensure that the EU level of data protection is maintained in the recipient country. This is the case, for example, where an adequacy decision exists for the specific recipient country. Alternatively, standard contractual clauses, together with suitable additional safeguards for data protection, must be concluded with the data recipient. Among the major AI players, companies based in the USA have an advantage: with the EU-U.S. Data Privacy Framework, an adequacy decision exists for certified U.S. companies as a basis for transatlantic data transfers. Whether a specific company is certified can be checked in the public register (the Data Privacy Framework List). There is, however, no comparable adequacy decision for China. Implementing DeepSeek therefore requires the conclusion of standard contractual clauses together with suitable additional safeguards.

Copyright: the elephant in the room

Furthermore, every language model raises the question of whether it has been improperly trained on copyrighted materials. If so, there is a risk for companies that the AI-generated output constitutes a direct reproduction or adaptation of another person's work, infringes third-party rights and may not be used by their employees without the author's consent. It is therefore necessary to examine the extent to which providers and operators of AI systems can rely on the text and data mining exception. This copyright challenge is being hotly debated not only in the EU but worldwide. Liability clauses and clear licence provisions for dealing with AI output are suitable risk management tools.

Keywords: AI Regulation, data protection, copyright, import

Authors

Axel Anderl is Managing Partner and Head of the IP/IT/Data Protection Practice Group and the Digital Industries Group at DORDA. He specialises in IT contracts, in particular outsourcing and cloud sourcing, e-commerce, data protection and new technologies. He is ranked as a leading IT/IP expert in various lawyer rankings (Legal 500, Chambers, Lexology etc). He is also the author of specialist publications, including the books ‘#Blockchain2’ and ‘#Cybersecurity’ published by LexisNexis and ‘IP in der Praxis’ published by Manz, and teaches at the universities of Vienna, WU Vienna and Krems.

Axel Anderl, Managing Partner
Tel +43-1-533 47 95-23
[email protected]

Alexandra Ciarnau is a lawyer in the IP/IT and data protection team, specialising in artificial intelligence and blockchain. She is also Co-Head of the interdisciplinary DORDA Digital Industries Group. Alexandra is an author and a speaker at specialist seminars.

Alexandra Ciarnau, Principal Associate
Tel +43-1-533 47 95-23
[email protected]
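As a closing illustration, the third-country transfer assessment described in the article reduces to a simple decision procedure. The Python sketch below is purely illustrative and not legal advice; the function and parameter names are assumptions, and a real assessment also involves transfer impact analysis and supplementary measures.

```python
def transfer_mechanism(recipient_in_eea, adequacy_decision, dpf_certified):
    """Illustrative sketch of the GDPR Chapter V transfer assessment.

    - Transfers within the EEA need no additional mechanism.
    - An adequacy decision covers the transfer; for U.S. recipients
      this includes certification under the EU-U.S. Data Privacy
      Framework (check the public Data Privacy Framework List).
    - Otherwise (e.g. a provider in China), standard contractual
      clauses plus suitable additional safeguards are required.
    """
    if recipient_in_eea:
        return "no Chapter V transfer mechanism needed"
    if adequacy_decision or dpf_certified:
        return "adequacy decision covers the transfer"
    return "standard contractual clauses + additional safeguards"

print(transfer_mechanism(False, False, True))   # certified U.S. provider
print(transfer_mechanism(False, False, False))  # e.g. provider in China
```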
02 June 2025
Content supplied by DORDA Rechtsanwälte GmbH