Tech Law Talks podcast

AI explained: AI and the German workplace


In this episode, we explore the intersection of artificial intelligence and German labor law. Labor and employment lawyers Judith Becker and Elisa Saier discuss key German employment laws that must be kept in mind when using AI in the workplace; employer liability for AI-driven decisions and actions; the potential elimination of jobs in certain professions by AI and the role of German courts; and best practices for ensuring fairness and transparency when AI has been used in hiring, termination and other significant personnel actions.


Transcript: 

Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.

Judith: Hello, everyone. Welcome to Tech Law Talks and to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in the workplace in Germany. We would like to walk you through the employment law landscape in Germany and would also like to give you a brief outlook on what's yet to come, looking at the recently adopted EU regulation on artificial intelligence, the so-called European Union AI Act. My name is Judith Becker. I'm a counsel in the Labor and Employment Group at Reed Smith. I'm based at the Reed Smith office in Munich, and I'm here with my colleague Elisa Saier. Elisa is an associate in the Labor and Employment Law Group, and she's also based in the Reed Smith office in Munich. So, Elisa, we are both working closely with the legal and HR departments of our clients. Where do you already come across AI in employment in Germany, and what kind of use can you imagine in the future?

Elisa: Thank you, Judith. I am happy to provide a brief overview of where AI is already being used in working life and in employment law practice. The use of AI in employment law practice is increasing not only worldwide, but certainly also in Germany. For example, workforce planning and recruiting can be supported by AI. A fairly large number of AI tools already exists for recruiting, covering, for example, the job description and advertisement, the actual search for and screening of applicants, the interview process, the selection and hiring of the right match, and finally the onboarding process. AI-powered recruiting platforms can make the process of finding and hiring talent more efficient, objective, and data-driven. These platforms use advanced algorithms to quickly scan CVs and applications and automatically pre-select applicants based on criteria such as experience, skills, and educational background. This not only saves time, but also improves the accuracy of the match between candidates and vacancies. In the area of employee evaluation, artificial intelligence offers the opportunity to continually analyze and evaluate performance data. This enables managers to make well-founded decisions about promotions, salary adjustments, and further training requirements. AI is also used in the field of employee compensation. By analyzing large amounts of data, AI can identify current market trends and industry-specific salary benchmarks. This enables companies to adjust their salaries to the market faster and more accurately than with traditional methods. When terminating employment relationships, AI can be used for the social selection process, the calculation of severance payments, and the drafting of warnings and termination letters. Finally, AI can support compliance processes, for example, in the investigation of whistleblowing reports received via ethics hotlines. Overall, it is fair to say that AI has arrived in practice in the German workplace. This certainly raises questions about the legal framework for the use of AI in the employment context. Judith, could you perhaps explain which legal requirements employers need to consider if they want to use AI in the context of employment?

Judith: Yes, thank you, Elisa. Sure. The German legislature has so far hardly provided any AI-specific regulations in the context of employment. AI has only been mentioned in a few isolated instances in German employment laws. However, this does not mean that employers in Germany are in a legal vacuum when they use AI. There are, of course, general, non-AI-specific employment laws and employment law principles that apply in the context of using AI in the workplace. In the next few minutes, we would like to give you an overview of the most relevant of these employment laws that German-based employers should keep in mind when they use AI. I would like to start with the General Equal Treatment Act, the so-called AGG. Employers in Germany should definitely have that act in mind, as it applies, and can also be violated, even if AI is interposed for certain actions. According to this act, discrimination against job applicants and employees during their employment on the grounds of race or ethnic origin, gender, religion or belief, disability, age or sexual orientation is, generally speaking, prohibited. Although AI is typically regarded as objective, AI can also have biases, and as a result the use of AI can also lead to discriminatory decisions. This may occur, for example, when the data the AI is trained with itself reflects human biases, or when the AI is programmed in a way that is discriminatory. Currently, for example, as Elisa explained in the beginning, AI is very often used to optimize the application process, and when a biased AI is used here, for example, for selecting or rejecting applicants, this can lead to violations of the General Equal Treatment Act. Since AI is not a legal subject itself, this discrimination would be attributable to the employer that is using the AI. The result, in the event of a breach of the Act, would be that the employer is exposed to claims for damages and compensation payments. In this context, it is important to know that under the General Equal Treatment Act, the employee only has to demonstrate that there are indications that suggest discrimination. If the employee is able to do so, then the burden of proof shifts to the employer, and the employer must then prove that there was in fact no such discrimination. When an employer uses AI, that can be quite challenging due to the technical complexity involved. In this regard, we think that human control of the AI system is key and should be maintained. As we heard from Elisa in the beginning, AI is not only used in the hiring process, but also in the course of the employment. One question that comes up here is whether AI can function as a superior itself and whether AI can give work instructions to employees. The initial answer here is yes. German law does not require that work instructions be given by a human being. Therefore, just as it is possible to delegate the right to give instructions to a manager or to another superior, it is also possible to enable an AI system to give instructions to the employees. In this context, it is important to recall, however, that the instructions are, of course, again attributable to the employer.
And if the AI instructs in a way that is, for example, outside of reasonable discretion or gives instructions which are outside of the employee's contract, then this instruction would, of course, be unlawful, and that would be attributable to the employer as well. One aspect that I would like to point out here is that if an AI system leads to a decision towards the employee that has legal effects and impacts the employee in a very significant way, then such a decision may not be made exclusively by an AI. This is because of a principle found in data protection law, and Elisa will explain this in greater detail. Another aspect of AI in the course of employment is whether employers can instruct their employees to use AI. Again, the answer here is yes. This is part of the employer's right to give instructions, and this right covers not only whether employees should use AI at all or whether they are prohibited from using it; it also covers what kind of AI can be used. To avoid any misunderstandings and to provide for clarity here, we advise that employers should have a clear AI policy in place so that employees know what the expectations are and what they are allowed to do and what they are not allowed to do. In this context, we think it is also very important to address confidentiality issues and also IP aspects, in particular if publicly accessible AI is used, such as ChatGPT.

Elisa: Yes, that's true, Judith. I agree with everything you said. In connection with the employer's right to issue instructions, the question also arises as to the extent to which employees may use AI to perform their work. The principle here is that if the employer provides its employees with a specific AI application, they are allowed to use it accordingly. Otherwise, however, things can get more complicated. This is because under German law, employees are generally required to carry out their work personally. This means that they are generally not allowed to have other persons do their work in their place. The key factor is likely to be whether the AI application is used to support the employee in performing a task or whether the AI application performs the task alone. The scope of the use of AI is certainly relevant here as well. If employees limit themselves to giving instructions to the AI application for a work task and simply copy the result, this can be an indication of a breach of the duty of personal work performance. However, if employees ensure that they perform a significant part of the work themselves, the use of AI should not constitute a breach of duty. Employers are also free to expressly prohibit the use of artificial intelligence. It is also possible for employers to set binding requirements as to which tasks employees may use AI for and what they must observe when doing so. In the event of violations, employers can then, depending on the severity of the violation, take action by issuing a warning or a termination. Even without an express prohibition from the employer, employees may not be permitted to use artificial intelligence, or must at least inform the employer about the use of AI, in order not to violate their obligations arising from the employment contract. Data protection law is particularly important here. For data protection reasons, employees may not be allowed to enter protected personal data into an AI dialog box. This is mainly due to the fact that some AI systems are hosted on servers with lower data protection standards than in the EU. General data protection principles that apply in the context of employee data protection, as well as the information rights of the employees concerned, must also be observed when setting up and using AI. As a general rule, data that is no longer required must be deleted and incorrect data must be corrected. If the purpose of the data processing changes, the employees concerned must be informed too. In addition, the GDPR imposes special information requirements for automated decision-making, in particular for profiling. In these cases, employers must inform the data subject of the existence of automated decision-making and provide information on the scope and intended effects of such processing for the data subject. This can be a challenge for employers in practice, especially if they are using AI developed by other providers and the employer therefore does not have much knowledge about how the AI system functions. When using AI, companies should also observe the ban on automated individual decision-making in accordance with Article 22 GDPR. This regulation states that decisions that have legal consequences for the data subject or significantly affect them may not be based solely on the automated processing of personal data. Examples of this include selection decisions when recruiting applicants or giving notice of termination.
The background to this is the protection of employees, who should not be completely subject to an automated processing system in important matters. Accordingly, decisions on hiring, promotions, terminations, or warnings generally cannot be made conclusively by an AI system, but are subject to a human decision-making process. Although it should be permissible to use AI to prepare such decisions, it should be ensured that a human is involved in the final decision-making process for these types of decisions. This human involvement should also be documented by the employer for evidence purposes.

Judith: Okay. I want to briefly take another look at termination of employment, but from a slightly different angle. Many employees may now fear that their positions will be eliminated because AI will basically take over their jobs and they will be replaced by AI. So we had a quick look at whether AI can be a reason for termination in Germany. Well, we haven't seen any specific case law on this, probably because it is too early, but we think that the German labor courts would apply the general principles that they apply in redundancy scenarios. This means that an employer that uses AI would have to demonstrate that, due to the use of AI, the job duties of the affected employee are eliminated in full and thus there is no need for this employment anymore. This can be quite challenging in practice, and the German labor courts will fully review whether job duties have in fact been eliminated. The German labor courts, however, won't review whether the decision to use AI is reasonable or not. They will only review whether this decision is obviously arbitrary or, for example, discriminatory. The decision itself is part of the entrepreneurial freedom, and the courts won't assess whether it is a good decision or not. The courts would probably also apply all other general principles in redundancy scenarios with respect to, for example, offering suitable vacancies and with respect to social selection processes. We do not know yet whether courts would apply a stricter standard when it comes to training measures, for example, to make the position holder fit for a new workplace. We think that the case law should be monitored, and we will see how the courts decide in such scenarios. So, Elisa, let's have a look at a German specialty and at those German-based employers who have a works council. Are there any specific legal implications here?

Elisa: Sure, Judith. The introduction and use of AI as a technical system in a company is generally subject to co-determination by the works council, if one exists in the respective company. In addition, the works council has information and consultation rights and must therefore be involved when using AI. Moreover, it is important to note that the works council's right to information already starts with the planning process, so that the works council must be informed at an early stage, before the AI is actually implemented. The works council also generally has the right to consult experts during the implementation of AI. A different assessment with regard to the co-determination rights of the works council can arise when using external AI systems. This is because these are generally used by employees via their own accounts, to which the employer has no access. In such cases, it is not possible for the employer to monitor the performance or behavior of employees. This in turn means that no co-determination rights of the works council are triggered. If the employer wishes to use AI to implement selection guidelines, for example for recruitment or the transfer of employees, the consent of the works council is also required. In this regard, it should be noted, though, that co-determination rights only exist with regard to employees, not applicants. Last but not least, the implementation of AI could constitute a change in operations within the meaning of the German Works Constitution Act if the statutory conditions are met. The employer must in this case consult with the works council about the effects of the AI on the employees and, if applicable, conclude a so-called social plan. In addition to the existing legal requirements under German law, which we have just discussed, employers in Germany and in the European Union should also keep an eye on the recently published AI Act. Judith, what are the relevant provisions of the new EU regulation that employers will have to be aware of in the future?

Judith: Well, the AI Act probably offers enough material for its own episode, but let me at least briefly give you an overview of the Act. On July 12th, the European AI Act, which was adopted by the Council of the European Union in May, was finally published. This Act is considered the world's first comprehensive law regulating AI. The Act applies to both providers of AI and deployers of AI systems. Employers will usually be considered deployers within the meaning of the Act, unless they are actually involved in the development of AI systems themselves. It is important to know that the Act does not stipulate a minimum company size for its application, which basically means that, in the employment law context, the AI Act applies to all employers in the European Union. In brief, the centerpiece of this Act is a classification of AI systems into different risk levels, and the AI Act then allocates different obligations and compliance requirements to the different risk tiers. So the Act takes a risk-based approach. The AI systems that are used in HR departments, just as Elisa described at the beginning of our discussion, will regularly be classified as so-called high-risk systems in accordance with the Act. This applies, among others, to AI systems which are used in the course of recruitment, task assignment, performance evaluation, promotion, and termination of employment. Employers using such high-risk AI systems in the EU will face specific compliance requirements under the Act in the future. Besides these risk-specific obligations, there are also general obligations stipulated in the AI Act that apply regardless of the specific risk tier; these are basically transparency and information obligations, and employers have to meet these as well when using AI systems. Violations of the Act can result in severe fines, and although most of the obligations under the Act will only enter into force in two years' time, that is, in the course of 2026, we think that it makes sense and would be prudent to deal with the Act at an early stage, so that all the AI systems which are in use, and which are planned to be used in the future, are implemented in a way that is compliant with the AI Act. Also, we think that works councils will probably demand corresponding information, will deal with the AI Act in greater detail, and will ask for information and training measures. Well, having said all this, Elisa, what would you recommend to German-based employers? What measures should they take?

Elisa: Yeah, based on the legal situation just discussed, it is important to keep in mind that employers are free to decide whether AI should be used in the company or not. If the decision is made to implement and use AI, appropriate instructions for employees should be in place. This can be regulated by clear clauses on AI use in employment contracts, work instructions, or even in a works agreement if a works council exists. According to our experience so far, it is common for employers in practice to at least have a list of AI systems that are permitted or prohibited in the company. However, beyond that, we advise defining a certain framework for the use of AI in the company by means of an AI policy. This policy should contain clear requirements and minimum standards for the use of AI and reflect the core values of the individual company, so that employees know what is expected of them and what is allowed and not allowed when using AI. In addition, before AI is implemented and also during its use, employees should receive appropriate training on how to use AI when performing their work. For example, employees should be advised not to enter any personal data into the AI system and not to commit any copyright violations. They should also be made aware that AI systems do not always deliver correct results, but often only something that sounds likely and plausible. So, every result generated with AI should be critically questioned and reviewed in detail. Due to the need for training, it can be assumed that the works council, if one exists, will demand that training courses be offered by the employer. If such training is required, under German law, employers must generally bear the costs incurred. Besides the fact that the works council generally has certain information and co-determination rights under German law, as just described, involving the works council at an early stage is also recommended in order to determine the next steps and the timetable for the implementation of AI. Moreover, the involvement of the works council generally serves to increase employees' acceptance of digitalization in the workplace. As AI is still a fairly new topic that is in flux and constantly evolving, we recommend that employers in Germany monitor future developments in legislation and case law and keep an eye on future changes.

Judith: So thank you very much, everyone, for listening, and we will keep you posted on other episodes of this podcast.

Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. 

Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.

All rights reserved.

Transcript is auto-generated.
