News & Developments

Automated Content Recognition: The big brother of creativity

Creativity Under Surveillance

Modern creativity lives under constant observation. Every video, sound or image uploaded to most digital platforms is scanned by automated copyright filters before it ever reaches an audience. These tools, collectively known as Automated Content Recognition (ACR) systems, promise to protect copyright owners from infringement. In practice, they have become powerful but invisible regulators of online content creation. For advertisers and creators alike, the implication is clear: success now depends less on legal compliance and more on technological compatibility. The challenge is no longer simply to follow the law, but to outsmart the algorithm that enforces it. In the digital environment, copyright enforcement has quietly been transformed from a legal issue into a computational one, reshaping the boundaries of creativity, compliance and market access.

The Rise of Algorithmic Copyright Enforcement

ACR technology became widespread in the early 2010s as a pragmatic response to massive infringement on user-generated content platforms. Systems such as YouTube's Content ID or Meta's Rights Manager identify matches between uploaded material and databases of registered works. When a match occurs, the platform may automatically block, monetize or mute the content, depending on the copyright holder's instructions[i]. At first, such technologies were conceived only as tools for protecting copyrighted content on large-scale platforms, but the automation of copyright enforcement has introduced a new logic: platforms now apply a technological test, not a legal one. The algorithm does not ask whether a use is fair, transformative or incidental; it simply detects a match and acts on it. The result is what scholars have called over-blocking: lawful or trivial uses of protected material are pre-emptively removed, while the user's ability to challenge that decision remains limited.
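The match-and-act logic just described can be sketched in a few lines of Python. This is a toy illustration only: the fingerprint database, the similarity threshold and the policy labels are assumptions made for the example, not a description of how Content ID or Rights Manager actually work internally.

```python
from difflib import SequenceMatcher

# Illustrative reference database: fingerprint -> the rights holder's
# pre-registered enforcement policy for any upload that matches it.
FINGERPRINT_DB = {
    "acoustic-fp-001": "block",
    "visual-fp-002": "monetize",
    "acoustic-fp-003": "mute",
}

MATCH_THRESHOLD = 0.9  # assumed similarity cutoff for a "match"

def moderate(upload_fp: str) -> str:
    """Return the enforcement action for an uploaded fingerprint.

    The filter only checks similarity against the reference database;
    it never asks whether the use is fair, transformative or incidental.
    """
    for ref_fp, policy in FINGERPRINT_DB.items():
        similarity = SequenceMatcher(None, upload_fp, ref_fp).ratio()
        if similarity >= MATCH_THRESHOLD:
            return policy  # act on the rights holder's pre-set decision
    return "publish"  # no match: content goes live

print(moderate("acoustic-fp-001"))
print(moderate("original-content"))
```

Note that the function never consults any notion of fair use or context: similarity alone decides the outcome, which is precisely the over-blocking risk discussed here.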
Exceptions such as citation, parody or incidental inclusion, cornerstones of creative freedom, are not merely invisible to the code: they are irrelevant to it[ii].

The Legal vs. the Technological Battle

The spread of ACR systems has reoriented the way creators, advertisers and even legal teams think about compliance. Success once depended on navigating the complexities of copyright law; today, it depends on understanding how the platform's filter "thinks". Creators adjust sound frequencies and edit images to evade detection. Marketing teams buy redundant licenses "just in case". Agencies modify artwork or choose AI-generated replacements to avoid overlaps with existing content. None of these decisions responds to a legal need: they are acts of algorithmic self-defence[iii]. This shift has transformed copyright from a normative issue into a technical one: whoever understands the system wins the battle for visibility.

The practical result is a "technological chilling effect". Creativity is held back not by fear of lawsuits, but by the opaque logic of automated moderation. For the in-house legal adviser, this means that traditional intellectual property manuals are no longer sufficient. Risk management in the digital economy requires both legal and technological fluency. Companies that master both will not only remain compliant but stay visible in markets increasingly dominated by algorithms.

Governance and Accountability: Who Watches the Watcher?

ACR systems operate with minimal external oversight. No legal framework defines standards for technological accuracy, reporting obligations or the remedies available to users who wish to challenge platform decisions. Rights holders can register reference files without verification; algorithms can generate false positives without consequence. This dynamic has produced a privatized form of copyright enforcement in which platforms act as regulators, algorithms as judges and users as defendants.
Meanwhile, governments have even gone so far as to promote the use of such technologies, as the European Union has done through the Directive on Copyright in the Digital Single Market. For companies, the consequences are tangible. A mistakenly blocked campaign can generate lost revenue, reputational harm and breach-of-contract claims. An over-zealous takedown may distort competition if one brand's content remains online while another's disappears. To mitigate these risks, corporate legal departments could:

- Audit platform policies to identify how ACR systems operate.
- Include ACR contingencies, such as compensation or re-posting rights, in marketing contracts.

The Human Role in a Machine-Moderated Market

Automation was supposed to facilitate copyright enforcement. Instead, it has introduced new layers of uncertainty, because creativity cannot, at least for the moment, be interpreted by code. Human oversight remains essential, not only to correct errors but to interpret culture, humor and intent. A meme, a parody or a cinematic reference can look infringing to an algorithm yet be lawful and socially valuable to a human reviewer. In this context, it is important to implement hybrid compliance models that combine legal expertise with an understanding of how these platforms work. There has never been a greater need to integrate intellectual property lawyers into marketing and product-design teams, train creators to assess risk at the storyboard stage, and evaluate the implications of using AI tools before publishing content. Such investments may seem more expensive in the short term, but they ensure brand resilience and integrity. Understanding how platforms function will secure not only greater reach to target audiences, but better management of intellectual property rights and risk prevention.

[i] Lester, T., & Pachamanova, D. (2017). The Dilemma of False Positives: Making Content ID Algorithms more Conducive to Fostering Innovative Fair Use in Music Creation.
UCLA Entertainment Law Review, 24(1). [ii] Carruitero, S. (2023). El sistema de Content ID de YouTube frente a la excepción de uso honrado del derecho peruano. Dissertation. [iii] Guzman-Zavaleta, Z. J., & Feregrino-Uribe, C. (2016). Towards a video passive content fingerprinting method for partial-copy detection robust against non-simulated attacks. PLOS ONE, 11(11).
Rodríguez Angobaldo Abogados - November 6 2025
Labour and employment

Workplace Romance: What if the Coldplay Affair Had Taken Place in Peru?

Imagine this scene: two top executives from a Peruvian company appear on the Kiss Cam at a concert, revealing not only a secret romance but also a marital affair. As the video goes viral and the headlines blow up, the organization faces a perfect storm of media sensationalism, moral dilemmas and workplace upheaval. This hypothetical, inspired by the well-known "Coldplay Affair" that cost the CEO of an American corporation his job, poses a pressing issue for employers: how should they react, strategically and legally, when a romantic scandal upends the management structure? The response requires navigating a delicate balance between managerial authority and employees' fundamental rights, where a misstep can trigger lawsuits, a loss of talent or irreparable reputational damage.

The Power of Management: Limits and Scope within the Peruvian Framework

Under Article 9 of the Peruvian Law on Productivity and Labour Competitiveness (Supreme Decree 003-97-TR), which protects freedom of enterprise, Peruvian employers have the authority to manage and impose sanctions. This power entitles the organisation to establish internal rules, monitor employee performance and sanction noncompliance within appropriate bounds. Thus, in the face of a scandal such as the Coldplay Affair, the company could reasonably start an investigation and demand explanations from those involved. However, this power is not unlimited. The private life of employees, including romantic relationships, belongs to their intimate sphere. Dismissing an executive solely for having an undisclosed romantic relationship could be considered disproportionate and illegal, as neither the Peruvian Labour Productivity and Competitiveness Act nor the Peruvian Constitution, which protects the free development of personality and privacy, classifies a consensual relationship as a major offence.
Since romantic relationships in the workplace are not illegal, intervention is warranted only when the relationship produces tangible consequences such as decreased productivity, conflicts of interest, preferential treatment or sexual harassment.

Internal Work Regulations: A Compass in the Storm

Internal Work Regulations (IWR) become a crucial instrument in this situation. The IWR functions as an "internal constitution" that sets out the rules for cooperation, responsibilities and sanctions. It is mandatory for companies with over 100 employees (and strongly advised for all). A well-constructed IWR must address three crucial fronts in situations like the viral affair:

- Transparency in hierarchical relationships: make it mandatory for staff members to discreetly notify HR of any romantic relationship that involves conflicts of interest or direct subordination. This allows for preventive measures, such as reassigning reporting lines.
- Explicit behavioural limits: to underline the distinction between personal and professional life, forbid excessive displays of affection during working hours and the use of company resources for personal gain.
- Classification of major offences: include intentional opacity, nepotism and violations of conflict-of-interest policies as grounds for sanction.

For instance, if the IWR has been properly disseminated, an executive who hides a romance with a subordinate may be dismissed for "breach of good faith in the workplace" (Article 25 of Peruvian Supreme Decree 003-97-TR). Strict implementation is necessary for the IWR to be effective: any sanction must adhere to due process, including proportionality, an impartial investigation and the right of defence. Even so, for senior managers, employers typically rely on termination by mutual agreement or "withdrawal of trust" as grounds for dismissal.
Unsurpassable Boundaries: Conflict of Interest and Sexual Harassment

A romantic relationship between superiors and subordinates carries two significant legal risks. The first is conflict of interest. The company could dismiss the executive for "breach of trust", a legitimate ground for managerial positions, or even sue them for monetary damages if it turns out that they granted their partner unwarranted promotions, discretionary salary increases or perks after the scandal. The second and more serious risk is sexual harassment. Peruvian companies with over 20 employees are required by Peruvian Law No. 27942, the Law on Preventing and Punishing Sexual Harassment, to establish anti-harassment policies that include intervention committees, yearly training and confidential reporting channels. Herein lies a paradox: even a relationship that starts out consensually may turn into quid pro quo harassment, that is, pressure to obtain sexual favours or post-breakup retaliation.

Conclusion

The "Coldplay Affair" exposes an unsettling reality: companies cannot control the hearts of their employees, but they must manage the institutional consequences that may arise. Building a culture that tempers passionate impulses through transparency, meritocracy and respect is a safer legal solution than blindly banning relationships. Even the most severe crisis can be addressed without bringing down the organization if executives set an example by following the rules they create and if reporting procedures are free from fear of retaliation. Because in the end, what is being judged is not love per se, but the integrity with which a company safeguards its most precious resources: the trust of its people and its reputation in the world.
Rodríguez Angobaldo Abogados - October 14 2025
Press Releases

CMS Grau advises Colbún on acquiring full control over Fénix Power

Lima, August 28, 2025 - CMS Grau advised Colbún Perú S.A. (“Colbún Perú”), a subsidiary of Colbún S.A. (“Colbún”), in the acquisition of Platinum Bolt A 2015 RSC Limited’s (“Platinum Bolt”) indirect participation in Fénix Power Perú S.A. (“Fénix”), through the acquisition of 41.38% of the shares issued by Inversiones de Las Canteras S.A. (“Las Canteras”). Colbún Perú has thus become the holder of all of Las Canteras’ equity shares and, as a consequence, the sole controller of Fénix. The transaction value exceeded US$70 million.

Fénix is one of the main generators in the Peruvian electricity market, with a share of approximately 8%. Its thermoelectric power plant is the only dual combined-cycle plant in Peru (natural gas and steam) and has an installed capacity of 572 MW. Colbún is a power generation company with 39 years of experience. It is the third-largest electricity operator in the Chilean market, with a portfolio of more than 350 industrial and corporate customers, nearly 1,300 employees and an installed capacity of over 5,000 MW across 29 power plants in Chile and Peru. The closing of the transaction occurred on August 21, 2025.

External Legal Advisors for Colbún: CMS Grau: Partners Juan Carlos Escudero and Miguel Viale, with a team composed of Cinthia Cánepa and Pablo Martínez. Barros & Errázuriz: Fernando Garrido.
Internal Legal Advisors for Colbún: Rodrigo Pérez and Verónica Vergara.
External Legal Advisors for Platinum Bolt: Rebaza, Alcázar y de las Casas: Partner Luis Miguel Elías, with a team composed of Rafael Santín and Derek Ortmann. Linklaters: Partner Rupert Cheyne and Maxi Niefer.
CMS - September 2 2025
Dispute resolution – Litigation

Medical AI: A Cure with Legal Side Effects?

The Rise of AI in Healthcare

In recent years, artificial intelligence (AI) has become a commonplace tool across various sectors, and healthcare is no exception. From algorithms that detect diseases with greater accuracy than an experienced radiologist[i] to systems that predict medical complications by analyzing thousands of medical records[ii], AI is profoundly transforming the way human health is diagnosed and understood. The enthusiasm is understandable: faster results, a lower margin of error, and potentially democratized access to quality diagnoses. However, behind this innovation lie unresolved legal questions. What happens if an algorithm makes a mistake? Who is responsible? Who owns the data used to train these systems? Can a company claim that its medical software "heals better" than a human professional?

Intellectual Property, Data, and Liability: Who Is Responsible for Errors?

From the perspective of Peruvian intellectual property law, AI-generated results do not, at least in principle, qualify as protected works if there is no direct human creative involvement. However, when these diagnoses are part of commercial solutions, alternative protection mechanisms arise, such as trade secrets or rights over the underlying database. This brings up the issue of algorithmic bias[iii]: if the data is poorly distributed, for example if a model has been trained only on medical records from certain population groups, the diagnostic results may be inaccurate or even dangerous. This is another key legal dimension, as it affects both the product's reliability and the potential liability in case of harm. In traditional medicine, the healthcare professional who provides a diagnosis is liable for their actions according to medical standards (lex artis).
However, when AI is used as a support tool, or in some cases as a system that autonomously proposes diagnoses, a new scenario of shared responsibility emerges among physicians, healthcare institutions and technology developers. The main challenge lies in the opacity of many AI models, especially those based on deep learning, which do not always allow an understanding of how a particular conclusion was reached; this is known as the "black box" problem[iv]. When a mistake occurs, this complicates both the traceability of the failure and the assignment of responsibility. Broadly speaking, three possible approaches to liability can be outlined:

- Medical liability: when AI acts as diagnostic support and the professional is responsible for accepting or rejecting its recommendation.
- Manufacturer liability: when the software is marketed as a product with a specific diagnostic accuracy, and accountability may arise under warranty or misleading-advertising standards.
- Institutional liability: when healthcare providers integrate AI into their services without properly training their staff, or implement it poorly within their systems.

For now, most countries operate under analog legal frameworks, which leads to uncertainty. In this context, transparency, clinical validation and algorithm traceability will be essential not only to improve the technology, but also to ensure that legal systems can fairly assign responsibility when the inevitable occurs: a medical AI fails.

Advertising and Information Provided to Patients

Beyond technical and liability issues, the deployment of medical AI systems raises questions about how these products are presented to the public, particularly when diagnostic solutions are offered directly to patients or healthcare professionals.
In Peru, advertising in the healthcare sector is subject to strict legal regulation aimed at protecting public health, preventing consumer deception, and ensuring that information about the products and services offered is truthful, verifiable and not misleading, given that the recipient is making decisions that may directly affect their health. In the case of medical AI systems, misleading-advertising scenarios may arise if users are led to trust the algorithm blindly, or if the technology is compared to human medical performance without solid scientific evidence. This becomes even more problematic when AI systems are marketed in environments lacking rigorous validation standards, because it can give companies with more aggressive and less ethical marketing strategies an unfair advantage over those that are more cautious. The regulation of commercial communication on medical AI should therefore ensure that innovation is not built on exaggerated promises or at the consumer's expense, and should require that communications address not only the system's successes but also its technical limitations.

Conclusions and Perspectives

The incorporation of AI-based applications in medical diagnosis represents one of the most profound and promising transformations in the healthcare sector in recent decades. However, as with any disruptive innovation, its benefits come with substantial legal challenges that cannot be overlooked. From the standpoint of liability, the technical nature of medical AI requires an in-depth analysis of each individual case to ensure the correct attribution of damages in case of errors. In terms of advertising and consumer relations, it is essential to avoid unfair or misleading practices that could undermine confidence in the health system. In this context, current regulatory frameworks are, in many cases, insufficient.
There is a need to move toward adaptive regulatory models that combine flexibility, to foster innovation, with clear safeguards, to protect patients' rights and ensure market transparency. Initiatives such as regulatory sandboxes and algorithmic traceability standards are steps in that direction. The challenge for lawyers specializing in technology, healthcare and competition is clear: to accompany the development of these tools with a critical, constructive and multidisciplinary approach that ensures the proper and ethical use of this technology. While medical AI offers countless opportunities and practical applications with a direct impact on the health of the population, only proper attention to its legal implications will prevent it from becoming a new source of systemic risk.

Author: Sebastián Carruitero Cárdenas

[i] Abadia AF, Yacoub B, Stringer N, et al. Diagnostic Accuracy and Performance of Artificial Intelligence in Detecting Lung Nodules in Patients With Complex Lung Disease: A Noninferiority Study. Journal of Thoracic Imaging, 37(3):154-161, May 2022. DOI: 10.1097/RTI.0000000000000613
[ii] Kraljevic Z, Bean D, Shek A, et al. Foresight - a generative pretrained transformer for modelling of patient timelines using electronic health records: a retrospective modelling study. Lancet Digital Health. 2024 Apr;6(4):e281-e290. DOI: 10.1016/S2589-7500(24)00025-6
[iii] Min, A. (2023). Artificial Intelligence and Bias: Challenges, Implications, and Remedies. Journal of Social Research, 2(11), 3808-3817. https://doi.org/10.55324/josr.v2i11.1477
[iv] Bathaee, Y. (2018). The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law & Technology, 31, 889.
Rodríguez Angobaldo Abogados - August 28 2025