CSAM: Is It Time to Age Gate the Internet?

TMT Law Practice

As digital platforms cater to an ever-increasing user base, service providers and regulators are increasingly cognizant of the addictive nature of online services and the manner of their delivery to end users. Children are now the focal point of discussions around these practices, which has intensified regulatory scrutiny and policy-making initiatives.

The rapid build-out of digital support structures, prompted by social distancing measures and the onset of Covid-19 lockdowns, was felt across social media channels, digital gaming, online chatrooms, and wearable and connected devices. The sheer ease and convenience afforded by these “alternatives” induced reliance, dependence and, in turn, addiction to these new trends. For children, these changes are more pronounced and seemingly render them susceptible to irreversible physical, psychological, social, and economic harms.

Human Rights Watch issued a dedicated report[1] on the data collection practices of EdTech platforms, which indicated excessive data collection and data sharing by such platforms without any semblance of user or guardian consent. These findings recur across territories and demonstrate the conscious or passive use of invasive technologies to profile children and turn them into targets for personalized marketing schemes. All the information generated to this end creates a vulnerability: the user [in this case, a child] becomes searchable and reachable.

Child Sexual Abuse Material

At this juncture, it would be improper not to consider the very real threat posed by the availability of child sexual abuse material (CSAM) over the internet. At a time when the internet is evolving into a system which replicates the physical experience in the virtual world, good and bad experiences will co-exist, and the susceptibility to harm, including sexual humiliation, is real. These concerns are exacerbated in the case of interactions involving children, where the end objective is to ensure that their psychological and physical well-being is preserved.

Recent studies report a spike of about 25% in demand for CSAM, which could be attributed to a rise in the number of digital products and services directed at and availed by children. With sex offenders afforded easier access to such children online, these findings point towards the urgent need for coaction between guardians, online service providers and regulators alike to sanitize the digital space and create an age-appropriate environment for all users.

As regulators prepare for discussions on pending statutory proposals to ensure online safety, there is an urgent need to assess the technologies available at hand to address such concerns, weighed against the risks they carry to children's fundamental right to data privacy online.

Regulatory Landscape

In the United States of America, providers are obliged under US law to report to the National Center for Missing and Exploited Children (NCMEC) when they become aware of child sexual abuse on their services. EU law as it stands today [also as an interim measure till August 03, 2024][2] provides only for voluntary reporting, and member states have therefore taken it upon themselves to prepare national rules to fight online child sexual abuse. Learning from these experiences, the Indian legislators implemented the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules), which mandate social media intermediaries with a significant user base to deploy automated tools to identify CSAM and [similar] profane images, while having regard to the interests of free speech and expression and the privacy of users. To acknowledge the need for accelerated redressal and removal of CSAM, the IT Rules have shortened the timeline for removal of such content and propose the use of technologies to enable the identification of the originator of such content.

The European Commission has proposed a new legislation[3] (EU Proposal) to combat child sexual abuse online, which imposes an obligation on service providers to assess the risk of their services’ misuse for the dissemination of child sexual abuse material or for the solicitation of children (grooming), and to propose risk mitigation measures. Chat rooms, voice calls, and livestreams are now service-agnostic features of websites, and they allow predators to initiate contact with young children and direct them to indulge in inappropriate acts in real time, with minimal digital tracing. Further, designated national authorities in the EU, upon review of the risk assessment reports, may issue a detection order, which will require the service provider to use EU-recognized technologies to screen its platform for CSAM. Detection orders are limited in time and target a specific type of content on a specific service. The rules further require application stores to assess the risk that onboarded applications may be used for CSAM dissemination and grooming, and to take reasonable measures to identify child users and prevent them from accessing such applications. The Regulation places the obligation upon service providers to determine the methods for such detection exercises and to use technologies which are the least privacy-intrusive and in accordance with the state of the art in the industry.

In addition to such targeted legislations, online service providers are required to comply with their obligations under the General Data Protection Regulation (GDPR) and similar data protection statutes, and to use privacy-centric tools to ensure legitimate and proportional collection of children's information. Drawing from the requirements of the GDPR, EU member states and entities would resort to conducting impact assessments[4] and related exercises to make these determinations and perform the necessary balancing act.

The detection and removal of CSAM is an issue of public interest, and lawmakers have sought to address concerns and impose obligations in a targeted manner, having regard to the nature of services and their intended audience. However, are there technological tools available to address such concerns, or is privacy-centric detection a painful oxymoron?

Technology at Hand

Anonymization, encryption, and cloud storage allow offenders to circulate CSAM and evade detection by law enforcement agencies; the connectivity presented by the Internet of Things (IoT) offers opportunities for interaction between sex offenders and young children, and grants offenders access to information on the personality traits, behavior, and location of children.

Platforms implement artificial intelligence and machine learning systems to ensure efficiency and accuracy in the detection, monitoring and removal of CSAM[5]. On the enforcement side, cryptographic hash algorithms are used for file identification and evidence authentication in digital forensics, by assigning a hash or numeric value to the content. By creating databases of hashed CSAM, new material can quickly be matched against already known files.
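
To illustrate the matching step described above, the sketch below checks a newly uploaded file against a set of already-known hashes. It is a simplified, hypothetical example: it uses a cryptographic hash (SHA-256), whereas deployed systems typically rely on perceptual hashes (such as Microsoft's PhotoDNA) that still match after resizing or re-encoding, and the hash database itself would be maintained by vetted organizations rather than assembled by the platform.

```python
# Minimal sketch of hash-based matching against a database of known files.
# Assumptions: SHA-256 stands in for the perceptual hashes used in practice,
# and `known_hashes` is loaded from a vetted external database.
import hashlib
from pathlib import Path


def file_hash(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_material(path: Path, known_hashes: set) -> bool:
    """Match a new upload against the set of already-known hashes."""
    return file_hash(path) in known_hashes
```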

Search engines and similar service providers use automated web crawlers to search for and index CSAM content, and implement risk mitigation steps. Anti-grooming technologies evaluate and review conversations between users to detect toxic user behavior or potential grooming, and are widely used by service providers with child-targeted offerings (re: gaming industry)[6]. A rough illustration of how such screening might route a conversation for human review appears below.
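
The sketch below flags a conversation when enough suspicious patterns are matched. The pattern list and threshold are invented for illustration; production anti-grooming systems rely on machine-learning classifiers trained on labelled conversations and behavioral signals, not static keyword lists.

```python
# Hypothetical sketch of conversation screening for human review.
# Patterns and threshold are illustrative only.
import re
from dataclasses import dataclass, field


SUSPICIOUS_PATTERNS = [
    re.compile(r"\bhow old are you\b", re.I),
    re.compile(r"\bdon'?t tell (your )?(mom|dad|parents)\b", re.I),
    re.compile(r"\bsend (me )?a (photo|picture|pic)\b", re.I),
]


@dataclass
class ScreeningResult:
    flagged: bool
    matched_patterns: list = field(default_factory=list)


def screen_conversation(messages, threshold: int = 2) -> ScreeningResult:
    """Flag a conversation for human review when enough patterns match."""
    matches = [p.pattern for p in SUSPICIOUS_PATTERNS
               if any(p.search(m) for m in messages)]
    return ScreeningResult(flagged=len(matches) >= threshold, matched_patterns=matches)
```

A flagged result would be queued for a human moderator rather than triggering an automated report, consistent with the need for human intervention discussed below.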

Concerns

CSAM detection requirements are service-agnostic and call for widespread and excessive screening of content by service providers under the applicable laws. This requires online platforms to sidestep end-to-end encryption, which will invariably have a negative impact on users’ privacy. Access to communication content on a general basis, in order to detect CSAM and grooming, is excessive, disproportionate, and liable to mismanagement. Recently, a report[7] of a father being flagged by a CSAM tool deployed by Google illustrated this risk: a long and winding investigation followed, in which the father’s activities in connection with his Android/ Google linked accounts (contacts, images, e-mails) were accessed. This occurred after the family attempted to share an image of their own child’s groin area with a healthcare practitioner for medical care, and ended up ensnared in an algorithmic system designed to flag people exchanging CSAM. It is important to note that while the images were explicit in nature, they were surely not exploitative; it was the lost context which led to an innocent man being reported to law enforcement.

It is also important that, alongside the deployment of such tools, there is human intervention to make an effective determination before criminal investigations or law enforcement measures are initiated against an individual for possession of CSAM.

When Apple proposed its own suite of tools to scan images on a user’s phone before upload to the cloud, for the detection of CSAM, so that the entire device would not always be subject to scanning and unwarranted intrusions into the user’s device or cloud storage accounts[8], it faced significant backlash. Apple’s proposal to enable a function that scans for CSAM on a user’s handheld device, matching content against a database of hashed CSAM images, had to be withdrawn amidst surveillance concerns raised by regulators and privacy activists alike[9].

The use of age verification and age assessment measures to identify child users, introduced by the EU Proposal, may be a proportionate scheme to address CSAM and grooming concerns; however, identification and enforcement will present a challenge for authorities, with children and young adults inclined to misrepresent their age online to avail themselves of services and offerings which restrict access. Technologies which use facial or audio recognition to estimate user age are historically error-prone, while age verification against government-issued identification or credit information is not a foolproof methodology either. All these methods have varying success, but none has yet mastered the combination of privacy, efficiency, and affordability.

Way Forward

Given the converging nature of services and the roles of service providers online, the law must be technology-agnostic and future-proof in order to create a uniform standard of responsibility for service providers. To that end, the EU Proposal proposes the establishment of an EU Centre, which will collaborate with industry stakeholders and lawmakers to develop standards and make available technologies for content detection; this will alleviate the burden on smaller providers. Furthermore, the EU Centre will give feedback on the accuracy of reporting and help service providers improve their internal processes.

The usage of age tokens, single-use QR codes created by verifying the age of an individual against government records, is being trialed in Australia to grant users access to gambling, alcoholic beverage, and pornographic websites[10]. Solutions which allow these codes to be created and implemented across sectors, by interoperable platforms, will enable smaller players to onboard these tools without requiring a market presence, technical wherewithal, or financial capability comparable to that of the tech giants. A rough sketch of how such a token might work is set out below.
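
The sketch below shows one way a single-use, short-lived age token could be structured: a verification provider signs a minimal assertion (for example, "over 18") after checking government records, and the receiving website validates the signature, expiry and single-use nonce without ever seeing the underlying identity document. The field names, lifetime and signing scheme are assumptions for illustration and do not describe the scheme being trialed in Australia.

```python
# Hypothetical sketch of a single-use, short-lived age token; the token string
# could be rendered as a QR code for presentation. Field names and lifetimes
# are assumptions, not the Australian trial's actual design.
import base64
import hashlib
import hmac
import json
import secrets
import time

SECRET_KEY = secrets.token_bytes(32)  # held by the age-verification provider


def issue_age_token(over_18: bool, ttl_seconds: int = 300) -> str:
    """Issue a signed token asserting only an age attribute, nothing else."""
    payload = {
        "over_18": over_18,
        "nonce": secrets.token_hex(8),            # makes the token single-use
        "expires": int(time.time()) + ttl_seconds,
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SECRET_KEY, body, hashlib.sha256).digest())
    return (body + b"." + sig).decode()


def verify_age_token(token: str, used_nonces: set) -> bool:
    """Check signature, expiry and single-use nonce before granting access."""
    body, _, sig = token.encode().partition(b".")
    expected = base64.urlsafe_b64encode(hmac.new(SECRET_KEY, body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["expires"] < time.time() or payload["nonce"] in used_nonces:
        return False
    used_nonces.add(payload["nonce"])
    return bool(payload["over_18"])
```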

Organizations must take proactive steps towards implementing a privacy-by-design technical infrastructure within their networks, to ensure proportionality of data collection from minor and adult users alike. Taking a leaf out of the existing policy structure around data privacy, which requires that new tools and measures be implemented only after a thorough impact assessment, it is important that private and public entities carry out similar exercises when making determinations about: (i) the efficacy of the tool in monitoring and detecting CSAM; (ii) any surveillance or intrusion that percolates into users’ lives; (iii) whether there is an effective mitigation measure to overcome erroneous determinations; and (iv) whether human involvement in making a final determination is necessary.

Human involvement is necessary because the tech giants currently act as sentinels for the purpose of disclaiming liability, allowing their suites of tools to make the determination to flag an individual, deny them access, or make reports to the enforcement agencies. Performing an appropriate balancing act is the order of nature, and that too must be replicated in the online realm.


[1] https://www.hrw.org/report/2022/05/25/how-dare-they-peep-my-private-life/childrens-rights-violations-governments; last accessed on September 08, 2022 at 1003 hrs.

[2] Child Sexual Abuse Directive; and Regulation (EU) 2021/1232 of the European Parliament and of the Council of 14 July 2021 on a temporary derogation from certain provisions of Directive 2002/58/EC as regards the use of technologies by providers of number-independent interpersonal communications services for the processing of personal and other data for the purpose of combating online child sexual abuse (Text with EEA relevance).

[3] Proposal For A Regulation Of The European Parliament And Of The Council Laying Down Rules To Prevent And Combat Child Sexual Abuse; also accessible at: https://eur-lex.europa.eu/resource.html?uri=cellar:13e33abf-d209-11ec-a95f-01aa75ed71a1.0001.02/DOC_1&format=PDF.

[4] https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12726-Fighting-child-sexual-abuse-detection-removal-and-reporting-of-illegal-content-online_en;

https://cdn.netzpolitik.org/wp-upload/2022/03/2022_03_Impact_Assessment_LEAK.pdf.

[5] WWW ’19: The World Wide Web Conference; Rethinking the Detection of Child Sexual Abuse Imagery on the Internet; pp. 2601–2607; also accessible at: https://dl.acm.org/doi/10.1145/3308558.3313482; last accessed on September 10, 2022 at 1445 hrs.

[6] https://medium.com/@4dsight/preventing-child-exploitation-on-streaming-and-gaming-services-407980648e97; last accessed on September 12, 2022 at 1430 hrs.

[7] https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html; last accessed on September 09, 2022 at 1345 hrs.

[8] https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf; last accessed on September 10, 2022 at 1445 hrs.

[9] https://iapp.org/news/a/op-ed-apples-csam-technology-is-gambling-with-security-privacy/; last accessed on September 10, 2022 at 1240 hrs.

[10] https://www.wsj.com/articles/why-age-verification-is-difficult-for-websites-11645829728; last accessed on September 12, 2022, at 1958 hrs.