Fourth IBA India Litigation and ADR Symposium (Day 2): Spotlight on Emerging Intellectual Property and AI Issues

The session explored how artificial intelligence is reshaping intellectual property law, from data-scraping and authorship debates to personality rights and deepfake misuse. The discussion underscored the need for clearer safeguards, transparency obligations and balanced governance frameworks as AI accelerates real-world disputes.


Day 2, Session 3 of the Fourth IBA India Litigation and ADR Symposium turned towards the rapidly evolving interface between intellectual property (IP) and artificial intelligence (AI), with a particular focus on liability, training data, personality rights and global litigation trends.

The session on “Emerging issues in intellectual property and artificial intelligence” was moderated by Ms. Swathi Sukumar, Senior Advocate, New Delhi. The panel comprised Mr. Chander M. Lall, Designated Senior Advocate; Mr. Pravin Anand, Managing Partner, Anand and Anand, New Delhi; Justice Amit Bansal, Judge, Delhi High Court; Ms. Shwetasree Majumder, Managing Partner, Fidus Law Chambers, New Delhi; Dr. Vivek Mittal, Executive Director — Legal and Corporate Affairs, Hindustan Unilever Limited; and Mr. Amit Sibal, Senior Advocate, New Delhi.

Mr. Chander M. Lall: AI as creator and consumer – an existential test for IP

Mr. Chander M. Lall began by explaining why AI has become such a dominant subject in legal discourse. Citing public statements by global technology leaders who have described AI as more impactful than “fire, electricity or the internet” and, conversely, as potentially “more dangerous than nukes”, he remarked that it is clearly not a passing fad and will profoundly transform how societies function. He then connected this technological disruption to the core rationale of intellectual property. IP laws, he noted, were historically conceived to encourage human creativity:

  • inventors receive time-bound exclusivity for patents;

  • authors and artists are granted copyright protection for their works; and

  • the underlying bargain is disclosure in exchange for limited exclusivity.

Against this backdrop, he posed a central question: if machines become the primary creators, does extending IP protection to AI-generated works still serve the goal of incentivising creativity, and if not, what happens to a system where entire national GDPs and corporate valuations are now heavily IP-driven?

Mr. Lall then turned to comparative developments on AI authorship. Referring to the well-known DABUS litigation spearheaded by Dr. Stephen Thaler, he explained that courts and patent offices in the United States, United Kingdom, European Patent Office and Australia have consistently refused to accept an AI system as an “inventor”, insisting on human inventorship as a precondition for protection. At the same time, some Chinese decisions have treated AI as analogous to sophisticated tools like cameras or computers – “a tool which assists in creativity” – and have accepted that works generated using AI can still attract protection where human involvement remains central.

From creator to consumer, he underscored that AI is also “the biggest consumer of content ever created”. Modern models, he explained, are trained by scraping enormous quantities of text, images and sound from the internet – a process he likened to digitisation enabling computers to “read” and now “understand” books that were once only in physical form. Once scraped, that content is stored, vectorised and tokenised; models then generate outputs based on this internal representation.
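The tokenisation and vectorisation step he describes can be sketched in a deliberately simplified form. This is a toy illustration only, assuming whitespace word tokens and integer ids; production models use learned subword vocabularies and high-dimensional embeddings, and all names below are illustrative:

```python
def tokenise(text):
    """Split text into lowercase word tokens (a stand-in for subword tokenisation)."""
    return text.lower().split()

def build_vocab(tokens):
    """Map each distinct token to an integer id -- the model's internal 'alphabet'."""
    return {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}

def vectorise(tokens, vocab):
    """Replace tokens with their ids; this numeric form, not the original
    text, is what a model stores and trains on."""
    return [vocab[t] for t in tokens]

scraped = "The quick brown fox jumps over the lazy dog"
tokens = tokenise(scraped)          # ['the', 'quick', ..., 'dog']
vocab = build_vocab(tokens)         # {'the': 0, 'quick': 1, ...}
ids = vectorise(tokens, vocab)      # numeric representation of the text
```

The point of contention in the litigation he cites is visible even here: the numeric representation is not the text itself, yet it is derived entirely from copying the text.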

He illustrated the practical stakes through the ANI v. OpenAI litigation pending before the Delhi High Court, in which news content is alleged to have been scraped, processed and then used to generate competing outputs. Rights-holders argue that even “transient” training copies can be infringing and that outputs reproducing lyrics or text verbatim are clearly unlawful. Model developers, by contrast, assert that their systems transform inputs through tokenisation and statistical learning and that the outputs are therefore transformative rather than literal.

He concluded by suggesting that the immediate future is likely to see:

  • a licensing-heavy model where rights-holders monetise training uses;

  • growing reliance on opt-out mechanisms to block web crawlers; and

  • continued debate on fair use versus fair dealing, especially in jurisdictions like India and the UK with narrower, closed-ended exceptions.

Ms. Shwetasree Majumder: Getty Images, global litigation and data provenance

Picking up from Mr. Lall’s remarks, Ms. Shwetasree Majumder examined how foreign courts have begun to structure legal responses to AI training and outputs.

She described the recent Getty Images v. Stability AI judgment of the UK High Court as “the first comprehensive decision” squarely engaging with training data, scraping and model outputs. Getty Images, a major licensor of professional photographs, alleged that Stability AI had scraped millions of its images to train the Stable Diffusion model. Getty’s position, she explained, was straightforward: if a human had copied those images to build a competing business, it would plainly be unlawful; an AI developer should not be treated differently merely because the copying is automated.

In response, Stability AI argued that:

  • the model does not store images in a human-recognisable form;

  • it learns mathematical patterns from vast datasets; and

  • “no copies are stored inside the model”, so training is not equivalent to human-style reproduction.

The UK court, she noted, went into unusual depth on both the technological and legal aspects – analysing how training pipelines function, what intermediate copies are made and how outputs are generated. For practitioners, the key message is that courts will not accept generic characterisations of AI systems and will instead scrutinise training practices and datasets.

Turning to the United States, Ms. Majumder highlighted a trend towards evidence-led scrutiny. She stated that courts are increasingly asking where training data came from, whether developers knew it might be infringing, and what measures were adopted to police that risk. Discovery orders now often seek internal logs and documentation rather than relying solely on public statements about model design.

On the Indian position, she recalled that India does not have an open-ended “fair use” defence; instead, the Copyright Act adopts a closed list of fair dealing exceptions and places strong emphasis on protection of expression rather than ideas. She emphasised that going forward, “lawful acquisition of data and robust transparency obligations” are likely to become central pillars of compliance for any AI system operating across multiple jurisdictions.

Mr. Pravin Anand: Rethinking “intelligence” and future-proofing IP concepts

Ms. Sukumar next invited Mr. Pravin Anand to reflect on how AI challenges the very concept of “intelligence” that underpins IP frameworks.

Mr. Anand began by recalling his long association with the IBA and explained that he would confine himself to a few conceptual points. He stated that any serious engagement with AI and IP must first examine:

  • what intelligence means in a broader sense;

  • how human intelligence compares with that of animals and even plants; and

  • where AI currently sits along this spectrum.

He mentioned that he had identified around fifteen characteristics of human intelligence – including learning, communication, logic, prediction, creativity and perhaps emotion – and then mapped which of these appear in plants, animals and AI systems. While AI already exhibits several traits such as learning and predictive capability, he noted that attributes like emotion and consciousness remain more contested.

Referring to the widely publicised conversations between a Google engineer and the LaMDA model, he explained how the engineer claimed the system appeared “angered” when pushed to comment on religion, and “fearful” when discussion turned to being switched off. Mr. Anand suggested that this anecdote illustrates how quickly the debate has moved from mere functionality to questions of sentience and consciousness.

He then posed a deeper question: if biological life itself emerges from arrangements of atoms and molecules, can numbers and code – which underlie AI – ever acquire something analogous to consciousness? If so, IP law may eventually have to confront whether it can continue to treat AI simply as a tool, or whether concepts like authorship, infringement and moral rights need to be revisited in light of these developments.

In his closing remarks, Mr. Anand observed that any future doctrinal evolution must be grounded in a realistic understanding of AI’s capabilities, rather than in either uncritical optimism or blanket scepticism.

Dr. Vivek Mittal: Corporate governance, prompts and practical risk management

Ms. Sukumar then invited Dr. Vivek Mittal to share how companies operationally manage AI-generated content.

Dr. Mittal highlighted several practical themes:

  • Shift from agencies to tools: Where companies once relied on external creative agencies for advertising, branding and content, they are increasingly turning to AI tools. The focus, he remarked, is less on how the engine works and more on whether the output is legally safe. If an output is demonstrably non-infringing and responsibly vetted, in-house teams tend to be comfortable using it.

  • Tool selection and training data: Many organisations, he explained, now ask whether AI models are trained on their own data, publicly available neutral data, or potentially on competitors’ confidential material. Internal policy debates therefore revolve around which tools may be used, for what purposes, and under what governance controls.

  • Explosion of digital assets: Dr. Mittal noted that digital creatives which might once have cost 10,000 USD to produce can now be generated at a fraction of that amount, leading to an exponential increase in the number of assets generated for campaigns. This, he stressed, creates new compliance burdens: “If you are not checking whether the output infringes, you are not being responsible.”

  • Prompts as potential “secret formulas”: He posed a thought-provoking question on whether carefully engineered prompts might themselves deserve protection, since a unique prompt can drive a characteristic output. From a corporate vantage point, he suggested, protecting prompts from being copied may be as critical as protecting the outputs themselves, likening them to a “Coke-style secret recipe”.

He also referred to broader ethical and sectoral concerns such as whether consumers and regulators would accept a life-saving drug where AI played a major role in discovery and emphasised that many companies are gravitating towards a cautious internal principle:

“AI tools should primarily be used for productivity, not to replace human creativity and control”.

Justice Amit Bansal: AI, personality rights and digital dignity

Finally, Ms. Sukumar invited Justice Amit Bansal, sitting Judge, Delhi High Court, to speak on AI and personality rights. Justice Bansal described personality rights as a “complex navigation of the intersection of identity, technology and law”. He observed that although there is no dedicated Indian statute on personality rights, the number of cases has “multiplied” in recent years, and courts are increasingly called upon to balance innovation against infringement.

He structured his presentation in three parts:

1. Facets of personality rights

Justice Bansal identified four principal facets:

  • Privacy: the “right of being left alone”, concerned with preventing unwanted intrusion rather than securing revenue.

  • Commercial exploitation: protection against “theft of personality”, where the goodwill and reputation carefully built by a celebrity are used without consent to sell products or services.

  • Control over persona: an individual’s right to decide how they are projected – including voice, walk, expressions and catchphrases – and to prevent unauthorised imitation of these elements.

  • Defamation and wrongful association: instances where a personality is linked with an activity or product they do not endorse, causing serious reputational harm, illustrated by a case where a leading cricketer’s name and image were used to promote a gaming application without authorisation.

He distinguished between “pure” personality rights (such as name, voice and signature) and “hybrid” rights that evolve over time, like iconic film dialogues or character traits which become inseparably associated with an actor.

2. Jurisprudential basis in India

Moving to the second part, he traced how Indian courts have progressively recognised personality rights through common law doctrines.

He also briefly contrasted approaches in the United States (where personality rights can be inheritable and assignable) and the United Kingdom (where protection flows largely from passing off and breach of confidence), noting that Indian courts have increasingly focused on the commercial value of celebrity persona.

3. Emerging trends in the AI era

In the third part, Justice Bansal turned to emerging disputes involving AI tools, identifying typical patterns:

  • AI-generated deepfakes involving celebrity faces in false or obscene contexts;

  • fake endorsements using cloned voices and images;

  • manipulated videos and “face swap” content; and

  • voice-cloning tools that allow any lyrics to be rendered in the distinctive voice of a famous singer.

He observed that these cases increasingly extend beyond film stars and cricketers to financial advisors, digital content creators and journalists, where the misuse of persona can mislead the public into following false investment tips or purchasing dubious products, thereby creating a public-interest dimension to enforcement.

On remedies, he explained that while traditional measures like injunctions, John Doe orders, damages and delivery-up remain important, the digital environment has pushed courts to innovate with:

  • takedown and blocking orders directed at intermediaries, given the difficulty of identifying primary infringers;

  • orders mandating disclosure of registrant details from platforms; and

  • dynamic injunctions, under which once a particular infringing use is identified, future mirror or derivative content may be taken down by intermediaries without requiring the plaintiff to return to court each time.

He also pointed to unresolved issues likely to occupy courts in coming years, including:

  • whether monopolising generic dictionary words or common phrases, merely because a celebrity speaks them in a distinctive way, is permissible;

  • how to deal with the “sound-alike” paradox, where a person is naturally born with a voice similar to a celebrity; and

  • the persistent challenge of distinguishing parody from infringement in an era of sophisticated AI manipulation.

In concluding reflections, Justice Bansal remarked that beyond personality rights, the legal system will need to deepen its understanding of how to control and align AI systems – including preventing biased or harmful outputs, addressing deepfakes and contemplating international rules for autonomous weapons and other existential applications.

Closing observations: open questions on AI “speech” and electronic evidence

Bringing the discussion together, Ms. Sukumar flagged two broader open questions for future debate:

  • whether freedom of speech is and should remain exclusively a human attribute, or whether AI-generated content raises independent speech considerations; and

  • how courts should evaluate electronic evidence in a world where sophisticated AI tools can fabricate audio-visual material, while procedural rules still rely heavily on certificates of authenticity and assumptions about reliability of digital records.

She observed that these evidentiary questions go to the root of both civil and criminal adjudication and will likely require updated guidelines and technical standards.
