Introduction
Can a machine truly be guilty of theft? The question sounds like the opening of a science fiction court case, but in the modern world of the internet it is a pressing legal issue. With generative artificial intelligence (AI) tools becoming increasingly prevalent — capable of writing poems, designing logos, composing music, or mimicking the style of famous artists — the boundaries between inspiration, innovation, and infringement grow increasingly blurred.
In legal terms, this creates a curious gap. Our intellectual property (IP) laws were designed for humans — authors, inventors, and performers — not for artificial intelligence systems that can be trained on large datasets and produce outputs resembling existing copyrighted works. So, when an AI tool like Midjourney creates an image that looks like the work of van Gogh, or ChatGPT produces text that closely resembles a copyrighted article, who is legally responsible? The programmer? The user? Or does the law leave us in a muddle with no straightforward answers?
Indian law, notably the Copyright Act, 19571, still requires a “human author” before granting copyright protection. In Eastern Book Co. v. D.B. Modak2, the Supreme Court emphasised the role of skill and creativity and rejected mere hard work as a sufficient basis for copyright. But what if no human skill is involved at all? AI-generated content rests on training data and algorithmic computation rather than conscious skill. And in R.G. Anand v. Delux Films3, the Court drew the line between inspiration and imitation — can an AI even be aware of such a difference?
As legal communities across the globe begin to wrestle with questions pertaining to AI and intellectual property rights, India stands at the same juncture. Do we update our laws for the digital era, or do we leave creators vulnerable while AI operates in a legal grey area? This essay examines how global regulations and Indian law address this problem and proposes directions for establishing legal liability in the era of intelligent machines.
Understanding the problem
Amid the very recent trend of Ghibli-style art, just for fun, I typed: “a woman walking down a rainy Mumbai street, painted in van Gogh’s style”. In seconds, the image materialised. It was lovely, artistic, and somehow familiar. But then I thought: I did not paint it, and van Gogh certainly did not paint Mumbai. So… whose is it? Can I claim it as my original work just because I typed a prompt? That flash of interest became a question I have been trying to answer ever since: if I did not create it with my own hands or mind, do I actually own it, or did the machine just pilfer from somebody else’s imagination?
This question encapsulates the essence of the issue that AI-generated works present to the realm of intellectual property law. Generative AI models such as ChatGPT, Midjourney, DALL·E, and Google’s MusicLM now possess the capability to produce poetry, digital artwork, and even musical compositions that seem “original” to the observer. However, under India’s existing copyright law, the Copyright Act, 1957, originality is understood as the product of human skill, effort, and judgment. The Supreme Court in Eastern Book Co. v. D.B. Modak4 explained that mere effort or time investment is not sufficient. There has to be a “modicum of creativity”, and above all, it must come from a human origin.
This poses a significant challenge for content generated by artificial intelligence. In contrast to AI-assisted content, where a human employs AI as a tool to help develop ideas, AI-generated content is produced autonomously by algorithms with little human input beyond a text prompt. AI’s ability to mimic specific artistic styles without authorisation erodes not only copyright protection but also an artist’s moral rights under Section 575 of the Copyright Act, 1957. It highlights the fact that individual pieces of art enjoy legal protection, while styles and aesthetics — the very qualities that make a van Gogh more than just another painting — do not.6 Therefore, when an AI creates a picture in the “van Gogh style”, it does not technically violate any law; yet it certainly violates the spirit of authorship.
Also, most such AI tools learn from massive databases — often scraping copyrighted works without permission. That practice is being contested in Ani Media (P) Ltd. v. OpenAI Inc.7, with the Court exploring whether AI companies violated Indian copyright law by training their models on protected works. This case could become a pivot point in India’s handling of AI’s collision with intellectual property (IP), especially since Indian law follows “fair dealing” (narrower than the US “fair use”), leaving minimal room for large-scale, unlicensed training on data to find legal justification.
The real challenge, therefore, is not so much legal as philosophical: can there be creativity without consciousness? Do machines deserve credit for creation? Or does the credit (and blame) always belong to the user, developer, or owner of the dataset? The law has no clear answers yet — but as a law student and a consumer of these tools, I find myself in the middle of this emerging debate. The deeper I go, the more it appears that AI is not replacing creativity — it is redefining the question of who, or what, is a creator.
Indian law and its loopholes
India’s IP regime, though challenged by AI, has several strengths. The Copyright Act, 1957 protects original works, with Sections 138 and 179 defining ownership and Section 5110 defining infringement. The Patents Act, 197011 ensures transparency through Section 10(4)12 and averts monopolies over abstract concepts through Section 3(k)13. The Trade Marks Act, 199914 provides strong brand protection under Section 2915, and the Designs Act, 200016 protects aesthetic innovation. These provisions, while not AI-specific, provide a strong legal basis that can be adapted to address future issues arising from artificial intelligence and new technologies.
When we consider the legal responsibility of AI in the Indian context, the question that comes to mind is the same: who is responsible when an AI violates someone’s intellectual property? Is it the developer? The operator? Or can the AI itself be held responsible, as strange as that may sound? Indian intellectual property law, unfortunately, has no answer. The Copyright Act, 1957, which applies to creative works such as literary, musical, and artistic works, is built on the theory of human authorship and human responsibility. That is to say, for any violation, the individual who wilfully and knowingly infringes another’s copyright is liable. But what if it is an algorithm that does the copying?
Indian courts have traditionally used precedents to decide questions of ownership, originality, and infringement. In Coca-Cola Co. v. Bisleri International (P) Ltd.17, the Delhi High Court restated that IP infringement occurs when an unauthorised third party uses a protected work in a manner likely to confuse or mislead the public. These cases, although unrelated to AI, reiterate that intent and human action are central to establishing liability. This becomes tricky when an AI trained on vast databases unknowingly copies copyright-protected material without the user or creator being aware of it.
The lack of legal protection for works generated by artificial intelligence has two significant implications: first, such works are not safeguarded by contemporary Indian law, and second, there is no clear mechanism of accountability when they violate established intellectual property rights. This is not merely a deficiency but a rapidly emerging risk. Tools such as Midjourney and ChatGPT can unwittingly generate material that duplicates existing copyrighted work; yet current laws do not clearly impose liability on either the user or the vendor of the AI technology.
What should be troubling — and ought to trouble lawmakers and entrepreneurs alike — is the growing legal uncertainty over artificial intelligence. We are using machines that act like artists, writers, and even musicians, yet our legal system treats them as simple calculators. This mismatch is not just theoretical: it is creating real legal uncertainty, diminished ownership rights, and even large-scale market exploitation. As a scholar who examines and monitors both technological innovation and legal concepts, I find myself constantly wondering: are we trying to cram a square peg (AI) into a round hole (traditional intellectual property law)?
The international framework
The World Intellectual Property Organisation (WIPO)18 is at the forefront of the global discourse on AI and intellectual property. It has not set global regulations yet, but WIPO’s public discussions — such as the “WIPO Conversation on Intellectual Property and Artificial Intelligence” — suggest a need to rethink authorship, ownership, and responsibility for works produced by AI. WIPO recognises that the current human-centric framework may not hold in a world where machines can produce works without human intervention or replicate existing ones. While its role is largely advisory, WIPO encourages member States to develop new legal concepts while observing international agreements such as the Berne Convention, 188619.
The United States has a clear rule that only the writings of human beings may be copyrighted. The US Copyright Office recently reinforced this in Zarya of the Dawn, where the portions of a comic book generated by AI were denied copyright. The US does, however, provide broader fair use protections, particularly relevant to training AI systems on large datasets. For instance, in Authors Guild v. Google Inc.20, the Court affirmed Google’s right to scan books for its search database, a strong precedent for transformative uses of data. In other words, while works created by AI are not protected, the act of creating them enjoys some legal backing under US law.

The European Union has put rules and regulations first. The new EU Artificial Intelligence Act, 202421 classifies AI systems according to risk and imposes strict rules on high-risk applications. While not an intellectual property measure in itself, it affects the use of AI in creative fields and in those dealing with sensitive data. For copyright, EU Directive 2019/79022 on Copyright in the Digital Single Market contains exceptions for text and data mining, particularly for research, making it legally permissible to train AI on particular datasets. EU law still safeguards creators and demands permission for commercial mining, producing a more balanced setting.
The United Kingdom differs from other nations in that it expressly addresses computer-generated works. Under Section 9(3) of the Copyright, Designs and Patents Act, 198823, the author of a computer-generated work is the person “by whom the arrangements necessary for the creation of the work are undertaken”. The provision was written decades before the widespread adoption of generative AI, yet it sets the UK apart in how authorship of non-human-generated content is treated. Even so, it leaves one questioning: who really “arranges” the content — the developer, the platform, or the user who gave the prompt?
These divergent approaches pose a basic question: what should India do? The UK model embraces the notion that machines can aid creativity, the EU sets rules for responsible use of AI, and the US encourages innovation through flexible fair use. But none of these is without flaws. India needs a hybrid approach that is consistent with its constitutional values, balances innovation with fairness, and protects its creators while remaining competitive in the world. The question is: can we create a system that functions well globally but is also suitable for India?
Analysis of the laws and their implementation
India’s intellectual property legislation, painstakingly crafted and thorough on paper, is growing increasingly out of touch with the digital and algorithmic world it is now being asked to govern. The Copyright Act, 1957, for example, firmly roots authorship in human imagination and infringement in intention and ownership. These maxims have worked well in conventional cases, but their application becomes confused when the “creator” is not a human but an artificial intelligence algorithm operating over large datasets. This is not a theoretical problem; it is becoming a real one as AI-generated content starts to replicate, remix, and replace human-created content — often without the original authors even being aware.
One of the most basic implementation constraints is the absence of clear legal status or identity for works generated by artificial intelligence. The law expects a human author and does not account for the role of prompts, datasets, or algorithms. This produces a paradox in which the output is legally unownable yet commercially exploitable. Midjourney, DALL·E, and ChatGPT, among other tools, generate content that appears original, though it may be partially or subtly derived from protected works. Yet in the event of infringement, no one is immediately liable unless the user or developer acted knowingly or wilfully — an element rarely provable in the case of algorithmic output.
Moreover, enforcement mechanisms are still designed for static infringement rather than the dynamic process of content generation. Copyright protection assumes the infringing work can be traced, compared, and successfully litigated. Yet AI systems — especially generative ones — produce thousands of varied outputs every day, often without documenting the process or making the training data transparent. The result is so-called “invisible infringements”, in which a person’s style, work, or structure has been used but the harm is untraceable. Traditional tools, such as takedown notices or plagiarism scanners, are useless here, since AI-generated content is rarely a direct copy; it is more of a smudged mirror.
Another urgent concern arises from jurisdiction and the enforcement of cross-border AI activities. In the Ani Media (P) Ltd. case24, the concern was not just unauthorised use of protected data for training; it was also the question of attributing liability where the infringing AI model has been developed outside Indian jurisdiction. This concern is increasingly urgent for regulators around the world, particularly in countries like India, where enforcement resources are already scarce. If an AI company is based elsewhere and its model has been trained on Indian material without authorisation, how can Indian law reach that infringement? More to the point, who will enforce it?
At the level of ideas, the biggest problem is that law and AI rest on profoundly different assumptions. Law is built on intent, authorship, evidence, and responsibility — all well defined in human disputes. AI works on pattern recognition, probabilistic modelling, and anonymous computation. Attempting to impose rigid legal categories on fluid, machine creativity creates a gap that existing legislation is poorly adapted to handle. In truth, the legal system is not weak but simply not equipped for what is coming. The laws in place still presume that creators are human beings, that infringement can be detected, and that enforcement operates within clearly demarcated national borders. None of these presuppositions fully holds in an AI-driven creative economy. If the law does not evolve — not just through legislation but through new conceptual paradigms — India will be permitting AI to develop in a legal blind spot, where creators are not protected and infringers are not apprehended.
Recommendations and conclusion
As artificial intelligence increasingly becomes a staple of creative and intellectual work, it has become apparent that contemporary intellectual property law is struggling to adapt to the intricacies of the technologies it must govern. Although these laws have by and large succeeded in protecting human creativity, they fall short when the “creator” is not a sentient being but a predictive algorithm trained on vast amounts of data.
To remedy this, intellectual property law needs to start by recognising the hybrid character of AI-generated works. There should be a legal category of “AI-assisted works” to cover co-participation by human users and algorithmic systems. This would allow ownership and culpability to be attributed without the untenable step of awarding legal rights to non-human actors.
Liability in these cases should be articulated in a hierarchical model. Responsibility can lie with the user in cases of clear intent, with the developer in cases of poor protective measures, or with the platform in cases of enabling or profiting from unauthorised sharing. This model shifts the burden of compliance from the end-user to the entire AI ecosystem.
Transparency must become a non-negotiable standard. Developers of generative AI models should be legally bound to disclose their training data sources, particularly when commercial outputs are generated. This measure would protect original creators and help identify when infringement—however subtle—has occurred.
Along with legal reform, there is an increasing demand for soft regulatory measures that encourage responsible AI practice without inhibiting innovation. One proposal is to introduce an Ethical AI Certification scheme. This would be a voluntary but reputationally significant standard offered by an IP body or government-accredited organisation. Developers and platforms that adhere to fair training practices, dataset transparency, and content-use compliance could display this badge, much as environmental or data protection labels are used. Such a scheme would not only encourage responsible development but also enable users and creators to choose AI tools that maintain ethical and legal standards. Until legislation is passed, this can serve as a helpful bridge between compliance and innovation.
A progressive future step would be the enactment of a Creative Attribution by Default policy. Under this policy, AI platforms would automatically attribute created content as “AI-generated with human input,” as well as maintain internal logs of the prompts and dataset elements utilised. Original content creators whose work played a significant role in the training of the AI would be notified or even given future licensing or royalties. This is not a deterrent measure that will stifle content creation but one that encourages accountability and transparency. Over time, this policy would enable a system that acknowledges the human work involved in the datasets on which AI is trained, thereby providing recognition and potential rewards. Combined, these steps enhance a more equitable digital future and reinforce the bridge between human creativity and artificial intelligence.
A dedicated oversight forum for intellectual property and artificial intelligence disputes could provide speedy relief and expert judgment, sparing authors years of litigation and filling the current gap between technological advancement and enforcement. In short, the challenge we face is not merely one of shoehorning AI into current legal boxes, but of building a just, forward-looking system — one that reconciles innovation with accountability and ensures that, in a world remade by machines, the rights of human imagination remain safeguarded, seen, and honoured.
Note: This article was selected as one of the winners in an essay writing competition organised by the Centre of Artificial Intelligence and Intellectual Property Rights, Himachal Pradesh National Law University, Shimla.
*Christ University Law, Bangalore. Author can be reached at: japman.bagga@law.christuniversity.in.
3. R.G. Anand v. Delux Films, (1978) 4 SCC 118.
5. Copyright Act, 1957, S. 57.
6. J.V. Abhay, “Stealing Styles — Artistic Styles and AI-Generated Art”, 2025 SCC OnLine Blog Exp 31.
8. Copyright Act, 1957, S. 13.
9. Copyright Act, 1957, S. 17.
10. Copyright Act, 1957, S. 51.
12. Patents Act, 1970, S. 10(4).
13. Patents Act, 1970, S. 3(k).
18. World Intellectual Property Organisation, WIPO Intellectual Property Handbook (2nd Edn., 2004).
19. Berne Convention for the Protection of Literary and Artistic Works, 1886.
20. Authors Guild v. Google Inc., 2015 SCC OnLine US CA 2C 1.
21. Artificial Intelligence Act, 2024 (EU).
22. Copyright and Related Rights in the Digital Single Market and Amending Directives 96/9/EC and 2001/29/EC, Directive (EU) 2019/790, dated 17-4-2019.
23. Copyright, Designs and Patents Act, 1988, S. 9(3), Ch. 48 (UK).