
Artificial intelligence is not only transforming industries; it is also challenging the very definitions of creativity and ownership. In the United States, the scientist Stephen Thaler has asked the Supreme Court to review a landmark copyright dispute after the U.S. Copyright Office refused to register a work generated by an AI system of his known as DABUS. The case raises a fundamental question: can a machine be considered an author? Proponents of AI-generated art argue that creativity is no longer a uniquely human privilege.
They claim that systems trained on massive datasets can produce original choices, compositions, and concepts that mirror, and sometimes surpass, human inventiveness. Denying such works recognition, they say, narrows the scope of innovation and slows the progress of technology-assisted creation. Critics counter that granting copyright protection to machine output could open the door to abuse, flooding markets with synthetic works that lack accountability and traceability.
This legal debate arrives at a time when institutions worldwide are struggling to define ethical frameworks for generative technology. The New York State judicial system recently became one of the first in the world to issue a formal policy governing how judges and court employees may use AI tools.
The guidelines require special authorization, restrict the upload of confidential data, and mandate explicit disclosure whenever AI contributes to a legal opinion or document. For the U.S. justice system, this is more than a technical adjustment; it is an attempt to protect the human element in decision-making.
As courts, schools, and corporations embrace automation, the line between assistance and delegation grows thinner. The age of "machine authorship" is not only about art; it is about redefining responsibility in a world where algorithms can generate, write, and decide faster than humans can.