In New Delhi, leading global experts are deliberating on how to build a safe and secure framework for artificial intelligence (AI). At the same time, concerns about the misuse of AI are deepening. The dangers of deepfakes, in particular, have come to the fore: fake images and videos generated through AI are becoming increasingly common. Even lawyers have begun filing petitions drafted with the help of AI. The Supreme Court has expressed concern over this, stating that petitions cannot be prepared in such a manner; some of the filed petitions reportedly cited cases that never existed.
Recently, a report revealed that AI models such as ChatGPT can be used to create fake Aadhaar and PAN cards. The dual-use nature of the technology also poses a threat to national security: incidents have come to light showing that hostile nations as well as criminal and terrorist organizations are exploiting AI's dual-use capabilities. The biggest question is whether, in the age of AI, everything will lose its authenticity. Will fake content dominate everywhere, whether in documents, art, or literature? AI can even imitate individuals convincingly.
In the field of education, if AI becomes a regular presence in classrooms, what will happen to examination centers? Will students of the future be allowed to carry calculators or other AI-enabled devices into exam halls? The deeper concern is children's intellectual skills and mental development: could their cognitive abilities decline? Society confronts many such questions today.
Modern technology such as the smartphone has proven immensely useful, but alongside its benefits, adverse consequences have also surfaced. At times it entangles individuals in layers of mental distress; in extreme cases, people have even lost their lives. News reports frequently highlight incidents of children dying by suicide while playing games on smartphones.
In recent times, as discussions around the expanding scope and impact of AI have intensified, it has been projected as a tool that will generate jobs in the future; at the same time, concerns have been raised about its sweeping impact on employment and the risks it poses. One objective of the Delhi conference was to deliberate on concrete measures to maximize employment opportunities through AI, mitigate its risks, and minimize its adverse effects on ordinary people's lives. AI is being deployed at new levels in the defense and business sectors. Clearly, to set new benchmarks of development and secure a place in the modern world, it is essential to stay abreast of advances in AI. Yet shrinking employment opportunities and multilayered risks to people's lives are emerging rapidly, and addressing these challenges will be the biggest task.
It is well known that artificial intelligence has expanded across vast domains. The smartphone in our hands is no longer merely a tool for editing photos and gaining attention on social media. In medicine, AI is being used for everything from disease detection to treatment, and its utility in crime control has also been demonstrated.
Given the rapid pace of innovation in education and technology, and the widespread integration of AI across essential sectors, it is necessary to discuss not only its possibilities and expectations but also the concerns associated with it. Ultimately, it is ordinary people who are affected by any form of development, and every technology will be judged by the extent to which it ensures overall human welfare and proves beneficial to the majority of the world’s population.
Additionally, the growing trend of creating and sharing short videos or reels on social media platforms has raised new concerns, with people adopting ever more bizarre methods to reach larger audiences or showcase entertaining activities. Against this backdrop, new rules require AI-generated images to be prominently labeled as such. Since the draft rules were issued in October, some modifications have been made: there is no longer a fixed size requirement for such disclosures, and the rule does not apply to AI-generated images that do not claim to be real. AI-generated images have flooded users' feeds, and people have the right to know that these images are not authentic; requiring users to disclose artificially created content is a welcome step.
As the technology for creating artificial images evolves rapidly, governments will need to reconsider rules that require platforms to actively detect such content. Although technology platforms are broadly capable of detecting artificial media automatically, and billions of dollars are being invested to fix flaws in detection mechanisms, this capability will continue to face significant challenges. In a rapidly changing geopolitical landscape, countries worldwide are prioritizing the development of modern technologies and maximizing innovation in this domain.