In a groundbreaking move for the legal system, Vietnam's Supreme People's Court has, since 2022, deployed a virtual assistant to support judges and court clerks throughout the litigation process. This innovative technology aims to make the judicial process more efficient, reduce costs for litigants, and make legal knowledge more accessible to the public. However, the advance comes with significant challenges, particularly around transparency, knowledge accumulation, and adherence to international standards such as ISO/IEC 25059:2023, which defines a quality model for AI systems.
Assoc. Prof. Dr. Lê Vũ Nam, Vice-Rector of the University of Economics and Law, Vietnam National University Ho Chi Minh City, emphasized the profound impact that emerging technologies such as artificial intelligence (AI), blockchain, big data, and the Internet of Things (IoT) are having on society. "The law cannot remain on the sidelines; it must keep pace with technology, even leading the way to establish a suitable legal framework that protects public interests while promoting innovation and sustainable development," he stated. Nam added that as these technologies evolve, issues surrounding data security, privacy rights, and legal accountability in the digital space are becoming increasingly complex and require comprehensive solutions.
In a similar vein, Assoc. Prof. Dr. Đoàn Thị Phương Diệp, a legal expert, noted that the intersection of law and technology has become far more pronounced since ChatGPT emerged in late 2022. Diệp pointed to the pressing need for legal frameworks to adapt to technological advancement, underscoring that applying technology to law is not merely a matter of adaptation but raises numerous legal, privacy, and ethical questions. For instance, an incident in the United States involving Tessa, a chatbot that gave harmful dieting advice to users seeking eating-disorder support, has sparked debate over accountability: if an AI system causes harm, who bears the responsibility, the developer or the user?
As the legal landscape grapples with these challenges, many law schools in Vietnam are gradually integrating technology into their curricula. A significant hurdle remains, however: a legal framework that has not kept pace with innovation. For example, "real estate securitization" and the application of blockchain technology still lack clear regulations in Vietnam, while other countries have already made significant progress in these areas.
Ngô Minh Tín, MSc, a lecturer at the University of Economics and Law, pointed out shortcomings in the draft Law on Digital Technology Industry, particularly its definition of "digital assets." He argued that tying the definition exclusively to blockchain is too narrow, since digital assets take many forms and not all of them are blockchain-based tokens such as NFTs and cryptocurrencies. Tín advocated recognizing "digital assets" as an independent category of assets within the Civil Code.
Another critical issue is the protection of personal data. Many AI applications collect data by default through their user agreements, leaving users with little real control. Without effective control mechanisms, the consequences could be severe. It is therefore imperative to regulate AI applications and develop ethical guidelines for developers, encouraging innovation while ensuring societal safety.
One pressing question remains unanswered: who owns the rights to assets created by AI? And if an AI-generated work infringes copyright, who is held accountable? Tín asserted that "the owner must also be the one responsible." As technology advances rapidly, a suitable legal framework is therefore essential for the healthy development of artificial intelligence while protecting the rights and legitimate interests of citizens.
In a related development, a new AI trend has emerged that is drawing attention for its impressive potential while raising significant concerns. Users are discovering that the latest ChatGPT models can infer where a photo was taken from the image alone. This capability, which stems from OpenAI's recently released o3 and o4-mini models, reflects their enhanced image-reasoning skills.
These models can zoom, crop, rotate, and analyze images, including blurred or distorted ones, to identify visual cues such as signs, road markings, menus, and even architectural features. Combined with web search, this turns ChatGPT into a powerful geolocation tool. Users on the social media platform X have tested it with everyday snapshots, and the AI frequently identifies not only the city but also the specific location.
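For readers curious how such a query looks in practice, the sketch below sends a local photo to the o3 model through OpenAI's Python SDK and asks for a location guess. It is a minimal illustration rather than anything described in the article: the file name and prompt wording are invented, and it assumes the o3 model accepts image attachments through the standard Chat Completions endpoint.

    # Minimal sketch (not from the article): attach a snapshot to a request
    # to the o3 model and ask it to guess where the photo was taken.
    # The file name and prompt are illustrative assumptions.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Encode a local photo as a base64 data URL so it can be sent inline.
    with open("street_snapshot.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="o3",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Judging only from visual cues such as signs, road "
                         "markings, and architecture, where might this photo "
                         "have been taken?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )

    print(response.choices[0].message.content)

Even in this simple form, nothing in the request restricts what kind of photo is uploaded, which is the crux of the privacy concern discussed next.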
However, the feature also poses significant risks. Nothing prevents users from uploading photos of strangers and asking ChatGPT, "Where is this?", which raises serious privacy concerns. Although using ChatGPT as a makeshift reverse image search is not always accurate, and the AI can return vague or incorrect answers, many examples show that the new o3 model excels at recognizing intricate details.
Currently, this capability is primarily used for entertainment; however, as with any technology, malicious actors may attempt to exploit it. There is hope that OpenAI will soon implement protective measures and safety standards to mitigate these risks. As the legal and technological landscapes continue to evolve, the intersection of AI and law will require ongoing scrutiny, adaptation, and proactive measures to safeguard individual rights and societal interests.