30 September 2025

AI Transforms Healthcare And Social Media In Korea

Hospitals, startups, and global tech giants unveil new AI tools for brain health, video creation, and developer productivity in a week of rapid innovation.

In a remarkable series of technological leaps, South Korea’s healthcare and AI sectors are making headlines with innovations that promise to reshape how we diagnose diseases, create content, and interact with digital media. From the corridors of Konkuk University Hospital to the development labs of AI powerhouses like OpenAI and TwelveLabs, the last days of September 2025 have seen a flurry of announcements that underscore the rapid pace and global reach of artificial intelligence.

On September 30, 2025, Konkuk University Hospital’s Department of Radiology announced the full-scale adoption of Neurophet AQUA, an AI-driven brain MRI analysis software, according to Health Chosun. This sophisticated tool is designed to analyze patients’ brain MRI scans at lightning speed, quantifying the degree of brain atrophy, aging, and white matter changes. What sets Neurophet AQUA apart is its ability to provide objective, numerical data on structural changes associated with conditions like Alzheimer’s disease, vascular dementia, and mild cognitive impairment (MCI). For clinicians, this means a more reliable foundation for diagnosis—one that moves beyond subjective image interpretation to a realm of precise, data-driven insights.

Professor Moon Won-jin of Konkuk University Hospital’s Radiology Department described the software’s implementation as a “turning point that raises the department’s diagnostic capabilities and enables patient-centered personalized care.” The benefits are tangible: Neurophet AQUA generates personalized analysis reports for each patient, complete with numbers and graphs that make it easier for individuals to understand their own brain health. These reports can also be used in follow-up exams, allowing doctors and patients to track changes over time with remarkable clarity. “We will provide more precise and systematic diagnostic and management services to the growing number of patients with dementia and neurodegenerative diseases in our aging society,” Professor Moon emphasized, as quoted by Health Chosun.

But the impact of Neurophet AQUA isn’t limited to those already experiencing symptoms. The software can be integrated into routine health checkup programs, comparing an individual’s brain metrics to standardized data from peers of the same age and gender. This percentile-based approach allows even asymptomatic individuals to recognize early risks and take preventive action—a potential game-changer for public health, where early intervention often makes all the difference.
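In essence, a percentile-based comparison ranks a patient’s measurement against a reference distribution drawn from peers of the same age and gender. A minimal sketch of how such a ranking might be computed is below; the normative values and the `percentile_rank` helper are hypothetical illustrations, not Neurophet’s actual data or method.

```python
from bisect import bisect_left

def percentile_rank(value: float, normative_sample: list[float]) -> float:
    """Return the percentage of the normative sample strictly below `value`."""
    ordered = sorted(normative_sample)
    return 100.0 * bisect_left(ordered, value) / len(ordered)

# Hypothetical normative hippocampal volumes (cm^3) for one age/gender group
norms = [2.9, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 4.0]

# A patient at a low percentile relative to peers might warrant follow-up
print(percentile_rank(3.15, norms))  # -> 20.0
```

A real system would of course use validated normative databases stratified by scanner, age, and gender, but the ranking logic itself is this simple.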

Meanwhile, in the fast-evolving world of AI-powered content creation, OpenAI is preparing to launch its much-anticipated Sora 2 social app. According to AI Times and Wired, the app is set to debut within days and will allow users to generate dynamic video clips up to 10 seconds long using AI, based solely on user input and preferences. Unlike existing platforms, Sora 2’s entire content feed will be AI-generated—think TikTok, but with every video crafted by artificial intelligence rather than uploaded by users. The app’s design includes a vertical feed, recommendation algorithms, and interactive features like likes, comments, and video remixes.

One standout feature is facial recognition: authenticated users can have their faces incorporated into generated videos, and others can tag users’ images—with notifications sent even if the content isn’t published. This careful attention to privacy and control is matched by OpenAI’s proactive approach to copyright. As reported by the Wall Street Journal, the company is introducing an opt-out system for intellectual property rights holders. Movie studios and animation creators, for example, can request that their IP be excluded from Sora-generated content. OpenAI has already begun informing major Hollywood studios and talent agencies about these measures, signaling a new era of cooperation (and negotiation) between AI developers and the creative industries.

From a user’s perspective, Sora 2 is designed to be accessible and engaging, requiring no prior video editing experience. Real-time editing tools and facial recognition lower the barrier to entry, while the app’s integration with platforms like Facebook and TikTok promises a seamless social experience. OpenAI’s “For You” feed, modeled after the addictive recommendation streams of rival apps, aims to keep users engaged and coming back for more. Although OpenAI has not yet disclosed its commercial plans for Sora 2, the company’s strategy is clear: compete head-to-head with the likes of Meta and Google, both of which have recently announced AI-driven video features on their own platforms.

While OpenAI’s Sora 2 is poised to shake up the social media landscape, another South Korean company is making waves in the realm of AI-driven content intelligence. TwelveLabs, led by CEO Lee Jae-seong, officially launched its Model Context Protocol (MCP) server on September 30, 2025, as reported by HelloT. MCP is an open standard protocol—originally proposed by US AI company Anthropic—that standardizes the connection of data and functions between AI systems. TwelveLabs’ implementation enables its advanced video understanding models to work seamlessly with popular AI tools like Claude Desktop, Cursor, and Goose.

This means developers can now integrate powerful video intelligence features—such as natural language video search, automatic content summarization, Q&A, and real-time video exploration—into their applications without complex setup. The underlying technology is built on TwelveLabs’ proprietary multimodal models, Marengo and Pegasus, which can analyze and generate content across text, images, audio, and video through a unified interface. As Lee Jae-seong put it, “What we pursue is not just multi-model but true integrated multimodality, understanding elements of text, image, audio, and video through a single interface.” He added, “This MCP server is the result of that philosophy and will establish video AI as a standard function in the next-generation agent ecosystem.”
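Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages, and a client invokes a server-exposed capability with a `tools/call` request. The sketch below shows what such a request might look like; the tool name `search_videos` and its arguments are hypothetical placeholders, not TwelveLabs’ actual API.

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, as used by the Model Context Protocol."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical video-search tool for illustration only
msg = mcp_tool_call(1, "search_videos", {"query": "goal celebrations in the second half"})
print(msg)
```

Because the envelope is standardized, any MCP-aware host (Claude Desktop, Cursor, Goose, and so on) can discover and invoke a server’s tools without bespoke integration code.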

TwelveLabs’ rapid ascent has not gone unnoticed. Since its founding in 2021, the company has been named to CB Insights’ Global 100 AI Startups list for four consecutive years, and in April 2025, it became the first Korean AI model provider on Amazon Bedrock. CTO Lee Seung-jun’s inclusion in Forbes North America’s “30 Under 30” for AI last year further underscores the company’s reputation for innovation. With cumulative investment now exceeding $107 million, TwelveLabs is well-positioned to expand its influence in the AI content creation ecosystem, offering developers real-time, low-latency tools to boost productivity and creativity.

These developments, spanning healthcare diagnostics, social video creation, and developer tools, illustrate the breadth and depth of AI’s impact on daily life. Whether it’s helping doctors catch dementia earlier, empowering users to create viral videos in seconds, or giving developers the ability to build smarter, more intuitive applications, the message is clear: AI is not just the future—it’s the present, and it’s changing fast.