Technology
16 April 2025

Novgorod Companies Lead In AI Assistant Development

As demand for AI solutions grows, concerns over software security emerge

In a remarkable surge of innovation, companies from the Novgorod region have emerged as key players in the development of artificial intelligence (AI) assistants, repeatedly putting their creation out to tender throughout 2024. According to analysts at the electronic trading platform TenderPro, the region ranks third in the number of tenders initiated, accounting for 12.7% of the total; Moscow and the Moscow region lead the pack with 39.6%, while St. Petersburg accounts for 5.7%. The figures underscore Novgorod's growing influence in the tech landscape.

AI assistants, often referred to as smart assistants, are designed to aid employees across various sectors by performing a range of tasks. These tasks include answering customer inquiries, selecting products, monitoring prices, and even assisting in the recruitment of new specialists. The financial sector and retail companies have shown a particularly strong interest in these technologies, with many seeking solutions to enhance sales, reduce production errors, optimize customer support costs, and streamline the hiring process.

As reported by Vedomosti Yug, AI assistants are crucial in guiding customers through their purchasing journey, from their initial visit to a website all the way to completing a purchase. These assistants often take the form of chatbots, widgets, pop-up windows, mobile applications, or browser plugins, seamlessly integrating into the consumer experience.

One of the standout features of AI assistants is their ability to provide personalized recommendations. By analyzing search queries and previous purchases, these smart tools can suggest products tailored to individual customer needs. They also help reduce cart abandonment by quickly answering customer questions, offering personalized discounts, and reminding users about price drops on items left in their shopping carts.

AI assistants also play a vital role in order processing. Chatbots can handle requests around the clock, answering frequently asked questions and facilitating delivery arrangements. Additionally, they contribute to visual merchandising by analyzing images from online storefronts and suggesting optimal product placements. Retailers can also leverage AI to analyze customer reviews, allowing for prompt responses to complaints and feedback.

Beyond retail, the applications of AI are vast. In programming, AI assists with automatic code writing and correction, documentation generation, and algorithm optimization. In marketing and copywriting, AI is utilized for creating articles, advertising content, and product descriptions, as well as generating ideas for social media management.

Meanwhile, the rise of AI coding assistants has sparked concerns within the software development community. As highlighted by Alexander, COO of a data analytics SaaS platform, the proliferation of tools that generate code based on large language models (LLMs) is transforming software development while simultaneously introducing new risks to the software supply chain.

These AI coding assistants are known to exhibit a phenomenon referred to as "hallucination," in which they generate package names that do not exist. Security researchers have found that around 5.2% of package suggestions from commercial models are fictitious, a figure that rises to 21.7% for open-source models. This poses a significant risk: code that depends on a non-existent package fails as soon as it is installed or imported.
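To make the failure mode concrete, here is a minimal sketch of how a hallucinated dependency surfaces in practice; the package name edi_x12_parser is invented for illustration and is not assumed to exist on any registry:

# `edi_x12_parser` is a hypothetical, unregistered package name used
# only to illustrate the failure mode described above.
try:
    import edi_x12_parser  # hallucinated dependency from an AI suggestion
except ModuleNotFoundError as err:
    print(f"import failed as expected: {err}")

As long as the name is unregistered, pip install edi_x12_parser fails the same way, with a "No matching distribution found" error; the danger begins once someone registers the name, as described below.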

Hackers have recognized the potential to exploit these hallucinations by creating malicious packages under the fictional names and uploading them to package registries like PyPI or npm. When an AI assistant hallucinates such a package name, a developer who installs it without checking receives the attacker's code instead of an installation error.
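A first line of defense is simply to check whether a suggested name is registered at all before installing it. The following sketch queries PyPI's public JSON API (https://pypi.org/pypi/<name>/json, which returns 404 for unknown projects); note that mere existence is a weak signal, since a slopsquatted package would pass this check, so treat it as a filter for pure hallucinations only:

import sys
import urllib.request
from urllib.error import HTTPError

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered project on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200
    except HTTPError as err:
        if err.code == 404:  # unknown project: likely a hallucination
            return False
        raise

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        verdict = "registered" if exists_on_pypi(pkg) else "NOT registered (possible hallucination)"
        print(f"{pkg}: {verdict}")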

Research conducted by the security firm Socket indicates that certain hallucinated package names tend to recur when prompts are repeated, suggesting that specific queries consistently generate the same phantom packages. The exploitation of these fictitious names has been dubbed "slopsquatting," a term coined by Seth Michael Larson, security developer-in-residence at the Python Software Foundation, by analogy with typosquatting.
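That repeatability is easy to probe. As a rough sketch (not Socket's methodology), one can send the same coding prompt to a model several times and count which suggested names recur; this assumes the openai client library, an OPENAI_API_KEY in the environment, and an illustrative prompt and model name:

import collections
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = "Which pip packages would I need to parse EDI X12 files? Reply with package names only."
RUNS = 10

counts = collections.Counter()
for _ in range(RUNS):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": PROMPT}],
    )
    text = reply.choices[0].message.content or ""
    # crude extraction: lowercase tokens that look like package names
    names = set(re.findall(r"[a-z0-9][a-z0-9._-]{2,}", text.lower()))
    counts.update(names)

for name, n in counts.most_common():
    print(f"{name}: suggested in {n}/{RUNS} runs")

Names that show up in most runs yet fail the existence check above are exactly the predictable phantoms that make slopsquatting practical.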

Larson emphasizes the urgency of addressing this issue, stating, "We are at the very early stage of studying this problem at the ecosystem level. It's difficult, and probably impossible, to quantitatively assess how many installation attempts occur because of LLM hallucinations without greater transparency from LLM providers. Users of code, packages, and information generated by LLMs should verify the results before applying this information in practice, otherwise there may be real consequences."

Feross Aboukhadijeh, CEO of Socket, further underscores the shift in how developers approach coding. As AI tools become standard assistants, many developers resort to making requests to AI, copying the suggestions, and moving on without thorough verification. This practice raises the stakes, as the code generated often includes hallucinated package names that sound legitimate but do not exist.

Aboukhadijeh warns that these counterfeit packages can appear remarkably convincing, often accompanied by realistic README files, plausible GitHub repositories, and even supportive blogs that lend an air of legitimacy. Unfortunately, when developers search for these slopsquatting package names on Google, they may encounter AI-generated summaries that inaccurately endorse the packages as useful and stable, creating a false sense of security.

In a notable incident earlier this year, an AI overview tool from Google suggested a malicious npm package that imitated a legitimate one, highlighting how AI-generated content can inadvertently lend credibility to harmful packages within the software ecosystem.

As the Python Software Foundation continues to address these challenges, Larson stresses the importance of vigilance among users. He advises developers to verify both the existence and the integrity of packages before installation, and advocates that organizations mirror a vetted subset of a package registry internally, so that developers can install only packages the organization has reviewed.
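Integrity checking can piggyback on the digests PyPI already publishes. The sketch below, again using the public JSON API, fetches the sha256 digests recorded for a release and compares them against a locally downloaded file; the package, version, and file name are illustrative:

import hashlib
import json
import urllib.request

def pypi_sha256_digests(name: str, version: str) -> dict[str, str]:
    """Map each file of a PyPI release to its published sha256 digest."""
    url = f"https://pypi.org/pypi/{name}/{version}/json"
    with urllib.request.urlopen(url) as resp:
        meta = json.load(resp)
    return {f["filename"]: f["digests"]["sha256"] for f in meta["urls"]}

def file_sha256(path: str) -> str:
    """Compute the sha256 digest of a local file, streaming in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Illustrative check of a downloaded wheel against the published digest.
published = pypi_sha256_digests("requests", "2.32.3")
local = file_sha256("requests-2.32.3-py3-none-any.whl")
print("digest match" if local in published.values() else "MISMATCH: do not install")

In everyday workflows the same guarantee is available from pip's hash-checking mode, where every requirement is pinned with an expected digest and pip refuses to install anything that does not match.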

In conclusion, the intersection of AI technology and software development presents both exciting opportunities and significant risks. As Novgorod companies lead the charge in AI assistant development, the broader implications of AI in coding and software supply chains must be carefully navigated to ensure a secure and efficient technological landscape.