Cybercrime in Uzbekistan is undergoing rapid transformation, with traditional phone-based fraud methods losing ground to sophisticated, cross-border technologies that operate with minimal human involvement. These trends were discussed during a Tashkent-Moscow video bridge organized at the Sputnik multimedia press center, featuring representatives from the Cybersecurity Center, Tashkent City Police Department, and information security expert Alexey Lukatsky.
Madina Mamadaliyeva, a representative of the Department for Combating Crimes in the Field of Information Technology at the Tashkent City Police Department, noted that fraudsters increasingly rely on VoIP telephony, making calls over the internet while concealing their real location. Such calls typically originate from abroad, and the displayed numbers may belong to other countries, which complicates tracking. A key tool remains number spoofing: the victim sees a familiar or local number, but calling it back connects them to a different person. The primary weapon, however, is not technology but psychology: scammers exploit fear, urgency, and trust in authority figures.
Experts emphasized that while phone fraud cases are declining, the problem is far from solved. One reason for the decline is that scammers often speak Russian, which raises suspicion among some audiences. Alexey Lukatsky added that the reduced effectiveness of phone schemes is also linked to countermeasures by telecom operators: some countries restrict international calls or block suspicious numbers. Fraudsters, however, are simply switching tools, moving rapidly to messengers, particularly Telegram and similar platforms. There they can send not only text but also voice messages, images, and videos, making their schemes more convincing and harder to detect.
An even more significant change is the use of artificial intelligence (AI) and deepfake technologies. Alexey Lukatsky highlighted that AI is already actively used by malicious actors, and this is not a theoretical threat but a growing reality. A key risk is the rise of deepfakes, which enable the forgery of faces, voices, or behaviors. As digital services and biometrics expand, the importance of such tools will only increase. AI is also employed to generate personalized phishing messages that account for the victim's age, language, cultural traits, and even psychological profile, making attacks far more precise.
The automation of attacks represents another serious shift. According to Lukatsky, fraudsters are beginning to use agent systems capable of executing chains of actions without human involvement. Such a system can gather information about a victim from open sources, compose a tailored message, send it, analyze the response, and adjust its strategy. He suggested that by 2026 such attacks could reach a new level and become commonplace, making them faster and more complex while increasing the burden on defense systems and law enforcement.
Traditional methods of combating cybercrime are losing effectiveness. Alexey Lukatsky stressed that efforts must be multi-level: law enforcement work, technical threat detection, financial flow monitoring, and, crucially, prevention. He placed special emphasis on education—training should start early, as children already actively use digital devices. Simultaneously, work with adults and the elderly is essential, as they often fall victim to scams.
Amid rising threats, the issue of corporate responsibility was also raised. Albert Valiyev noted that the strategy envisions stricter requirements for legal entities and their managers. Cybersecurity should no longer be treated as a secondary task: many executives see it as a cost center that generates no direct profit, leaving protection underfunded. Under the new approach, if an incident occurs because of insufficient safeguards, responsibility may fall on the organization or its leadership. Compensation for victims was also discussed for cases where a company is proven to have failed to ensure an adequate level of protection.
Source: podrobno.uz