A sophisticated fraud scheme weaponizing WhatsApp’s screen-sharing feature has cost victims millions of dollars globally, with cybersecurity experts warning that scammers are increasingly exploiting the legitimate communication tool to steal banking credentials and personal information in real time. Singapore police documented at least 46 cases since June 2025, with combined losses exceeding $3 million, while security researchers report similar attacks spreading across multiple continents.
The scam follows a methodical pattern that exploits human psychology and trust. Fraudsters initiate unexpected video or voice calls through WhatsApp, masquerading as bank representatives, government officials, or technology support agents. They fabricate urgent problems requiring immediate attention, such as suspicious account activity, security breaches, or pending policy cancellations requiring verification.
Once they establish a crisis atmosphere, scammers request that victims activate WhatsApp’s built-in screen-sharing function, claiming the step is necessary to diagnose or resolve the fabricated issue. When victims comply, perpetrators gain live surveillance access to everything displayed on the device screen, including login credentials, one-time passwords sent via text message, credit card verification codes, and bank account balances.
Singapore authorities detailed how scammers typically pose as representatives from NTUC Income, NTUC Union, or UnionPay, citing problems with insurance policies victims supposedly purchased. When targets deny making such purchases, callers transfer them to accomplices impersonating Monetary Authority of Singapore (MAS) officers who accuse victims of money laundering and demand immediate fund transfers to purported safety accounts.
Security analysts emphasize that the threat extends beyond initial screen surveillance. In numerous documented cases, fraudsters pressure victims into downloading additional remote access applications such as AnyDesk or TeamViewer, or installing malicious software disguised as legitimate tools. These programs enable attackers to maintain persistent control over devices, log keystrokes, intercept communications, and conduct unauthorized transactions long after the initial call concludes.
The financial impact reflects both the scheme’s sophistication and its psychological effectiveness. Scam losses in Singapore reached nearly $500 million during the first half of 2025, with authorities recording approximately 20,000 cases. The nation documented its highest annual scam report total in 2024, logging more than 51,000 cases, a 70 percent increase from the previous year.
Meta, WhatsApp’s parent company, has begun implementing security measures designed to interrupt the fraud chain. The technology firm introduced warning alerts that appear when users attempt to share screens with contacts not saved in their address books during video calls. These notifications encourage recipients to reconsider before proceeding, particularly when strangers or unexpected callers make such requests.
The company simultaneously rolled out scam detection capabilities for Messenger that use on-device behavioral analysis supplemented by optional cloud-based artificial intelligence review. When the system identifies suspicious messages from unknown accounts, it flags them with warnings and gives users the option to block or report the contacts. Those who opt for artificial intelligence analysis temporarily forfeit end-to-end encryption protection while Meta’s system evaluates message content for fraud indicators.
Meta reported removing more than 68,000 Facebook accounts and 3,000 Instagram accounts in Singapore during the first half of 2025 for violating fraud and deceptive-practices policies. Globally, the company detected and disrupted approximately 8 million accounts associated with organized scam operations across Myanmar, Laos, Cambodia, the United Arab Emirates, and the Philippines during the same period.
Cybersecurity researchers report that screen-mirroring fraud has evolved into a standard component of technical-support-style scams targeting vulnerable populations. Elderly individuals face disproportionate risk from these schemes, prompting Meta to join the National Elder Fraud Coordination Center (NEFCC), a collaborative initiative bringing together law enforcement agencies and corporations including AARP, Amazon, Capital One, Google, Microsoft, and Walmart.
Singapore’s government responded to the escalating threat by issuing its first directive under the Online Criminal Harms Act in 2025, mandating that Meta implement enhanced facial recognition systems and prioritize review of user reports involving government official impersonation. The directive, requiring compliance by September 30, reflects official recognition that digital platform vulnerabilities enable large-scale fraud operations.
Protection strategies recommended by cybersecurity professionals center on skepticism and verification. Individuals should categorically refuse screen-sharing requests from unknown or unexpected contacts, regardless of claimed urgency. Financial institutions, government agencies, and legitimate technology companies never request screen sharing during unsolicited calls or demand immediate fund transfers to alternative accounts.
Security experts advise immediate termination of suspicious calls followed by independent verification through official customer service channels listed on company websites or account statements, rather than numbers provided by callers. Enabling two-step verification on WhatsApp and financial applications creates additional authentication barriers that complicate unauthorized access attempts.
Installation of unverified software, particularly remote-access tools or Android Package Kit (APK) files suggested during video calls, should be refused without exception. These applications frequently contain malware enabling keystroke logging, data extraction, and persistent device control extending far beyond the initial interaction.
The scam’s effectiveness derives partly from sophisticated social engineering techniques that exploit natural human responses to perceived authority and manufactured urgency. Fraudsters display fake documentation including policy statements, official letters, and identity credentials bearing victims’ personal information to bolster credibility. Some pose as uniformed law enforcement or regulatory officials via video calls, threatening arrest or legal action to coerce compliance.
Regional cooperation has intensified as authorities recognize that fraud networks operate across multiple jurisdictions. TikTok launched educational resources through its Scam Prevention Edition platform, while e-commerce site Carousell implemented seller verification against government-issued identification to reduce marketplace fraud. These initiatives reflect growing industry acknowledgment that platform security requires continuous adaptation to evolving criminal tactics.
The broader implications extend beyond individual financial losses to encompass eroding trust in digital communications, increased vulnerability among populations with limited technical literacy, and the challenge of balancing privacy protection with fraud prevention. Meta’s AI-based detection systems require users to temporarily sacrifice end-to-end encryption, raising questions about appropriate trade-offs between security measures and privacy preservation.
Law enforcement agencies emphasize that combating screen-sharing scams demands sustained public education campaigns alongside technological countermeasures. Authorities encourage immediate reporting of suspicious calls to enable pattern recognition and network disruption, noting that many victims hesitate to file reports due to embarrassment or uncertainty about whether they experienced fraud attempts.