
Google Unveils India-Focused Safety Charter, Shares How It Is Using AI to Combat Online Frauds and Scams

Google unveiled its Safety Charter for India, highlighting how it is using artificial intelligence (AI) to identify and prevent cybercrime across its products. The Mountain View-based tech giant said that with the rise of India's digital economy, the need for trust-based systems has grown. The company is now using AI in its consumer products and country-wide programmes, as well as to detect and remove vulnerabilities in enterprise software. Google also highlighted the need to build AI responsibly.

Google’s Safety Charter for India Highlights Key Milestones

In a blog post, the tech giant detailed its achievements in identifying and preventing online fraud and scams across its consumer products as well as enterprise software. Explaining the focus on cybersecurity, Google cited a report stating that UPI-related frauds cost Indian users more than Rs. 1,087 crore in 2024, and that total financial losses from unchecked cybercrime reportedly reached Rs. 20,000 crore in 2025.

Google also noted that bad actors are rapidly adopting AI to enhance their cybercrime techniques, including AI-generated content, deepfakes, and voice cloning used to pull off convincing frauds and scams.

The company is combining its policies and suite of security technologies with India's DigiKavach programme to better protect the country's digital landscape. Google has also partnered with the Indian Cyber Crime Coordination Centre (I4C) to "strengthen its efforts towards user awareness on cybercrimes, over the next couple of months in a phased approach."

Coming to the company's achievements in this space, the tech giant said it removed 247 million ads and suspended 2.9 million fraudulent accounts that were violating its policies, which also include compliance with state- and country-specific regulations.

In Google Search, the company claims its AI models now catch 20 times more scammy web pages before they appear on the results page. The platform is also said to have reduced instances of fraudulent websites impersonating customer service lines and government bodies by more than 80 percent and 70 percent, respectively.

Google Messages recently adopted the new AI-powered Scam Detection feature. The company claims the security tool flags more than 500 million suspicious messages every month. The feature also warns users when they open URLs sent by senders whose contact details are not saved; this warning is said to have been shown more than 2.5 billion times.
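The blog post does not describe how the warning is triggered, but the basic idea — flag messages that contain links and come from senders not in the user's contacts — can be sketched with a toy heuristic. This is purely illustrative and is not Google's actual implementation, which relies on on-device AI models:

```python
def should_warn(sender: str, contacts: set[str], message: str) -> bool:
    """Illustrative heuristic only (not Google's actual logic): warn when a
    message containing a link arrives from a sender not saved in contacts."""
    has_link = "http://" in message or "https://" in message
    return has_link and sender not in contacts


# Hypothetical contact list and messages for demonstration
contacts = {"+91-98765-43210"}

# Link from an unknown sender: warn
print(should_warn("+91-11111-22222", contacts, "Claim your prize: https://example.com"))

# Link from a saved contact: no warning
print(should_warn("+91-98765-43210", contacts, "Meeting notes: https://example.com"))
```

A production system would of course go far beyond this, scoring message content, URL reputation, and sender behaviour with trained models rather than a single rule.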

The company's app marketplace for Android, Google Play, is claimed to have blocked nearly six crore (60 million) attempts to install high-risk apps. This covered more than 220,000 unique apps across more than 13 million devices. Google Pay, the company's UPI app, also displayed 41 million warnings after its systems detected that transactions being made were potential scams.

Google is also working towards securing its enterprise-focused products from potential cybersecurity threats. Its Project Zero team, working with Google DeepMind, used an AI agent to discover a previously unknown vulnerability in popular enterprise software, including a flaw in the widely used SQLite database engine.

The company is also collaborating with IIT Madras on research into Post-Quantum Cryptography (PQC) — cryptographic algorithms designed to secure systems against potential threats from quantum computers. These algorithms are used for encryption, digital signatures, and key exchange.

Finally, on the responsible AI front, Google claimed that its models and infrastructure are thoroughly tested against adversarial attacks via both internal systems as well as AI-assisted red teaming efforts.

To label AI-generated content, the tech giant uses SynthID to embed an invisible watermark in text, audio, video, and images generated by its models. Google also requires YouTube content creators to disclose AI-generated content. Additionally, the double-check feature in Gemini lets users have the chatbot flag potential inaccuracies in its responses by running a Google Search.


