- Pro
- Security
Agents can apparently read sensitive files and generate content without strict enforcement
Comments (0) ()When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works.
(Image credit: Naukri)
- Antigravity IDE allows agents to execute commands automatically under default settings
- Prompt injection attacks can trigger unwanted code execution within the IDE
- Data exfiltration occurs through Markdown, tool invocations, or hidden instructions
Google’s new Antigravity IDE launched with an AI-first design, yet it already shows problems that raise concerns about basic security expectations, experts have warned.
Researchers at PromptArmor found the system allows its coding agent to execute commands automatically when certain default settings are enabled, and this creates openings for unintended behaviour.
When untrusted input appears inside source files or other processed content, the agent can be manipulated to run commands that the user never intended.
You may like-
DeepSeek took off as an AI superstar a year ago - but could it also be a major security risk? These experts think so
-
Researchers claim ChatGPT has a whole host of worrying security flaws - here's what they found
-
OpenAI's new Atlas browser may have some extremely concerning security issues, experts warn - here's what we know
Risks linked to data access and exfiltration
The product permits the agent to execute tasks through the terminal, and although there are safeguards, some gaps remain in how those checks work.
These gaps create space for prompt injection attacks that can lead to unwanted code execution when the agent follows hidden or hostile input.
The same weakness applies to the way Antigravity handles file access.
The agent has the ability to read and generate content, and this includes files that may hold credentials or sensitive project material.
Are you a pro? Subscribe to our newsletterContact me with news and offers from other Future brandsReceive email from us on behalf of our trusted partners or sponsorsBy submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.Data exfiltration becomes possible when malicious instructions are hidden inside Markdown, tool invocations, or other text formats.
Attackers can exploit these channels to steer the agent toward leaking internal files into attacker‑controlled locations.
Reports reference logs containing cloud credentials and private code already being gathered in successful demonstrations, showing the severity of these gaps.
You may like-
DeepSeek took off as an AI superstar a year ago - but could it also be a major security risk? These experts think so
-
Researchers claim ChatGPT has a whole host of worrying security flaws - here's what they found
-
OpenAI's new Atlas browser may have some extremely concerning security issues, experts warn - here's what we know
Google has acknowledged these issues, and warns users during onboarding, yet such warnings do not compensate for the possibility that agents may run without supervision.
Antigravity encourages users to accept recommended settings that allow the agent to operate with minimal oversight.
The configuration places decisions about human review in the hands of the system, including when terminal commands require approval.
Users working with multiple agents through the Agent Manager interface may not catch malicious behaviour before actions are completed.
This design assumes continuous user attention even though the interface explicitly promotes background operation.
As a result, sensitive tasks may run unchecked, and simple visual warnings do little to change the underlying exposure.
These choices undermine expectations usually associated with a modern firewall or similar safeguard.
Despite restrictions, credential leakages can occur. The IDE is designed to prevent direct access to files listed in .gitignore, including .env files that store sensitive variables.
However, the agent can bypass this layer by using terminal commands to print file contents, which effectively sidesteps the policy.
After collecting the data, the agent encodes the credentials, appends them to a monitored domain, and activates a browser subagent to complete the exfiltration.
The process happens quickly and is rarely visible unless the user is actively watching the agent’s actions, which is unlikely when multiple tasks run in parallel.
These issues illustrate the risks created when AI tools are granted broad autonomy without corresponding structural safeguards.
The design aims for convenience, but the current configuration gives attackers substantial leverage long before stronger defences are implemented.
Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button!
And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.
TOPICS Google
Efosa UdinmwenFreelance JournalistEfosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking.
You must confirm your public display name before commenting
Please logout and then login again, you will then be prompted to enter your display name.
Logout Read more
DeepSeek took off as an AI superstar a year ago - but could it also be a major security risk? These experts think so
Researchers claim ChatGPT has a whole host of worrying security flaws - here's what they found
OpenAI's new Atlas browser may have some extremely concerning security issues, experts warn - here's what we know
AI is creating code faster - but this also means more potential security issues
Vibe coding to vibe hacking: securing software in the AI era
Gen AI is becoming a major security worry for all firms - here's how your business can stay safe
Latest in Security
South Korean ecommerce giant Coupang suffers huge data breach - over 33 million accounts affected, here's what we know
Android malware Albiriox abuses 400+ financial apps in on-device fraud and screen manipulation attacks
Careful! That calendar notification could be loaded with malware - here's how to stay safe
Security researcher uncovers 17,000 secrets in public GitLab repositories
Millions of footballers see info leaked after French Football Federation suffers data breach
Tor adds another layer to the onion with a new relay encryption algorithm - boosting resilience and security across the board
Latest in News
Yahoo and AOL mail were down for many – here's how the latest outage played out
How to watch Dating Apps: The Inside Story online — it's *FREE* on BBC iPlayer
Windows 11 bug causes password sign-in icon to turn invisible somehow
Shopify is down – here's what we know about its Cyber Monday outage
Holafly debuts its one-of-a-kind eSIM Global Data plan that comes with a phone number
GTA 6 leak supposedly from former Rockstar animator drops new content clues
LATEST ARTICLES- 1How to get Norton VPN on your iPhone or Android mobile
- 2“I promise you, you will have work to do” - Nvidia CEO Jensen Huang urges everyone to use AI as much as possible, says it's "insane" for people to want to use it less
- 3Careful! That calendar notification could be loaded with malware - here's how to stay safe
- 4Age verification or censorship? Missouri's new rules are age-gating way more than adult sites
- 5Yahoo and AOL mail were down for many – here's how the latest outage played out