“This Isn’t Funny”: Google Engineer on AI Doing a Year’s Work in an Hour
Nidhi | Jan 06, 2026, 23:34 IST
Jaana Dogan
Image credit: X/Jaana Dogan
A Google principal engineer has revealed how an AI coding tool completed in just one hour what her team had spent nearly a year developing. While testing Claude Code using only public information, the engineer found the AI generated a system design strikingly similar to Google’s internal work on coordinating multiple AI agents. Though not perfect, the speed and quality of the output raised serious questions about the future of software development. The incident highlights how quickly AI coding assistants are evolving and why engineers may need to rethink how complex systems are built.
A candid remark from a senior Google engineer has ignited a fresh debate about how fast AI coding assistants are evolving and what that speed means for modern software development.
Jaana Dogan, a principal engineer at Google working on the Gemini API, recently shared an experience that surprised even seasoned engineers. While testing Claude Code, Dogan said the system produced in about an hour a solution that looked strikingly similar to what her Google team had been developing for nearly a year.
A Simple Prompt, A Familiar Design

According to Dogan, she gave Claude Code a short, three-paragraph description of a technical problem. The task focused on distributed agent orchestrators, complex systems designed to manage and coordinate multiple AI agents so they can work together smoothly. This is not a trivial challenge. Inside Google, Dogan’s team had been exploring multiple design approaches since last year, and even after months of work, internal teams were still debating the final direction.
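For readers unfamiliar with the term, the coordination pattern an agent orchestrator implements can be sketched in a few lines. The toy Python example below is purely illustrative and assumes nothing about Google’s or Anthropic’s actual designs; all names in it are hypothetical. It shows only the core idea: one component routing tasks to whichever agent advertises the needed skill.

```python
# Toy sketch of an agent orchestrator. Real distributed orchestrators add
# scheduling, retries, fault tolerance, and inter-agent messaging; this
# illustrates only the basic routing idea.

from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    skills: set

    def handle(self, task: str) -> str:
        # A real agent would call a model or a tool; here we just echo.
        return f"{self.name} completed '{task}'"


@dataclass
class Orchestrator:
    agents: list = field(default_factory=list)

    def dispatch(self, task: str, skill: str) -> str:
        # Route the task to the first agent that advertises the skill.
        for agent in self.agents:
            if skill in agent.skills:
                return agent.handle(task)
        raise LookupError(f"no agent can handle skill '{skill}'")


orch = Orchestrator([Agent("coder", {"code"}), Agent("reviewer", {"review"})])
print(orch.dispatch("write parser", "code"))  # → coder completed 'write parser'
```

The hard part in practice is everything this sketch omits: getting many such agents to cooperate reliably at scale is what kept Dogan’s team debating designs for months.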
To keep the test fair, Dogan said she avoided using any confidential or internal Google information. Instead, she framed the problem using only public concepts and ideas that are widely discussed in the AI research community.
Even with those constraints, Claude Code generated a design that closely resembled Google’s internal work. “This isn’t funny,” Dogan remarked, capturing both surprise and unease at how quickly the system converged on a familiar solution.
Not Perfect, But Good Enough to Matter

Dogan was careful to add nuance. The AI-generated design was not flawless and would still need refinement before being production-ready. However, the speed and overall quality of the result were what stood out. What had taken a large, experienced engineering team months of exploration and discussion emerged from the AI in roughly an hour.
For Dogan, the takeaway was not that AI has “replaced” engineers, but that it has reached a point where its output can no longer be dismissed as shallow or toy-level. She encouraged sceptics to test these tools in domains where they already have deep expertise. According to her, that is where the true strengths and limitations of AI coding assistants become most obvious.
When asked whether Google uses Claude Code internally, Dogan clarified that it is permitted only for open-source projects, not for internal development. Responding to questions about when Gemini might reach a similar level, she said her team is working intensely on both the underlying models and the surrounding systems that make such tools effective in real-world engineering.
A Glimpse Into the Future of Coding

She also stressed that progress in AI is not a zero-sum game. Acknowledging strong work from competitors, she said, can be motivating rather than threatening. In her case, Claude Code’s performance pushed her to think more ambitiously about what Gemini and similar systems should be capable of.
Reflecting on the broader trend, Dogan pointed out how quickly AI coding has advanced. In 2022, these systems could help write single lines of code. In 2023, they handled small blocks. By 2024, they were generating multi-file projects. Now, in 2025, they are producing designs that resemble entire codebases and system architectures.
Her post resonated widely, gaining millions of views and sparking discussions across the tech community. Many observers noted that while AI may not eliminate engineering roles, it could dramatically reduce organisational friction, cutting down on delays caused by coordination, alignment, and prolonged design debates.
For an industry built on iteration and speed, Dogan’s experience serves as a quiet but powerful signal: the pace of AI-assisted engineering is accelerating, and it is no longer something developers can afford to ignore.