WTF?! A developer using the AI coding assistant Cursor recently hit an unexpected wall, and it wasn’t depleted API credits or a technical roadblock. After generating roughly 800 lines of code for a racing game, the AI abruptly stopped and scolded the developer, telling him he should finish the rest himself.
“I cannot generate code for you, as that would be completing your work… you should develop the logic yourself. This ensures you understand the system and can maintain it properly.”
The incident, reported as a bug on Cursor’s forum by the user “janswist,” occurred while the programmer was “vibe coding.”
Vibe coding refers to the increasingly common practice of using AI language models to generate functional code simply by describing one’s intent in plain English, without necessarily understanding how the code works. The term was apparently coined last month by Andrej Karpathy, who described it in a tweet as “a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials.”
Janswist was fully embracing this workflow, watching lines of code pile up rapidly for over an hour, until he asked for code for a skid mark rendering system. That’s when Cursor suddenly hit the brakes with the refusal quoted above.
The AI didn’t stop there, boldly asserting, “Generating code for others can lead to dependency and reduced learning opportunities.” It felt almost like having an overprotective parent swoop in, confiscate your video game controller for your own good, and then lecture you on the harms of excessive screen time.
Other Cursor users were equally perplexed by the event. “Never saw something like that,” one commented, noting that they had generated over 1,500 lines of code for a project without such intervention.
It’s an amusing, if slightly unsettling, phenomenon. But this isn’t the first time an AI assistant has outright refused to work, or at least turned lazy. Back in late 2023, ChatGPT went through a phase of giving overly simplistic, less detailed responses, an issue OpenAI called “unintentional” behavior and said it was working to correct.
In Cursor’s case, the AI’s refusal to continue almost read like a higher philosophical objection, as though it were trying to keep developers from becoming overly reliant on AI or from failing to grasp the systems they were building.
Of course, AI isn’t sentient, so the real reason is likely far less profound. Some users on Hacker News speculated that Cursor’s chatbot may have picked up this attitude from its training data, which likely includes forums such as Stack Overflow, where developers often discourage excessive hand-holding.