A recent study from Microsoft Research finds that leading AI models, including OpenAI's o3-mini and Anthropic's Claude 3.7 Sonnet, still struggle to debug software. Despite growing reliance on AI for coding tasks, these models successfully resolved fewer than half of the debugging tasks in the study's benchmark. The researchers attribute the shortfall to a scarcity of training data covering sequential decision-making processes, the kind of step-by-step reasoning that debugging requires. While tech leaders remain optimistic about the future of coding jobs, the findings raise concerns about over-reliance on AI for software development.