I’m still pinching myself that I got to collaborate with Cat Hicks, PhD and Carol Lee, PhD at the Developer Success Lab to examine the sociocognitive factors affecting software developers’ adoption of Generative AI-assisted coding technologies.
We conducted original, empirical research with 3000+ software engineers and developers across 12+ industries engaged in the transition to Generative AI-assisted software work. We brought a human-centered approach to pressing questions that engineering organizations are facing about the rapidly changing possibilities of AI-assisted coding. Some of the questions driving our research were:
How are developers impacted by changing demands on their roles?
Where might there be emerging equity & opportunity gaps in who has access to these new development capabilities?
What are the risks to the quality of technical work, and to the developer productivity, thriving, and motivation that drive that technical work?
What we found is that AI Skill Threat is real. Software practitioners are afraid, anxious, and worried about the relevancy of their current skill sets in these early stages of AI-assisted coding. AI Skill Threat offers a new, evidence-based framework to help developers, engineering managers, and engineering leaders as they grapple with the risk of failing to thrive in the transition to AI-assisted work.
This framework also predicts when and why AI Skill Threat emerges. We're more likely to see anxiety and uncertainty from engineers who maintain a strong belief in the need for “innate brilliance” to be successful as a software engineer, as well as from those who subscribe to strong contest cultures within their roles as professional software developers.
Our research also reveals systematic group differences in developers’ experiences with AI-assisted coding. AI Skill Threat is higher for Racially Minoritized developers, who also rate the overall quality of AI-assisted coding outputs significantly lower. Both female developers and LGBTQ+ developers were significantly less likely to report plans to upskill for new AI-assisted workflows. These and other emerging differences point toward a critical need to understand how organizations can ensure that AI-assisted coding adoption is equitable and accessible, and that key insights from developers with important perspectives on the risks of AI-assisted coding are heard.
It's not all doom and gloom, though. Our framework also helps answer what engineering leaders, teams, and developers can do to mitigate AI Skill Threat. We show empirical evidence that software teams’ investment in two key elements in the Developer Thriving framework – Learning Culture and Belonging – strengthens developers’ resilience as they transition to AI-assisted software development.
Something that sets this research project apart is that we accompany our novel empirical findings with a Generative-AI Adoption Toolkit: free, adaptable, research-backed resources to help practitioners increase learning and belonging using the language and process of common software engineering team rituals like post-mortems and hackathons. As part of this toolkit, we provide facilitation guides and an assessment tool that offers a practical, abbreviated version of our new empirically validated scales. Software teams can use it to measure their own AI Skill Threat, Learning Culture, and Belonging, and to track pre- and post-changes in these critical measures as they navigate AI-assisted coding adoption.
I’m really proud of this work, folks, and the Developer Success Lab and I sincerely hope that this research and toolkit help you unlock your engineering organization's potential as you enter this new phase of Generative AI-assisted coding!