As organizations continue to integrate generative AI into their tech stacks, concerns over the security of software development roles have increased due to the technology's ability to generate and execute code.
The generative AI platform ChatGPT considers itself "good" or "excellent" at more than 95% of all skills specified in software development job postings, according to a report from Indeed's HiringLab — a self-assessment that raises uncomfortable questions about where human software developers now stand.
Technology skills account for 82% of the skills listed in a typical software development posting, while business operations skills — which GenAI rates itself "good" at — account for just 7%, though they appear in 71% of all software development job postings.
However, while many organizations are already leaning into AI-generated code, ChatGPT is still wrong more than half the time when answering software questions.
Communication skills — which GenAI rates itself "excellent" at — are mentioned in almost half (43%) of all software development job postings.
The Indeed report noted that GenAI admits it is merely "fair" at engineering skills and leadership skills — talents mentioned in 20% and 12% of all software development job postings, respectively.
Do Engineering Job Postings Focus Too Much on Hard Skills?
Tony Lee, chief technology officer at Hyperscience, said engineering job postings tend to over-index on hard skills in the requirements, as those are the easiest to quantify.
"It's easy to list a dozen technologies and underrepresent the human collaboration aspects and importance of necessary soft skills in modern engineering — for example, the ability to work collaboratively with designers, product managers, other engineers, and customers to creatively problem-solve," he explained.
There are a couple of areas where Lee would like to see more in-depth analysis.
"First is to not have ChatGPT assess its own skill level — that seems like a flaw in the methodology overall," he said. "Second, consider the societal constraints on generative AI for these roles."
Lee said developers are rightfully worried about their job security as organizations look toward AI to cut costs and increase operational efficiency.
If large language models (LLMs) like ChatGPT rank themselves as extremely proficient in software development skills, it's reasonable to assume organizations will begin to consider implementing AI in software development.
"However, developers shouldn't be looking for new career paths just yet, as human touch will always be necessary throughout the software development lifecycle," Lee said.
For example, generative AI is unable to capture human creativity and curiosity, requiring a developer to fill those gaps.
"From mitigating potential bias in datasets to ensuring training data is up to date to avoid model drift, the need for a human in the loop will remain to ensure the efficiency and accuracy of an LLM as well," Lee explained.
What GenAI Can Do for Developers: Take Over Mundane Tasks
Developers can lean into generative AI to offload the mundane, repetitive tasks that consume their daily workflows.
By automating simple tasks, such as code quality testing, developers get time back in their day to focus on technological innovation.
As generative AI takes over more basic tasks, developers can level up their "soft" skill sets, such as curiosity, creativity, and problem-solving.
"By leaning into these equally desired traits required for their role, human developers will be able to offer greater value to organizations beyond just the tasks that AI can complete," Lee noted.
From his perspective, implementing generative AI must come from the top down, meaning CTOs and product leads are responsible for integrating the technology into developer team workflows.
"Given that CTOs often oversee product development, design, and several other facets of IT operations, these changes must be introduced by the person overseeing every aspect of the software development lifecycle and product roadmap," he said.
Lee cautioned that generative AI, in its current state, may be able to generate code, but that doesn't ensure its accuracy or quality.
Currently, many AI models run the risk of experiencing model drift, which occurs as training data ages out and there is either little data to help continue training or the datasets are not updated due to a lack of time, resources, and talent.
"Over time these issues can be resolved, and as training datasets grow larger and more accurate, generative AI will be better positioned to assist human developers and provide better support," he said.
About the Author
Nathan Eddy is a freelance writer for ITPro Today. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.