Developer interest in the key components of generative artificial intelligence (AI) is accelerating, according to a report from tech training specialist O'Reilly.
The report found searches for natural language processing (NLP) registered the highest year-over-year growth among AI topics (42%), followed by deep learning (23%).
Developers also increasingly searched for content related to transformers — the AI model that's led to tremendous progress in NLP — an indicator of the impact that OpenAI's GPT-3 and ChatGPT have had.
ChatGPT's Surprising Appearance
Mike Loukides, vice president of emerging technology content at O'Reilly, said the most surprising thing he's learned from the report is something that, ironically, barely made it in.
"ChatGPT appeared suddenly and, since its appearance, has almost completely dominated discussions about the future of computing," he explained. "We didn't have any data on ChatGPT."
The report is based on usage of material on the company's learning platform, and ChatGPT-related material is only now starting to appear there.
"People can't use content that doesn't exist," Loukides said. "But it really looks like a game changer."
For the first time, an AI exists that feels like it's actually talking to you, he added.
"Forget the fact that it doesn't know what it's talking about, or that it will frequently make up its own facts," he noted. "Humans want to believe."
More practically, while ChatGPT isn't terribly good at writing, it's good enough, according to Loukides.
"Most companies need a lot of text that doesn't necessarily have to be high quality," he noted. "Descriptions of products for catalogs, boilerplate for annual reports, and so on. Will tools like ChatGPT write these? Probably."
Why Are Software Developers Turning to ChatGPT?
Loukides pointed out that ChatGPT is also proving valuable to software developers and said it's surprisingly good at explaining code that you don't understand.
"You can use it to help you learn a new programming language, and you can use it to help you write code," he said. "A lot of the code that it writes will be wrong, but it still saves you the trouble of memorizing details about syntax and library functions. Fixing the errors is less work than searching for an obscure function in the documentation."
Loukides said there has been a big shift in attitudes toward AI: the technology has long been impressive, but questions lingered about whether it could deliver the kind of results that drive investment, especially in a tight economy.
"Now there's no question. Startups in crypto, Web3, VR, and so on will have trouble getting funded," he said. "Venture money will be flowing to AI, and particularly to generative applications."
With ChatGPT, Fiction Is Becoming Reality
Interest in NLP is high in part because, for years, making computers talk was the stuff of fantasy and science fiction, Loukides said.
"Starting a couple of years ago, that fiction became reality. For the past two years, NLP has been at the forefront of AI research," he said. "With the release of ChatGPT, you can now have a conversation that more or less makes sense."
Deep learning is behind all this development in AI — and it registered the second-highest growth among AI topics, at 23%, he pointed out.
"Almost all of the progress that AI has made in the past five years comes from figuring out how to apply large neural networks to new kinds of problems," he said.
However, much work remains to be done on tools like ChatGPT, and Loukides said it isn't surprising that the chatbot is subject to errors and "hallucinations."
"All these language models are optimized for generating plausible human language, not for being correct," he said. "And that's what they do: They're very good at generating human language, but they reflect all the errors that you can find on the internet."
While optimizing for correctness will be a difficult problem for researchers working on large, general models like ChatGPT, correctness is much less of an issue for models trained on a specialized data set — for example, a company's financials.
"The number of errors goes down sharply when a model is trained on a set of data that's accurate and trusted," Loukides said.
Work is also needed on smaller models that can be trained and operated by companies that don't have the computing resources of Microsoft or Google.
"There are already a number of startups working on this problem," he pointed out. "The coming year is going to be exciting."
About the author: Nathan Eddy is a freelance writer for ITPro Today. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.