The future of AI will hinge on advances in three capabilities: self-supervised learning, the transformer architecture, and transfer learning.
That’s according to Luis Vargas, partner technical advisor to the CTO at Microsoft. In the opening keynote address at the ODSC East conference, which runs this week in Boston and virtually, Vargas offered a panoramic view of what Microsoft deems “AI at Scale,” a Microsoft initiative to develop AI capabilities that can be integrated into its products and platforms.
AI technology has made significant strides. Vargas suggested that just as the brain has “neural connections,” AI is powered by “parameters.” Two years ago, AI systems ran on roughly one billion parameters. Fast-forward to 2022, and Microsoft’s research team is working on systems with over 500 billion parameters.
Three Concepts for AI Expansion
Vargas argued that three concepts will define the future of AI:
- Self-Supervised Learning: Connected machines will continue to improve on their own, with minimal human training.
- Transformer Architecture: Machines encode inputs into learned representations of data and decode those representations into outputs.
- Transfer Learning: Machines can be trained across tasks rather than for a single, linear set of operations, creating “portability.”
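As a rough illustration of the transfer-learning idea above -- reusing a representation learned on one task for a new one -- the toy sketch below freezes a stand-in “pretrained” feature extractor and fits only a small new head on a downstream task. All names, shapes, and data here are hypothetical, not from Vargas’ talk or any Microsoft system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" feature extractor: a fixed projection standing in
# for weights learned on a prior task. It stays frozen during transfer.
W_pretrained = rng.normal(size=(4, 8))

def extract_features(x):
    # Frozen representation: project inputs, apply a nonlinearity.
    return np.tanh(x @ W_pretrained)

# A new downstream task: predict a linear function of the raw inputs.
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0])

# "Transfer": keep the extractor fixed and fit only a new output head
# on top of the frozen features (here via least squares).
F = extract_features(X)
head, *_ = np.linalg.lstsq(F, y, rcond=None)

pred = F @ head
mse = np.mean((pred - y) ** 2)
print(f"downstream MSE with frozen features: {mse:.4f}")
```

In practice the frozen extractor would be a large pretrained network and the head a small trainable layer, but the “portability” Vargas described is the same: the representation is learned once and reused across tasks.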
To achieve these three AI capabilities in unison, Vargas suggested that “the larger, the better,” not only in terms of computational power but also in terms of learning acceleration.
Questions About the Future of AI
Vargas’ keynote at ODSC East was highly informative yet essentially Microsoft-centric. He discussed numerous AI applications, including practical uses of Microsoft AI such as improving employee-employer matching, customer relationship management, and meetings.
Still, questions remain.
Vargas hedged his bets on the perennial question about the importance of human versus machine. He used the phrase “human-like” to suggest that the examples of AI applications he offered were not perfect replications of human behavior and production. However, he then proceeded to show how machines could be “as good” as humans in cases such as answering natural-language questions and even deciphering context from ambiguous inputs.
The keynote could have ventured beyond Microsoft’s investments. Of course, Microsoft plays a large role in AI. And this goes back to the fundamental issue related to the future of AI and AI expansion, which Vargas implied with the “larger is better” notion: Can small organizations or small groups of scientists, engineers, and thinkers scale AI without access to the same resources that large vendors like Microsoft have?
Some might be reminded of what author Amy Webb calls “The Big Nine,” the nine firms that essentially control AI today. Vargas did not adequately address the question of who will influence the future of AI. Instead, his keynote seemed to suggest that if you aren’t big, you need to go home. While this was likely not his intent, it certainly sparked questions about AI’s future.