Once again, thousands of developers from the Microsoft ecosystem have landed in Seattle. Their mission is to learn about the latest developments that will help them expand their capabilities across Microsoft’s developer products and services.
As I wrote last week in my Microsoft Build 2018 conference expectations article, the company continues to expand its products and services around six major tech themes: Artificial Intelligence (AI), Machine Learning (ML), big data, the Internet of Things (IoT), web development, and of course the cloud.
Last year at BUILD 2017, Microsoft introduced the concept of the Intelligent Edge, the outer boundary of the Intelligent Cloud.
The Intelligent Edge comprises all the connected devices we use every day, ranging from IoT sensors to our computers and everything in between. These devices connect to the Intelligent Cloud, which provides their connectivity and moves data back and forth. In turn, AI and ML tools are used to harness and evaluate that data.
Today’s news from Microsoft BUILD 2018 continues to expand capabilities in both of these critical areas. All of the new products and services are focused on letting developers extend their own apps and services even further into the Intelligent Edge.
Here is a rundown of today’s announcements in the various areas:
Intelligent Edge and Intelligent Cloud
- Open sourcing the Azure IoT Edge Runtime, which will let customers modify and debug it and give them more transparency and control over applications on the Intelligent Edge.
- Custom Vision will run on Azure IoT Edge so that drones and industrial equipment can take necessary actions even when not connected to the cloud. More Azure Cognitive Services will be able to support edge deployments like this over the next several months.
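The offline behavior described above boils down to a simple fallback pattern: try the cloud-hosted model first, and run a locally deployed model when the device is disconnected. A minimal sketch of that pattern in Python — all function names here are hypothetical placeholders, not Custom Vision APIs:

```python
# Hypothetical sketch of the edge-inference pattern: prefer the cloud
# model when reachable, fall back to an on-device model when offline.

def classify_cloud(image):
    # Placeholder for a call to a cloud-hosted vision endpoint.
    # Here we simulate a device that has lost connectivity.
    raise ConnectionError("device is offline")

def classify_local(image):
    # Placeholder for a model exported to and running on the device.
    return "obstacle-detected"

def classify(image):
    """Try the cloud first; fall back to the on-device model."""
    try:
        return classify_cloud(image)
    except ConnectionError:
        return classify_local(image)

print(classify(b"raw-frame-bytes"))  # falls back: prints obstacle-detected
```

The point is simply that the same classification interface keeps working whether or not the cloud is reachable, which is what lets a drone or industrial rig keep acting while disconnected.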
- DJI and Microsoft have partnered to create a new Software Development Kit (SDK) for Windows 10 PCs, and DJI will now use Azure as its preferred cloud provider for its commercial drone and SaaS products. The new SDK will offer full flight controls and real-time data transfer capabilities to Windows 10 devices. Future cooperation between the two companies will work to extend options such as Azure IoT Edge and Microsoft AI services to new applications across several industries.
- Another partnership announced today, between Microsoft and Qualcomm, will build out an AI vision developer kit that runs on Azure IoT Edge. The kit provides the key hardware and software needed to develop camera-based IoT solutions, using products such as Azure Machine Learning and hardware acceleration via Qualcomm’s AI Engine. This hardware combination will also run locally out on the edge.
Data and AI Development
- Project Kinect for Azure is a collection of sensors, including a depth camera and onboard computing capability, for AI on the edge. The technology has its roots in the original Kinect built for Xbox 360 and Xbox One. Those capabilities were further refined in HoloLens and now arrive with even more enhancements in this new project. The new capabilities pair Microsoft’s Time of Flight sensor with other sensing hardware in a small form factor device with an extremely efficient power profile. Combined with Azure AI, it will enable articulated hand tracking and high-fidelity spatial mapping with a high level of precision.
- A new Speech Devices SDK will provide audio processing that combines multiple sources to produce highly accurate speech recognition, including noise cancellation and far-field voice capabilities. Developers will be able to implement a wide range of voice-enabled services such as drive-thru ordering, in-car and in-home digital assistants, and smart speakers.
- An update to Azure Cosmos DB, first announced at BUILD 2017, will offer new and differentiated multi-master capabilities at a global scale. The database service supports deployments in both the cloud and on the edge, along with virtual network (VNET) support for increased security.
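To see what multi-master means in practice: when two regions both accept writes to the same record, the service has to resolve the resulting conflict, for example with a last-writer-wins policy. A toy sketch of such a policy — illustrative of the concept only, not Cosmos DB’s actual implementation:

```python
# Toy last-writer-wins conflict resolution, the kind of policy a
# multi-master database can apply when the same record is updated
# concurrently in two regions. Illustrative only.

def resolve(record_a, record_b):
    """Keep whichever version carries the later timestamp."""
    return record_a if record_a["ts"] >= record_b["ts"] else record_b

west = {"id": "cart-42", "qty": 2, "ts": 1001}  # written in one region
east = {"id": "cart-42", "qty": 3, "ts": 1005}  # written in another

winner = resolve(west, east)
print(winner["qty"])  # the later write wins: prints 3
```

Accepting writes in every region is what lowers write latency globally; the trade-off is that some deterministic resolution policy like this must run whenever concurrent writes collide.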
- Project Brainwave was announced as a new architecture for deep neural network processing. It is now in preview and available on Azure and on the edge. Microsoft says the project will make Azure the fastest cloud for running real-time AI, with full integration with Azure Machine Learning. Remember the FPGA hardware that Microsoft announced at IGNITE 2016? This service runs on that silicon and supports ResNet50-based neural networks. Project Brainwave is also in development for Azure Stack and Azure Data Box.
- Continued improvements to the unified speech service under Azure Cognitive Services will bring better speech recognition, text-to-speech, translation, and support for customized voice models. Developers will be able to train the speech service on a voice of their choosing rather than settling for the default male and female voices on offer.
- Azure AI services continue to expand with new updates to the company’s Bot Framework. Developers will be able to build AI experiences into any agent they create. These new services will power next-generation bots capable of richer conversational exchanges, with fuller personalities and voice customizations that reflect the brand.
- A new preview of Azure Search with Cognitive Services integration will combine AI and indexing technology to speed up the retrieval of information and insights through both text and image related searches.
Multi-Sense and Multi-device Experiences
- Microsoft Remote Assist will enable collaboration from remote locations using heads-up views, hands-free calling, image sharing, and mixed reality annotation. The service is also integrated into Microsoft Teams, allowing users to share what they are seeing with their Teams contacts. Two minds are better than one – especially when both are seeing the same view despite the distance between them. This very much reminded me of some of the HoloLens demos from the early days of that device.
- Furthering the benefits of Windows Mixed Reality and 3-D on Windows 10, Microsoft Layout will enable users to create spaces by importing 3-D models and building room layouts at the same scale as real-life spaces. These high-quality holograms can be viewed in your current physical space or in virtual reality using devices such as HoloLens or a Windows Mixed Reality headset. The spaces can be shared and worked on in real time with customers for full collaboration from anywhere in the world.
Modern Tooling and Experiences Across All Platforms and Languages
- Azure Kubernetes Service (AKS) will give developers an easier way to build and run container-based solutions, even with limited Kubernetes experience. The service, which will be widely available in the coming weeks, integrates with developer tools and workspaces, CI/CD, networking, and monitoring tools in the Azure portal. Microsoft will also be sending Kubernetes out to the edge with AKS for Azure IoT Edge, and will build a Kubernetes DevOps capability for AKS into Visual Studio Team Services.
- Visual Studio IntelliCode is a new feature in Visual Studio, available in preview today, that provides suggestions to help developers improve the quality of their code, which should also boost productivity during coding sessions.
- Visual Studio Live Share, also in preview today, allows developers to work together in real time with other team members. They will be able to collaborate to edit and debug code from tools like Visual Studio 2017 and VS Code. Live Share can be used across multiple languages and supports serverless, cloud, or IoT development projects.
- A new partnership between Microsoft and GitHub will bring Azure DevOps services to GitHub customers. Visual Studio App Center, released today, allows developers building mobile apps for iOS, Android, Windows, and macOS to integrate DevOps into their GitHub workflows.
- Microsoft Azure Blockchain Workbench, another service available today, will assist in the development of blockchain applications. It pairs an Azure-supported blockchain network with cloud services such as Azure Active Directory, Key Vault, and SQL Database, which should reduce development time when working on proof-of-concept scenarios.
Finally, Microsoft announced a follow-on to the AI for Earth program it launched last year, which used AI to address challenges in climate, water, agriculture, and biodiversity. The new effort, called AI for Accessibility, looks to put the same tools in developers’ hands to accelerate the development of accessible, intelligent solutions. The goal is to benefit the more than one billion people around the world with disabilities.
AI for Accessibility is a five-year, $25 million program that aims to make this impact through three key avenues:
- Providing seed grants of technology to developers, universities, non-governmental organizations, and inventors who are taking an AI-first approach to building these types of accessibility solutions.
- Identifying promising projects and making larger investments, including technology and access to Microsoft AI experts, to help those individuals and companies scale their concepts.
- Adding AI and inclusive design across Microsoft technology, working with partners to add AI innovation at the platform level, and empowering others to maximize accessibility in their own products and services.
Microsoft has made great strides in applying technology to help solve social issues. The company also knows the talent to build these solutions exists around the world but is often held back by a lack of exposure or funding. AI for Accessibility aims to help remove those hindrances.
Stay tuned to ITPro Today for further coverage of Microsoft BUILD 2018.