Date: Thursday, November 12, 2020
Time: 02:00 PM Eastern Standard Time
Duration: 1 hour
Machine learning (ML) brings exciting new opportunities, but applying the technology in production workloads has been cumbersome, time-consuming, and error-prone. In parallel, data generation patterns have evolved, producing streams of discrete events that require high-speed processing at extremely low response latencies. Meeting these demands requires scalable, high-performance stream processing, distributed ML technology, and dynamically scalable hardware resources.
In this webinar, learn how the Hazelcast In-Memory Computing Platform enables the application of ML algorithms (Java, Python, C++) to real-time data streams with a distributed, cooperative, low-latency architecture. We'll also examine how Intel's 2nd-generation processors, coupled with Intel Optane memory, are expanding the possibilities for in-memory platform applications.
Scott McMahon, Technical Director & Team Lead, America, Hazelcast
Scott McMahon has over 20 years of software development and enterprise consulting experience. Before specializing in Hazelcast In-Memory Data Grid technology, he built big data analytics platforms and business process management systems for many of the world's leading corporations. He lives in Portland, Oregon, and when not working on computer systems, he enjoys getting outdoors and having fun with his family.
Mel Beckman, Contributing Editor, ITPro Today
Mel Beckman has written computer-related features and product reviews for more than 20 years. His focus areas include IT, data centers, networking, and communications. A recognized industry expert, Mel has presented hundreds of seminars on computing technology throughout the US, Europe, and Asia.