At the Zoom analyst event in Palo Alto, California, on August 2 and 3, the company described how it plans to embrace artificial intelligence (AI) across its product portfolio, which ranges from Zoom Meetings, Phone, Chat, and Contact Center to its developer platform. Since then, the company has drawn criticism for elements of the terms of service governing its AI applications, but we believe Zoom has acted responsibly as it develops AI-led innovations without compromising trust and transparency.
Zoom Is Taking a Federated Approach to AI
Multiple vendors have begun integrating AI into various unified communications and collaboration (UC&C) and contact center applications. The advantages and concerns surrounding AI’s implementation in UC&C are multifaceted.
UC&C is more than just a typical business application; it is the foundation upon which organizations operate. Employee communications and collaboration, achieved through messaging, meetings, and calling applications, are crucial for task completion. Similarly, contact center applications are essential for organizations to interact with customers. These services generate valuable, often sensitive data that belongs to the enterprise customer. However, organizations must leverage AI to glean valuable insights from this data. While some enterprises may consider developing AI models in-house, the costs associated with such endeavors prove prohibitive for most organizations, causing many to opt for third-party AI models. Zoom’s customers would likely prefer to use Zoom’s AI algorithms to stay competitive, given that their data is already hosted on the company’s servers.
During the analyst event, Zoom underscored its intention to take a federated approach to enhancing its products with AI capabilities. This entails incorporating a diverse array of AI models: proprietary models developed by Zoom, models from leading AI providers such as OpenAI, and models from Zoom’s partners, such as Anthropic, Meta, and select clients.
This federated approach lets Zoom rapidly integrate AI features from providers like OpenAI while refining and expanding on them with its own models. This flexibility is advantageous to Zoom for several reasons, as the company can:
- Introduce new solutions swiftly.
- Decrease its overall research costs.
- Pass on cost savings to customers.
- Tailor solutions to address the unique requirements of specific clientele.
- Offer AI-enabled solutions tailored to specific industries.
- Customize AI models quickly to align with distinct business needs.
While integrating OpenAI’s models is a straightforward move for Zoom, the question remains: how will the company cultivate its own AI capabilities?
For any AI-driven innovation by Zoom to come to fruition, three essential components are required: algorithms, hardware, and training data. Zoom boasts strong financials, enabling it to invest in hardware development and create compelling algorithms. Yet the most crucial element is the training data, which forms the core for constructing credible AI models. Training data consists of labeled examples that teach an AI model or machine learning algorithm to make accurate decisions.
To illustrate, envision Zoom developing an AI feature for identifying and tagging individuals within a meeting room. In this scenario, the model must analyze countless hours of speech recordings captured during employee meetings and other interactions. The model must proficiently distinguish meeting participants from background noise and varying accents. Similarly, if a meeting summary tool were crafted, it would necessitate training data encompassing video call transcripts, in-meeting chat data, reactions, emojis, whiteboard application inputs, and third-party tool usage within Zoom Meetings. Meeting metadata, such as type, time, location, and participant count, would also be crucial.
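To make the shape of such training data concrete, a single record for a hypothetical meeting-summary model might bundle the transcript, chat, reactions, and metadata described above into one labeled example. This is purely an illustration; the field names below are invented for this sketch and do not reflect Zoom’s actual data schema:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical training record for a meeting-summary model.
# Field names are illustrative, not Zoom's actual schema.
@dataclass
class MeetingTrainingExample:
    transcript: str              # speech-to-text of the call
    chat_messages: List[str]     # in-meeting chat lines
    reactions: List[str]         # emoji/reaction events
    whiteboard_text: List[str]   # text captured from whiteboard inputs
    meeting_type: str            # metadata, e.g., "standup" or "all-hands"
    duration_minutes: int        # metadata: meeting length
    participant_count: int       # metadata: number of attendees
    summary: str                 # human-written target the model learns to produce

example = MeetingTrainingExample(
    transcript="Alice: Shipping is blocked on the API review. Bob: I'll review it today.",
    chat_messages=["Link to the API doc is in the agenda."],
    reactions=["thumbs_up"],
    whiteboard_text=[],
    meeting_type="standup",
    duration_minutes=15,
    participant_count=2,
    summary="API review is the blocker; Bob will complete the review today.",
)
print(example.participant_count)  # 2
```

The labeled `summary` field is what makes this supervised training data: the model learns to map the raw inputs to the human-written target.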
Zoom remains one of the most popular unified communications as a service (UCaaS) vendors in the market and, as such, has access to these datasets. But does it have the right to use them? Not without customer consent: there are potential issues under Europe’s General Data Protection Regulation (GDPR) and some US state privacy laws, such as the California Consumer Privacy Act (CCPA). Sharing these sensitive data points can affect customers’ data privacy. If a business agrees to share any employee, customer, or partner-related information with Zoom, the vendor’s terms of service (ToS) state that it will use that information to train its models. Customers have the right to ask the following questions to ensure Zoom is safeguarding their data privacy:
- What security and privacy measures have been put in place?
- What will happen with the past collected data?
- Will data be collected on an ongoing basis?
- Will data be made anonymous?
- Will the records be held individually or combined across all Zoom customers?
Most customers recognize that access to AI functionalities elevates employee productivity and provides a competitive edge. Given the significance of these capabilities, enterprises would likely be willing to participate in research and permit Zoom to use their data ethically for model training. Nonetheless, enterprises must ensure they have their employees’ approval before sharing personal information, such as recordings of employee voices, photographs, visuals, video conferencing transcripts, textual exchanges, and files, with Zoom.
Every enterprise will adopt a distinct approach to handling the concerns of its workforce. Currently, only enterprise administrators can consent as per Zoom’s ToS, with end users having little control over the process other than choosing not to attend the meeting.
What Do Openness and Trust Mean for Zoom?
By asserting its dedication to AI transparency, Zoom has once again underscored its commitment to addressing ethical and legal considerations throughout the development and ongoing use of its products. At the Zoom analyst event, the company specifically conveyed that it would not train its models using customer audio, video, or chat content without obtaining explicit consent.
As Zoom caters to diverse customers, ranging from individual free users to large-scale enterprises spanning various sectors like healthcare, government, education, and financial services, it is in the company’s best interest to stay transparent on all accounts. In many industries, Zoom offers additional compliance, such as the following:
- Service Organization Control (SOC) 2 Type II
- CSA Security Trust Assurance and Risk (STAR) Level 2 Attestation
- ISO/IEC 27001:2013
- International Association of Privacy Professionals (IAPP) Silver Member
- Common Criteria
- UK Cyber Essentials, Cyber Essentials Plus
- Japan’s Center for Financial Industry Information Systems (FISC)
- FedRAMP Moderate
- Department of Defense (DoD) IL2
- Health Insurance Portability and Accountability Act (HIPAA)
- Personal Information Protection and Electronic Documents Act/Personal Health Information Protection Act (PIPEDA/PHIPA)
- Family Educational Rights and Privacy Act (FERPA)
Zoom is embracing AI and transparency through two fundamental approaches:
- Zoom allows all account holders to choose whether to use AI capabilities and share data. The company has built effective consent flows that account admins must review and confirm before any data is shared. This control extends to specific product features such as meetings, messaging, and calling. For instance, Zoom offers End-to-End Encryption (E2EE) for sensitive conversations. E2EE enhances security by turning off features like join before host and cloud recording, thereby protecting meeting content. For E2EE-enabled calls, Zoom has no access to the data, either during the call or afterward. Moreover, if customers opt for value-added services like recording and transcription, they implicitly agree to share data with Zoom; no vendor can offer value-added services without accessing user data, and Zoom is no different here.
- Zoom strategically trains its AI algorithms using telemetry and diagnostic data from its platform. This data encompasses collective information, including the total number of daily meetings, average participant counts, security issues, usage patterns across different regions and sectors, and feature utilization like whiteboarding, recording, or data center traffic. These anonymized insights into the analytics, diagnostics, and usage patterns of Zoom’s platform enable the company to improve its availability, reliability, and security.
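The kind of anonymous, aggregate telemetry described in the second point can be illustrated with a small sketch: per-meeting usage records are reduced to counts and averages, so no conversation content or individual identity survives the aggregation step. The record fields and metric names here are hypothetical, chosen only to mirror the examples in the text:

```python
from statistics import mean

# Hypothetical per-meeting telemetry records: usage signals only, no content.
meetings = [
    {"region": "EMEA", "participants": 8,  "used_whiteboard": True,  "recorded": False},
    {"region": "EMEA", "participants": 25, "used_whiteboard": False, "recorded": True},
    {"region": "APAC", "participants": 4,  "used_whiteboard": True,  "recorded": True},
]

def aggregate(records):
    """Reduce raw records to anonymous platform-level metrics."""
    return {
        "total_daily_meetings": len(records),
        "avg_participants": mean(r["participants"] for r in records),
        "whiteboard_usage_rate": sum(r["used_whiteboard"] for r in records) / len(records),
        "meetings_by_region": {
            region: sum(1 for r in records if r["region"] == region)
            for region in {r["region"] for r in records}
        },
    }

stats = aggregate(meetings)
print(stats["total_daily_meetings"])  # 3
```

Only the output of `aggregate` would feed analytics or model tuning in such a scheme; the raw per-meeting rows, which could identify an account, stay behind.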
In recent days, there has been increased media attention on Zoom’s updated terms of service (ToS), which took effect in March 2023. Questions have arisen about the rationale behind the revision. On August 7, Smita Hashim, Zoom’s chief product officer, issued a statement clarifying the ToS changes, noting among other things that when individual users join a session enabled with one of Zoom’s AI applications, they are presented with a notification informing them about the potential use of their inputs. Additionally, Zoom CEO Eric Yuan shared on LinkedIn that these clarifications aimed to explain Zoom’s procedures, and he cited an unspecified internal “process failure” in connection with the March 2023 ToS update. Yuan promised the failure would be fixed and reiterated that Zoom remains committed to user privacy and will not use any customer data without obtaining proper consent.
No UC&C vendor can deliver a top-notch AI toolkit without refining its algorithms on extensive customer datasets. All major UC&C players have access to customer data; the critical questions are how they use that data and how transparent they are about it. Do vendors communicate with their customers and inform them at every AI data touchpoint? So far, Zoom appears commendably upfront about maintaining transparency.