
Visual Collaboration Platforms: Evaluate Human Factors First

When evaluating visual collaboration platforms, IT should first focus on finding the right users to assess value and use case suitability.

As organizations digitally transform themselves with data, providing technology that helps users collaborate more effectively across the many sources of information they deal with in the course of their work will be a challenge. Visual collaboration platforms can be a way to work with many different sources of information simultaneously, but picking the right platform requires weighing many factors, from workstyle and culture to workplace design. Among them, end users should be the first focus of an evaluation.

While basic video and web conferencing technology allows teams to work together, visual collaboration platforms enable teams to keep many data sources open simultaneously, so they can compare different sources and types of data, or different views of the same information, side by side. These platforms typically provide a mechanism for organizing applications and data across multiple screens or digital workspaces, whiteboards or canvases. They can include video conferencing, “gesture”-based navigation between applications and screens, and integration with collaboration and workstream applications, either directly or via an API. Vendors have generally built their solutions with a particular industry or department use case in mind, but some are adaptable to many use cases.
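For teams planning to integrate via an API, the work typically amounts to pushing content from existing applications onto a shared canvas or workspace. The TypeScript sketch below illustrates that general pattern against a hypothetical REST endpoint; the base URL, payload fields, and bearer-token authentication are assumptions for illustration, not any specific vendor's API.

```typescript
// Hypothetical sketch: pushing a content "card" onto a shared canvas via a
// vendor's REST API. The endpoint path, payload shape, and auth scheme are
// assumptions for illustration; consult the platform's actual API docs.

interface CanvasCard {
  title: string;
  sourceUrl: string;              // e.g., a BI dashboard or document to embed
  position: { x: number; y: number };
}

async function addCardToCanvas(
  baseUrl: string,
  canvasId: string,
  apiToken: string,
  card: CanvasCard
): Promise<void> {
  // POST the card to the (hypothetical) canvas endpoint.
  const response = await fetch(`${baseUrl}/canvases/${canvasId}/cards`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(card),
  });

  if (!response.ok) {
    throw new Error(`Failed to add card: ${response.status} ${response.statusText}`);
  }
}

// Example usage: place a sales dashboard on a shared canvas alongside other content.
addCardToCanvas("https://api.example-collab.com/v1", "canvas-123", "API_TOKEN", {
  title: "Q3 Sales Dashboard",
  sourceUrl: "https://bi.example.com/dashboards/q3-sales",
  position: { x: 0, y: 0 },
}).catch(console.error);
```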

While evaluators of these solutions will want to focus on their technical aspects and how they fit into the IT infrastructure, companies must also invest time and effort in assessing the human factors. These directly affect ROI and determine whether the technology will be adopted and used regularly. After identifying a line-of-business executive sponsor with a compelling use case, the sponsor and IT should work together to establish an evaluation team with specific timelines, evaluation tasks, and metrics to gauge initial and ongoing training and support requirements.

Ideally, an evaluation team would include at least one executive sponsor who will use the technology, two or three departmental team members who would be regular users, at least one executive or administrative assistant, an onsite technical support resource from IT, and someone with human factors experience. Collectively, the team's skills and experience should span the full range of technical adeptness, from novice to power user.

Key people on the team need to wear two hats during the evaluation, acting as both end users and trainers for the technology. Department heads, or select team members, and any executive or departmental admins will need to be able to both use the system and train other users. The executive or departmental admins should use the evaluation to assess whether they could provide a level of support for the solution to end users; ideally, these individuals should be documenting tier 1 and tier 2 support requirements as they go. The IT staff should be able to provide both technical support and evaluation notes on the solution testing.

The testing schedule and tasks should focus on realistic day-in-the-life and train-the-trainer scenarios. Given the high degree of turnover in the workforce, someone on every team in a department should be comfortable enough with the technology to assist new team members as they learn to use it.

An evaluation schedule should include intensive use in the first two weeks of the trial so that users learn the system and become comfortable with it. The department teams should then take a break of about a week to assess whether the system makes them more productive; this break also shows how easily people pick the system back up after time away. An interesting wrinkle in the final week of testing would be to add new users to collaboration sessions to assess how easily they adapt to the solution.

A use case, such as outfitting a “war room,” may dictate room selection, but picking a neutral space for the evaluation can help in assessing how adaptable the solution is. It may be tempting to put this technology in the boardroom or other well-appointed conference rooms. However, putting it in "average" conference rooms and team spaces that don’t have a view or the nicest furniture will help measure the degree to which the solution, rather than the space, compels adoption.

TAGS: Conferencing