We recently conducted a study of best practices for getting the most out of enterprise- and organization-scale Wi-Fi installations. Even with a significant degree of diversity in the responses we obtained from practitioners--both suppliers and end users--a few universal observations and suggestions jumped out. As these aligned closely with our own experience at Farpoint Group, we now have a brief but significant list of what we might even call the Important Truths of getting the most out of medium- to large-scale Wi-Fi infrastructures. With the edge of the network now almost always wireless, always critical to overall mission success, and claiming so much of the IT budget, it's vital for network managers everywhere to optimize ROI by optimizing Wi-Fi performance. In other words: get the most out of what you have now, and perhaps defer that next big upgrade for a while, improving ROI both now and in the future.
To frame this discussion properly, it’s vital to recognize that Wi-Fi performance isn’t about throughput alone--in fact, it’s only rarely about throughput today, despite the industry’s emphasis on Mbps (and now Gbps) numbers as the key differentiator between generations of Wi-Fi standards and the resulting products. Rather, we need to consider reliability, availability, and especially overall network capacity as the key metrics of performance at the edge. The capacity element is especially critical: each user increasingly works with multiple wireless devices simultaneously and runs apps that require time-bounded services, so there must be sufficient aggregate throughput for their specific applications to perform as required, minimizing latency and thus optimizing end-user productivity. End users with insufficient coverage, unreliable service, or inadequate network capacity most certainly won’t be as productive as they could be. And if the network isn’t optimizing the productivity of end users, what good is it at all?
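To make the capacity point concrete, a back-of-the-envelope estimate can translate users and devices into required aggregate throughput. The figures below (users, devices per user, per-device demand, peak activity factor, effective per-AP throughput) are illustrative assumptions for the sketch, not measurements or vendor specifications:

```python
# Back-of-the-envelope Wi-Fi capacity estimate.
# All numeric inputs below are illustrative assumptions.

users = 400                   # users served in one coverage area (assumed)
devices_per_user = 3          # e.g., laptop + phone + tablet (assumed)
mbps_per_device = 2.0         # average concurrent demand per device (assumed)
peak_factor = 0.4             # fraction of devices active at peak (assumed)

# Aggregate offered load at peak
offered_mbps = users * devices_per_user * mbps_per_device * peak_factor

ap_effective_mbps = 300       # realistic per-AP throughput after overhead (assumed)
aps_needed = -(-offered_mbps // ap_effective_mbps)   # ceiling division

print(f"Peak offered load: {offered_mbps:.0f} Mbps")
print(f"APs needed (capacity only, ignoring coverage): {int(aps_needed)}")
```

Note that this sizes for capacity only; coverage, co-channel interference, and roaming behavior will usually push the real AP count higher.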
That’s a pretty tall order, especially when we consider that radio-wave propagation usually varies from moment to moment and that end users often move around during the day. This means that both effective instantaneous throughput and demands for capacity are often difficult to predict. But, again, we did assemble a pretty good list of strategies and tactics for dealing with this classic more-variables-than-equations problem, so let’s see what floated to the top:
Installation planning: There are many forms of site survey that can be applied during a greenfield deployment (that is, one with no Wi-Fi system currently installed). In general, it’s a good idea to know geographically where user demand will come from so that you can bulk up on APs in high-demand areas--conference rooms, lecture halls, and similar venues come to mind. An initial RF sweep with a Wi-Fi-specific (i.e., 2.4 and 5 GHz) spectrum analyzer is a good idea to identify any pre-existing interference. A survey of user applications is advisable to identify traffic requirements. It’s also a good idea to examine the rest of the network--the wired portions all the way to the router that feeds the internet connection--to identify potential bottlenecks, policy/configuration mismatches, and even obsolete network cabling.
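The wired-side bottleneck check above can itself be reduced to simple arithmetic during planning: compare the peak load the APs behind a switch could offer against the uplink feeding them. The AP count, per-AP peak, and uplink speed below are hypothetical planning inputs for the sketch:

```python
# Sketch: flag a wired-side bottleneck behind a planned Wi-Fi deployment.
# All inputs are hypothetical planning figures, not measured values.

def uplink_headroom(num_aps, per_ap_peak_mbps, uplink_mbps):
    """Return (offered_mbps, sufficient) for one access-switch uplink."""
    offered = num_aps * per_ap_peak_mbps
    return offered, offered <= uplink_mbps

# e.g., 12 APs at an assumed 300 Mbps peak each, into a 1 Gbps uplink
offered, ok = uplink_headroom(12, 300, 1000)
print(f"Offered load: {offered} Mbps; uplink sufficient: {ok}")
```

In practice not every AP peaks at once, so an oversubscription factor is often applied--but running the worst case first makes obsolete cabling and undersized uplinks easy to spot.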
Operational performance management: New installations should be tested with a limited number of devices running production applications before the general population is turned loose. Most importantly, learn to use the management console. There’s a wealth of information to be gleaned from this vital resource, and while exploring the entire application can be daunting, the more one knows, the faster problems can be resolved--and even identified before they have much of an impact. Ongoing operational monitoring and properly set alerts and alarms are vital here as well.
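The alerting side of this is conceptually simple: poll per-AP metrics from the management console and compare them against thresholds. The sketch below illustrates the idea; the metric names, threshold values, and sample data are hypothetical, and real field names and APIs vary by vendor:

```python
# Sketch: threshold alerting over per-AP metrics pulled from a management
# console. Metric names, limits, and the sample dict are hypothetical.

THRESHOLDS = {
    "client_count": 60,       # too many clients associated to one AP
    "channel_util_pct": 70,   # airtime approaching saturation
    "retry_rate_pct": 25,     # a sign of interference or poor coverage
}

def check_ap(name, metrics):
    """Return human-readable alerts for one AP's current metrics."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = metrics.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{name}: {metric}={value} exceeds {limit}")
    return alerts

# Hypothetical snapshot for one conference-room AP
sample = {"client_count": 75, "channel_util_pct": 40, "retry_rate_pct": 30}
for alert in check_ap("AP-3F-Conf", sample):
    print(alert)
```

The value of even a toy like this is that thresholds are written down and reviewed, rather than living in one engineer's head.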
Troubleshooting: Increasingly, the best solutions are based on the emerging class of artificial intelligence/machine learning tools--some even providing natural-language query facilities--now becoming commonly available from system vendors and third parties alike. Stand-alone assurance tools, or similar functionality provided by a system vendor, today handle far more than intrusion detection and prevention. Further, third-party monitoring can complement and even verify the information reported by a management console--belt and suspenders, if you will. We’re even seeing automated remediation take over significant elements of troubleshooting. And as WLANs grow so large that even the best mere-mortal network engineers can suffer from exhaustion, we expect operational management to benefit continually from advances in automation for years to come.
But fear not, all you network engineers: even as routine problems are solved automatically, there will still be plenty to do in adding new management capabilities, planning the next upgrade, aligning operations policies with overall organizational mission objectives, and much more. It is nice to know, though, that the occasional night and weekend off is also in the offing.