When a major US bank – which prefers not to be named due to the regulated nature of its business – rolled out a new unified communications infrastructure built for 24-hour uptime, there were cheers all round for customer service quality, not to mention a projected saving of $1m.
However, it soon became apparent that the new infrastructure contained a fairly major teething problem.
“The multi-vendor voice environment required 24-hour uptime, which was being compromised by quality of service (QoS) issues,” reports the bank.
“Queues began filling up as calls flooded into the call centres, resulting in long delays, which in turn negatively impacted the customer service experience.”
As ever, it was IT’s task to figure out the source of the quality issues. The fault could conceivably lie anywhere in the session border controller (SBC), the intranet or the VoIP services – all of which were supplied, in some cases, by different vendors. It was a tangled ball of string to unpick, and all the while those call queues were building and customers were complaining.
NetScout’s nGeniusONE “service assurance platform” was eventually selected to deal with the problem, and the bank reports the product was able to “quickly pinpoint the root cause of [the] call quality problems”.
Specifically, NetScout reports a problem with the QoS tag on calls:
“nGeniusONE immediately identified that call traffic had the proper QoS tag when entering the SBC, but was incorrect as it exited the outsourced interactive voice response (IVR) system,” says NetScout.
“This meant that calls were being given best-effort delivery, which explained why they were queuing up. nGeniusONE’s traffic intelligence enabled IT to gain critical insights and pinpoint exactly where within the environment the misconfiguration was located.”
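The QoS tag in question is the DSCP marking carried in the top six bits of the IP header's ToS byte: voice traffic is conventionally marked Expedited Forwarding (EF, DSCP 46) so routers prioritise it, while DSCP 0 means plain best-effort delivery. As a rough illustration of that distinction – not the bank's actual setup – the Python sketch below shows how an endpoint marks its packets and how a tag rewritten to zero reads back as best effort:

```python
import socket

# DSCP values: Expedited Forwarding (EF) is the conventional marking
# for voice; DSCP 0 is ordinary best-effort delivery.
DSCP_EF = 46
DSCP_BEST_EFFORT = 0

def dscp_to_tos(dscp: int) -> int:
    """DSCP occupies the top six bits of the IP ToS byte."""
    return dscp << 2

def tos_to_dscp(tos: int) -> int:
    """Recover the DSCP value from a ToS byte."""
    return tos >> 2

# Mark a UDP socket's outgoing packets as EF, as a voice endpoint would.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(DSCP_EF))

# Reading the option back confirms the marking (EF -> ToS byte 0xB8).
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos_to_dscp(tos))
sock.close()

# A middlebox that rewrites the tag to 0 – as the outsourced IVR did in
# this case – leaves downstream routers queuing the calls as ordinary
# data traffic rather than priority voice.
print(tos_to_dscp(dscp_to_tos(DSCP_BEST_EFFORT)))  # 0: best effort
```

The point the sketch makes is the one nGeniusONE surfaced: the marking is just a byte in each packet, so any device along the path can silently strip or rewrite it, and the only way to find the culprit is to compare the tag on entry and exit at each hop.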
That’s the science bit, but the most important outcome is that mean time to know (MTTK) was reduced, which in turn cut mean time to repair (MTTR) across the infrastructure, and the problem was soon resolved.