By Simen K. Frostad, Chairman of Bridge Technologies
I was recently invited to participate at a conference in London – the sort of event where the coffee is strong and the flow of opinions even stronger. Inevitably, the conversation turned to AI: after all, there surely isn’t a conference happening today where it doesn’t arise as a topic, regardless of industry (well, perhaps dry-stone walling, but even then there’s a chance AI has a role to play in optimum stone selection).

In network testing and measurement, it’s not a question of whether AI will play a role in our industry, but how, where, and when. The enthusiasm is understandable: in systems of growing complexity, tighter error margins, and relentless pressure to avoid disruption, if AI cannot help here, one might reasonably ask where it can.
Twins, but not identical
One concept that surfaced repeatedly was the use of ‘digital twins’: virtual models of real-world networks, fed with live or historical data, in which changes can be tested before they are deployed. The appeal is obvious. Instead of modifying a delivery environment and hoping for the best, operators can simulate configuration changes and failure scenarios in advance, all without risk to viewer experience.
Conceptually, digital twins are not new – engineers have long modelled systems. What AI brings is the ability to ingest far larger volumes of telemetry, learn behavioural patterns, and adapt models over time. In a network context, a digital twin aims to replicate not just topology, but behaviour: jitter under load, timing drifts under stress, how redundancy actually behaves when it is no longer theoretical.
Done well, this is powerful, promising fewer outages and greater confidence when making changes to live systems. However – and there is always a however – digital twins only function as well as the reality on which they are trained. If the underlying data is incomplete, inaccurate, or already filtered through layers of assumption, the resulting model will be convincing but wrong. Worse, it may be wrong in a consistent way, which is far more dangerous than being obviously broken. AI can generate answers, but it cannot know whether those answers are reasonable unless someone with real-world experience is prepared to challenge them – to recognise what is correlational rather than causal, and to ask which results should trigger scepticism rather than confidence.
This is where the distinction between AI as a tool and AI as a substitute becomes critical. Used as a tool, AI can extend professional judgement. Used as a substitute, it risks becoming self-referential: models trained on models, assumptions reinforced by their own outputs. The snake, as they say, begins to eat its own tail.

A dash(board) of caution around AI
The same themes emerge in the discussion around dashboards. Networks today generate extraordinary volumes of data. Different stakeholders – operations, engineering, management, partners – need different views, different levels of abstraction, and different explanations. As a result, there is growing interest in conversational interfaces: interrogating AI in plain language and receiving clear explanations. “Why did latency increase on this service yesterday?” “What is the likely impact if this link fails?”
But again, the same risks apply. Every layer of interpretation introduces bias: from the data collected, to the model trained, to the phrasing of the question. Over time, if these layers are not anchored to transparent data, output quality degrades. Assumptions become embedded. Development decisions are influenced by conclusions that nobody can quite trace back to first principles.
In contrast, a well-designed dashboard shows what is measured, how it is measured, and how it is presented. Indeed, that was the key motivation for us when we introduced ‘dynamic hovercards’ into our probe visualisations – explanatory texts which display the significance of a metric and how it has been calculated.
Ultimately, our focus is on allowing different users to see the same underlying truth through different lenses. A core example of that is the VB440, which can take packet flow data and represent it as – for example – a LUFS meter for an audio engineer, a waveform scope for a camera painter, or timing path displays for a network engineer (to name but a few possibilities). The key is design: dashboards that are modular, customisable, and built from a clear understanding of each user group’s needs.
As a result, at Bridge, we hold a firm view that dashboards should – and will – remain an integral part of T&M, providing the stable, interpretable foundation on which AI assistance might rest.
The difference between intelligence and wisdom
None of this should be read as cynicism towards AI in T&M. But enthusiasm needs to be tempered with responsibility. And that responsibility has two parts:
First, to maintain the robustness of the data sets that feed AI systems. Real-world metrics must remain real-world metrics – not facsimiles of facsimiles, or simulations trained on simulations.
Second, to ensure that AI does not erode professional understanding. Human expertise must remain the backstop – the means by which interpretations are interpreted.
Ultimately, AI can make networks smarter. It cannot make us wiser unless we insist on thinking critically alongside it. That, perhaps, was the most useful conclusion to emerge from those inevitable conversations over conference coffee: not whether AI will shape the future of T&M, but whether we are prepared to shape how it does so.