
Let us look back at the era of IoT before IoT. In that era, the late 1990s, we were building private radio networks based on proprietary RF technology. The lessons and pitfalls of that time were something I did not expect to revisit, certainly not 25+ years later. Nowadays, we talk a lot about resilience and cybersecurity, and those are important, of course. But we often forget that if our systems are not properly operated and finely tuned, even the best hardware in the world will underperform. Most IoT systems don’t fail—they degrade.
In many IoT projects, there is a moment when everything appears to be working. Data is being delivered. Coverage maps look convincing. From a distance, the system feels like it has all the right attributes of a stable system. And yet, at some point, something starts to drift.
Packets are lost. Power consumption rises faster than the oil price. Latency increases. Some devices become unreliable, but not consistently enough to raise more than an eyebrow. The system does not fail outright. It degrades.
This is one of the more difficult aspects of IoT to deal with. Failures are not binary; they fluctuate. And what looks like a connectivity issue is sometimes something else entirely.
The illusion of stability
Connectivity in IoT is often treated as a checkbox. Once a device is “online,” the assumption is that the problem is solved. Yieeha.
But connectivity is not static at all. It is a dynamic condition shaped by radio environments, protocols, hardware characteristics, antennas (and cables), and deployment decisions over time.
Take LoRaWAN as an example. On paper, it is robust, energy-efficient, and well-suited for large-scale deployments. And it is. But it is also highly sensitive to how the network is actually built and operated. When you purchase connectivity from your fiber operator or mobile network operator, they are responsible for the delivery of the bits and bytes. If you use a LoRaWAN operator, you should rightfully expect the same. But if you are the operator—the one who built the system—the problem is yours. A network can be fully compliant with specifications and still underperform in the field, and unfortunately, I see that happening from time to time.
When the physics starts to matter
In real deployments, radio behavior becomes unavoidable. Interference in the 868 MHz band is a shared reality. Other LoRaWAN networks, alarm systems, and various short-range devices all coexist in the same spectrum. Over time, the noise floor can rise without any change in your own system. You are simply using a shared resource, so this has to be expected.
The result can be packet loss—not uniformly, but at certain times of day or in certain locations. From a system perspective, everything still looks “mostly fine.”
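To make that concrete, here is a back-of-the-envelope link-margin check. The sensitivity figures are ballpark numbers for a typical LoRa transceiver at 125 kHz bandwidth, and the RSSI and noise-rise values are invented for illustration:

```python
# Illustrative link-margin check: how a rising noise floor quietly
# erodes headroom without any change in your own system.
# Sensitivity values are approximate figures for a typical LoRa
# transceiver at 125 kHz bandwidth; treat them as assumptions.

SENSITIVITY_DBM = {7: -123, 8: -126, 9: -129, 10: -132, 11: -134, 12: -136}

def link_margin_db(rssi_dbm, sf, noise_rise_db=0.0):
    """Remaining margin after interference raises the effective noise floor."""
    return rssi_dbm - SENSITIVITY_DBM[sf] - noise_rise_db

# A node received at -118 dBm on SF9 has 11 dB of margin in a quiet band,
quiet = link_margin_db(-118, 9)              # 11 dB
# but an 8 dB rise in the local noise floor leaves only 3 dB, and normal
# fading can now push individual packets below the floor at certain times.
noisy = link_margin_db(-118, 9, noise_rise_db=8)   # 3 dB
```

Nothing in the device configuration changed between the two lines; only the shared spectrum did.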
Antenna design and placement introduce another layer of complexity. High-gain omnidirectional antennas can extend coverage, but they also narrow the vertical radiation pattern. Nodes located close to the gateway may fall into what is effectively a dead zone. In trying to optimize reach, you unintentionally degrade local reliability.
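A quick geometry sketch shows how this dead zone arises. The mast height, node distance, and beamwidth below are hypothetical, but representative of a high-gain collinear antenna on a rooftop:

```python
import math

# Rough geometry behind the "dead zone under the gateway" effect.
# Assumed numbers: gateway antenna mounted 25 m up, node 30 m away at
# ground level, and a high-gain collinear antenna whose vertical
# radiation pattern covers roughly +/-10 degrees around the horizon.

def elevation_angle_deg(height_m, distance_m):
    """Angle from the gateway antenna's horizon down to the node."""
    return math.degrees(math.atan2(height_m, distance_m))

angle = elevation_angle_deg(25, 30)   # ~39.8 degrees below the horizon
in_main_lobe = angle <= 10            # False: the node sits under the beam
```

A node several kilometres away sits near the horizon and inside the main lobe; the one across the street does not.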
Even reflections and multipath effects from buildings and metal structures can shift signal behavior in ways that are difficult to predict—and even harder to monitor continuously.
Capacity is not just about bandwidth
As networks scale, another dimension emerges: capacity.
In LoRaWAN, capacity is governed by spreading factor, payload size, transmission interval, and regulatory constraints such as duty cycle. These are not abstract parameters. They directly influence how long each transmission occupies the air.
Higher spreading factors increase range, but they also dramatically increase airtime. This means fewer devices can communicate effectively within the same network. You need to find a balance.
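The penalty can be quantified with the standard LoRa time-on-air calculation, as given in Semtech's transceiver documentation. The defaults below assume typical EU868 settings: 125 kHz bandwidth, coding rate 4/5, explicit header, CRC enabled, 8-symbol preamble:

```python
import math

def lora_time_on_air_ms(payload_bytes, sf, bw_hz=125_000, cr=1,
                        preamble=8, explicit_header=True, crc=True):
    """LoRa time on air in milliseconds (Semtech SX127x formula).

    cr=1 means coding rate 4/5. Low data rate optimization is applied
    for SF11/SF12 at 125 kHz, as LoRaWAN mandates.
    """
    t_sym = (2 ** sf) / bw_hz * 1000.0                # symbol time, ms
    de = 1 if (bw_hz == 125_000 and sf >= 11) else 0  # low data rate opt.
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + n_payload * t_sym

# A 20-byte payload: ~57 ms at SF7 versus ~1319 ms at SF12,
# roughly a 23x difference in channel occupancy per message.
sf7_ms = lora_time_on_air_ms(20, 7)    # ~56.6 ms
sf12_ms = lora_time_on_air_ms(20, 12)  # ~1318.9 ms
```

The same 20 bytes of sensor data occupy the channel 23 times longer at SF12, which is exactly why an entire network cannot run at maximum range settings.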
If Adaptive Data Rate (ADR) is misconfigured—or not used at all—devices may continue transmitting with unnecessarily high spreading factors. The network still functions, but efficiency drops. Airtime increases. Collisions become more likely.
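Duty-cycle regulation turns that airtime directly into a message budget. Assuming the EU868 1% limit (36 seconds of airtime per hour on a sub-band) and representative time-on-air figures for a 20-byte payload (~57 ms at SF7, ~1.32 s at SF12), a device that ADR never steps down from SF12 gets a fraction of the throughput:

```python
# Back-of-the-envelope duty-cycle budget, assuming the EU868 1% limit:
# 1% of an hour is 36 seconds of permitted airtime per sub-band.

DUTY_CYCLE = 0.01
BUDGET_MS_PER_HOUR = 3_600_000 * DUTY_CYCLE   # 36 000 ms

def max_messages_per_hour(toa_ms):
    """Upper bound on uplinks per hour for a given time on air."""
    return int(BUDGET_MS_PER_HOUR // toa_ms)

# Representative time-on-air values for a 20-byte payload (assumptions):
at_sf7 = max_messages_per_hour(56.6)      # 636 messages/hour
at_sf12 = max_messages_per_hour(1318.9)   # 27 messages/hour
```

A device stuck at SF12 is limited to a couple of dozen uplinks per hour before it is legally silenced, while consuming far more of the shared channel per message.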
The hidden cost of minor issues
What makes these problems particularly challenging is that they rarely appear as critical failures. Instead, they accumulate.
A few lost packets lead to retransmissions. Retransmissions increase airtime. Increased airtime reduces overall network capacity. Devices stay active longer, consuming more energy. Battery life shortens. Maintenance intervals shrink.
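A rough energy model illustrates the cascade. Every number here is an assumption for a hypothetical battery-powered node transmitting at a long-range setting; the point is the trend, not the absolute values:

```python
# Hedged sketch of how retransmissions eat battery. Assumed figures for
# a hypothetical node: 2400 mAh battery, 40 mA draw while transmitting,
# 10 uA sleep current, one ~1.32 s uplink (SF12-class airtime) every
# 10 minutes, and each lost packet retransmitted once.

BATTERY_MAH = 2400
TX_MA, SLEEP_MA = 40.0, 0.010
TOA_H = 1.32 / 3600          # airtime per transmission, in hours
MSGS_PER_DAY = 24 * 6        # one uplink every 10 minutes

def battery_years(retransmission_rate):
    """Estimated battery life for a given fraction of packets resent."""
    tx_per_day = MSGS_PER_DAY * (1 + retransmission_rate)
    mah_per_day = tx_per_day * TOA_H * TX_MA + 24 * SLEEP_MA
    return BATTERY_MAH / mah_per_day / 365

clean_link = battery_years(0.0)   # ~2.8 years
lossy_link = battery_years(0.5)   # ~1.9 years when half the packets resend
```

Nearly a year of battery life disappears without a single device ever reporting a failure, and at fleet scale that difference is measured in truck rolls.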
At scale, this becomes significant. What started as a marginal radio issue evolves into a cost problem, an operational problem, and ultimately a sustainability problem. More battery replacements. More site visits. More hardware cycles.
Beyond connectivity
If there is one takeaway, it is this: connectivity in itself is not the goal. I know that many connectivity experts preach a certain technology. I don’t. Reliable system behavior over time is what we should strive for.
That requires moving beyond a religion-based fixation on a specific wireless technology—there is no one-size-fits-all. When you have selected your technology of choice, think in terms of system dynamics. Understand how design decisions in one layer affect outcomes in another. Anticipate how the environment will evolve after deployment—not just at installation.
Take ownership.
Because in the absence of end-to-end responsibility, IoT systems tend to drift. Not dramatically, but gradually. Quietly. Until the gap between expectation and reality becomes too large to ignore.
In many ways, the most interesting IoT challenges are not about getting things to work. They are about understanding why they stop working as expected—and better yet, planning so that such situations do not occur.
That is where the real engineering begins.

