3G (3rd generation mobile technology) networks of the major telecommunication companies are due to shut down over the next few years. This includes Telstra, whose network is now in its sunset phase and due to close in June 2024.
This will mean the end of 3G for Internet of Things (IoT) deployments, which will need to migrate to either LPWAN (Low-Power, Wide-Area Network) technologies or new-generation cellular mobile, depending on the use case.
As pointed out in the article Why you need to migrate your devices now!, that does not give a lot of time. If you have 15,000 devices in the field, you need to be replacing 30 devices per day if you start tomorrow; more if you take longer to commence your project.
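As a quick back-of-the-envelope check (the start date below is illustrative, not from the original article), the required replacement rate is just fleet size divided by days remaining:

```python
from datetime import date

def devices_per_day(fleet_size: int, start: date, shutdown: date) -> float:
    """Average number of devices to replace per day to finish before shutdown."""
    days_remaining = (shutdown - start).days
    return fleet_size / days_remaining

# Illustrative figures: 15,000 devices, starting about 500 days before
# the June 2024 closure.
rate = devices_per_day(15_000, date(2023, 2, 10), date(2024, 6, 30))
print(f"{rate:.0f} devices per day")  # roughly 30 per day
```

Every month of delay pushes that daily rate up, which is the point of the article.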
There are three main options for migration, in two categories:
NB-IoT (Narrow-Band Internet-of-Things)
Cat-M1 (Category M1), also known as LTE-M (Long Term Evolution, Category M)
4G LTE (4th Generation) mobile
This post will explore those options in a bit more detail, as well as what other alternatives there might be. 5G NR (5th Generation New Radio) does not yet have wide enough coverage to be a viable option for IoT in most cases.
If this seems a bit overwhelming, given the short time frames and what you need to do, then you can also approach our consulting services, Telstra Purple, for advice and help.
Kubernetes is not really designed for a single server (it is great for scaling and enterprise systems), and although it was good experience learning how to set it up on IPv6, the overhead was too much and I eventually ended up with a crashed blog.
I'm still running IPv6 only, but with a much simpler set up.
This consists of docker, configured to run with IPv6, with docker-compose to run the different components and systems.
On my server there are currently three instances of WordPress for different websites, and three corresponding databases, as well as a Matrix Synapse server and plugins.
Read on for my notes on the initial setup of the server with IPv6 and connectivity testing, including addressing schemes, Docker configuration, IPv6 network address translation, and the Neighbor Discovery Protocol proxy daemon (ndppd).
The open source gateway runs a variant of OpenWRT and the latest version supports a range of LoRaWAN features including Basic Station. You can use it for a private network or set it up with a community as I did for The Things Network (TTN).
Read on for details of how easy it was to set it up securely.
I often hear people talk about a "waterfall" software development process. But a waterfall process doesn't really exist.
Well, there are software projects that meet the description of waterfall: the post-publication name given to the project structure described in Winston Royce's 1970 paper as the wrong way to develop software.
But there aren't any published methodologies, processes, books, tutorials, courses, tools, or certifications for waterfall. Because it isn't really a thing you should do.
For any modern dotnet system, distributed tracing is already built into the default web client, server, and other operations.
You can see this with a basic example by configuring your logging to display the correlation identifier. Many logging systems, such as Elasticsearch, display the correlation by default. The identifiers can also be passed across messaging, such as Azure Service Bus.
Logging has always been a useful way to understand what is happening within a system and for troubleshooting. However, modern systems are very complex, with multiple layers and tiers, including third party systems.
Trying to understand what has happened when a back-end service log reports an error, and then correlating that to the various messages and front-end actions that triggered it, requires some kind of correlation identifier.
This problem has existed for as long as we have had commercial websites, over 20 years of my career, and has been addressed with various custom solutions.
Finally, in 2020, a standard was created for the format of the correlation identifier and how to pass the values: W3C Trace Context. In a very short time, all major technology providers have implemented solutions.
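The traceparent header defined by W3C Trace Context is simple enough to generate and parse in any language. A minimal Python sketch (the helper names are my own, not from any library):

```python
import secrets

def make_traceparent() -> str:
    """Build a W3C Trace Context traceparent header:
    version (00) - 16-byte trace-id - 8-byte parent-id - flags (01 = sampled)."""
    trace_id = secrets.token_hex(16)   # 32 hex chars, shared by the whole trace
    parent_id = secrets.token_hex(8)   # 16 hex chars, identifies this span
    return f"00-{trace_id}-{parent_id}-01"

def parse_traceparent(header: str) -> dict:
    """Split a traceparent header into its four dash-separated fields."""
    version, trace_id, parent_id, flags = header.split("-")
    return {"version": version, "trace_id": trace_id,
            "parent_id": parent_id, "flags": flags}

header = make_traceparent()
print(header)  # e.g. 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
```

The trace-id stays constant across every hop in a request, while each participant generates a new parent-id for its own span; that shared trace-id is what lets the logs be correlated.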
The examples below show how this is already implemented in modern dotnet.
The team recently took some core bits out of a project they are working on with code-first Azure Digital Twins and released them as an open source library, so I thought I would share an initial look at the project.
Why code first? Using a code first approach can make accessing Digital Twins easier for developers. They can use their native programming language and tools to develop their models, without having to learn the intricacies of DTDL (Digital Twins Definition Language) or the REST APIs for interacting with Azure Digital Twins.
In the past I have leaned towards Azure CLI, as I found ARM templates a bit cumbersome, informed also by my previous experience of migration versus desired-state approaches to database deployments. With Bicep being promoted as a lighter-weight alternative, I thought I would compare the Microsoft alternatives.
Having now revisited the options, I still prefer scripting, but think I will switch more to PowerShell, particularly as it makes it easier to follow the tagging and naming guidelines.
For incremental development or changing environments, use Azure PowerShell scripts. They allow easy manipulation of parameters, and a migration/scripted approach can handle changes that a desired state/template approach cannot.
If you are already heavily invested in an alternative scripting system, e.g. Bash, then Azure CLI would be easier to use.
If you have relatively stable infrastructure, such as a preset development environment or sample/demo code, that you want to repeatedly tear down and recreate identically, then Bicep offers a nicer syntax than raw ARM templates. The deployments are viewable in the Azure portal, but templates do have some limitations compared to scripting.
Having a background in statistics, I like evaluating the mechanics of roleplaying game systems. I have previously blogged about how 3d6 is not less swingy than d20, comparing different types of systems at a high level.
This post is a more detailed dive into Fate Core (the 4th edition), which has a bell-curve dice result distribution.
Fate Core is an open source roleplaying game, available under both OGL and Creative Commons licences, with several published books (available both printed and as nicely formatted PDFs). The Fate Accelerated Edition (FAE) version is a short (only 40 pages) and lightweight variant of the system, while the recent Fate Condensed is a streamlined version of the full system with clarifications and minor updates, such as a safety tools section. (All are considered part of the same edition.)
There are several highly rated settings that use different versions of Fate, including Dresden Files, Fate of Cthulhu, Diaspora, and Spirit of the Century.
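Fate resolves actions with four Fudge dice (4dF), each showing -1, 0, or +1, and the resulting bell curve is small enough to compute exhaustively rather than simulate. A quick Python sketch:

```python
from collections import Counter
from itertools import product

# Each Fudge die shows -1, 0, or +1; 4dF gives 3**4 = 81 equally likely outcomes.
outcomes = Counter(sum(dice) for dice in product((-1, 0, 1), repeat=4))

for total in range(-4, 5):
    count = outcomes[total]
    print(f"{total:+d}: {count:2d}/81 = {count / 81:6.2%}")
```

The distribution peaks at 0 (19/81, about 23.5%), with the extremes +4 and -4 each occurring only 1 time in 81, which is what gives Fate its strongly centred results.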
What is blockchain? What makes a good blockchain? Is blockchain just a buzzword? Is a blockchain really trustless?
Let's start with the last one. The word trustless is often used to describe blockchain; however, it is not really trustless. It would be better described as decentralised trust, or systemic trust (trust in the system).
Rather than trust an individual (person or organisation) the trust is provided by the very system itself.
You do need to trust the transaction processors (aka miners) for the chain, and also the programmers who wrote the software being used. However, you don't need to trust any individual processor or developer, only that the majority are honest. A characteristic of blockchain is that an individual bad actor, controlling only a small portion of the network, cannot take the network down.
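The tamper-evidence that underpins this systemic trust comes from each block embedding the hash of its predecessor, so rewriting history means recomputing every later block before the honest majority notices. A toy Python sketch (grossly simplified: no proof of work, consensus, or signatures):

```python
import hashlib
import json

def make_block(data: str, prev_hash: str) -> dict:
    """A block commits to its data and to the hash of the previous block."""
    body = {"data": data, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_is_valid(chain: list) -> bool:
    """Verify every link: each block's prev_hash must match the prior block's hash."""
    return all(block["prev_hash"] == prev["hash"]
               for prev, block in zip(chain, chain[1:]))

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("tx: A pays B", genesis["hash"])]
chain.append(make_block("tx: B pays C", chain[-1]["hash"]))

print(chain_is_valid(chain))        # True
# Tamper with history: rewrite block 1, even recomputing its own hash...
chain[1] = make_block("tx: A pays C", chain[1]["prev_hash"])
print(chain_is_valid(chain))        # ...and the next link no longer matches: False
```

In a real network the attacker would also have to redo the proof of work for every subsequent block faster than the rest of the network, which is why controlling only a small portion of the network is not enough.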
A blockchain network can increase systemic trust through:
A large number of processors (e.g. public chain)
Multiple clients (i.e. multiple development teams)
Open source (allow visibility and the option to reject changes and go a different direction)
Only two major blockchains rate highly on these trust criteria: Bitcoin (BTC) and Ethereum (ETH), which goes a long way to explaining their strong support.
Some of the others are trying, but are not yet there, and worryingly some of the top cryptocurrencies are not decentralised at all, but actually controlled by a single central organisation.