Running an IPv6-only host — redux (11 min read)

I have previously blogged about why you should consider IPv6-only hosting, and about setting up apps on Kubernetes with IPv6 to run my WordPress blog.

Kubernetes is not really designed for a single server (though it is great for scaling and enterprise systems), and although it was a good experience learning how to set it up on IPv6, the overhead was too much and I eventually ended up with a crashed blog.

I'm still running IPv6 only, but with a much simpler setup.

This consists of Docker, configured to run with IPv6, with docker-compose used to run the different components and systems.

If you are planning on setting up your own server, read my notes on Securing your IPv6-only docker server before starting.

On my server there are currently three instances of WordPress for different websites, with three corresponding databases, as well as a Matrix Synapse server and plugins.

Read on for my notes on the initial setup of the server with IPv6 and connectivity testing, including addressing schemes, Docker configuration, IPv6 network address translation, and the Neighbor Discovery Protocol proxy daemon (ndppd).
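As a minimal sketch, enabling IPv6 support in the Docker daemon comes down to a couple of settings in /etc/docker/daemon.json; the ULA prefix below is an illustrative assumption, not my actual addressing scheme:

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:1234:5678::/64"
}
```

With a private (ULA) prefix like this you then need IPv6 network address translation for outbound traffic, or an NDP proxy if you instead route part of your provider-assigned global prefix; the full post covers both.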

Continue reading Running an IPv6-only host — redux (11 min read)

Unboxing the Dragino LPS8 LoRaWAN gateway (8 min read)

I recently got a Dragino LPS8 LoRaWAN gateway and set it up on my network. The LPS stands for LoRaWAN Pico Station.

The open source gateway runs a variant of OpenWrt, and the latest version supports a range of LoRaWAN features, including Basic Station. You can use it for a private network or set it up with a community, as I did for The Things Network (TTN).


Read on for details of how easy it was to set it up securely.

Continue reading Unboxing the Dragino LPS8 LoRaWAN gateway (8 min read)

Waterfall does not exist (10 min read)

I often hear people talk about the "waterfall" process in software development. But a waterfall process doesn't really exist.

Well, there are software projects that meet the description of waterfall — the post-publication name given to the project structure described in Winston Royce's 1970 paper (old link) as the wrong way to develop software.

But there aren't any published methodologies, processes, books, tutorials, courses, tools, or certifications for waterfall. Because it isn't really a thing you should do.

There are many specific methodologies and processes for software development that are iterative, agile, or product-based (Scrum, Unified Process, eXtreme Programming (XP), Crystal, PRINCE2, etc.).

But there are no such processes for waterfall. Take a look and search for yourself. If you do find one, let me know, because they don't appear to exist.

See, it looks like a waterfall. Gantt chart created in ProjectLibre.
Continue reading Waterfall does not exist (10 min read)

Modern distributed tracing with dotnet (5 min read)

For any modern dotnet system, distributed tracing is already built into the default web client, server, and other operations.

You can see this with a basic example, by configuring your logging to display the correlation identifiers. Many logging systems, such as Elasticsearch, display correlation by default. The identifiers can also be passed across messaging, such as Azure Service Bus.

Logging has always been a useful way to understand what is happening within a system and to troubleshoot problems. However, modern systems are very complex, with multiple layers and tiers, including third party systems.

Trying to understand what has happened when a back-end service log reports an error, and then correlating that to the various messages and front-end actions that triggered it, requires some kind of correlation identifier.

This problem has existed ever since we have had commercial websites (over 20 years of my career), with various custom solutions along the way.

Finally, in 2020, a standard was created for the format of the correlation identifiers, and for how to pass the values: W3C Trace Context. In a very short amount of time, all major technology providers have implemented support.
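The traceparent header carries a format version, a 16-byte trace id shared across the whole distributed operation, an 8-byte id for the current span, and trace flags. An illustrative (made-up) value:

```
traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
```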

The examples below show how this is already implemented in modern dotnet.
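As a small taste (a minimal console sketch using the Microsoft.Extensions.Logging packages, not the full examples from the post), the built-in logger factory can be told to stamp the W3C identifiers onto every log entry:

```csharp
using System.Diagnostics;
using Microsoft.Extensions.Logging;

using var loggerFactory = LoggerFactory.Create(builder =>
{
    // Stamp W3C Trace Context values onto the scope of every log entry
    builder.Configure(options => options.ActivityTrackingOptions =
        ActivityTrackingOptions.TraceId
        | ActivityTrackingOptions.SpanId
        | ActivityTrackingOptions.ParentId);
    builder.AddSimpleConsole(console => console.IncludeScopes = true);
});

var logger = loggerFactory.CreateLogger("Demo");

// ASP.NET Core and HttpClient start and propagate Activities automatically;
// one is started manually here so the sketch is self-contained.
using var activity = new Activity("Demo.Operation").Start();
logger.LogInformation("Hello, trace id {TraceId}", activity.TraceId);
```

In a real web application you don't even need the manual Activity: the incoming traceparent header is picked up automatically and flows through to the logs.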

Continue reading Modern distributed tracing with dotnet (5 min read)

Code First Azure Digital Twins — a first look (6 min read)

Telstra has a large Internet of Things portfolio, with Digital Twins one of the focus areas for Telstra Purple professional services. All major providers are supported, including Azure Digital Twins.

The team recently took some core bits out of a project they are working on with code-first Azure Digital Twins and released it as an open source library, so I thought I would share an initial look at the project.

Why code first? Using a code first approach can make accessing Digital Twins easier for developers. They can use their native programming language and tools to develop their models, without having to learn the intricacies of DTDL (Digital Twins Definition Language) or the REST APIs for interacting with Azure Digital Twins.

The library can be found at https://github.com/telstra/DigitalTwins-CodeFirst-dotnet
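To give a flavour of the approach (a hypothetical sketch; the class and property names are my own illustration, not the library's actual API), the idea is that plain dotnet classes stand in for hand-written DTDL:

```csharp
// Hypothetical illustration only; these types are not taken from the
// telstra/DigitalTwins-CodeFirst-dotnet API.
public class Building
{
    public string? Name { get; set; }
    public double FloorArea { get; set; }
}

public class Floor
{
    public int Level { get; set; }

    // Relationships between twins can be expressed as plain references
    public Building? Building { get; set; }
}
```

The library then takes care of generating the corresponding DTDL models and making the REST calls, which is exactly the plumbing developers would otherwise have to learn.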

Continue reading Code First Azure Digital Twins — a first look (6 min read)

Azure CLI vs PowerShell vs ARM vs Bicep (14 min read)

A key component of DevSecOps is infrastructure-as-code, and if you are using Azure there are multiple ways to specify what you want.

Microsoft provides Azure PowerShell, the Azure CLI, as well as both Azure Resource Manager (ARM) and the newer Bicep templates. There are also third party (and cross-cloud) solutions such as Terraform and Pulumi.

In the past I have leaned towards the Azure CLI, as I found ARM templates a bit cumbersome (informed by my previous experience with migrations vs desired state for database deployments). With Bicep being promoted as a lighter weight alternative, I thought I would compare the Microsoft options.

Having now revisited the options, I still prefer scripting, but think I will switch more to PowerShell, particularly as it makes it easier to follow the tagging and naming guidelines.

My recommendations:

  • For incremental development or changing environments, use Azure PowerShell scripts. They allow easy manipulation of parameters, and a migration/scripted approach can handle changes that a desired state/template approach cannot.
    • If you are already heavily invested in an alternative scripting system, e.g. Bash, then Azure CLI would be easier to use.
  • If you have relatively stable infrastructure, such as a preset development environment or sample/demo code, that you want to repeatedly tear down and recreate identically, then Bicep offers a nicer syntax than raw ARM templates. The deployments are viewable in the Azure portal, but templates do have some limitations compared to scripting.
  • In either case, follow the Azure Cloud Adoption Framework naming guidelines, allowing for unique global resources, as well as the associated tagging guidelines (see the sketch below).
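For illustration, a minimal Azure PowerShell sketch following those naming and tagging conventions (the workload name, location, and tag values are assumptions for the example):

```powershell
# Cloud Adoption Framework style name: rg-<workload>-<environment>-<instance>
$rgName = 'rg-myapp-dev-001'

# Create the resource group with standard tags applied
New-AzResourceGroup -Name $rgName -Location 'australiaeast' -Tag @{
    Environment = 'Dev'
    Owner       = 'sgryphon'
    CostCentre  = 'Demo'
}
```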

Example code is available on GitHub at https://github.com/sgryphon/azure-deployment-examples

Continue reading Azure CLI vs PowerShell vs ARM vs Bicep (14 min read)

RPG Mechanics: Fate Core (11 min read)

Having a background in statistics, I like evaluating the mechanics of roleplaying game systems. I have previously blogged about how 3d6 is not less swingy than d20, comparing different types of systems at a high level.

This post is a more detailed dive into Fate Core (the 4th edition), which has a bell-curve based dice result distribution.
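That bell curve comes from rolling four Fudge dice (4dF), each showing -1, 0, or +1, and summing. A quick sketch of the distribution:

```latex
% Ways to roll each total s on 4dF: the coefficient of x^{s+4} in (1+x+x^2)^4,
% out of 3^4 = 81 equally likely rolls.
% s:      -4   -3   -2   -1    0   +1   +2   +3   +4
% ways:    1    4   10   16   19   16   10    4    1
P(S = s) = \frac{[x^{\,s+4}]\,(1 + x + x^2)^4}{81}
```

So a flat 0 is the single most likely result at 19/81 (about 23%), and results of ±3 or beyond come up only about 12% of the time.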

Fate Core is an open source roleplaying game, available under both OGL and Creative Commons licences, with several published books (available both printed and as nicely formatted PDFs). The Fate Accelerated Edition (FAE) is a short (only 40 pages) and lightweight variant of the system, and the recent Fate Condensed is a streamlined version of the full system with clarifications and minor updates, such as a safety tools section. (All are considered part of the same edition.)

There are several highly rated settings that use different versions of Fate, including Dresden Files, Fate of Cthulhu, Diaspora, and Spirit of the Century.

Continue reading RPG Mechanics: Fate Core (11 min read)

Evaluating blockchain networks (5 min read)

What is blockchain? What makes a good blockchain? Is blockchain just a buzzword? Is a blockchain really trustless?

Let's start with the last one. The word trustless is often used to describe blockchain; however, it is not really trustless. It would be better described as decentralised trust or systemic trust (trust in the system).

Rather than trust an individual (person or organisation), the trust is provided by the very system itself.

You do need to trust the transaction processors (aka miners) for the chain, and also the programmers that wrote the software being used. However, you don't need to trust any individual processor or programmer, only that the majority are truthful. A characteristic of blockchain is that an individual bad actor, controlling only a small portion of the network, cannot take the network down.

A blockchain network can increase systemic trust through:

  • A large number of processors (e.g. public chain)
  • Multiple clients (i.e. multiple development teams)
  • Open source (allow visibility and the option to reject changes and go a different direction)

Only two major blockchains rate highly on these trust criteria: Bitcoin (BTC) and Ethereum (ETH). There are good reasons why these two have such strong support.

Some of the others are trying, but are not there yet, and worryingly some of the top cryptocurrencies are not decentralised at all, but are actually controlled by a single central organisation.

Continue reading Evaluating blockchain networks (5 min read)

Crashed blog… now restored (1 min read)

So, I pushed the single-server Kubernetes cluster that I was running my blog on a little too far, and it crashed into a bit of a heap. The pods running the different sites, including this blog, failed, and the underlying database got corrupted.

It has been down for a few weeks now. Initially I thought it was just a server issue and rebooted. When it didn't come up, I did little bits of investigation over the following weeks, just a few hours at a time, to figure out the issue.

I managed to work out how to restore the database and get it working, but the server was not stable. It would quickly crash, and trying to activate more than one site would just cause problems.

Kubernetes is quite complicated, and there is a lot of overhead for a single server. It was still a good exercise to understand the complexities of deploying Kubernetes on IPv6.

Now, deploying multiple services via containers is still a good approach, with Kubernetes simply a way to orchestrate and manage a large number of containers. So I can pretty much run the same containers, just directly (instead of inside Kubernetes).

As you can see from this blog entry, my services are now back up and running.

There was still the complexity of running on IPv6 only, which I should probably write up in more detail, but for now a lot of it was based on an article by Stefan Kleeschulte, https://medium.com/@skleeschulte/how-to-enable-ipv6-for-docker-containers-on-ubuntu-18-04-c68394a219a2

Scaling agile – a look at SAFe (10 min read)

One of the cornerstones of agile methods is delivering value – “working software over comprehensive documentation”, “production or it didn’t happen”.

To achieve this, when decomposing stories it is important to keep them independently valuable. Stories are only really finished when they are usable by the end customer.

To focus on progressing stories to completion, you want to minimise work in progress and use a strict stack rank of work – 1st, 2nd, 3rd, etc. (Rather than a general priority such as high/medium/low, where you end up with 20 high priority stories and none of them finished.)

However, a stack rank doesn't scale – if the number 1 project in your stack rank has 100 week-long tasks, you can't assign 100 people to the project and have it done in 1 week.

The solution is the proven strategy of divide-and-conquer. If project number 1 can accommodate 20 people, then you assign it 20 people (probably split into two teams), project 2 might have 10 people, project 3 has 15, and so on.

This means multiple agile teams that need to co-ordinate, while keeping the overall organisation agile.

The Scaled Agile Framework for Enterprise (SAFe) has an approach that coordinates multiple teams towards common goals, leaves individual teams enough room to be agile, and manages planning horizons to preserve the ability to respond to change.

First we will take a look at the planning process for teams, and how SAFe scales to multiple teams using Program Increments; then we will look at plans vs roadmaps, and at preserving team agility.

Continue reading Scaling agile – a look at SAFe (10 min read)