Elasticsearch-Logstash-Kibana (ELK) LoggerProvider for .NET Logging (4 min read)

Note: The Elasticsearch logger provider has been moved to the ECS DotNet project.

Find the latest version here: https://github.com/elastic/ecs-dotnet/blob/master/src/Elasticsearch.Extensions.Logging/ReadMe.md

The NuGet package is here: https://www.nuget.org/packages/Elasticsearch.Extensions.Logging/1.6.0-alpha1

To add the package to your project:
dotnet add package Elasticsearch.Extensions.Logging --version 1.6.0-alpha1

This ElasticsearchLoggerProvider, for Microsoft.Extensions.Logging, writes directly to Elasticsearch using the Elastic Common Schema (ECS), with full semantic logging of structured data from message and scope values.

To use, add the Essential.LoggerProvider.Elasticsearch package to your project:

PS> dotnet add package Essential.LoggerProvider.Elasticsearch

Then add the logger provider to your host builder, and the default configuration will write to a local Elasticsearch service:

using Essential.LoggerProvider;
using Microsoft.Extensions.Hosting;

// ...

Host.CreateDefaultBuilder(args)
    .ConfigureLogging((hostContext, loggingBuilder) =>
    {
        // With no other configuration, writes to a local Elasticsearch service
        loggingBuilder.AddElasticsearch();
    })

Once you have logged some events, open up Kibana (e.g. http://localhost:5601/) and define an index pattern for dotnet-* with the time filter @timestamp.

You can then discover the log events for the index. Some useful columns to add are log.level, log.logger, event.code, message, tags, and process.thread.id.

Structured message and scope values are logged as labels.* custom key/value pairs, e.g. labels.CustomerId.
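For example, a minimal sketch of a log call (the CustomerId placeholder and value are just for illustration):

using Microsoft.Extensions.Logging;

// ...

// The CustomerId value is captured semantically and indexed as labels.CustomerId
_logger.LogInformation("Processing order for customer {CustomerId}", customerId);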

Example: Elasticsearch via Kibana
Continue reading Elasticsearch-Logstash-Kibana (ELK) LoggerProvider for .NET Logging (4 min read)

Rolling file LoggerProvider for .NET Logging (1 min read)

I have just released version 1.0 of the Rolling File Logger Provider as part of Essential Logging on GitHub, a port of my .NET diagnostics library across to Microsoft.Extensions.Logging.

To use, add the Essential.LoggerProvider.RollingFile package to your project via NuGet:

dotnet add package Essential.LoggerProvider.RollingFile

Then reference the namespace, and add the logger provider during host construction:

using Essential.LoggerProvider;
using Microsoft.Extensions.Hosting;

// ...

Host.CreateDefaultBuilder(args)
    .ConfigureLogging((hostContext, loggingBuilder) =>
    {
        loggingBuilder.AddRollingFile();
    })
Continue reading Rolling file LoggerProvider for .NET Logging (1 min read)

Eth 2.0 state transition (4 min read)

There is a lot of activity going on building the Ethereum 2.0 beacon chain, including the .NET client I am working on, Nethermind.

The beacon chain consists of blocks and a progressive state. Blocks are generated, signed, and transmitted across the network, then applied to transition the state. The following diagram shows the main relationships.
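In code, the transition is roughly the following. This is a conceptual sketch with simplified stand-in types, not Nethermind's actual API; VerifyBlockSignature and ProcessBlock are placeholders for the spec's block processing:

// Simplified stand-ins for the spec types
class BeaconState { public ulong Slot; /* validators, balances, ... */ }
class BeaconBlock { public ulong Slot; /* header, randao, operations, ... */ }
class SignedBeaconBlock { public BeaconBlock Message; public byte[] Signature; }

BeaconState StateTransition(BeaconState state, SignedBeaconBlock signedBlock)
{
    // Advance the progressive state, slot by slot, up to the block's slot
    while (state.Slot < signedBlock.Message.Slot)
    {
        state.Slot++; // per-slot (and, at epoch boundaries, per-epoch) processing
    }
    // Verify the proposer signature, then apply the block's contents to the state
    VerifyBlockSignature(state, signedBlock);
    ProcessBlock(state, signedBlock.Message);
    return state;
}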

Continue reading Eth 2.0 state transition (4 min read)

Syslog Structured Data for Microsoft Extensions Logging (4 min read)

The first part of logging I have polished up for Microsoft.Extensions.Logging is structured data support, with a Syslog Structured Data package containing a component that renders as syslog RFC 5424 structured data.


To use the Syslog StructuredData component, install the nuget package:

dotnet add package Syslog.StructuredData

You can then use the structured data via BeginScope() on an ILogger:

using (_logger.BeginScope(new StructuredData
{
    Id = "origin", ["ip"] = ipAddress
}))
{
    // ...
}

For default logger providers that don't understand structured data, the ToString() method on the StructuredData object will render the data in RFC 5424 format. This format can still be easily parsed by log analyzers, although the surrounding context won't be a syslog message.
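For the scope above, the rendered value looks something like this (origin is one of the SD-IDs defined in RFC 5424; the IP value is just for illustration):

[origin ip="192.0.2.1"]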

Example output: Using the default console logger, with scopes and timestamp
Continue reading Syslog Structured Data for Microsoft Extensions Logging (4 min read)

Alpha of Essential Logging RollingFile (1 min read)

The new Microsoft.Extensions.Logging system has some improvements over the previous System.Diagnostics, with built-in support for dependency injection and semantic logging (although I tend to think a singleton-type pattern, like TraceSource, is better than cluttering up every constructor with a logger).

Strangely, however, Microsoft did not include a basic file logger; they have App Insights, and even a file logger that works only on Azure, but no basic file logger. I guess they thought that, between Serilog, NLog, log4net, and the rest, there were enough third-party options.

The only problem with these is that each of them is an entire logging system, so you end up going through one framework (Microsoft.Extensions.Logging) to get to another framework (e.g. NLog) before you end up at an actual logger (e.g. a file logger). Why two frameworks?

With the old .NET Framework I never understood this either, which is why I wrote a range of TraceListeners that each plugged directly into System.Diagnostics.

And finally I have started to port it across to Microsoft.Extensions.Logging, with an alpha release of Essential.Logging.RollingFile.

This won't be another framework, just a bunch of logger providers that plug into the provider system.

It is only alpha; it works -- I mostly just copied it across from Essential.Diagnostics and commented out the invalid parts, but the infrastructure is still in flux while I sort things out.

OneDrive on Ubuntu Linux (7 min read)

UPDATE: Thanks to a comment from @abraunegg about the OneDrive Client for Linux (https://abraunegg.github.io/), I installed and have been using that instead. It features full two-way sync and just runs in the background. I have set it up to sync both my personal OneDrive and my work account OneDrive, and it has been working great. If you mostly want OneDrive (and not the other features of rclone), then I would recommend OneDrive Client for Linux, https://abraunegg.github.io/

Verdict (so far): Not fully stable, but somewhat usable.

So, I decided to try using Linux (Ubuntu, now 19.10, although I started with 18.04 LTS) as the host operating system for my new laptop, an HP ZBook Studio x360 G5, as most of my client work is done in virtual machines anyway, which don't really care what the host is.

On my previous Windows machines, I have used OneDrive for cloud storage for a long time now, and it is really good to start up a brand new computer, sign in, and have all my files just appear over the next day or two.

Normally, OneDrive is the first thing I set up (it is easy), but this time I did a few other things, like setting up virtual machines for work, before looking for a OneDrive solution.

The solution I have so far is using rclone, and I thought I would document my setup.

rclone

After looking through several options for syncing OneDrive, some commercial, it looked like rclone was the most widely used.

There was a version available in the Ubuntu store, but it was a bit old, so I downloaded the latest (1.51 at the time) and installed it.

Setting up a remote to connect to my personal OneDrive was pretty straightforward with 'rclone config', following the wizard.

Syncing and copying

Basic operations for rclone are sync (one-way; mirrors deletes as well as new and edited files) and copy (one-way, additive).
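For example (remote and folder names here are just placeholders):

# One-way mirror: makes the destination match, deleting extra files
rclone sync OneDrive:Documents /home/(user)/Documents

# One-way copy: adds new and changed files, never deletes
rclone copy OneDrive:Documents /home/(user)/Documents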

Neither is really the solution I wanted, so looking further I found mounting, and then also the cache backend.

Mounting

'rclone mount' allows you to mount a remote as a local drive. It includes a bunch of virtual file system (VFS) caching options designed to improve compatibility with programs, allowing them to load files directly from the remote without having to worry so much about the lag.
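For example, a quick test mount with write caching turned on (the remote and mount point match the setup later in this post):

rclone mount OneDrive: /home/(user)/OneDrive --vfs-cache-mode writes -vv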

One drawback of mounting, however, is that the documentation says it only works while online.

The documentation did, however, mention a cache backend that could be used separately from, or in conjunction with, VFS caching, and that allowed offline upload.

To automatically mount, you can set up a systemd service; I tried a global service first (starting at machine boot), but then changed to a user service.

This means it connects to OneDrive when I sign in, and then disconnects on sign out; I only use it while signed in.

Cache backend

The cache backend is set up as a cache remote that points to a cloud remote.

From the features, it appears to be designed mostly for video caching solutions, as it has integration options for a Plex (video streaming) server.

I am not using it for that, but it did have an advanced option for offline uploading, which seemed like what I wanted.
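In rclone.conf, the resulting cache remote ends up looking something like this (the names and tmp_upload_path value match the setup steps below):

[OneDrive-cache]
type = cache
remote = OneDrive:/
tmp_upload_path = /home/(user)/.tmp-upload/OneDrive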

Verdict (so far)

It does work, but is nowhere near as stable as OneDrive on Windows, plus it has no real offline/two-way sync (yet).

  • OneDrive can only be accessed while online. If offline, you see nothing; it is simply a view to the remote file system.
  • File operations (copy to/from the mount) are fine, but some applications have trouble opening or creating files.
  • I have now increased the VFS cache mode to full, and still get problems; e.g. saving a new file from LibreOffice will display errors and the file will take a long time to appear.
  • I had trouble even opening files in LibreOffice until I increased the mount VFS caching from the default (none) to at least writes.
  • Applications, e.g. LibreOffice, will lock up on saving too often to be comfortable; the UI freezes, and you can't even save elsewhere. This seems to happen more often when using the backend cache, although it still occurs without it.
  • The "offline upload" of the cache backend is for uploading large files, e.g. movies, where you copy the file in while online, it inserts into the cache, and does the upload in the background. The documentation says it will continue across restarts, etc, although I haven't tested this. You still need to be online for the initial copy though.

My workaround so far for new files has been to create new documents in a local folder, close them, copy them across to OneDrive, and then open the file from there.

For crash-on-save, I just have to save more often and hope I don't lose too much.

When it freezes, the LibreOffice (or other) process becomes unresponsive, and in System Monitor sometimes shows as "Uninterruptible". Using the file explorer, the rest of OneDrive is okay, but the folder I am working in doesn't load.

To fix it, I have been stopping the rclone service, checking the status, manually cleaning up the mount point if there was an error (usually if it is frozen), and then restarting:

systemctl --user stop rclonemount-OneDrive.service
systemctl --user status rclonemount-OneDrive.service
fusermount -u /home/(user)/OneDrive
systemctl --user start rclonemount-OneDrive.service

Sometimes this will unfreeze apps, but sometimes they will crash. I've also had them crash and then the files recovered by LibreOffice, although I wouldn't rely on this.


Setup instructions

Set up a OneDrive remote, a caching remote, and then mount the cache.

1) Use 'rclone config' to set up a onedrive remote; I called mine 'OneDrive'

2) Test it works, e.g. 'rclone lsd OneDrive:'

3) Use 'rclone config' to set up a cache remote. I called mine 'OneDrive-cache', and just pointed it at the root 'OneDrive:/' (although the documentation recommends a subfolder, I just used the root on Windows before and I wanted it similar).

During config, select the advanced options and set up tmp_upload_path, in my case to point to '/home/(user)/.tmp-upload/OneDrive'

Note: Replace (user) with your actual user name, in the examples.

4) Test it works, e.g. 'rclone lsd OneDrive-cache:'

5) Create a mount point; I used '/home/(user)/OneDrive', but you could also use a more traditional location under /mnt if you wanted.

mkdir ~/OneDrive

6) Test mounting the cache works (this will run in the console; CTRL+C to end when you have tested it works):

rclone mount OneDrive-cache: /home/(user)/OneDrive -vv

Note that to get files to open properly I needed to tweak the VFS cache options (details below); the above is just to test it is working.

Also, the cache didn't add any features I needed and seemed to make the system less stable, so I have been trying both it and mounting OneDrive: directly.

7) Set up a systemd service to automatically mount the cache. I am familiar with using vi as my editor, but you can use something different if you want.

You need to create a user service entry for systemd:

vi /home/<user>/.config/systemd/user/rclonemount-OneDrive.service

[Unit]
Description=rclone mount for OneDrive:

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount OneDrive: /home/<user>/OneDrive -vv --config /home/<user>/.config/rclone/rclone.conf --vfs-cache-mode full
ExecStop=/bin/fusermount -u /home/<user>/OneDrive
Restart=on-abort

[Install]
WantedBy=default.target

You can use 'mount OneDrive-cache:' instead of 'mount OneDrive:' if you want to try the backend cache.

Note that you need full paths for commands and the config file.

8) Enable

systemctl --user enable rclonemount-OneDrive.service

9) Test that it starts

systemctl --user start rclonemount-OneDrive.service


Troubleshooting the systemd job

If the problem is with the job itself, e.g. a typo or issue with the command, you can check the output of the start attempt:

systemctl --user status rclonemount-OneDrive.service


Using a systemd mount

To keep things simple, I ran rclone directly as a systemd service, but there is also a helper script available to set it up as a mount dependency instead, i.e. rclone-OneDrive.mount


Backup sync for offline access

For situations where I might want to access files while offline, I also have a script that uses rclone to do a regular one-way sync of some key folders (those I would need for reference offline).
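A minimal sketch of such a script (the folder choices are just examples):

#!/bin/sh
# One-way sync of key folders for offline reference;
# local changes are not uploaded (and will be overwritten)
rclone sync OneDrive:Documents /home/(user)/OneDrive-offline/Documents
rclone sync OneDrive:Projects /home/(user)/OneDrive-offline/Projects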


Future

Continue testing the stability, whether using the backend cache is better or worse, and whether there are any other parameters I can tweak.

The rclone website also mentions two-way sync is planned for some time in the future, so that would be good to have.

IPv6 virtual networks on Azure (2 min read)

IPv6 support for Azure VNets is currently available in preview (https://azure.microsoft.com/en-us/updates/microsoft-adds-new-features-to-ipv6-support-for-azure-vnets/).

Most of it is available via the Azure Portal, but I found allocating an IP config to a network card had to be done via the shell.
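For example, adding an IPv6 IP configuration to an existing network interface looks something like this with the Azure CLI (resource names are placeholders, and the exact parameters may change while the feature is in preview):

az network nic ip-config create \
    --resource-group MyResourceGroup \
    --nic-name MyVmNic \
    --name ipv6config \
    --private-ip-address-version IPv6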

Here are the steps I did to test:

Continue reading IPv6 virtual networks on Azure (2 min read)

Blockchain Conference and some musings on Bitcoin use cases (4 min read)

So, I attended Agile Global/Knowledge Hut's Blockchain Conference in Brisbane today.

My highlight was Aleksander Svetski, both his scheduled talk on Money/Bitcoin and the fill in he did in the morning on Cryptoeconomics (one of the other speakers couldn't make it).

Some of the other talks I liked were Benjamin Hall's talk on payment modernisation, and Kristyn Hales' talk on regulation of capital raising via coins/tokens; Dr Adrian McCullagh's talk on smart contracts was also reasonably interesting.

But, it was Aleks' talk on the history of money and why money (aka Bitcoin) is blockchain's killer app, and projects to increase the network size/reach like Stashh that got me thinking 'but what are the use cases for Bitcoin right now?' -- should we start using it to buy our groceries, or is it only for criminals?

Some Bitcoin use cases

Buying groceries from the local shop, or your morning cup of coffee: Not really.

Right now, Australia has both a stable fiat currency and extensive electronic payment options. Until we get to the point where we distrust the government, Bitcoin is not very useful for local shopping.

Everyday shopping on the Internet: Not really.

We have plenty of reliable third-party payment options, such as PayPal, Mastercard, VISA, and others. Maybe for larger purchases, where customers are charged a transaction fee, Bitcoin has enough of an advantage in lower fees to be worth it, but right now I don't think it is really needed by the majority of people.

Note also that you need even greater trust in the merchant when paying by Bitcoin, as third-party payment mechanisms can be reversed if the merchant never delivers, whereas Bitcoin, like cash, can't.

Should merchants accept Bitcoin, even if there are few customers using it? Why not, if it doesn't cost anything.

Online shops could probably benefit from being able to transparently accept payment via Bitcoin (especially if fees turn out lower), but there may be some fixed costs in supporting it as a payment alternative. High transaction fees have seen it dropped by some providers, but initiatives like TravelByBit are a good step. If the cost of providing the additional payment mechanism is close enough to zero, then having as many merchants as possible accept Bitcoin, even if it is rarely used, is a good way to increase the network.

Funding causes censored by the government, or purchase of illicit items: Yes, useful.

This ranges from supporting causes like Wikileaks, where governments have put pressure on traditional transaction providers to deny them service, through to grey and black markets.

Full disclosure: the political party I support has a policy of decriminalisation and legalisation [https://www.ldp.org.au/drug_reform] of currently illicit drugs; under the current laws a lot of the harm caused by drugs is a direct result of prohibition, and in this respect the availability of Bitcoin and black/grey markets actually reduces the damage done.

The unbanked population: Yes.

I agree with Aleks that this is potentially one of the biggest growth areas. Many of the world's poor, despite impoverishment, have leapfrogged straight to mobile Internet; in the absence of an available banking sector, Bitcoin has an opportunity to be *the* banking solution for these people.

International travel: Sure.

This is an interesting scenario -- rather than constant currency exchange (along with the inability to get rid of coins), Bitcoin at airports (such as TravelByBit) makes some sense -- although existing credit cards already work fine for that (even with high exchange costs).

Large purchases: Makes sense.

It is probably a bit uncomfortable to walk around with $30,000 cash (a car), or even $3,000 cash (a purebred puppy), and even if you have a credit card with a high enough limit the fees may be too high (and margins too low) to be viable, especially if it is the sort of thing where you want to inspect and exchange on the spot (rather than do a bank transfer).

The usual solution for these sorts of things is a bank cheque, but Bitcoin could remove the hassle of having to visit a physical bank, if the fees are low enough.

Real-time payment systems, such as PayID that just launched in Australia, may intrude in this space (depending on the fees).

Settlement layer: Yes.

Sure, but not really relevant to the average user.

All up, in the first world there are some fairly narrow use cases where Bitcoin makes sense, such as supporting censored causes or large purchases, unless you want to start risking illicit activity; the big use case could be the unbanked.

For a list a bit more risque, see http://www.libertylifetrail.com/2016/04/04/the-top-use-cases-for-bitcoin/

Note that it explicitly excludes use cases that are clear violations of property rights or an initiation of violence; whilst Bitcoin, as a tool, could be used for such scenarios, they should not be considered valid.

Anyway, attending the conference has hopefully encouraged me to spend more time on doing learning and professional development in the Bitcoin/Blockchain space.

Structured logging with .NET Framework System.Diagnostics (6 min read)

Structured logging (also sometimes called semantic logging) is a useful addition to the software development toolkit.

The latest release (2.2.0, and some 2.1.728) of Essential.Diagnostics adds structured tracing capabilities to the .NET Framework System.Diagnostics. It integrates seamlessly with existing tracing, including from the .NET Framework, and includes both producer-side extensions (to include in your application) and trace listener changes (to integrate with structured tracing systems).

The key new and updated packages are:

While the packages can be used independently, using both a producer and consumer in combination multiplies the benefits.
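As a rough producer-side illustration, using only the base TraceSource API (not the package's own extension methods), with made-up source name and event details:

using System.Collections.Generic;
using System.Diagnostics;

// ...

var source = new TraceSource("MyApp.Orders");
// Pass structured properties as trace data; a structure-aware listener
// can index these as key/value pairs instead of a flat message string
source.TraceData(TraceEventType.Information, 1001,
    new Dictionary<string, object> { ["CustomerId"] = 42, ["Total"] = 99.95m });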

Continue reading Structured logging with .NET Framework System.Diagnostics (6 min read)

Versioning .NET Core in Visual Studio Team Services (8 min read)

I have always found it useful for applications to display their build version, and for libraries to have the build version in their properties. Relying on properties like the date (or file size) is always a bit risky.

.NET Core has embraced Semantic Versioning and at first glance appears to have a new way to specify version numbers.

It doesn't quite work to my full satisfaction, but luckily the older methods still work, so a basic GitVersion task in your build pipeline is pretty much all you need to get things working.
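For example, once GitVersion has calculated the semantic version from the repository, the build step can pass it through the standard MSBuild Version property (the version value here is just an example):

dotnet build /p:Version=1.2.3-beta.4

The Version property then flows through to the package version and the informational version stamped into the built assembly.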

Continue reading Versioning .NET Core in Visual Studio Team Services (8 min read)