Windows 8 Developer Preview Safe Mode (1 min read)

In Windows 8 the 'Safe Mode' option is not available on the F8 boot screen, prompting several guides on creating an additional boot entry using BCDEDIT and changing its properties via the Windows GUI -- not much use if your machine has already run into trouble.

Shift+F8 is the secret key combination that brings up the old boot options menu, with 'Safe Mode', but you need to hold it down during POST and keep holding it until the "Advanced Boot Options" screen appears; otherwise you can't time it right (unlike F8, or F10 below, which you just press at the start of the boot).

The Edit Boot Options menu, reached by pressing F10 during boot, is also still there, and it allows low-level control of all the boot options.

I couldn't track down a current reference for the options, but did find one for Windows XP / Server 2003, most of which still appear relevant: http://support.microsoft.com/kb/833721

To get the same as 'Safe Mode with Networking':

  1. Press F10 while booting
  2. You should get a text screen titled "Edit Boot Options", with a section "Edit Windows Boot options for: Windows Developer Preview"
  3. There should be an input area that already has "/NOEXECUTE=OPTIN" (in my case it also had "/HYPERVISORLAUNCHTYPE=AUTO", which I think is because I am running Hyper-V)
  4. Add "/SAFEBOOT:NETWORK" (Note: "/NOGUIBOOT" doesn't seem to work -- it still shows the loading screen -- so related options like "/SOS" didn't work either)
  5. Hit ENTER to boot
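
For comparison, the BCDEDIT approach from the guides mentioned above makes the same change persistent rather than one-off; a minimal sketch, run from an elevated command prompt and assuming the current boot entry is the one in trouble:

rem Boot into Safe Mode with Networking on every boot until the value is removed
bcdedit /set {current} safeboot network

rem Once the problem is fixed, remove the value to boot normally again
bcdedit /deletevalue {current} safeboot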

Safe Mode is important for an early Developer Preview like this, as driver issues are much more likely (I ran into problems trying to add the NVIDIA drivers for my Alienware M14x and had to boot into Safe Mode to uninstall them).

Comparison of logging frameworks (1 min read)

I added a comparison of the major logging/tracing frameworks for .NET to the CodePlex site for Essential.Diagnostics, to demonstrate how System.Diagnostics stacks up against log4net, NLog and the Enterprise Library.

I also added a performance comparison (the source code is in the CodePlex project if you want to verify the results).

Look at the results for yourself, but I think System.Diagnostics does okay -- and the extensions in Essential.Diagnostics (plus others such as Ukadc.Diagnostics and UdpPocketTrace) fill the gaps compared to log4net and NLog. Similarly on the performance side, all have very little overhead (NLog is the winner on overhead, but does relatively worse on actually writing the messages to a log file).

What about the Enterprise Library Logging Application Block? Well, I just don't think it does well compared to the others. Sure, it was a lot better than .NET 1.0 System.Diagnostics, but much of what it added arrived in .NET 2.0 System.Diagnostics (such as multiple sources). In some cases it is now worse than what is available in the standard framework -- e.g. no delayed formatting. This shows up in the performance figures, which indicate several orders of magnitude greater overhead than any of the other frameworks!
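
To illustrate what delayed formatting gives you, here is a minimal sketch of standard TraceSource usage (the source name "MyApp" and the event ids are arbitrary examples): the format arguments are only expanded into a string if the switch and filters decide the event will actually be written.

using System;
using System.Diagnostics;

class Program
{
    // Matches a <source name="MyApp"> entry in the application config file
    static readonly TraceSource trace = new TraceSource("MyApp");

    static void Main()
    {
        // String.Format is deferred until after the listener checks pass,
        // so a filtered-out event costs almost nothing
        trace.TraceEvent(TraceEventType.Information, 1001,
            "Started at {0}", DateTime.Now);
        trace.TraceEvent(TraceEventType.Verbose, 1002,
            "Expensive detail: {0}", "only formatted if Verbose is enabled");
        trace.Flush();
    }
}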

I'm obviously biased, but I really think that the best solution is to stick with the standard, out-of-the-box, System.Diagnostics, extended where necessary to fill any gaps (Essential.Diagnostics, etc, for additional listeners, filters & formatting).

P.S. Also check out my guidance on Logging Levels.

SharePoint 2010 logging levels (3 min read)

According to MSDN "in Microsoft SharePoint Foundation 2010 the preferred method of writing to the Trace Logs is to use the SPDiagnosticsServiceBase class" (http://msdn.microsoft.com/en-us/library/ff512746.aspx).

MSDN also provides some guidance on the trace and event log severity levels to use (http://msdn.microsoft.com/en-us/library/ff604025.aspx); however, the WriteEvent() and WriteTrace() methods use slightly different enums, the diagnostics logging configuration in Central Administration is slightly different again, and there is a third set of values accessed via the PowerShell command Get-SPLogEvent.

The table below shows the mapping of levels from these different sources.

Despite the complicated mapping, in general I think things go in the right direction: events are written to the event log and the trace log at the same time (at a high trace level), and the distinction between event logging and trace information is kept, with independently set thresholds.

EventSeverity                | EventLogEntryType | TraceSeverity    | ULSTraceLevel    | ULSLogFileProcessor.TraceLevel
-----------------------------|-------------------|------------------|------------------|-------------------------------
None = 0                     |                   | None = 0         | 0 (None)         | Unassigned = 0
ErrorServiceUnavailable = 10 | Error             |                  | 1                | Critical = 1 (or ErrorCritical)
ErrorSecurityBreach = 20     | Error             |                  | 1                | Critical = 1 (or ErrorCritical)
ErrorCritical = 30           | Error             |                  | 1                | Critical = 1 (or ErrorCritical)
Error = 40                   | Error             |                  | 1                | Critical = 1 (or ErrorCritical)
                             |                   |                  |                  | Exception = 4
                             |                   |                  |                  | Assert = 6
Warning = 50                 | Warning           |                  | 8                | Warning = 8
FailureAudit = 60            | Warning           |                  | 8                | Warning = 8
                             |                   | Unexpected = 10  | Unexpected = 10  | Unexpected = 10
                             |                   | Monitorable = 15 | Monitorable = 15 | Monitorable = 15
SuccessAudit = 70            | Information       |                  | 18               | Information = 18
Information = 80             | Information       |                  | 18               | Information = 18
Success = 90                 | Information       |                  | 18               | Information = 18
Verbose = 100                | Information       |                  | 18               | Information = 18
                             |                   | High = 20        | High = 20        | High = 20
                             |                   | Medium = 50      | Medium = 50      | Medium = 50
                             |                   | Verbose = 100    | Verbose = 100    | Verbose = 100
                             |                   | VerboseEx = 200  | VerboseEx = 200  | VerboseEx = 200
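
As a minimal sketch (from memory of the SharePoint Foundation API; the category name, event id, and messages are arbitrary examples) of where the two enums come into play:

using Microsoft.SharePoint.Administration;

class LoggingExample
{
    public static void LogSomething()
    {
        // An example category, combining a trace threshold and an event threshold
        var category = new SPDiagnosticsCategory(
            "My Custom Category", TraceSeverity.Medium, EventSeverity.Information);

        // Writes to the ULS trace log, using TraceSeverity
        SPDiagnosticsService.Local.WriteTrace(0, category, TraceSeverity.Medium,
            "Processing item {0}", "example");

        // Writes to the Windows event log (and the trace log), using EventSeverity
        SPDiagnosticsService.Local.WriteEvent(0, category, EventSeverity.Information,
            "Service started: {0}", "example");
    }
}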


[Rant] Why are IM/presence networks still fragmented? (3 min read)

Email (well, the majority of it, anyway) is one nice standardised SMTP network -- any mail server can send to any address, with servers finding each other by domain. To some degree client protocols are also standardised (IMAP or POP), and although there are exceptions (e.g. the Exchange protocol) the servers and clients still generally support the standard protocols as well.

Instant messaging, presence, and related networks are, however, still fragmented.

SharePoint 2010 design considerations (3 min read)

SharePoint 2010 introduces the ribbon bar as a central place for all the editing controls, which in earlier versions could be scattered across the page.

When designing custom master pages for SP 2010 you may want to visually integrate the ribbon bar into the design, as is done in the out-of-the-box v4.master (wiki & workspaces) and nightandday.master (publishing portal) pages.

To help with this, I have documented the sizes of the different elements so they can be included in the design.

Essential.Diagnostics library added to CodePlex (1 min read)

Essential.Diagnostics is a library of additional trace listeners and other bits for the .NET Framework System.Diagnostics trace logging.

It doesn’t change the way you write log statements (you still use TraceSource), but fits into the built-in extension points to add functionality (mostly additional trace listeners and filters).

From the project description:

“Essential.Diagnostics contains additional trace listeners, filters and utility classes for the .NET Framework System.Diagnostics trace logging. Included are colored console (that allows custom formats), SQL database (including a tool to create tables) and in-memory trace listeners, simple property and expression filters, activity and logical operation scopes, and configuration file monitoring.”
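
The listeners plug in through the standard configuration extension points; a minimal sketch of wiring up the colored console listener (the source name "MyApp" is an arbitrary example, and the listener is used here with its default settings):

<configuration>
  <system.diagnostics>
    <sources>
      <!-- Matches new TraceSource("MyApp") in application code -->
      <source name="MyApp" switchValue="Information">
        <listeners>
          <add name="coloredconsole" />
        </listeners>
      </source>
    </sources>
    <sharedListeners>
      <!-- One of the additional listeners from the Essential.Diagnostics assembly -->
      <add name="coloredconsole"
           type="Essential.Diagnostics.ColoredConsoleTraceListener, Essential.Diagnostics" />
    </sharedListeners>
  </system.diagnostics>
</configuration>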

The intention is to round-out System.Diagnostics with additional capabilities so that it can be compared to alternative 3rd party logging systems (NLog, log4net, Common.Logging, and even Enterprise Library).

Note that the library is intentionally much lighter than Enterprise Library; rather than an overhaul of the logging mechanism itself the library is mainly meant to provide additional trace listeners.

I put the source code up a few days ago, but only recently finished the packaging scripts for the downloads.

With the recent release of NuPack (since renamed NuGet), I have also spent a bit of additional time and set it up as a NuGet package.

Automatic assembly file version numbering in TFS 2010 (3 min read)

A colleague, Richard Banks, has previously blogged on this topic (http://www.richard-banks.org/2010/07/how-to-versioning-builds-with-tfs-2010.html), using custom activities and modifying the build workflow.

However, I also like the approach taken by John Robbins (http://www.wintellect.com/CS/blogs/jrobbins/archive/2010/06/15/9994.aspx).

John's approach is done entirely within the build file, using some of the new features of MSBuild 4.0, and therefore has no dependencies except what is already on a TFS build server.

Based heavily on John's work, I've created my own build targets that are based on the same core features but tailored to the way I like to work.

The build number is still based on the TFS build number; however, rather than a base year of 2001, I pass the base year in as a property. This avoids the problem that a build on 31 Dec 2012 could be number 1.0.51231.1, whereas the one on 1 Jan 2013 would be 1.0.101.1. By setting the base year to the year your project starts, you ensure your build numbers start low and increase.
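
To make the arithmetic concrete, here is a hypothetical C# sketch of the calculation (the real logic is implemented with MSBuild 4.0 property functions in the targets file; the class and method names here are invented for illustration):

using System;

static class BuildVersioning
{
    // TFS build numbers look like "20100924.5" (yyyyMMdd.revision);
    // e.g. GetFileVersion(1, 0, "20100924.5", 2010) returns 1.0.924.5
    public static Version GetFileVersion(
        int major, int minor, string tfsBuildNumber, int baseYear)
    {
        string[] parts = tfsBuildNumber.Split('.');
        int year = int.Parse(parts[0].Substring(0, 4));
        int monthDay = int.Parse(parts[0].Substring(4)); // "0924" -> 924
        int build = (year - baseYear) * 10000 + monthDay;
        int revision = int.Parse(parts[1]);
        return new Version(major, minor, build, revision);
    }
}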

(If you start reaching the limit of the build number range, after five years or so, you can always reset the base year after changing the minor version.)

I also have an option to read the major and minor versions from the existing AssemblyInfo.cs file, rather than having them set in the build script, which I find a more convenient way to change the version number.

Like John's script, I only update AssemblyFileVersion, which can help in a multiple-project situation where there are version dependencies on strong names (such as in config files).

However, rather than using a central shared version info file, I write the updated version number back into each project's AssemblyInfo.cs file.

The benefit of John's original approach is that you can have a separate project dependency that updates a central file for all builds, whereas with my approach you need to update each project's build (.csproj) file. On the other hand, the downside of the original approach is that you need to change the structure of projects to point to the shared file, whereas I keep them self-contained with the original AssemblyInfo.cs file (similar to Richard's approach).

I have only made changes for the C# project type, which is where I do most of my work, and so don't support all the project types that John's script does (VB.NET, C++, etc).

The only other output I have implemented is writing the version straight into a text file, which I find useful to copy to the output directory as an easy way to reference the build. This is particularly useful for web projects, which are deployed as a directory full of .aspx files (and you can also hit the Version.txt file from a browser).

To use the script, include the TFSBuildNumber.targets file somewhere in your solution or project, then copy the example lines from Example.csproj to the .csproj files for projects you want to version.

To do this from within Visual Studio, first you need to right click and unload the project, then right click and open the file for editing. After pasting in the code, save and close the file, then reload the project.

The end of your .csproj file should look something like this (alter the path depending on where you placed the TFSBuildNumber.targets file):


  ...
  <!-- To modify your build process, add your task inside one of the targets below and uncomment it.
       Other similar extension points exist, see Microsoft.Common.targets.
  -->
  <PropertyGroup>
    <TFSBaseBuildYear>2010</TFSBaseBuildYear>
  </PropertyGroup>
  <Import Project="..\TFSBuildNumber.targets" />
  <Target Name="BeforeBuild" DependsOnTargets="GetMajorMinorFromCSharpAssemblyInfoFile;WriteProjectTextAssemblyVersionFile;UpdateCSharpAssemblyInfoFile">
  </Target>
</Project>

One benefit of having the version numbers line up with the TFS build numbers is that given a particular DLL you can check the version number and then easily translate to the particular build it came from, e.g. (with a base year of 2010) the version number 1.0.924.5 comes from TFS build number 20100924.5.

If you want to use this in your project, download the TFSBuildNumber.targets file from the Essential Diagnostics project on CodePlex.

Australian SharePoint conference 2010 – day 2 (3 min read)

Content Deployment Bootcamp (Mark Rhodes)

Reasonable mix of slides and demos. Several of the slides had an interesting collection of quotes from the community on the new SP2010 content deployment -- most of them positive (or at least optimistic).

There was a brief mention of the problems with content deployment in MOSS2007 (particularly with variations), with the comment that despite initial issues MOSS2007 deployment actually got better with the various service packs and cumulative updates.

The demo was nothing special (it's relatively easy to get simple content deployment working in a demo/lab environment -- it was always real-world deployment, with WAN issues, etc, that had problems).

SharePoint 2010 web part development (Ishai Sagi, Brian Farnhill)

The presentation was mostly demos (there were slides, but they weren't used much) -- which actually worked quite well. The two presenters worked well together and packed in a heap of demos -- an AJAX web part built from scratch, a Silverlight web part, how to upgrade web parts from 2007 (including a demo of existing WSPs just working versus ones that need recompiling or updating), plus a demo of the new developer dashboard and how you can use it to troubleshoot web part performance issues.

They packed a lot of demos in, and it was fairly informative. A good session overall.

Information Management with SharePoint 2010 (Rai Umar, Gayan Peiris)

A business-focussed session with a good overview of the information management aspects of SP 2010.

SP 2010 has expanded many of the records management features to be applicable anywhere in the system (not just in a single records center). Another key component is the managed metadata service with hierarchical taxonomies.

There was a good slide showing how the different elements (MySites, digital asset management, etc) fit on a scale from managed taxonomies to open folksonomies, and in scope from team to enterprise-wide.

SharePoint 2010 Development - Business Connectivity Services (Adam Cogan)

The session title was not a good description of the content -- Adam went off on quite a tangent, and a significant portion was spent on Facebook integration (I think this is just Adam doing his own thing -- something I have seen before).

Anyway, after a brief demo of BCS connecting to Adventure Works as a power user (including pulling the result into Outlook), the majority of the session was about integrating SharePoint + Facebook.

Admittedly, one of the options for integrating with Facebook was via BCS, which may have been the point of the demo (I'm not sure). The other options given were pulling from Facebook (via JS or the API) or pushing to Facebook (via a Workflow or an Event Receiver).

It was suggested that the push model (e.g. Workflow) may actually be best for organisations that want to own their own data, plus push to multiple locations (SharePoint, Twitter, LinkedIn, etc), although the BCS option was the one demoed.

For the BCS option, the Facebook Development Kit (from Codeplex) was used to show how easy it is to integrate SharePoint + Facebook.

In Depth Architecture and Design Planning

A slides-only presentation covering a wide range of architecture and planning topics. It started with a refresher on MOSS 2007 capacity planning, then moved into SP 2010 considerations.

There was a brief mention of upgrade options, with in-place upgrade not recommended in most cases. (The database mount, aka database attach, approach was recommended as best practice; there are also many situations where in-place is not even directly possible, e.g. where hardware migration is needed.)

There was a discussion of disaster recovery solutions, mostly focussing on database mirroring. SP 2010 is failover aware, but using a SQL alias for your database is still a good idea for databases such as the config database.

Australian SharePoint conference 2010 – day 1 (3 min read)

Keynote (Arpan Shah)
 
Some of the slides were re-used from other presentations (e.g. Services Ready content), but it was a reasonable overview. One thing that particularly interested me was some of the statistics provided: 17,000 SharePoint customers, 4,000 SharePoint partners, and 500,000 SharePoint developers.
 
The developer stats seem a bit iffy, but the customer/partner information is interesting -- if you do the maths it works out at only around 4 customers per partner, which gives an idea of the level of competition in this space.
 
Manipulating the SharePoint 2010 Ribbon (Todd Bleeker)
 
Todd is a very enthusiastic presenter, and provided plenty of side tips and suggestions that are obviously based on experience, e.g. "Start with an Empty SharePoint project rather than with a specific item, so that you can give the item a nice name when you add it", or "The first thing I do is move the key.snk into the Properties folder because I never touch it and want to clear the screen real estate".
 
The presentation had a good mix of slides and demos, and continued with the tips, e.g. "Deploy .js files into a library so you can apply security management -- such as making them accessible to anonymous users". He then went through issues such as how the blank site template doesn't have a Site Assets (or Site Pages) library, so you need to ensure they exist first.
 
Taking SharePoint offline with SharePoint Workspace
 
I don't know what I was expecting from this business track presentation, but it did cover some of the changes/improvements in SharePoint Workspace (previously Groove), as well as the limitations.
 
I think SharePoint workspaces are a good solution for the 'occasionally disconnected' worker, but you need to think about how you organise your SharePoint content to support it, e.g. one site with a large document library (with folders, etc) doesn't synchronise well (you can't limit SPWS to only part of a list); you want individual collaboration sites per department, project, etc.
 
Developing with REST and LINQ in SharePoint 2010 (Todd Bleeker)
 
Another good presentation by Todd, which covered two extremes of accessing SharePoint data. Half the demo was on the JS Client Object Model, and the other half jumped right to the other end of the spectrum and showcased using LINQ on the server side to access SharePoint data.
 
DataView web parts
 
This presentation was plagued by demo issues and suffered a bit. It was also oriented towards no-code solutions, so limited in scope. It did show how far you can get just by customising DataView web parts (XSLT, etc).
 
There was also a nod towards the new client-side object model and the potential for developing significant solutions without the need for any server code.
 
Building Line of Business Applications using ECM solutions
 
I found this presentation a bit bizarre. The presenters started off selling themselves, and it initially felt like I was in a vendor session. Whilst other sessions may give away books, CDs, etc, these presenters decided to give away five-dollar notes, which I found quite strange.
 
Another example of something that struck me as strange was presenting some statistics about the types of successful projects but drawing the wrong conclusion regarding how likely a project is to be successful based on its type (affirming the consequent fallacy -- "a lot of successful projects are type A" does not mean that "type A projects are likely to be successful" -- maybe there are simply more type A projects).
 
The bulk of the session was actually an okay case study about building a demo OH&S business process: an InfoPath form for incident reporting kicking off a workflow, which then led to a case management workflow (where necessary). The case management workflow used the new document set functionality, where a special folder (called a document set) of items can be grouped together, applying metadata, workflows, records management, etc to the document set as a whole.
 
(As an aside, if you want to take advantage of SharePoint Workspaces then you need to think carefully about how you structure your documents -- do you really want all cases in a single document library?)
 

Reflections on SOA (2 min read)

One way I expand my professional knowledge is by reading "good" books on software engineering; those books that are either commonly referenced classics or highly recommended (see sidebar for short reviews of some recommended books).

I have recently been reading "Pattern-Oriented Software Architecture, volume 1", by Buschmann, et al. (commonly referred to as POSA), which includes one of the early (1996) formalisations of the Layers pattern.

What does this have to do with SOA?

Well, the topic of what SOA is / what a service is came up in a recent training course. The course level was such that only a basic description was appropriate, but what immediately jumped to my mind as important is that a Service Oriented Architecture usually consists of both coarse-grained business services and fine-grained implementation services (often with a workflow component as a means of aggregating them).

Although a lot of knowledge can be learnt on the job, being passed on by other software engineers, it can sometimes be an eye-opener to actually go back and read the original description.

In this case, when defining a Service Oriented Architecture I would strongly suggest reviewing the Layers pattern and thinking about how your services might be structured in layers. Think about the different abstraction levels you have in your system, name the different layers, and assign tasks to each of them. In particular, think about some of the issues from step 4 ("Specify the services"), and step 5:

"5. Refine the layering. Iterate over steps 1 to 4. It is usually not possible to define an abstraction criterion precisely before thinking about the implied layers and their services. Alternately, it is usually wrong to define components and services first and later impose a layered structure on them..."

One way I have seen service oriented architectures broken up is with different layers for Business Process Services, Business Function Services and Data Services, each with their own responsibility and features.

For example, Data Services encapsulate business data entities specific to a slice of the business and are usually atomic, stateless, rarely changing and highly reusable, whereas Business Process Services are designed to encapsulate business process and workflow, are implemented through stateful orchestration, and will change more often.
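
As a concrete illustration, here is a hypothetical sketch of the three layers expressed as service contracts (all names and operations are invented):

// Data Service: fine-grained, atomic, stateless, encapsulates a business entity
public interface ICustomerDataService
{
    CustomerRecord GetCustomer(string customerId);
    void SaveCustomer(CustomerRecord customer);
}

// Business Function Service: reusable business logic,
// composed from one or more data services
public interface ICreditCheckService
{
    bool IsCreditApproved(string customerId, decimal orderTotal);
}

// Business Process Service: coarse-grained, stateful orchestration
// of functions and data, encapsulating workflow that changes more often
public interface IOrderFulfilmentService
{
    string StartOrder(string customerId, decimal orderTotal); // returns an order id
    string GetOrderStatus(string orderId);
}

// Minimal entity type so the sketch is self-contained
public class CustomerRecord
{
    public string Id { get; set; }
    public string Name { get; set; }
}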

There is a good diagram of how the different abstraction levels (Layers) of services can interact in "Understanding Service Oriented Architecture" in the inaugural January 2004 issue of the Microsoft Architecture Journal.