By Stefan Koell on Thursday, September 19, 2013 8:00:06 AM

This week was a great week! I was impressed on so many levels that I have to dedicate a blog post to probably the best System Center focused conference in the German-speaking area: System Center Universe DACH (DACH basically means Germany, Austria and Switzerland).

The Location


The event took place at the BERN EXPO Congress in Bern/Switzerland. On Sunday we (the organizers and most of the speakers) had some time to walk through the beautiful city of Bern and got a small sight-seeing tour. On Monday and Tuesday we had about 40 breakout sessions on 4 parallel tracks – all about System Center!

Almost 300 people from 17 countries attended and it was just great. I have attended a lot of conferences and events in Europe and, besides the fact that this one is System Center focused, it's probably the best event I have ever attended here in Europe! Kudos to the guys at itnetx – they actually made this highly professional and perfectly organized event happen!

I was very impressed with the quality, precision and execution of the event by all speakers, sponsors, participants and foremost the organizers. This is – by all means – Swiss precision! We had great parties, excellent food and, best of all, great content and discussions. At the sponsor booths you could see great solutions for System Center from well-known industry leaders like veeam, OpsLogix, cireson and Provance, and even brand new products shown in public for the first time, like the promising Self-Service Portal for SCSM from SYLIANCE.


The Sessions


As a System Center guy at heart, it's needless to say that I very much enjoyed all the sessions I could attend. It was tough, though: because of the 4 parallel tracks, I missed some of the sessions.

The keynote (by Travis Wright) was all about cloud computing, with a clear message that cloud computing isn't about the location of data and services – it's about a specific method of computing. Travis nailed it. He not only showed us the vision of cloud computing from Microsoft's perspective, he was also able to clarify some myths around "the cloud". I think this was an eye-opener for many attendees (me included!).

Besides Travis, a lot of well known “rock stars” in the System Center community presented high quality content. Local experts (Marcel Zehner, Stefan Roth, Thomas Maurer, and many more) as well as experts from all over Europe (Andreas Baumgarten, Kevin Greene, Damian Flynn, Maarten Goet, and many, many more) presented great content. Most impressive: we even had presenters from the U.S.! Cameron Fuller, Pete Zerger, Chris Ross just to name a few.

Not only was I able to watch the pros deliver their content, I also had the honor to present at this great event. I had a lot of fun, especially talking in Dutch (inside joke!). No, no, I presented in German; although I'm not sure how much of it was actually German. Cameron Fuller attended my session without knowing one word of German and understood everything.


SCU (DACH) 2014

There are (so far) two SCU events in 2014: at the end of January in Houston, TX, and on September 8th and 9th somewhere in Switzerland/Europe. If you are interested in System Center related topics, this is the event to go to. Since MMS is now officially "dead", this is the only System Center focused event, and the fact that you can attend those events in the U.S. and/or Europe is an additional bonus. I'm looking forward to seeing more!


To itnetx and the Organizers

As you can read, I'm very enthusiastic about my time at SCU DACH. I had a blast, it was mind-blowing and I had fun. I personally want to thank itnetx and the organizers, especially Marcel Zehner, for having me as a speaker, taking good care of us all and making all this happen. I've never written about an event before and I'm afraid my words (as English is only my second language) don't do it justice. Keep on rocking! Can't wait for the Return of the (SCU) Jedi!


By Stefan Koell on Sunday, September 08, 2013 10:37:40 AM

A while back I created two quick scripts which allow you to execute ANY command (batch or PowerShell) on one or more remote agents. In this post I will share those two scripts, a ready-to-use management pack and a quick guide on how to use it. The goal of this management pack is to provide a way to not only execute a command on the agents but also capture the output and pass it back to the console.



Needless to say, this can be very dangerous. Executing the "wrong" command can potentially harm your remote machine(s), so you have to be very careful not only about what you are sending to the remote agent but also about whom you give access to this task. Always do extensive testing in a lab or test environment before you execute tasks in production!


Using the Generic Tasks

The target for the tasks is Windows Computer, so you can execute those tasks from any view you see Windows Computers (or descendants).

Note: by default, the console limits the number of selected objects to 10. You can, of course, select more than 10 objects in the console, but you will then not see any tasks to execute. You can remove this limitation by tweaking the registry. Read Cameron Fuller's article on how to do it:

In this example, we open the Windows Computer state view and select an agent to execute the task for. Then we click on the Custom Command task:


As you would expect from any agent task, you are provided with the Run Task dialog. You can run the task with the default action account or provide specific credentials for the task. In order to correctly execute our task, we need to click on the Override button to specify the command we want to execute:


In the above example we override the Arguments task parameter and enter the command we want to execute (e.g. ipconfig /all). We will see later that the script expects the command in the Arguments parameter and processes the passed-on command. Click on Override and run the task with the command specified. Next, SCOM will show the Task Status window:


As soon as the task is finished, you’ll see the output of the command in the status window. Alternatively you can also use the Task Status view in the Console to see the output but also see who has executed the task and when:


This is very important as it allows you to audit those custom tasks to some extent – in case something goes wrong.

As you can see, generic tasks are quite handy. You don’t have to do any configuration in SCOM to just execute a simple command. As mentioned before, it’s also dangerous and needs to be handled with care!

The PowerShell task works exactly the same way. Just override the task argument value with something like get-executionpolicy or some other PowerShell command.


How does the Custom Command Task work?

Here’s the script which is used by the task:

' Parameter: "dir c:\ /s"
Option Explicit

Dim oAPI, oArgs, WshShell, Exec

Set oArgs    = WScript.Arguments
Set oAPI     = CreateObject("MOM.ScriptAPI")   ' unused here; kept for logging/alerting extensions

Set WshShell = CreateObject("Wscript.Shell")

' Run the passed-in command through cmd.exe and capture its standard output
Set Exec     = WshShell.Exec("%comspec% /c """ & oArgs(0) & """")

' Echo the captured output so the agent passes it back to the console
WScript.Echo Exec.StdOut.ReadAll

Set WshShell = Nothing
Set oArgs    = Nothing
Set oAPI     = Nothing
Set Exec     = Nothing


As you can see, the script is very simple. We need to use the WScript.Shell object in order to capture the standard output of the command. We pass the command from the arguments to %comspec% /c (cmd.exe) – hence the need to override the Arguments in the Run Task dialog. At the end we "Echo" out all the captured standard output, and the agent transfers the output back to the console. In case you are wondering about MOM.ScriptAPI: this script is just a prototype. If you want better error handling, alerting if something goes wrong, etc., you should extend the script and do some logging. With a couple of rules you can easily create alerts in case someone executes a script, executes it with no arguments, and so on.


How does the Custom PowerShell Task work?

Here’s the script used by the task:

' Parameter: "get-executionpolicy"
Option Explicit

Const TEMP_FOLDER = 2   ' GetSpecialFolder: the temporary folder
Const FOR_READING = 1   ' OpenTextFile: open for reading only

Dim oAPI, oArgs, WshShell, oFS, TempFolder, TempFile
Dim Exec, Output, TempFileName

Set oArgs      = WScript.Arguments
Set oAPI       = CreateObject("MOM.ScriptAPI")
Set WshShell   = CreateObject("Wscript.Shell")
Set oFS        = CreateObject("Scripting.FileSystemObject")

' Build a unique temporary file name to capture the redirected output
Set TempFolder = oFS.GetSpecialFolder(TEMP_FOLDER)
TempFileName   = TempFolder.Path & "\" & oFS.GetTempName
Set TempFolder = Nothing

' Run the PowerShell command through cmd.exe and redirect its output to the temp file
Exec           = "%comspec% /c ""%windir%\system32\windowspowershell\v1.0\powershell.exe -noninteractive -command ""&{" & oArgs(0) & "}"""" >" & TempFileName

WScript.Echo Exec

WshShell.Run Exec, 0, True   ' hidden window, wait until the command has finished

Set WshShell   = Nothing

' Read the captured output back from the temp file
Set TempFile   = oFS.OpenTextFile(TempFileName, FOR_READING)

While Not TempFile.AtEndOfStream
    Output     = Output & TempFile.ReadLine & vbCrLf
Wend

TempFile.Close

WScript.Echo Output

Set TempFile   = Nothing
oFS.DeleteFile TempFileName
Set oArgs      = Nothing
Set oAPI       = Nothing


As you can see, this one is a bit tricky. Back when I wrote the script for this task, we still used PowerShell v1.0 (yes, I am that old!). Somehow PowerShell behaved differently and didn't let me capture the output easily. You might expect it to work just like the Custom Command Task, but it doesn't – at least not with v1.0. Maybe it's different with the newer PowerShell versions, but I haven't checked that yet…

Anyway, in order to get the output of the PowerShell task, we redirect it to a temporary file and wait until the command has finished. Once that has happened, we open the file with the redirected output and "Echo" it out, just like before. Afterwards we clean up and delete the temporary file.



You can download the management pack and the scripts in a zip file from here.

I hope this is as useful to you as it is to me. If you have any questions, comments, suggestions or any other feedback, let me know.


By Stefan Koell on Tuesday, September 03, 2013 9:54:24 PM

This is a longer blog post, so please bear with me. Read on if you are interested in how to create a truly useful knowledge base for Operations Manager which offers a lot more than the built-in knowledge management. I did this a couple of years ago for some customers based on Microsoft Office SharePoint Server 2007 and wanted to update the solution to the most recent SharePoint version, 2013.


Target Audience

Just to be clear: if you are authoring management packs for a specific product (like Microsoft is authoring MPs such as Exchange, or Dell is authoring MPs for their hardware), this is not for you. The target audience is admins of companies who operate and use Operations Manager to monitor their infrastructure and applications. Knowledge management in SCOM (Company Knowledge) isn't easy and straightforward. It requires you to install a lot of prerequisites and tools just to get started. The handling of KBs in SCOM in general isn't very good. I can only see one advantage/feature in using company knowledge: it offers linking of tasks or views, but that's it. Using something like SharePoint offers much more...


Why SharePoint?

Frankly, it's not really about SharePoint at all. Most other wiki products, even in-house developed applications, can be integrated and tied together as demonstrated in this blog post. The implementation details may be different, but in general this approach works with almost all products which offer you a minimum set of URLs to control whether an article should be created or displayed. I chose SharePoint for the following reasons:

  • It's a product from Microsoft and already well established in many companies. Those companies may also have good SharePoint know-how.
  • I’m not a SharePoint fan or specialist but I had a couple of situations where I needed to work with it. So it kind of felt familiar using something I already knew instead of picking something completely new.
  • Feature wise, SharePoint offers many things which are desirable for knowledge bases:
    • Integrated indexing and search.
    • Access control/permissions if you want or need.
    • Other departments can contribute to and enrich the knowledge base without having access to the SCOM console.
    • Versioning of articles.
    • Extensible and highly customizable. Additional metadata for your KBs like department, responsibilities, owner, etc. can be entered and used.

This example is shown with a SharePoint Enterprise Wiki Site (which requires the SharePoint Server!). Other wiki products may have similar features. You can also use SharePoint Foundation with a Wiki Page Library. In this case you cannot have any custom layouts and content types.


General Approach: High Level Overview

It all starts with an Alert Task in SCOM. The basic idea is that the user/operator selects an alert in one of the alert views and clicks on an Alert Task (which is executed on the console machine where the SCOM console is running). In general, this console task will open a URL in the default browser and pass on the name of the alert. Alternatively, you can also pass in the Alert ID and access the SCOM APIs to get all the alert data needed to continue processing.

The web page which is opened by this task needs to do the following:

  • Normalize the alert name (remove all unsupported/disallowed characters)
  • Use the normalized name to check whether a knowledge base article with this name already exists. If yes, just redirect to the article.
  • If the article doesn't exist yet, redirect to a different page which ideally accepts the normalized name to pre-populate the form for entering a new KB article for the alert name.

That’s it. Easy, right? ; )
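The three steps above can be sketched in plain JavaScript. Note that everything here is illustrative: the function names and the create-page URL are made up for this sketch, and the real page uses SharePoint's own OpenPopUpPage function instead of a hand-rolled existence check.

```javascript
// Sketch of the redirect logic. normalizeAlertName and pageExists are
// illustrative helpers, not SharePoint APIs.
function normalizeAlertName(name) {
  // Remove characters SharePoint does not allow in page names
  // and collapse runs of whitespace into single spaces.
  return name
    .replace(/[~#%&*{}\\:<>?\/+|"]/g, "")
    .replace(/\s+/g, " ")
    .trim();
}

function targetUrl(wikiRoot, alertName, pageExists) {
  const page = normalizeAlertName(alertName);
  const articleUrl = `${wikiRoot}/Pages/${encodeURIComponent(page)}.aspx`;
  // Article already there: redirect straight to it. Otherwise: go to a
  // (hypothetical) creation page pre-populated with the normalized name.
  return pageExists(articleUrl)
    ? articleUrl
    : `${wikiRoot}/Pages/CreatePage.aspx?title=${encodeURIComponent(page)}`;
}
```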

Well, back when I used MOSS 2007 it was pretty easy because you could easily figure out which URLs you needed. Unfortunately, things are different in SharePoint 2013, so the approach here is a bit different and is mainly done in JavaScript on the SharePoint side.



As mentioned before, I'm not a SharePoint expert by any means and I'm certainly not a JavaScript developer (yes, JavaScript ahead!). So please forgive my ignorance, and if there's a better way to do things, let me know. I also want to mention two good friends of mine who helped me with this: Felix from lemon mojo (JavaScript guru) and Alex from timewarp (SharePoint guru, developer and consultant). Thanks for that!

Downloads and files: At the bottom of this post I provide a zip file with the JavaScript file and the management pack. Keep in mind that you need to make some modifications to get this to work. All files provided are not really tested for production environments. This is more of a proof-of-concept or inspiration which should help you to get started.


Alert Task

Let's start with the easy part. First, create a management pack for the task in the Administration space. You can also import the sample MP provided in the zip file and modify the task.

In case you start from scratch, go to the Authoring space, expand the Management Pack Objects and right-click on the Tasks node. Select Create a New Task in the context menu. Select Console Task / Alert command line as the Task Type and make sure you select the previously created management pack at the bottom of the page. Specify a meaningful task name like “Open Knowledge Base”. Note that the name you enter here is the name which shows up in the Tasks panel when you select an alert.


Configure the task as above.

Application: cmd.exe 
 Parameters: /c "start "" "http://sclab-sp/kb/Pages/customredirect.aspx?name=$Name$""

We need the cmd.exe /c and the start command to make sure the default browser is used when launching the URL. The URL is specific to my lab environment; you need to adapt it to match your SharePoint server and path. Keep an eye on the double quotes and make sure they are all set correctly. Use a command prompt window to test and debug the command.

Replace sclab-sp with the host name of your SharePoint machine/farm.

Replace /kb with the path to your Enterprise Wiki Site.

Don’t worry about the customredirect.aspx for now. We need to create this page in the next step in order to make this work. You can name this page differently and also host it in a different path as long as you make sure that the JavaScript from the page runs within the SharePoint web part. Talk to your SharePoint master about this!

With the $Name$ variable we pass in the alert name from the console.


Setup SharePoint Enterprise Wiki Site

As mentioned above, I've created an Enterprise Wiki Site. For demonstration purposes I've also created a new content type with an additional column (Responsible Department) and a new template to display the new column on the wiki page. You need to enable publishing on the site in order to create an Enterprise Wiki Site (click here for more information); the template will not be available if you do not enable it. I will not go into more detail here as this is slightly out of scope – maybe I will dedicate a separate blog post to this if more people are interested. In any case, if you do not create a new content type or template, the stuff below will still work with all the defaults. I haven't tested it, but it should also work exactly the same way for standard Wiki Page libraries – even when using SharePoint Foundation.


Create and Setup the CustomRedirect.aspx Page

Once the wiki is set up, we need to do some magic as described above – in this case a simple JavaScript which processes the alert name from the query string (URL parameter: Name). We do not really need to open different URLs for wiki page display and creation: SharePoint has a JavaScript function which we hijack to do all the work.

First things first: Let’s create the CustomRedirect.aspx page.

Open the Enterprise Wiki Site (Home.aspx – which is created by default). Click on Page, New and enter the name “CustomRedirect” (it’s important that you use the very same name you picked in the Alert Task above).


Once you are in edit mode on the CustomRedirect.aspx, insert a new Web Part and look for the Content Editor Web Part. Add the web part to the page.


Click on Edit Snippet and paste the JavaScript code below (or attached as download) into the popup:

You can go through the script and the comments. Essentially the script takes the alert name from the query string, forms some weird URL and executes the internal SharePoint function OpenPopUpPage. Luckily this function seems to only create a popup when the page you are trying to access doesn’t exist. If it does exist, it will just redirect you to the page. Perfect!


Now that everything is set up, let's see what it looks like.

An operator selects an alert and clicks on the task to open the knowledge base:


The operator is presented with the custom redirect page if no KB entry is available for the selected alert:


After clicking the Create button you can start using the KB entry for this alert:


Two things to note here:

1. The name is pre-populated with the name of the alert.

2. At the very bottom you see the customization I made to the Wiki. I added a new choice column to specify the responsible department. Of course, many other useful things are possible here…

Once a KB entry for the selected alert exists, the operator will be redirected to that page immediately the next time he clicks on Open Knowledge Base:


Wrapping Up

As mentioned before, this is just a proof-of-concept and not really 100% production ready (I guess). I just want to show that there’s a good alternative when it comes to knowledge management. Using this tutorial/example as starting point should get you quickly up and running. It doesn’t hurt when you have a SharePoint guru around ; )

The files used in this example are available in this zip file.




Christoph Maresch used MediaWiki instead of SharePoint:


Robert commented that you can also use rundll32 in the task to open a URL in the default browser:

Application: %SystemRoot%\System32\rundll32.exe
Params: url.dll,FileProtocolHandler http://scom-sp/kb/...
By Stefan Koell on Sunday, March 24, 2013 6:41:53 PM

I'm building a new lab environment for my System Center stuff. Unfortunately I've been so busy lately that it took a while to pick up where I left off:
Building my System Center SP1 Lab - Part 1: Domain Controller

In this blog post I want to install a SQL 2012 SP1 server on Windows Server 2012. I’ve never installed SQL 2012 so let’s see what happens.

Since I plan to use this SQL server instance as a shared instance for multiple System Center components I will use SQL Server Enterprise Edition to get some advanced stuff in Service Manager and I also need to be careful what collation to install. More about this later…

Why Enterprise Edition?

In general, both editions (Standard and Enterprise) are supported for Service Manager but there are a couple of enhancements which are only available if your Service Manager installation is located on an Enterprise edition. For more information, read this TechNet article:

Note: The new System Center licensing introduced with System Center 2012 includes all necessary licenses for SQL Standard installations. So if you decide to use SQL Enterprise in your production environment, you need to properly license the instance (or install it on an already licensed instance). To be clear: if you use SQL Standard edition, no additional licenses are necessary as it is covered by the System Center license.

Installing SQL Server 2012 SP1

After installing Windows Server 2012, joining the server to my lab domain and installing all available updates, I open setup.exe from the SQL installation media:


After the prerequisites are checked, you will be asked for the product key. Enter the key (or select the evaluation option) and click Next:


I'm not really a SQL expert (in fact, I'm little more than a SQL noob), so the setup is kind of exciting for me. My first pleasant surprise is this nice wizard page:


SQL 2012 setup will look up all the updates available online for the version you are installing and offer to download and install them right away. After clicking Next, the wizard will download the online files (in case you accepted the updates on the previous page) and extract all the setup files before it continues:


Ok, enough praise for the setup. I left my Windows firewall on and got the above warning. Nothing wrong with the warning itself, but the message box shown when you click the Warning link is kind of useless because it contains a link which cannot be clicked. I know you can press CTRL-C while the message box is focused to get the text into the clipboard, but this is just not really user-friendly. In addition, I would expect the setup to open all necessary ports (as SCOM does), but maybe this is on purpose for security reasons… Let's check that later and continue.

The next screen asks us for the “Role”. We go with the SQL Server Feature Installation:


On the Feature Selection screen we check the three main instance features (DB Engine, Analysis Services, Reporting Services) as we will use the instance for SCOM and SCSM and maybe other products. Note that Reporting Services can only be installed and used for one product (either SCOM or SCSM), not for both.


The shared features I selected aren’t mandatory to run the System Center components/products. I chose to have them on board (especially the management tools) for my convenience.

The next screen allows you to configure the instance (name) as well as the default directory paths:


The Disk Space Requirements page will show you a short summary of whether all components can be installed on the selected/configured drives. If you have enough disk space, click Next to get to the Server Configuration screen. On the Service Accounts tab I leave all settings as they are, except for the Agent: I prefer to let the agent start automatically. The more interesting stuff is on the Collation tab:


Since my Windows OS uses the English (US) locale, SQL setup by default suggests the "correct" collation for my scenario. If you are using only English, you can go ahead and use the suggested collation to successfully install and use SCOM/SCSM and other SC components on the same SQL instance. For production scenarios you may want to separate the databases for SCOM and SCSM because of the workload both systems put on the DB. If you want to run both systems on the same DB, check the following blog post to see whether the languages you intend to use on those systems are supported and which collation you need to install:
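If you later want to double-check what collation an instance actually ended up with, you can query it; the statement below is standard T-SQL (the shown result is just the common US default, not necessarily yours):

```sql
-- Returns the server-level collation of the instance,
-- e.g. SQL_Latin1_General_CP1_CI_AS on a default US-English install
SELECT SERVERPROPERTY('Collation') AS ServerCollation;
```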

On the next three screens, make sure you leave the default values but also make sure your current user is in the list. Just click the "Add Current User" button to add your user.

On the Error Reporting page you may choose to report error and usage data to Microsoft. Click Next, verify that the Installation Configuration Rules are good to go and hit Install.

That's it for today. I hope the server as configured and installed will work for my upcoming installations of SCOM and SCSM.


By Stefan Koell on Wednesday, January 02, 2013 11:16:18 AM


Microsoft published a section on TechNet which gives you an overview of what’s new and changed in System Center SP1:

Here’s my list of improvements/features from SP1 which are really cool and noteworthy:

By Stefan Koell on Tuesday, January 01, 2013 3:02:30 PM

Everyone who is working with System Center products should have a lab/test/staging/whatever-you-want-to-call-non-production environment. System Center SP1 “RTM’ed” and I thought, let’s build a new lab environment. Most of the time, you will use parts of your existing infrastructure (like Active Directory or maybe even an existing SQL server) to setup your lab environment. This time I wanted to build a completely isolated test environment with my own, dedicated AD and dedicated SQL server. So this part will focus on setting up a new domain for my lab.

Before Windows Server 2012 was released, there were some limitations/issues with virtualized DCs, especially if you had no physical DCs at all. Windows Server 2012 has many new features and improvements, especially when it comes to running DCs virtualized. Read more about these improvements here:

You may also check out “Virtual Domain Controller Technical Reference (Level 300)” on TechNet:

Setup VM

The VM will be hosted on a Windows Server 2012 Hyper-V machine. The system requirements of a Windows Server 2012 DC didn't change and are basically the same as for Windows Server 2008 R2. The minimum RAM is 512 MB and 2 GB are recommended, so I will go with 1024 MB RAM initially. My lab environment will be very small and I can always increase resources if I need them. You may also consider enabling dynamic memory, but I couldn't really find useful guidance about dynamic memory and domain controllers. If there's any good read on that topic, let me know. I connect the VM to my virtual network, and the maximum size of the dynamically expanding virtual hard disk will be 500 GB (just to be safe). I also recommend assigning at least 2 virtual CPUs to your VM.

Since this is a lab environment I will also only install one DC (at least for now). I will backup all lab VMs on a daily basis but I do not really care about uptime.

Windows 2012 Domain Controller

After Windows 2012 is installed, make sure to assign a static IP address to your DC-to-be. I also tend to leave IPv6 enabled and set a static address there as well. There's a lot of discussion about whether or not to disable IPv6, and usually people think: disable it if you don't need it. Anyway, MS recommends leaving it enabled and, since it's enabled by default, I leave it that way. I use the IPv4 to IPv6 converter to create IPv6 addresses:

If this is your first Windows 2012 Domain Controller and you’re still trying to use “dcpromo” to create a domain controller, you will see a message like “The Active Directory Domain Services Installation Wizard is relocated in Server Manager…”.

In the Server Manager, go to Manage –> Add Roles and Features:


Once the wizard appears, skip the first page and select Role-based or feature-based installation:


On the next screen we leave "Select a server from the server pool" selected and keep the computer we want to promote to a DC selected in the server pool list. On the Select server roles screen we check the Active Directory Domain Services box:


After you've checked the checkbox, the Add Roles and Features Wizard appears. Just click Add Features to continue, and also check the DNS Server checkbox. Again, click the Add Features button when the wizard appears for the DNS Server role.

You can skip the next three pages as we do not install any additional features on this server at this time. Two of the pages only provide additional information about the AD DS and DNS Server roles.

On the Confirmation page I chose to automatically restart the server if it is required:


I hit Install and off we go…

The Server Manager will show a warning triangle after the role installation completed:


Click on "Promote this server to a domain controller". In my case, I'm installing a completely new, isolated forest:


Next up, Domain Controller Options:


Since I’m not integrating DNS with an existing DNS infrastructure, “no action is required”:


Specify a NetBIOS name:


I leave the Path configuration as it is:


After the Review Options page, you will see the Prerequisites Check:


A bunch of warnings (compatibility, DNS), but at the bottom you should read "All prerequisite checks passed successfully." Then click on Install. After a while the server will reboot.

Once the machine is up again you will see the two roles in the dashboard:


One important step after installing a DC is to setup time synchronization:

That was pretty painless, and considering my focus on the System Center Suite it's kind of "off-topic", but maybe this is useful to some of you – it will definitely serve me as "lab documentation" ;)

Stay tuned for the next part: SQL Server

Cheers and a happy new year!


By Stefan Koell on Sunday, September 16, 2012 7:58:08 PM

A fairly common requirement from customers is to extend work item or configuration item classes and pull values from other systems (like Active Directory or some database) into a new property of an existing class. Service Manager is very extensible and has some great features to help you with that. To achieve a goal like this, you *just* need to "author" your own management pack and do some scripting or System Center Orchestrator magic.

Sounds complicated? Follow this blog post and you will see that it’s not that hard at all.

Here’s the high level overview of what to do:

  1. Identify the class you want to extend
  2. Create a management pack to extend the class
  3. Create a script (or Orchestrator runbook) to populate the new property/properties of your extended class

1. Identifying the class to extend

In this example I demonstrate how to extend the Active Directory User class (which is used and populated by the AD connector) to have a dedicated property for the primary email address. In case you are wondering why I do this: SCSM handles email addresses using relationships and dedicated objects, which makes it hard to just get a view of all users with a column showing the primary email address (as set in AD). With SCSM you cannot do this easily.

My fellow MVP colleague Anton Gritsenko is working on a tool called “Advanced View Editor for SCSM 2012” which will allow you to create a view to display the email addresses in a column. At this point it’s still beta but you should definitely check it out as you can do much, much more with this awesome tool!

Still, this blog post is meant more as a demonstration of how to solve a problem like this in general: pulling some data from somewhere and populating an extended class.

Take a look at this Technet Library page to see how the AD connector is mapping objects to Service Manager:

As you can see, it seems that the mail attribute from AD is synchronized to a property of the Microsoft.AD.User class, but if you try to create a view showing the Email property, you are out of luck:

At least the Technet page gives us a clue which class we should extend: "Active Directory User" (ID = Microsoft.AD.User). In case you do not have a Technet page showing you the class ID, an easy way to find the ID (= Name) of the class is a PowerShell command like this:

Get-SCSMClass | where {$_.DisplayName -eq 'Active Directory User'}

2. Create a management pack to extend the class

To extend a class we need to "author" a management pack. Since a management pack is basically a set of XML definitions, you could use a tool like Notepad, but that would require you to know the schema and the exact syntax of a management pack. A much easier way to create management packs is either the Service Manager Authoring Tool or the Visual Studio Authoring Extension (which lets you create not only Service Manager management packs but also Operations Manager management packs). Note: the Service Manager Authoring Tool was released for SCSM 2010 SP1 but it will also work for SCSM 2012 management packs.

Since we are only extending a class, I will use the Visual Studio Authoring Extension (which requires at least Visual Studio 2010 Professional). If you want to customize forms, I recommend you use the Service Manager Authoring Tool as it has a graphical, designer-like interface to do so.

Once I’ve installed Visual Studio 2010 and the Authoring Extensions, I create a new Project with the type: “Service Manager 2012 Management Pack”:

Next, we need to add a reference to the management pack which holds the original class we want to extend. The Get-SCSMClass command above will also show the management pack name:
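If you want to grab that name directly, a small sketch like this should also reveal the containing management pack (GetManagementPack() is a method from the SCSM SDK; treat this as an untested convenience against a live management server, not a finished script):

```powershell
# Ask the class for its containing management pack
$class = Get-SCSMClass | where {$_.DisplayName -eq 'Active Directory User'}
$class.GetManagementPack().Name
```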

Right-click the "References" node in the Solution Explorer and select "Add Reference". Navigate to the SCSM installation folder and browse for the file

Next, click on the project in the Solution Explorer and right-click to select Add –> New Item…:

Now, select Class and type a name like ADUserExtension.mpx:

In the management pack fragment, paste the following code:

<ManagementPackFragment SchemaVersion="SM2.0" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <TypeDefinitions>
    <EntityTypes>
      <ClassTypes>
        <ClassType ID="code4ward.SCSM.ADUser.Extension.ADUserExtension" Accessibility="Public" Base="Windows!Microsoft.AD.User" Extension="true" Abstract="false" Hosted="false">
          <Property ID="PrimaryEmailAddress" Key="false" Type="string" />
        </ClassType>
      </ClassTypes>
    </EntityTypes>
  </TypeDefinitions>
  <LanguagePacks>
    <LanguagePack ID="ENU" IsDefault="true">
      <DisplayStrings>
        <DisplayString ElementID="code4ward.SCSM.ADUser.Extension.ADUserExtension">
          <Name>Active Directory User (Extended)</Name>
        </DisplayString>
        <DisplayString ElementID="code4ward.SCSM.ADUser.Extension.ADUserExtension" SubElementID="PrimaryEmailAddress">
          <Name>Primary Email Address</Name>
        </DisplayString>
      </DisplayStrings>
    </LanguagePack>
  </LanguagePacks>
</ManagementPackFragment>

The code snippet above is almost the same as the skeleton you get after adding the new class item. Here's what I changed and why:

  1. The class type ID was adjusted to be unique (usually you use a dot-separated name like company.application.thing[.subthing] to uniquely identify your class).
  2. The base class is changed to the Microsoft.AD.User class in the Windows (alias for Microsoft Windows Library) management pack.
  3. The extension attribute is added to tell SCSM that we do not want a new class, instead we want to extend an existing class. This also ensures that all inherited classes get the new property/properties.
  4. The property id was changed to something meaningful: PrimaryEmailAddress.
  5. The display strings (here only English/ENU) were updated.

That’s it. Build the management pack and import it into your management group.
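If you prefer to script the import, the native Service Manager cmdlets can do it too (a sketch; the path to the compiled MP file is hypothetical, adjust it to your build output):

```powershell
# Import the freshly built management pack into the management group
Import-SCManagementPack -FullName 'C:\MPs\code4ward.SCSM.ADUser.Extension.mp'
```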

Note that for this demo I didn’t seal the MP. MPs with extensions or class definitions should be sealed when used in production.

After you have imported the MP you will see that the new property “Primary Email Address” is now available for the Active Directory User class (and all inherited classes):

Also note that creating a new or editing an existing user would already allow you to use the new property:

Because we didn’t adjust the user form, you will find all additional, custom properties on the Extensions tab.

3. Automatically populate the new property

Now, that we have successfully extended a class, we want to have it filled with data – preferably using a script or something which can be automated. In this example I will write a simple PowerShell script which can be executed on demand, using a scheduled task or invoked by a runbook.

$UserCIs = Get-SCClassInstance -Class (Get-SCClass -Name 'Microsoft.AD.User')
foreach ($UserCI in $UserCIs)
{
    # Look up the matching AD account; we only need the EmailAddress property
    $ADUser = Get-ADUser -Filter "SamAccountName -eq '$($UserCI.UserName)'" -Server $UserCI.Domain -Properties EmailAddress
    Write-Host "Processing CI $($UserCI.UserName)"
    Write-Host "CI Email: $($UserCI.PrimaryEmailAddress)"
    Write-Host "AD Email: $($ADUser.EmailAddress)"
    if ($ADUser.EmailAddress -ne $UserCI.PrimaryEmailAddress)
    {
        Write-Host "CI and AD email address do not match!"
        # Write the AD value into our extended property and commit the change
        $UserCI.PrimaryEmailAddress = $ADUser.EmailAddress
        $UserCI | Update-SCSMClassInstance
        Write-Host "CI was updated."
    }
    Write-Host "----------"
}

Let me explain the script above:

  1. The approach here is to get all CIs from SCSM and compare each entry found with the one in Active Directory. You could also iterate through the AD users and compare them with the CIs, but this approach should be more efficient because you may not import all your AD users into the CMDB.
  2. For each CI object we get the corresponding AD user (note that you may need to specify which properties to load; in this case we just need EmailAddress).
  3. Then we compare the email address from AD with the one from the CI (note that you can refer to the extended property simply as $UserCI.PrimaryEmailAddress).
  4. Update-SCSMClassInstance commits the change of the CI object to the database.
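Since the script is meant to run on demand, per scheduled task or from a runbook, here's a sketch of the scheduled task variant (task name, script path and schedule are hypothetical, adjust them to your environment):

```
rem Run the sync script every night at 02:00
schtasks /Create /TN "SCSM Primary Email Sync" /SC DAILY /ST 02:00 /TR "powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Update-PrimaryEmail.ps1"
```

Make sure the account running the task has permission to query AD and to update class instances in SCSM.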


As you can see, it's not too hard to extend classes with additional properties and to populate and update them automatically with a simple script. Please bear in mind that the content of this blog post is roughly a proof of concept and should be carefully tested, adapted and extended before you put it into production.

If you have any suggestions or feedback, let me know.


By Stefan Koell on Thursday, August 16, 2012 10:45:49 AM

This release is mostly a maintenance release and includes several bug fixes. However, there are a couple of new things in this release which deserve some attention:

Ad Hoc Connection Improvement

As mentioned in this blog post, we've introduced some convenient features around ad hoc connections and the credential picker. We've now further improved the handling: you can enter a credential name, followed by an @ character, in front of the URI (hostname).

So let's assume you have a credential called "lab\admin" (in the display name) and you want to use an ad hoc connection to connect to a machine which isn't yet in your document(s). By simply putting the credential display name (lab\admin), followed by an @ character, in front of the hostname (like vm198), you can tell Royal TS to use the existing credential:


Note this only works if the display name of the credential is unique. If multiple credentials are found with the same name (in different folders or documents), the credential picker will appear and display all matching credentials.

Of course, this will also work for other connection types.

New Replacement Tokens

In Royal TS V2, credential objects have a username and password field. For domain environments, the username field can be used to specify the domain and the username using the “domain\user” syntax. In this configuration the $EffectiveUsername$ replacement token will always return the domain and the user part.

In this release we are also introducing new replacement tokens:

$EffectiveUserdomain$ will return only the domain part of the username.
$EffectiveUsernameWithoutDomain$ will return only the user part of the username field.


You can use these two new replacement tokens in tasks, key sequences and template connections.

If a username does not have a domain part (domain\), the $EffectiveUserdomain$ value will be empty (whitespace) and the $EffectiveUsernameWithoutDomain$ will be the same value as the $EffectiveUsername$.
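For illustration, assuming a credential with the username lab\admin, the tokens would resolve like this:

```
$EffectiveUsername$               -> lab\admin
$EffectiveUserdomain$             -> lab
$EffectiveUsernameWithoutDomain$  -> admin
```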


The release notes can be found here:

To download the latest release, visit:

By Stefan Koell on Friday, August 03, 2012 11:00:26 AM

We’ve just released Royal TS 2.1.2 with a couple of bug fixes and performance optimizations. You can read the release notes here:

To download Royal TS 2.1.2, head over to our download page here:

This update is free for all Royal TS V2 license holders. If you experience any issues or want to provide feedback, please use our forums:

Best regards,
Your Team

By Stefan Koell on Monday, July 23, 2012 4:19:29 PM

code4ward released the first dashboard widget extension for SCOM 2012: Web View Widget Management Pack

Hopefully much more to come!

This weekend I had a little fun with SCOM and the new dashboards. Brian Wren posted a little (actually a huge!) guide on TechNet (including the files) about writing/authoring custom dashboard widgets. It wasn't that easy because some parts of the guide didn't quite match the files and some things didn't work as expected. Considering the size of the guide and judging from the screenshots, this guide was in the works for several months, maybe more than a year! To sum up, I've learned a lot and I will try to update the guide as best I can and incorporate my experience as well.

Web View Widget

You may wonder why you need such a widget. There are plenty of reasons for that and while Microsoft ships a Web Page View for Operations Manager, they didn't ship a widget for the Dashboards. A web view widget might be very useful if you want to create a dashboard with SCOM data and data from other systems which provide a web interface - all in one view. Well, with this little enhancement you can do that now.

Features of this Management Pack

  • Configure a URL of a web page to load and display in the dashboard widget.
  • Personalize the widget to show/hide a navigation toolbar and to enable/disable and configure auto-refresh of the page every x seconds.
  • The web console does not provide all navigation toolbar features.


To install the management pack, download the zip file from and extract the file: code4ward.SystemCenter.WebViewWidgetPack.mpb

In the Operations Console, go to Administration -> Management Packs. Click on "Import Management Packs...". Add the management pack to the import list:


Click on Import to start the import process.
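If you prefer to script the import, the Operations Manager cmdlets can handle the bundle as well (a sketch; adjust the path to wherever you extracted the file):

```powershell
# Import the management pack bundle into the management group
Import-SCOMManagementPack -Fullname 'C:\Temp\code4ward.SystemCenter.WebViewWidgetPack.mpb'
```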

Using the new Dashboard Widget

First create a new Dashboard View in the Monitoring space or "My Workspace". Choose any layout for the dashboard and finish the wizard. Once the (empty) dashboard is loaded, click on "Click to add widget...":


On the Template page, select "Web View Widget":


Click Next and provide a name and an optional description for the widget you want to add. Click Next:


Enter the URL of the web page you want to load and display within the widget. Click Next and finish the wizard.

Once the dashboard is created and the widget(s) are configured you can personalize some settings:


Note the gear icon which appears when you hover the mouse over the widget. Click on it to change the configuration or personalize the widget:


That's it. Not much but I think still useful. If you have any feedback or bug reports, contact me using the following email address:
stefan.koell (-at-)

The information and management pack is provided "AS IS" with no warranties. Use at your own risk!