Pass an Array to a PowerShell Script in a TFS Build Definition

Published on Monday, November 26, 2012

In one of my current projects I had to configure a TFS deployment for the customizations I wrote for AD FS and FIM. Setting up a deployment (build) in TFS seems pretty straightforward. The most complex part I found was the XAML file which contains the logic to actually build and deploy the solutions. I picked a fairly default template and cloned it so I could change it to my needs. You can edit such a file with Visual Studio, which gives you a visual representation of the various flows, checks and decisions. Even after being introduced to the content of such a file by a colleague of mine I was still overwhelmed. The amount of complexity in there seems to shout: change me as little as you can.

As I have some deployments which require .NET DLLs to be deployed on multiple servers, I had two options: modify the XAML so it's capable of taking an array as input and executing the script multiple times, or modify the parameters so I could pass an array to the script. I opted for the second option.

My first attempt consisted of adding an attribute of the type String[] to the XAML for that deployment step. In my build definition this gave me a multi-valued parameter where I could enter multiple servers. However, in my script I kept getting the value "System.String[]" where I'd expect something like Server01,Server02. This actually made sense: TFS probably has no idea it needs to convert the input to a PowerShell array.
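The behavior is easy to reproduce in a plain PowerShell session (a minimal sketch): when a .NET string array is coerced to a single string via ToString(), you get the type name rather than the elements, which is exactly what the build was handing to my script.

```powershell
# Simulate what happens when the build stringifies a String[] argument:
[string[]]$servers = 'Server01','Server02'

# ToString() on a .NET array returns the type name, not the contents
$servers.ToString()    # "System.String[]"

# What we actually want is something like a joined list:
$servers -join ','     # "Server01,Server02"
```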

So I figured I'd try a String parameter in the build and feed it something like @("Server01","Server02"), which is the PowerShell way of defining an array, to see whether PowerShell would evaluate it.


Well it did, but not exactly like we want it. The quotes actually screwed it up and the value only arrived partially in the script. So we had to do some magic. Passing the parameters to the script means you pass through some VB.NET code. This is simple text handling code, and all we need to do for this to work is add some quote handling magic. Here's my test console application which tries to take an array as input and makes sure I get the required amount of quotes on the output.


Here's the TFS parameter section where we specify the arguments for the scripts. The magic happens in the "Servers.Replace" section. We ensure that the quotes "survive" being passed along to the PowerShell script.

String.Format(" ""& '{0}\{1}' '{2}' {3} "" ", ScriptsFolderMapping, BuildScriptName, BinariesDirectory, Servers.Replace("""", """"""""))

In the GUI this goes into the “Arguments” field:


This allows us to configure the build definition like this, which is actually pretty simple: just put the array as you'd put it in PowerShell.
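On the receiving end, the script can simply declare an array parameter; once the quotes survive, PowerShell evaluates the @() expression before binding it. A minimal sketch (the script name and share path are made up for illustration; the parameter names mirror the ones used in the arguments line):

```powershell
# Hypothetical deployment script -- invoked by the build roughly as:
#   .\Deploy.ps1 'C:\Drop' @("Server01","Server02")
param(
    [string]$BinariesDirectory,
    [string[]]$Servers
)

foreach ($server in $Servers) {
    # Illustrative only: copy the build output to each target server
    Write-Host "Deploying $BinariesDirectory to \\$server\d$\Deploy"
}
```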


P.S. Make sure to either copy-paste or count those quotes twice ;)


FIM: Calling FIM Automation cmdlets from within a PowerShell Activity


I'm currently setting up a FIM solution where the users should be preregistered for Self-Service Password Reset (SSPR). Their email address will be managed in a system outside of FIM and will be pushed to the correct attribute in the FIM Portal: msidmOneTimePasswordEmailAddress. After some googling I quickly realized that in order for the user to be properly registered, flowing the mail attribute wouldn't be enough. So Register-AuthenticationWorkflow to the rescue! Using this PowerShell cmdlet you can perform the proper registration from an administrator's perspective. In order to automate this, I combined it with a custom PowerShell activity in the Portal. This activity executes a PowerShell script with some parameters (attributes from the FIM Portal object) upon execution.

The trigger: whenever the msidmOneTimePasswordEmailAddress attribute is modified, the workflow will be executed.

The script (I left out some logging):

Add-PSSnapin FIMAutomation

Try {
    $template = Get-AuthenticationWorkflowRegistrationTemplate -AuthenticationWorkflowName "Password Reset AuthN Workflow"
    $userTemplate = $template.Clone()
    $userTemplate.GateRegistrationTemplates[0].Data[0].Value = $mail

    Register-AuthenticationWorkflow -UserName "$domain\$name" -AuthenticationWorkflowRegistrationTemplate $userTemplate
} Catch {
    $errorDetail = $_.Exception.Message
}

However calling this script from within a workflow seemed to result in the following error:

Unexpected error occurred when registering Password Reset Registration Workflow for DOMAIN\USER with email address EMAIL, detailed message: The type initializer for 'Microsoft.ResourceManagement.WebServices.Client.ResourceManagementClient' threw an exception.

In the event log I found the following:


In words:

Requestor: Internal Service
Correlation Identifier: e98bcce4-54e7-4fd3-a234-7f7b5c7146d3
Microsoft.ResourceManagement.Service: Microsoft.ResourceManagement.WebServices.Exceptions.UnwillingToPerformException: IdentityIsNotFound
   at Microsoft.ResourceManagement.WebServices.ResourceManagementService.GetUserFromSecurityIdentifier(SecurityIdentifier securityIdentifier)
   at Microsoft.ResourceManagement.WebServices.ResourceManagementService.GetCurrentUser()
   at Microsoft.ResourceManagement.WebServices.ResourceManagementService.Enumerate(Message request)

Somewhere I found a forum thread or a wiki article which suggested modifying the FIM Service configuration file. The file is located in the FIM Service installation folder and is called Microsoft.ResourceManagement.Service.exe.config. The section we need to modify:

  • Before: <resourceManagementClient resourceManagementServiceBaseAddress="fqdn" /> Depending on your installation it can also be localhost.
  • After: <resourceManagementClient resourceManagementServiceBaseAddress="http://fqdn:5725" /> Depending on your installation use the FQDN or localhost.

After retriggering my workflow I now receive the following error:


In words: GetCurrentUserFromSecurityIdentifier: No such user DEMO\s_fim_service, S-1-5-21-527237240-xxxxxxxxxx-839522115-10842

This is easily resolved by adding the FIM Service account as a user in the Portal. I'd make sure it's filtered in the FIM MA, or double-check that no attribute flows can break this AD account.



UAG: Failed to run FedUtil when activating configuration

Published on Monday, October 22, 2012

I've been testing a UAG setup where the trunk is authenticated using either Active Directory or Active Directory Federation Services. For this particular setup I had configured both some months ago. Now I wanted to reconfigure my trunk from AD to AD FS again. When I tried to activate the configuration I was greeted with the following error:


In words: Failed to run FedUtil from location C:\Program Files\Microsoft Forefront Unified Access Gateway\Utils\ConfigMgr\Fedutil.exe with parameters /u "C:\Program Files\Microsoft Forefront Unified Access Gateway\von\InternalSite\ADFSv2Sites\secure\web.config".


In the event log I saw the above error. I started trying the most obvious things like a reboot, but all in vain. I also tried creating a completely new trunk, but that didn't work out either. Finally I started thinking that some patch was being uncool. I checked the updates and saw that a patch round had occurred a few days ago. I uninstalled all patches from that day, and after a reboot I was able to activate the configuration again! Now you're probably hoping I'll tell you which specific patch was the culprit? Well, for now I don't know that yet… But here's the list of patches I uninstalled:

There are a lot… Good luck! I still might have hit something else, but I sure did try a few reboots before actually going the uninstall-patches route… And that one definitely did it for me.


Quick Tips: October Edition #1


Tip #1 (IIS): appcmd and IIS bindings:

Some more IIS (configuration) awesomeness: you can easily view the bindings for an IIS site using the following command:

  • appcmd list site /site.name:"MySite"

Now obviously just viewing isn't that cool, but you can also set them! This is extremely useful for those environments where you have to work according to "Development –> Test –> Acceptance –> Production" or other variants. I hate doing the same task (manually) multiple times.

So here’s how you can “push” your bindings to a site called “MySite” in IIS.

Syntax option #1: just a host header available on all IP’s (*):

  • appcmd set site /site.name:"MySite" /bindings:"http://mysite.contoso.com:80","http://mysite:80"

Syntax option #2: a host header bound to one specific IP:

  • appcmd set site /site.name:"MySite" /bindings:"http/","http/"

Mind the difference in the bindings parameter syntax: e.g. http:// vs. http/

Now what if you want to change one specific binding to an other value?

  • appcmd set site /site.name:"MySite" /bindings.[protocol='http',bindingInformation=':80:mysite.contoso.com'].bindingInformation:

In this example I changed the binding which was listening on all IP addresses to only listen on a specific IP address.

Or what about adding a binding without modifying the existing bindings?

  • appcmd set site /site.name:"MySite" /"+bindings.[protocol='https',bindingInformation='’]

P.S. appcmd is an executable which you can find in the C:\Windows\System32\inetsrv directory.

Tip #2 (SQL): is SQL Full Text Search installed?

One of the prerequisites for installing the FIM Service is that the SQL Full Text Search feature is installed on the SQL instance hosting your database. There are two easy ways to see if this is the case:

  • Using the services.msc MMC: check if there's a service named SQL Server FullText Search ([instance]) where [instance] is the name of your instance
  • Using the following SQL query: IF (1 = FULLTEXTSERVICEPROPERTY('IsFullTextInstalled')) print 'INSTALLED' else print 'NOT INSTALLED'

My source: serverfault.com: How to detect if full text search is installed in SQL Server

Tip #3 (Visual Studio): auto-increment version info:

When you create a class in C# or your preferred language you might want to have some version information on the DLL you build. There’s an easy way to configure your solution/project to auto-increment the build version every time you compile your project.

You can do this by directly editing the AssemblyInfo.cs file under the project's Properties.


// Version information for an assembly consists of the following four values:
//      Major Version
//      Minor Version
//      Build Number
//      Revision
// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyVersion("1.0.*")]
//[assembly: AssemblyFileVersion("")]

In words: make sure to set the AssemblyVersion to something like "1.0.*" or "1.0.0.*" and comment out the AssemblyFileVersion line. As I tested around a bit I noticed the following things:

  • The AssemblyFileVersion does not work with the *
  • If the AssemblyFileVersion is not commented out, the AssemblyVersion is ignored and the AssemblyFileVersion "wins".


Tip #4 (Certificate Authority): RPC traffic check

Often when playing around with certificates I'm hitting gpupdate like hell in order to trigger auto-enrollment. But if you want to make sure your CA is actually reachable from a given endpoint over RPC/DCOM, you can easily check this using the certutil utility. This utility is available out of the box.

  • certutil -ping -config "ca-server-name\ca-name"
  • Example: certutil -ping -config "SRVCA01\Customer Root CA"


Forefront UAG (TMG) Remote SQL Logging Database Size

Published on in

A while ago I did a basic install of UAG and enabled both Firewall and Web Proxy logging to SQL. I configured a trunk and published an application. One month later I checked the size of the SQL database which holds the logging information. It was 1.4 GB… Not really spectacular, but taking into account that during that month I visited the published application maybe 5 times or so, it's a lot…

So just out of curiosity I tried finding out if there were any records being logged which I didn’t care about.

In the database I did a select of the top 1000 rows and just glanced at the rule names:


At first sight I saw a lot of [System] rules. To be honest I really don’t care if my UAG servers are accessing the SQL server configured for logging, or if they contact Active Directory for authentication. So I executed the following query:


This would delete all entries related to the logging configured in the TMG System Policy rules. Here’s the size before and after:





So I only gained about 122 MB. Not really as much as I'd expected. After looking at the SQL data some more I executed this query:


This would delete all events logged because the request was denied by the “default deny” rule in TMG. Now the database lost another 880 MB! Now we’re talking!
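For reference, the two cleanup queries boiled down to something like the sketch below. The table name, column name and rule names are assumptions based on a default TMG SQL logging setup (the screenshots with the exact queries are no longer available), so verify them against your own database before deleting anything:

```powershell
# ASSUMPTIONS: 'FirewallLog' and its 'rule' column come from a default
# TMG remote SQL logging schema; your table and rule names may differ.
$query = @"
DELETE FROM FirewallLog WHERE rule LIKE '[[]System]%';  -- system policy logging
DELETE FROM FirewallLog WHERE rule = 'Default rule';    -- the default deny rule
"@
Invoke-Sqlcmd -ServerInstance 'SQLSRV01' -Database 'TMGLogging' -Query $query
```

Afterwards you'd typically shrink the database to actually reclaim the disk space.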


So it seems to me that a large amount of data is related to the “default deny” rule in TMG. If you feel like you don’t need this information, you could disable the logging for this rule in the TMG console:


However this seems to be impossible:


There are some articles explaining a way around this, but I don't like those. In troubleshooting times you might want to have logging enabled again for this rule, and I feel more like setting a checkbox than importing and exporting the configuration of a TMG server in a production environment. Here's an example of an article explaining how to alter the default rule: ISAServer.org: Tweaking the configuration of Forefront TMG with customized TMG XML configuration files

So what I did, for now, is to create my own rule which is positioned just in front of the default deny rule. It’s almost exactly the same, but it has logging disabled:


Whether or not this is a good idea you have to decide for yourself. It all depends on what you want the logs for. If you want statistics about your published applications this might be fine. If you want statistics about the amount of undesired traffic hitting your UAGs you might want the default behavior. If you still feel you need some but not all of the default deny rule logging, here's an additional idea: configure a deny rule in front of the rule we just created, but configure this one to actually log requests. In this rule you can specify to only log requests concerning HTTP(S) traffic, or only requests hitting the external interface. This would avoid all the chitchat which is happening on the LAN side.


FIM 2010 R2 Password Reset Configuration Troubleshooting

Published on Monday, October 8, 2012

I configured FIM 2010 R2 for Self-Service Password Reset using email OTP. This is documented quite well on TechNet. However, when my test user provided the OTP and entered a new password he was greeted with an error:


In words: An error has occurred. Please try again, and if the problem persists, contact your help desk or system administrator. (Error 3000)

In order to explain when this issue occurred:

  1. I provided the username
  2. I received the OTP in my mailbox
  3. I entered the OTP in the form
  4. I provided a password in the form
  5. I clicked Next


Besides the user being confronted with an error in his browser I also noticed the following events in the event log.

  • Log: Forefront Identity Manager
  • Source: Microsoft.ResourceManagement
  • Level: Warning
  • Text: System.Workflow.ComponentModel.WorkflowTerminatedException: Exception of type 'System.Workflow.ComponentModel.WorkflowTerminatedException' was thrown.
  • Log: Forefront Identity Manager
  • Source: Microsoft.CredentialManagement.ResetPortal
  • Level: Error
  • Text: The error page was displayed to the user.


    Title: Error

    Message: An error has occurred. Please try again, and if the problem persists, contact your help desk or system administrator. (Error 3000)



    Details: System.InvalidProgramException: Error while performing the password reset operation: PWUnrecoverableError

  • Log: System
  • Source: Microsoft.CredentialManagement.ResetPortal
  • Level: Error
  • Text: Microsoft.IdentityManagement.CredentialManagement.Portal: System.Web.HttpUnhandledException: ScriptManager_AsyncPostBackError ---> System.InvalidProgramException: Error while performing the password reset operation: PWUnrecoverableError
  • Log: System
  • Source: Microsoft.CredentialManagement.ResetPortal
  • Level: Error
  • Text: The web portal received a fault error from the FIM service.


    Microsoft.ResourceManagement.WebServices.Faults.ServiceFaultException: DataRequiredFaultReason

  • Log: System
  • Source: Microsoft.ResourceManagement
  • Level: Error
  • Text: mscorlib: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))

    at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)

    at System.Management.ManagementScope.InitializeGuts(Object o)

    at System.Management.ManagementScope.Initialize()

    at System.Management.ManagementObjectSearcher.Initialize()

    at System.Management.ManagementObjectSearcher.Get()

    at Microsoft.ResourceManagement.PasswordReset.ResetPassword.ResetPasswordHelper(String domainName, String userName, String newPasswordText)

Besides the above entries I also stumbled upon this one:


In words: The program svchost.exe, with the assigned process ID 684, could not authenticate locally by using the target name RPCSS/fimsyncdev.contoso.com. The target name used is not valid. A target name should refer to one of the local computer names, for example, the DNS host name.

Try a different target name.

As far as I could tell, this entry did not get logged when a user attempted a reset or when the service was restarted, but it was logged a few times nevertheless.

After seeing this one it finally became clear: I like to use a DNS alias to target the FIM Synchronization server when installing the FIM Service bits. This makes it easier when I have to activate my cold standby FIM Synchronization server. Typically you have two options for creating an "alias":

  1. An A record
  2. A CNAME record

Scenario 1 is very much the preferred scenario when working with web applications. It makes registering your SPNs way more logical, as you just add them to the service account (the application pool identity). However, here we have a special version of this scenario. The password reset relies on WMI/DCOM, and it has to authenticate to those in order to successfully execute a set password. The WMI/DCOM stuff doesn't run as a service account; it's a service which runs under the local system account. Even if I added my alias as an SPN on the computer account of the active FIM Sync server, I would have to modify this SPN when activating my cold standby server.

So long story short: if you feel like using an alias for your FIM Synchronization server is interesting, use a CNAME. Normally I do in this scenario, but for this specific customer it slipped through and cost me some hours to figure out. On the other hand, I learned something about DCOM and its authentication quirks.
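To illustrate the fix (a sketch; the alias, zone and host names below are made up, and Add-DnsServerResourceRecordCName requires the Windows Server 2012 DnsServer module), the alias would be created like this:

```powershell
# 'fimsync' is a hypothetical alias pointing at the active sync server.
# Because it's a CNAME, clients resolve it to the real host name, so
# Kerberos/DCOM authentication targets the computer account correctly.
Add-DnsServerResourceRecordCName -ZoneName 'contoso.com' `
    -Name 'fimsync' -HostNameAlias 'fimsyncdev.contoso.com'
```

Failing over to the cold standby then only requires repointing the CNAME instead of juggling SPNs.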


Temporary Profiles and IIS Application Pool Identities

Published on Monday, September 24, 2012

I'm a bit surprised that I've only come across this now. Recently I discovered that there are some cases where you can end up with your service account using a temporary profile. Typically this is the case where your service account has very limited privileges on a server, like application pool identities which run as a regular AD user, which I consider a best practice. I myself saw this in the context of the application pool identities in a SharePoint 2010 farm and with SQL Server Reporting Services 2008 R2.

The phenomenon is also described at: Todd Carter: Give your Application Pool Accounts A Profile. So this does not apply to all application pool identities! Only those running with "load profile=true".

In the Application event log you can find the following event:

Windows cannot find the local profile and is logging you on with a temporary profile. Changes you make to this profile will be lost when you log off.

How to fix it if you see those nasty “c:\users\TEMP” folders?

  1. Stop the relevant application pools
  2. Stop the IIS Admin Service (in services.msc)
  3. See that the TEMP folders are gone in c:\users
  4. Follow the next steps

How to make sure your accounts get a decent profile?

We will temporarily add the service account to the local administrators group so it can create a profile. In fact all it needs is the "log on locally" privilege. The second command starts a command prompt while loading a profile. This ensures a proper profile is created.

  1. net localgroup administrators CONTOSO\AppPoolAccount /add
  2. runas /u:CONTOSO\AppPoolAccount /profile cmd
  3. net localgroup administrators CONTOSO\AppPoolAccount /del

As a side note: if the TEMP folders are not disappearing, or you are still getting a temporary profile, you can try to properly cleanup the temporary profile:

  1. Stop the application pools
  2. Stop the IIS Admin Service
  3. Right-click Computer, choose Properties, go to the Advanced tab and pick User Profiles. There you can properly delete them.

If you're still having trouble you might need to delete the TEMP folders manually AND clean up the following registry location: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList. Especially look for keys with .bak appended to them.
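A quick way to inspect that location (a minimal sketch, Windows-only) is from PowerShell, flagging any leftover .bak entries:

```powershell
# List profile entries under ProfileList and flag the stale .bak ones
$profileList = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList'
Get-ChildItem $profileList | ForEach-Object {
    $path = (Get-ItemProperty $_.PSPath).ProfileImagePath
    if ($_.PSChildName -like '*.bak') {
        Write-Warning "Stale entry: $($_.PSChildName) -> $path"
    } else {
        Write-Output "$($_.PSChildName) -> $path"
    }
}
```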


UAG: Trunk With Anonymous Authentication Not Working


A few days ago I was setting up a UAG which has a trunk configured with anonymous authentication so that I could publish our FIM Self-Service Password Reset page. I think I tried to outsmart UAG, because this is what I was getting over and over again:


In words: 500 – Internal server error.

I said to myself "how hard can it be?!". After some time I started thinking that removing the default Portal entry which is added to the trunk wasn't a good idea. I didn't need it, as my users will go directly to the SSPR site, but it seems like UAG needs it very badly! Just re-add it, activate the config and everything should start working.


To conclude: even if you don’t need it, better leave it in place.


Win 8 Client: Manage Wireless Networks, Where Art Thou? Follow Up

Published on Wednesday, September 19, 2012

A while ago I posted a workaround to manage the more advanced settings of wireless networks: Win 8 Client (Dev Preview): Manage Wireless Networks, Where Art Thou?

In some of the comments I read that in the final version the explorer.exe shell:: command no longer worked. After verifying on my own fresh install I noticed that this was indeed the case. However, there are other possibilities which make it less bad. You can now access the advanced settings in the following ways:

1. Just before finishing the creation of a new network:

In the network and sharing center click “set up a new…”


Choose “Manually connected to a …”


After entering some basic parameters you can choose “Change connection settings” before clicking close.


2. For an existing network connection:

Ok, my title is a bit misleading: I think you can only edit this one if the SSID is actually accessible, meaning you are in the physical location where the wireless LAN is supposed to be. So I'm not saying authentication should succeed, but the SSID should be "online". In a lot of situations this might be sufficient.

When clicking the network item in the tray a bar will appear to the right with your networks in it. You can right-click it and choose “view connection properties”.


3. By deleting and re-adding the profile:

Yep, this one is not funny, but for now I don't see any other options. I actually found this one on the following blog: Ryan McIntyre: Windows 8 Missing "Manage Wireless Networks"

  • Show the profiles: netsh wlan show profile
  • Delete a profile: netsh wlan delete profile "profile name"
  • Recreate it using the GUI and make sure you now do it properly



Quick Tips: September Edition #1

Published on Monday, September 17, 2012

Ok, I've gone through my mailbox and I've got quite a few neat little tricks I want to share and, most of all, never forget myself. So I'll put them here for future reference.

Tip #1 (Network):

Remember "Network Tracing Awesomeness"? If you only want to capture traffic which involves a specific IP you can start the trace like this:

netsh trace start capture=yes ipv4.address=

This can be very convenient if your server is a domain controller or a file server and communicates with a lot of clients all the time.

Tip #2 (IIS):

In various IIS Kerberos configuration how-tos you are instructed to set useAppPoolCredentials to true. I always hate editing XML files directly, as it's quite easy to make errors. Using the following command you can easily set this parameter from a command prompt:

appcmd set config "Default Web Site" /section:windowsAuthentication /useAppPoolCredentials:true /commit:MACHINE/WEBROOT/APPHOST

"Default Web Site" is the name of the site as it appears in the IIS management console. Remember, you might need something like Default Web Site/vDir if you have to configure this for sublevels of the site.

Tip #3 (Kerberos):

If you enable an account to be trusted for delegation to a given service, you might have to wait some time before the service itself notices this. This is often noticed as: I changed something, it didn't work, and magically the next day it started working. If I'm not mistaken, this might have to do with the Kerberos S4U refresh interval, which is 15 minutes by default. At least that was the value on Windows 2003… See also: KB824905: Event ID 677 and event ID 673 audit failure messages are repeatedly logged to the Security log of domain controllers that are running Windows 2000 and Windows Server 2003

Tip #4 (PowerShell):

From: MSDN: Win32_PingStatus class

When you use PowerShell to perform remote tasks on a server, such as WMI queries, it might be way more efficient to do a quick ping before actually trying to talk WMI to the server. This way you can circumvent those nasty timeouts when the server you are trying to talk to is down.

$server = "server01"
$pingStatus = Get-WmiObject Win32_PingStatus -Filter "Address = '$server'" | Select-Object StatusCode
# StatusCode 0 means the ping succeeded; only then start talking WMI to the server
if ($pingStatus.StatusCode -eq 0) { <# remote WMI queries go here #> }

Tip #5 (Tools):

Every once in a while I need a tool from the Sysinternals utilities set. Mostly I go to Google, type in the name, get to the Microsoft site hosting the utility and click launch. However, it seems you can easily access all of the tools using this WebDAV share: \\live.sysinternals.com. Just enter it in a file explorer or in Start -> Run. The utilities we all know so well are located in the Tools folder. Or if that doesn't work, just use http://live.sysinternals.com/


Thanks to a colleague for this last tip!

-Stay tuned for more!-


SCCM 2007: DCM Check For A Registry Value Only If the Value Exists

Published on Monday, September 10, 2012

This is a bit far from my regular technologies, but today I used the DCM (Desired Configuration Management) feature of SCCM to map the number of clients suffering from a particular issue. More specifically, we are suffering from the issue described in: social.technet.microsoft.com: Print drivers on windows 7 clients missing dependent files..?

So we know that clients which have the “corrupted” printer driver registry settings look like this:

  • Key: HKLM\SYSTEM\CurrentControlSet\Control\Print\Environments\Windows NT x86\Drivers\Version-3\Lexmark Universal
  • Value1: Help File=""
  • Value2: Dependent Files=""

We also know that clients which are healthy look like this:

  • Key: HKLM\SYSTEM\CurrentControlSet\Control\Print\Environments\Windows NT x86\Drivers\Version-3\Lexmark Universal
  • Value1: Help File="UNIDRV.HLP"
  • Value2: Dependent Files="blabla.dll blablo.dll ….dll"

And we should not forget that not all clients have this driver! So the ones which don't have the key/value should not be reported!

SCCM DCM to the rescue! I've actually spent quite some time getting this right. Probably because I'm a first-time DCM'r, but perhaps because some things aren't that obvious either. What I wanted to achieve with DCM, explained in words: get me a report which returns all computers that have a blank value for the "Help File" registry value. I specifically wanted to ignore the ones where that registry value doesn't exist or where it has a value of "UNIDRV.HLP".

So here is how you don’t do it:

Adding a CI (Configuration Item) where you add a registry key to the Objects tab


As far as I've come to understand the DCM configuration, by adding a registry key to the Objects tab you can check for its existence. I typed key in bold because in registry terms a key is like a folder. A registry value, on the other hand, is like a string or binary thing which can hold an actual value.

Here's how you can do it:

Leave the Objects tab empty and go on with the Settings tab.


On the settings tab we can add a specific setting of the type registry. Your definition should look like this:


On the general tab all we need to do is specify the Hive, the Key and the name of the Value we are interested in. The validation tab is the one where the real magic happens:


I will first go to the next screenshot and then I'll come back to this one. In the next screenshot you can see how I added a new validation rule by clicking "New".


What you see here should be pretty obvious: I specified that if the "Help File" registry value equals "UNIDRV.HLP", all is good. More specifically, if this isn't the case it should be expressed with a severity of Error. Now some examples:

  • Value example #1: "UNIDRV.HLP": compliant
  • Value example #2: "UNIDRV": non-compliant
  • Value example #3: "": non-compliant
  • Now what if the registry value doesn't exist to begin with?!

Well that's where the previous screenshot comes into play: by default "Report a non-compliance event when this instance count fails" is checked. I specifically unchecked it. It is my understanding that this option causes the CI to be non-compliant if the registry value (the instance) can't be found. In my case, if the value can't be found it means the driver isn't installed, and thus the client is not suffering from the issue.

So in short, using the configuration shown above I have established that all clients which have a registry value "Help File" under the given key should have a value of "UNIDRV.HLP". If they've got an empty value, they'll be included in the report. The ones which don't have this driver, and thus don't have this registry value, will be excluded from the report. This allows us to do some quick and dirty fixing of the clients which are already suffering from this issue, and at the same time we can try distributing a printer feature hotfix package from Microsoft. Once that one is out on the clients we can use the reporting to find out if new cases are occurring.
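Outside of DCM, the same compliance logic can be sketched in a few lines of PowerShell (the key and value names are the ones from this post; run it locally on a client to mirror what the CI would report):

```powershell
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Print\Environments\Windows NT x86\Drivers\Version-3\Lexmark Universal'

if (-not (Test-Path $key)) {
    # Driver not installed: ignore, just like unchecking the
    # "report non-compliance when this instance count fails" option
    'NotApplicable'
}
elseif ((Get-ItemProperty $key).'Help File' -eq 'UNIDRV.HLP') {
    'Compliant'
}
else {
    'NonCompliant'   # empty or unexpected value -> suffering from the issue
}
```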

It was a post by KevinM (MSFT) which made all of the above fall together: social.technet.microsoft.com: Check if Registry Value Exists?


SCCM 2007: DCM Development Tip


The actual reason why I'm toying around with DCM (Desired Configuration Management) will be explained in my next post. But here's a tip I've found to be quite practical when trying to get your CI (Configuration Item) configuration right.

I quickly found out that whenever you change settings in the CI, you have to initiate the Machine Policy Retrieval & Evaluation Cycle action so that the Configuration Manager client has the latest version of your Baseline/CI.


In the Configuration Manager client you've got a button called Evaluate on the last tab which you can use to actually have the CI evaluated and give you a report displaying the current compliance state.


In the screenshot you see “Unknown:Scopel….” but that’s just a GUI refresh quirk; after a few minutes it’s displayed properly. Now that part is easy. On the other hand, I was switching a regkey by hand on the client in order to trigger the various possible outcomes of my baseline, and after a while I figured out there had to be some caching going on behind the scenes…

Using Google I found an explanation on the following forum: myitforum.com: [mssms] Configuring DCM to detect (Default) Value name [mdfdr5]

And then I started using the following workaround in order to avoid the 15-minute interval:


By appending a number to the CI name I was triggering a version increase. This in turn causes the cached result to become invalid and ensures my evaluation always gives the most up-to-date answer. It’s a bit dirty and results in a high version number, but then again, this is a test environment, and it’s damn easy like this.
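Side note: the Machine Policy Retrieval & Evaluation Cycle can also be kicked off from PowerShell rather than the Configuration Manager applet. A minimal sketch, assuming you run it locally on a client where the CCM WMI namespace is reachable ({…0021} is the well-known schedule ID for machine policy retrieval & evaluation):

```powershell
# Trigger the "Machine Policy Retrieval & Evaluation Cycle" on the local
# Configuration Manager client through WMI.
$scheduleId = '{00000000-0000-0000-0000-000000000021}'
Invoke-WmiMethod -Namespace 'root\ccm' -Class 'SMS_Client' `
                 -Name 'TriggerSchedule' -ArgumentList $scheduleId
```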


DebugView 100% CPU In a Windows 2008 VM

Published on in ,

A while ago I got a tip from a colleague to use the DebugView utility from Sysinternals (Microsoft) to debug code. Once in a while I write a simple rules extension for Forefront Identity Manager, or even an attribute store for ADFS. As simple as they may be, sometimes things don’t go as I wish…

You can use DebugView by adding the following lines to your code: at the top of your class make sure you have “using System.Diagnostics;”, and everywhere you want diagnostic output you put “Debug.WriteLine("your string here");”. It might be obvious, but you have to make sure you compile your code in Debug mode!

And perhaps a small gotcha here: make sure the DEBUG constant is defined. It’s on by default though.
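Put together, a minimal example (the class name and message are just placeholders):

```csharp
using System.Diagnostics;

namespace DebugDemo
{
    public class Worker
    {
        public void DoWork()
        {
            // Only emitted when the assembly is built in Debug mode, i.e.
            // when the DEBUG constant is defined. Shows up in DebugView.
            Debug.WriteLine("DoWork started");
        }
    }
}
```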


I’ve used this approach a few times now, but yesterday things went bad. After starting DebugView my server, a VM running on my laptop, became sluggish. I could still reproduce my issue, but nothing was being captured. Odd. After checking Task Manager I found that the DebugView.exe process was using 100% CPU.

Off to Google! I quickly found this topic: forum.sysinternals.com: DbgView.exe 100% CPU

Finding DebugView version 4.76 is not that easy though; there are a zillion sites just linking through to the Microsoft site, which gives you version 4.79 every time. Finally I found this site which has the actual 4.76 version: http://www.myfiledown.com/download/435608/debugview-435608-3.html But the link seems down now… Once I used that version my CPU usage was normal and my debug output came out just fine.


Windows Azure: Add Your Own Management Certificate

Published on Monday, September 3, 2012 in

Recently I figured out that I can try out Azure, as that comes as one of the benefits of having an MSDN account: I get 375 hours of free compute time per month! Just for the fun of it I want to host a small VM which acts as a TeamSpeak server every now and then. I guess that’s not really what the Azure subscription in the MSDN package is meant for, but hey, I’m experimenting and getting to know the possibilities of Azure in the meanwhile! Guess that’s a win-win, right?

Either way, because I only have 375 hours, I can’t have my VM deployed 24/7. I wrote some simple PowerShell scripts which basically remove the VM, leaving the VHD intact, and recreate it whenever I want. That might be another blog post if I find some time. But now I want my colleagues to be able to power it up whenever I’m not around. The following options were not OK:

  • Be on duty 24/7 with an internet connection at hand
  • Hand out my live-id to everyone

So here come the, albeit limited, delegation capabilities of the Windows Azure management infrastructure: it seems you need your Live ID to log in via the web interface, but for the PowerShell cmdlets you can actually have up to 10 management certificates! So here’s how to start toying around with that part of Azure.

Remark: I only used the Get-AzurePublishSettingsFile cmdlet as explained in Windows Azure Cmdlet Guidance for my initial Azure PowerShell configuration on my home PC. However, it seems that if you run the command again it will just generate another “Windows Azure <very long name>–date-credentials” management certificate. So in the end you have no clue who you handed out which certificate to.

So here we go:

1. Generate a new certificate

Using Visual Studio’s makecert utility I created my own certificate, for a detailed howto: How to Create a Certificate for a Role

The command I used: makecert -sky exchange -r -n "CN=[CNF]Invisibal" -pe -a sha1 -len 2048 -ss My "o:\SkyDrive\Documenten\Personal\Azure\Invisibal.cer"

2. Upload the .cer file in the Windows Azure management portal


3. Export your certificate from your local store and store it somewhere safe

The makecert command created a .cer file which is good for the upload, but you have to make sure that whatever computer you want to run your Azure PowerShell cmdlets from has the certificate with the private key available. In my case I created the certificate on my own PC, and since I want my colleague to be able to connect to the Azure management API using PowerShell, I have to export the certificate (including the private key) and hand it over to him.

To export the certificate:

Start –> Run –> MMC –> Add/Remove the certificate snap-in, choose user



4. Download and configure the Azure PowerShell cmdlets

You can download the cmdlets from here: Downloads for managing Azure

After starting the shell and trying out a simple command you will be greeted with an error:


In words: Get-AzureVM : Call Set-AzureSubscription and Select-AzureSubscription first.

After some trial and error I found the following in one of the help sections of a cmdlet.

5. Retrieve your Azure subscription ID

You can get it either from the account section (where you get to see the usage & billing information) or just copy it from the Management Certificates section where you just uploaded a certificate:


Just copy-paste it into a temporary notepad file.

6. Retrieve your certificate thumbprint

From a PowerShell prompt execute Get-Item cert:\CurrentUser\My\*


Also copy-paste it into the temporary notepad file.

7. Start up the Azure PowerShell shell and start the magic

You can now easily copy the subscription ID ($subID) and the thumbprint ($thumbprint) from the temporary notepad file into the required variables.

$subID = "af2f6ce8-demo-demo-demo-dummydummyd3"
$thumbprint = "01675217CF4434C905CF0A34BBB75752471869C6"
$myCert = Get-Item cert:\CurrentUser\My\$thumbprint
Set-AzureSubscription -SubscriptionName "CNF_TS" -SubscriptionId $subID -Certificate $myCert

This command’s effect should also persist between sessions: if you restart the shell, the subscription will still be available and you can go ahead and start executing cmdlets right away.
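A quick smoke test could then look like this (assuming the "CNF_TS" subscription name set above):

```powershell
# Select the subscription for the current session and try a cmdlet.
Select-AzureSubscription -SubscriptionName "CNF_TS"
Get-AzureVM
```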

8. You’re good to go!


Well, just when I was about to wrap this up I found this great article: it covers most of my stuff and way more. Definitely worth reading: Automating Windows Azure Virtual Machines with PowerShell


Solaris OpenWindows, Xming and Windows 7 Stealth Mode Firewall

Published on Wednesday, August 29, 2012 in

First off, this is a very, very specific issue which I think not many people will run into. But as I found some forum posts here and there which look like the same, I thought I’d post it nevertheless.

A while ago I was troubleshooting a situation where we seemed to experience some kind of delay in an application startup. More specifically, whenever we used Xming to connect to an OpenWindows desktop session on a Solaris server, we saw a delay of about 3 minutes before actually seeing the desktop. This delay only occurred when connecting to this specific Solaris server; other servers did not pose this problem.

Very soon we found out that if we set all Windows Firewall profiles to "off" on the Windows 7 client, we didn't see the issue. Now one could think we needed to open some specific ports. We thought we had them all covered, but still no luck. In the end we left the firewall on, but built rules for both in- and outbound traffic which were supposed to allow everything. The so-called any-any rules ;) And we were still seeing the issue. Now what is that?!

So in comes the tracing...

Below is an excerpt from a trace on the client to the server when the Windows Firewall is ON:


In this trace I’ve filtered out all traffic other than the one to port 2000. So in the background there’s more (client-server) traffic, such as regular X11 traffic. What we are seeing here is that the server (.10) is trying to reach the client (.227) on port 2000. The client does not respond to these queries. After about 3 minutes the server continues X11 traffic and the user gets his desktop. Now this is quite odd… It's the client which is contacting the server. Why is the server initiating traffic to the client?!

If we compare this with the trace to the server (.10) when the Windows Firewall is OFF:


Here we clearly see that the same traffic is sent by the server (.10), but the client (.227) immediately answers with an RST, ACK response, basically telling the server that there’s nothing there. After this entry the server/client communication continues and the user gets his desktop more or less instantly.

In the Solaris configuration there must be an option which makes the server poll the client on port 2000 when launching an OpenWindows desktop from that client. The problem lies in the fact that the server waits for an answer, or times out after about 3 minutes, before it decides communication can go on. To be precise: on the client there’s nothing listening on port 2000; in a situation without a firewall, the client would answer with an RST stating that nothing is there and that communication can continue. The Windows firewall however works by default in “stealth mode” (http://technet.microsoft.com/en-us/library/dd448557(WS.10).aspx). As such the client doesn’t send the RST answer and the server waits for about 3 minutes before continuing and showing the desktop.

[First thought]: The Windows 7 firewall stealth mode is causing the server to keep retrying for a number of times
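Side note: if you ever do need the client to answer with an RST, stealth mode can, as far as I know, only be turned off through the policy-based firewall settings. A sketch, assuming the DisableStealthMode policy registry value (set per firewall profile) applies to your environment, so verify this against the TechNet documentation before using it:

```powershell
# Assumption: disable Windows Firewall stealth mode for the domain profile
# via the policy registry value; a policy refresh/reboot is needed afterwards.
$path = 'HKLM:\SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile'
New-Item -Path $path -Force | Out-Null
Set-ItemProperty -Path $path -Name 'DisableStealthMode' -Value 1 -Type DWord
```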

Now if we compare this with the trace where we connect to another server:


Here the server does NOT contact the client on port 2000 (or another port) and the desktop starts promptly.

[Conclusion 2]: That kind of traffic shouldn’t hit the client in the first place!

Luckily, a colleague who is more experienced in Solaris than me had a golden hunch. When he connected using Xming he started toying around with the client settings. In one of his attempts he tried connecting with another font selected. And voilà! So it seems we were connecting to a server while requesting a specific font which the server doesn't have. As such it tries to search for this font and even contacts the client for it. I've no idea what places it searches, but this was definitely the culprit!

[Final conclusion]: If you are seeing this behavior, check your fonts!

Just for completeness: here’s the exact same issue also discussed:


Windows OS About To Stop Support For RSA Keys Under 1024 Bits

Published on Tuesday, August 7, 2012 in , , , , , , ,

One of my colleagues was having trouble accessing an HTTPS site. The site is secured with a certificate coming from an Active Directory Certificate Authority. Now I know of a bug where, if you have a pinned website on your taskbar and from that browser instance you open an HTTPS site with an untrusted certificate, there’s no “continue anyway” button…

Now this wasn’t the case today. He had the “continue anyway” option, which you typically click to load the site and check the certificate. However, after clicking, it didn’t go through; it just remained at the same page. We installed the root CA manually in the Trusted Root Certification Authorities store, but still no improvement. When verifying the root certificate in the MMC we also saw it mentioned that the digital signature was invalid… odd!

Using that as a query for Google we quickly came across this:

If you read those first two carefully you’ll see the update will be released as a critical non-security update on August 14th for Windows XP, Windows Server 2003, Windows Server 2003 R2, Windows Vista, Windows Server 2008, Windows 7, and Windows Server 2008 R2.

An example of a bad certificate:


Now how come he was having this issue already?! Ahah, here comes the clue: he was using Windows 8! Now I am too, and I’m not having that problem with that specific site, but here’s the difference:

  • Windows 8 with issue: Windows 8 Release Preview: build 8400
  • Windows 8 without issue: Windows 8 Consumer Preview: build 8250

So it seems they’ve included this update somewhere in the build process of Windows 8.

Having certificates with an RSA key < 1024 bits is probably not the case for most of us, but be sure to double-check those certificates and their (intermediate) roots! Especially for those customer-facing sites where you can’t control which updates hit the clients, and where visitors thus might potentially be denied access to your sites.
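To hunt for such certificates, a quick PowerShell sketch like this may help. It only checks the local machine’s Personal store; point it at other stores (e.g. the root stores) as needed:

```powershell
# List certificates in the local machine's Personal store whose public key
# is shorter than 1024 bits.
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.PublicKey.Key.KeySize -lt 1024 } |
    Format-Table Subject, NotAfter, @{Label='KeySize';Expression={$_.PublicKey.Key.KeySize}}
```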


Dynamics Ax 2012: Error Installing Enterprise Portal

Published on Thursday, June 21, 2012 in ,

I was assisting a colleague who was installing the Ax 2012 Enterprise Portal on a SharePoint farm. The farm consisted of 2 servers hosting the SharePoint web applications (the actual sites) and 2 servers hosting the Central Administration and application services roles. We wanted to start by installing the Enterprise Portal bits on both web front-end servers without actually choosing the “create site” option in the installer. This would just prep those servers, and we’d then finalize by running the “create site” option on the Central Administration server.

Here’s the screenshot where we selected the Enterprise Portal (and some other prereqs for the Portal):


A few steps further we were supposed to get a dropdown with an overview of all sites hosted by SharePoint. Although SharePoint was installed and we had multiple sites created, we were greeted with an error stating “Microsoft SharePoint 2010 is not installed or running. Please run the prerequisite utility for more information. Operation is not valid due to the current state of the object.”


Going back and clicking next again doesn’t really solve the problem. The installer log file (located in \Program Files\Microsoft Dynamics AX\60\Setup Logs\[Date]) showed us that the installer seemed to query the local IIS configuration just fine. As far as I could tell no actual error was given, but processing stopped shortly after it tried to get information regarding the first actual SharePoint site.



After staring a bit at the log, my eye fell on the “GetFirstHostHeaderOfWebSite” method. It seemed to have run fine for the default website, but it wasn’t executed for the first actual SharePoint site. And that rang a bell, as we had customized this a bit: we had in fact 3 host headers for each SharePoint site. One for the virtual name, one for the virtual name but dedicated to the node, and one which was just blank with the IP. I know the last one more or less makes the others unnecessary, but we added it later on when we figured out our hardware load balancer’s status probing wasn’t playing nice with the host headers.


Long story short: after modifying ALL sites found in the IIS configuration so that they had one or no host headers, the setup was able to enumerate the potential sites to configure the AX Enterprise Portal for just fine. A bit weird, and it seems like a bug in the installer to me…
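To quickly spot sites with more than one host header, a sketch using the WebAdministration module that ships with IIS 7.x:

```powershell
# Print each IIS site with the host headers of its HTTP bindings;
# bindingInformation has the form "IP:port:hostheader".
Import-Module WebAdministration

Get-ChildItem IIS:\Sites | ForEach-Object {
    $headers = $_.Bindings.Collection |
        Where-Object { $_.protocol -eq 'http' } |
        ForEach-Object { ($_.bindingInformation -split ':')[2] }
    '{0}: {1}' -f $_.Name, ($headers -join ', ')
}
```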