

Domain controller: LDAP server signing requirements and Simple Binds

Published on Thursday, September 22, 2016

Lately I’ve been wondering about the impact of the following setting: Domain controller: LDAP server signing requirements. The documentation (TechNet #1 and TechNet #2) spells it out pretty well: This policy setting determines whether the Lightweight Directory Access Protocol (LDAP) server requires LDAP clients to negotiate data signing. You can set it to either None or Required. None is the default and allows signing if the client asks for it.

Sometimes I read too fast and jump to conclusions. Shame on me. My wrong conclusion: configuring this setting to Required forces all connections to use LDAPS (TCP 636). Nope. It says data signing! Signing works perfectly well over either LDAP (TCP 389) or LDAPS (TCP 636).

From AskDS: Understanding LDAP Security Processing I learned various things about simple binds. Simple binds send your username and password in clear text. Needless to say, in combination with plain LDAP you’re at risk. On the other hand, if the communication is using LDAPS, sending passwords in clear text could be acceptable.

Now the documentation I referenced earlier is a bit conflicting on this topic:

  • This setting does not have any impact on LDAP simple bind or LDAP simple bind through SSL.
  • If signing is required, then LDAP simple bind and LDAP simple bind through SSL requests are rejected.
  • Require signature. The LDAP data-signing option must be negotiated unless Transport Layer Security/Secure Sockets Layer (TLS/SSL) is in use.

Now it might be just me, but I would phrase that differently. Both articles suffer from the same wording. So, as with any other uncertainty, we just test it. Once you’ve seen and experienced it, you’ll never forget!

This is part of the Default Domain Controller Policy on Windows Server 2012 R2:

image

I changed it to:

image

Now using LDP.exe we can do some tests:

Connecting over LDAPS:

image

Performing a simple bind:

image

And the result:

image

Now if we try to connect over LDAP:

image

Bind like before. But now we get:

image

In words: Error <8>: ldap_simple_bind_s() failed: Strong Authentication Required
Server error: 00002028: LdapErr: DSID-0C090202, comment: The server requires binds to turn on integrity checking if SSL\TLS are not already active on the connection, data 0, v2580
Error 0x2028 A more secure authentication method is required for this server.
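You can reproduce both results without LDP.exe. Below is a minimal PowerShell sketch using System.DirectoryServices.Protocols; the server name is a placeholder for your own DC, and the LDAPS bind assumes the DC certificate is trusted by the client. With signing set to Required, the first bind should fail with the same 0x2028 error while the second keeps working.

Add-Type -AssemblyName System.DirectoryServices.Protocols
$cred = (Get-Credential).GetNetworkCredential()

# Simple bind over plain LDAP (TCP 389): rejected when signing is required
$ldap = New-Object System.DirectoryServices.Protocols.LdapConnection("srvdc01.setspn.local:389")
$ldap.AuthType = [System.DirectoryServices.Protocols.AuthType]::Basic
$ldap.SessionOptions.ProtocolVersion = 3
try { $ldap.Bind($cred); "LDAP simple bind succeeded" } catch { "LDAP simple bind failed: $($_.Exception.Message)" }

# Simple bind over LDAPS (TCP 636): still accepted
$ldaps = New-Object System.DirectoryServices.Protocols.LdapConnection("srvdc01.setspn.local:636")
$ldaps.AuthType = [System.DirectoryServices.Protocols.AuthType]::Basic
$ldaps.SessionOptions.SecureSocketLayer = $true
try { $ldaps.Bind($cred); "LDAPS simple bind succeeded" } catch { "LDAPS simple bind failed: $($_.Exception.Message)" }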

Conclusion:

All of this is definitely not new, but writing about it helps me never forget it. Setting Domain controller: LDAP server signing requirements to Required will probably require some planning and testing, but it doesn’t mean you can’t use simple binds, as long as you can configure your application to use LDAPS. Your domain controller should log a warning event every once in a while when simple binds or unsigned LDAP traffic are seen. Here’s some more info on this event: Event ID 2887 — LDAP signing.
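A quick way to check whether your DCs are actually seeing such traffic is to query the Directory Service log for that event:

Get-WinEvent -FilterHashtable @{LogName='Directory Service'; Id=2887} -MaxEvents 5 | Select-Object TimeCreated, Message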

If you want to read more on LDAP signing, please check KB935834: How to enable LDAP signing in Windows Server 2008


Protected Users Group

Published on Saturday, February 27, 2016

Earlier this week I’ve been talking to a customer about the “Protected Users” group. You might have seen it appear when introducing the first 2012 R2 domain controller. Here’s a good explanation of its purpose:

Protected Users is a new global security group to which you can add new or existing users. Windows 8.1 devices and Windows Server 2012 R2 hosts have special behavior with members of this group to provide better protection against credential theft. For a member of the group, a Windows 8.1 device or a Windows Server 2012 R2 host does not cache credentials that are not supported for Protected Users. Members of this group have no additional protection if they are logged on to a device that runs a version of Windows earlier than Windows 8.1. Source: TechNet: How to Configure Protected Accounts

The above is actually a bit misleading: the functionality was backported to Windows 2008 R2/Windows 2012 in hotfix KB2871997. See blogs.technet.com: An Overview of KB2871997 for an explanation.

This group might be part of your organization’s strategy to reduce the attack surface for pass-the-hash. A great white paper on this can be found here: Mitigating Pass-the-Hash (PtH) Attacks and Other Credential Theft, versions 1 and 2

One of the things the Protected Users group ensures is that no NTLM hashes are available to be used or stolen. Now I wanted to see this for myself. There are various tools out there capable of listing the various secrets. I tried Windows Credential Editor (WCE) but that one didn’t work on (my) Windows 2012 R2, so I used Mimikatz. My setup: a 2012 R2 domain controller and a 2012 R2 member server. I’ve got three domain admins: one that has the remote desktop session open to the member server, and two that have a PowerShell running through runas. Of the latter two, one is a member of the Protected Users group:

Run as different user: SETSPN\john

image

Run as different user: SETSPN\thomas

image

As you can see John is an old-school Domain Admin, whereas Thomas has read the Mitigating PtH whitepaper and is a proud member of the Protected Users group. This is the PowerShell one-liner I used to dump the groups I care about: WHOAMI /GROUPS /FO CSV | ConvertFrom-Csv | where {$_."group name" -like "Setspn\*"}

Here you can see the Protected Users admin has no NTLM available:

image

Where the regular admin has NTLM available:

image

Here’s the difference from an attacker point of view:

Start Mimikatz –> privilege::debug –> sekurlsa::logonpasswords. And here are the goodies:

John:

Authentication Id : 0 ; 3529276 (00000000:0035da3c)
Session           : Interactive from 0
User Name         : john
Domain            : SETSPN
Logon Server      : SRVDC01
Logon Time        : 2/24/2016 6:59:54 PM
SID               : S-1-5-21-4274776166-1111691548-620639307-5603
        msv :
         [00000003] Primary
         * Username : john
         * Domain   : SETSPN
         * NTLM     : 59884edfb057d0fec8cb7e0d571dc200
         * SHA1     : 7e655db2b3a7e88fb0c50ca56416ae655469f09e
         [00010000] CredentialKeys
         * NTLM     : 59884edfb057d0fec8cb7e0d571dc200
         * SHA1     : 7e655db2b3a7e88fb0c50ca56416ae655469f09e
        tspkg :
        wdigest :
         * Username : john
         * Domain   : SETSPN
         * Password : (null)
        kerberos :
         * Username : john
         * Domain   : SETSPN.LOCAL
         * Password : (null)
        ssp :
        credman :

Thomas:

Authentication Id : 0 ; 3493146 (00000000:00354d1a)
Session           : Interactive from 0
User Name         : thomas
Domain            : SETSPN
Logon Server      : SRVDC01
Logon Time        : 2/24/2016 6:59:36 PM
SID               : S-1-5-21-4274776166-1111691548-620639307-5602
        msv :
         [00010000] CredentialKeys
         * RootKey  : db1c2347608db0c4e2d89bbd6c328bf6f42671b7d88653cd4cc9af2713e958f0
         * DPAPI    : 63adfe49948fca81c885933b3aa23eba
        tspkg :
        wdigest :
         * Username : thomas
         * Domain   : SETSPN
         * Password : (null)
        kerberos :
         * Username : thomas
         * Domain   : SETSPN.LOCAL
         * Password : (null)
        ssp :
        credman :

As you can see, the admin that’s a member of the Protected Users group does NOT have his NTLM hashes dumped. Wooptiedoo! Now think and test before you start adding the Domain Admins group to the Protected Users group! By no means should you just go and do that! Here’s some good information on how to start with the Protected Users group and some additional caveats: How to Configure Protected Accounts
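If you want to start carefully, adding and verifying individual accounts is a one-liner with the AD module (the account name below is from my lab):

# Add one admin at a time, then verify the membership
Add-ADGroupMember -Identity "Protected Users" -Members "thomas"
Get-ADGroupMember -Identity "Protected Users" | Select-Object SamAccountName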

Here’s one from my side: after adding my admin user to the Protected Users group, he was no longer able to RDP to a 2012 R2 member server:

image 

In words: A user account restriction (for example, a time-of-day restriction) is preventing you from logging on. For assistance, contact your system administrator or technical support.

Remote desktop to a Windows 2008 R2 server worked fine with that account. It seems that for my Protected Users admin to be able to log on to a Windows 2012 R2 server, I had to actually use mstsc.exe /restrictedadmin and enable Restricted Admin mode on the member server:

image

You can find that value below HKLM\SYSTEM\CurrentControlSet\Control\Lsa
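If you’d rather script that than click through regedit, here’s a small sketch; I’m assuming the value shown in the screenshot is DisableRestrictedAdmin, where 0 allows Restricted Admin connections:

# 0 = Restricted Admin mode connections allowed
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' -Name DisableRestrictedAdmin -Value 0 -PropertyType DWord -Force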

If you want to know more about the Protected Users group and the Restricted Admin feature, read up on both of them here: TechNet: Credentials Protection and Management or digital-forensics.sans.org: Protecting Privileged Domain Accounts: Restricted Admin and Protected Users

Some additional reading on Restricted Admin mode: Restricted Admin mode for RDP in Windows 8.1 / 2012 R2


ADFS Alternate Login ID: Some or all identity references could not be translated

Published on Wednesday, August 5, 2015

First day back at work, I already had the chance to get my hands dirty with an ADFS issue at a customer. The customer had an INTERNAL.contoso.com domain and an EXTERNAL.contoso.com domain, connected with a two-way forest trust. The INTERNAL domain also had an ADFS farm. Now they wanted users from both INTERNAL and EXTERNAL to be authenticated by that ADFS. Technically this is possible through the AD trust. Nothing special there; the catch was that they wanted both INTERNAL and EXTERNAL users to authenticate using @contoso.com usernames. Active Directory has no problem authenticating users with a UPN different from that of the domain. You can even share the UPN suffix namespace in more than one domain, but… you cannot route shared suffixes across the forest trust! In our case that would mean the ADFS instance would be able to authenticate user.internal@contoso.com but not user.external@contoso.com, as there would be no way to locate that user in the other domain.

Alternate Login ID to the rescue! Alternate Login ID is an ADFS feature that allows you to specify an additional attribute to be used for user lookups. Most commonly “mail” is used for this. This allows people to leave the UPN, commonly a non-public domain (e.g. contoso.local), untouched, although I mostly advise changing the UPN to something public (e.g. contoso.com). The cool thing about Alternate Login ID is that you can specify one or more LookupForests! In our case the command looked like:

Set-AdfsClaimsProviderTrust -TargetIdentifier "AD AUTHORITY" -AlternateLoginID mail -LookupForests internal.contoso.com,external.contoso.com

Some more information about Alternate Login ID: TechNet: Configuring Alternate Login ID

Remark: when the alternate login ID feature is enabled, AD FS will try to authenticate the end user with the alternate login ID first and then fall back to the UPN if it cannot find an account that can be identified by the alternate login ID. You should make sure there are no clashes between the alternate login ID and the UPN if you still want to support UPN login. For example, setting one user’s mail attribute to another user’s UPN will block the other user from signing in with his UPN.

Now where’s the issue? We could authenticate INTERNAL users just fine, but EXTERNAL users were getting an error:

3

In words:

The Federation Service failed to issue a token as a result of an error during processing of the WS-Trust request.

Activity ID: 00000000-0000-0000-5e95-0080000000f1

Request type: http://schemas.microsoft.com/idfx/requesttype/issue

Additional Data
Exception details:
System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.
   at System.Security.Principal.SecurityIdentifier.Translate(IdentityReferenceCollection sourceSids, Type targetType, Boolean forceSuccess)
   at System.Security.Principal.SecurityIdentifier.Translate(Type targetType)
   at System.Security.Principal.WindowsIdentity.GetName()
   at System.Security.Principal.WindowsIdentity.get_Name()
   at Microsoft.IdentityModel.Claims.WindowsClaimsIdentity.InitializeName()
   at Microsoft.IdentityModel.Claims.WindowsClaimsIdentity.get_Claims()
   at Microsoft.IdentityServer.Service.Tokens.MSISWindowsUserNameSecurityTokenHandler.AddClaimsInWindowsIdentity(UserNameSecurityToken usernameToken, WindowsClaimsIdentity windowsIdentity, DateTime PasswordMustChange)
   at Microsoft.IdentityServer.Service.Tokens.MSISWindowsUserNameSecurityTokenHandler.ValidateTokenInternal(SecurityToken token)
   at Microsoft.IdentityServer.Service.Tokens.MSISWindowsUserNameSecurityTokenHandler.ValidateToken(SecurityToken token)
   at Microsoft.IdentityModel.Tokens.SecurityTokenHandlerCollection.ValidateToken(SecurityToken token)
   at Microsoft.IdentityServer.Web.WSTrust.SecurityTokenServiceManager.GetEffectivePrincipal(SecurityTokenElement securityTokenElement, SecurityTokenHandlerCollection securityTokenHandlerCollection)
   at Microsoft.IdentityServer.Web.WSTrust.SecurityTokenServiceManager.Issue(RequestSecurityToken request, IList`1& identityClaimSet)

Now the weird part: just before the error I was seeing a successful login for that particular user:

2

I decided to start my search with this part: System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated. That led me to all kinds of blogs/posts where people were having issues with typos in scripts or with users that didn’t exist in AD. But that wasn’t the case for me; after all, I just had a successful authentication! Using the first line of the stack trace: at System.Security.Principal.SecurityIdentifier.Translate(IdentityReferenceCollection sourceSids, Type targetType, Boolean forceSuccess) I took an educated guess at what the ADFS service was trying to do. And I was able to do the same using PowerShell:

$objSID = New-Object System.Security.Principal.SecurityIdentifier ("S-1-5-21-3655502699-1342072961-xxxxxxxxxx-1136") 
$objUser = $objSID.Translate( [System.Security.Principal.NTAccount]) 
$objUser.Value

And yes, I got the same error:

psError

At first sight this gave me nothing. But this was actually quite powerful: I was now able to reproduce the issue as many times as I liked, no need to go through the logon pages, and most importantly: I could now take this PowerShell code and execute it on other servers! This way I could determine whether it was OS related, AD related, trust related,… I found out the following:

  • Command fails on ADFS-SRV-01
  • Command fails on ADFS-SRV-02
  • Command fails on WEB-SRV-01
  • Command runs on HyperV-SRV-01
  • Command runs on DC-INTERNAL-01

Now what did this teach me?

  • The command is fine and should work
  • The command runs fine on other 2012 R2 servers
  • The command runs fine on a member server (the Hyper-V server)

As I was getting nowhere with this, I decided to take a network trace on the ADFS server while executing the PowerShell command. I expected to see one of the typical SID translation methods (TechNet: How SIDs and Account Names Can Be Mapped in Windows) appear. However, absolutely nothing appeared! No outgoing traffic related to this code. Now wtf? I had found this article: ASKDS: Troubleshooting SID translation failures from the obvious to the not so obvious, but that wouldn’t help me if there was no traffic to begin with.

Suddenly an idea popped up in my head. What if the network traffic wasn’t showing any SID resolving because the machine looked locally? And why would the machine look locally? Perhaps because the domain portion of the machine SID is the same as that of the user we were looking up? But they’re in different domains… However, there’s also the machine’s local SID! The one that is typically never encountered or seen! Here’s some info on it: Mark Russinovich: The Machine SID Duplication Myth (and Why Sysprep Matters)

I didn’t take the time to find out whether I could retrieve its value with PowerShell, so I just took PsGetsid.exe from Sysinternals. This is what the command showed me for the ADFS server:

2015-08-03_14-43-07
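For what it’s worth, you can approximate what PsGetsid shows straight from PowerShell: take any local account’s SID and chop off the trailing RID. A rough sketch:

# The machine SID is a local account's SID minus its last (RID) component
$localAccount = Get-WmiObject Win32_UserAccount -Filter "LocalAccount=True" | Select-Object -First 1
$localAccount.SID -replace '-\d+$', ''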

Bazinga! It seemed the local SID of all the machines that were failing the command was the same as the domain portion of the EXTERNAL domain SIDs! Now I asked the customer if he could deploy a new test server so I could reproduce the issue one more time. Indeed the issue appeared again. The local SID was again identical. Running sysprep on the server changed the local SID, and after joining the server to the domain again we were able to successfully execute the PowerShell commands!

Resolution:

The customer had been copying the same VHD over and over again without actually running sysprep on it… As the EXTERNAL domain was also created on a VM from that image, the domain controller promotion process chose that local SID as the base for the EXTERNAL domain SID. My customer chose to resolve this issue by destroying the EXTERNAL domain and setting it up again. Obviously this does not solve the fact that several servers were not sysprepped, and in the future this might cause other issues…

Sysprep location:

image

For a template you can run sysprep with generalize and the shutdown option:

image

Each copy of your template will then run through the sysprep process at first boot.

P.S. Don’t run sysprep on a machine with software/services installed. It might have a nasty outcome…


Federating ADFS with the Belnet Federation

Published on Monday, June 8, 2015

logo_federation

The Belnet federation is a federation that a lot of Belgian educational (or education-related) institutions have joined. I’m currently involved in a POC at one of these institutions. Here’s the situation we started from: they have an Active Directory domain for their employees, and they are part of the Belnet federation through a Shibboleth server which is configured as an IDP against their AD. Basically this means that for certain services hosted on the Belnet federation, they can choose to log in using their AD credentials through the Shibboleth server.

Now they want to host a service themselves: a SharePoint farm, to which they would like to give users outside of their organization access. These users will have an account at one of the institutions federated with Belnet. After some research it became clear to us that we would need an ADFS instance to act as a protocol bridge between SAML and WS-FED, as SharePoint does not natively speak SAML. Now the next question: how do we get Belnet to trust our ADFS instance, and how do we get our ADFS instance to trust the IDPs that are part of the Belnet federation?

These are two different problems and both need to be addressed in order for authentication to succeed. We need to find out how we can get Belnet to trust our ADFS instance. But first we zoom in on the part where we try to trust the IDPs in the Belnet federation. This federation has over 20 IDPs in it and its metadata is available at the following URL: Metadata XML file - Official Belnet federation. From my first contacts with the people responsible for this federation I heard that it would be hard to get ADFS to “talk” to this federation. They mentioned ADFS does speak SAML, but not all SAML specifications are supported. One of the things that ADFS cannot handle is creating a claims provider trust based upon a metadata file which contains multiple IDPs. And guess what the Belnet metadata file contains…

Some research led me to the concept of federation trust topologies. Suppose you’ve got two partners who want to expose their Identity Provider so that their users can authenticate at services hosted at either partner. In the Microsoft world you typically configure the other party’s ADFS instance as a claims provider trust, while on their side your ADFS is configured as a relying party trust, and vice versa. And that’s it. But what happens if you want to federate with three parties? Now each party has to add two claims provider trusts. And what happens when a new organization joins the federation? Each organization that is already active in the federation has to exchange metadata and add the new organization. As the number of partners in the federation grows you can see that the Microsoft approach scales badly for this…

Now after reading up a bit on this subject I learned that there are two types of topologies: full mesh and proxy based. In the proxy approach each party federates with the proxy, and the proxy sits in the middle for authentication requests. In the full mesh topology each party federates with every other party. As I explained above, a full mesh approach scales badly. The Belnet setup is mostly based upon Shibboleth, and each Shibboleth server gets updated automatically whenever an additional IDP or SP is added to the federation. So Belnet is only responsible for distributing the federation partner information to each member. So I came up with the following idea: if I were to take the Belnet XML file and chop it into multiple IDP XML files, I could add those one by one to the ADFS configuration. I got this idea here: TechNet (InCommon Federation): Use FEMMA to import IDPs

Here’s a schematic view of the Federation Metadata exchanges. It might make things a bit clearer. On the schema you’ll see the Shibboleth server but, in fact, for the SharePoint/ADFS instance it’s irrelevant.

belnet

Adding Belnet IDPs to ADFS

Search the Belnet federation XML file for something recognizable, like part of the DNS domain (vub.ac.be) or (part of) the name of the IDP (Brussel). Once you’ve got the right entry, we need everything for this IDP that’s between the <EntityDescriptor> tags. So you should have something like this:

<EntityDescriptor entityID="https://idp.vub.ac.be/idp/shibboleth" xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:shibmd="urn:mace:shibboleth:metadata:1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <!-- ... IDPSSODescriptor and the rest of the IDP's metadata ... -->
    <ContactPerson contactType="...">
        <GivenName>Technical Support</GivenName>
        <SurName>Technical Support</SurName>
        <EmailAddress>support@vub.ac.be</EmailAddress>
    </ContactPerson>
</EntityDescriptor>

Copy this to a separate file and save it as FederationMetadata_VUB.xml

Now go to the ADFS management console and add a claims provider trust.

image

When asked, provide the XML file we just created. When you’re done, change the Signature hash algorithm. You can find this on the Advanced tab of the trust’s properties. This might differ from trust to trust and you can try without changing it, but if your authentication results in an error, check your ADFS event logs and if necessary change this setting.

image

The error:

image

In words:

Authentication Failed. The token used to authenticate the user is signed using a weaker signature algorithm than expected.
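If you prefer PowerShell over the GUI, the same change should be doable with the claims provider trust cmdlet; the trust name below is an example:

Set-AdfsClaimsProviderTrust -TargetName "VUB" -SignatureAlgorithm "http://www.w3.org/2000/09/xmldsig#rsa-sha1"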

And that’s it. Repeat for any other IDPs you care about. Depending on the number of IDPs, this is a task you may want to script. The InCommon federation guide contains a script written in Python which provides similar functionality.
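If you’d rather not cut and paste by hand, a rough PowerShell take on the chopping idea could look like this (file names and the entityID are examples):

# Pull a single IDP's <EntityDescriptor> out of the federation metadata
[xml]$md = Get-Content .\belnet-metadata.xml
$ns = @{ md = 'urn:oasis:names:tc:SAML:2.0:metadata' }
$idp = Select-Xml -Xml $md -Namespace $ns -XPath "//md:EntityDescriptor[@entityID='https://idp.vub.ac.be/idp/shibboleth']"
$idp.Node.OuterXml | Out-File .\FederationMetadata_VUB.xml -Encoding utf8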

Adding your ADFS as SP to the Belnet Federation

Now the first part seemed easy. We had to do some cutting and pasting, but for a smaller number of IDPs this seems doable. Now we have to ensure all involved IDPs trust our ADFS server. In the worst case we have to contact them one by one and exchange information. But that would mean we’re not benefitting from the Belnet federation. Our goal is to have our ADFS trusted by Belnet, which will ensure all Belnet partners trust our ADFS instance. That way we only have to exchange information with one party, simplifying this process a lot!

First we need the Federation Metadata from the ADFS instance: https://sts.contoso.com/FederationMetadata/2007-06/FederationMetadata.xml

Then we need to edit it a bit so that the Belnet application that manages the metadata is capable of parsing the file we give it. Therefore we’ll remove the blocks we don’t need or that the tooling at Belnet is not compatible with:

  • Signature block: <signature>…</signature>
  • WS-FED stuff: <RoleDescriptor xsi:type="fed:ApplicationServiceType … </RoleDescriptor>
  • Some more WS-FED stuff: <RoleDescriptor xsi:type="fed:SecurityTokenServiceType" … </RoleDescriptor>
  • SAML IDP stuff, not necessary as we’re playing SP: <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"> … </IDPSSODescriptor>

We also need to add some contact information:

There should be a block present that looks like this: <ContactPerson contactType="support"/>

Replace it with:

<Organization>
    <OrganizationName xml:lang="en" xmlns:xml="http://www.w3.org/XML/1998/namespace"> Contoso </OrganizationName>
    <OrganizationDisplayName xml:lang="en" xmlns:xml="http://www.w3.org/XML/1998/namespace"> Contoso Corp </OrganizationDisplayName>
    <OrganizationURL xml:lang="en" xmlns:xml="http://www.w3.org/XML/1998/namespace"> http://www.contoso.com </OrganizationURL>
</Organization>
<ContactPerson contactType="technical">
    <GivenName>Thomas</GivenName>
    <SurName>Vuylsteke</SurName>
    <EmailAddress>adfs.admin@contoso.com</EmailAddress>
</ContactPerson>

Now you’re ready to upload your modified metadata at Belnet: https://idpcustomer.belnet.be/idp/Authn/UserPassword

After some time you’ll be able to log on using the IDPs you configured. Pretty cool, eh! Authentication will rely on the trusts shown below:

belnetAu

Some remarks:

Scoping: once you trust several IDPs like this, you might be interested in a way to limit the users to the ones your organization works with. The customer I implemented this for has an overview of all users in their Active Directory. So we allow the user to log on at their IDP, but we have ADFS authorization rules that only issue a permit claim when we find the user as an enabled AD user in the customer AD. These users are there for legacy reasons and can now be seen as some form of ghost accounts.

Certificates: the manual nature of the above procedure also means you have to keep the certificates up to date manually! If an IDP starts using another certificate, you have to update that IDP-specific information. If you change the certificates on your ADFS instance, you have to contact Belnet again and have your metadata updated. Luckily most IDPs in the Belnet federation have certificate expiration dates far in the future. But not all of them. Definitely a point of attention.

Just drop a comment if you want more information or if you got some feedback.


Synchronizing Time on Azure Virtual Machines

Published on Friday, June 5, 2015

I’m currently setting up a small identity infrastructure on some Azure Virtual Machines for a customer. The components we’re installing consist of some domain controllers, a FIM server, a FIM GAL Sync server and a SQL server to support the FIM services. All of those are part of the CONTOSO domain. Besides the Azure virtual machines we’ve also got two on-premises machines, also members of the CONTOSO domain. They communicate with the other CONTOSO servers across a site-to-site VPN with Azure.

Eventually I came to the task of verifying my time synchronization setup. Throughout the years there have been small variations in the recommendations. Initially I had configured time synchronization like I always do: a GPO that specifically targets the PDC domain controller and configures it to use an NTP server for its time.

Administrative Templates > System > Windows Time Service > Global Configuration Settings:

image

Set AnnounceFlags to 5 so this domain controller advertises as a reliable time source. Besides that, we also need to give the PDC domain controller a good source:

Administrative Templates > System > Windows Time Service > Global Configuration Settings > Time Providers

image

In the above example I’m just using time.windows.com as a source and the type is set to NTP. Just for reference, the WMI filter that tells this GPO to only apply on the PDC domain controller:

image
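For reference, the query behind that filter is the classic PDC check (DomainRole 5 means PDC emulator):

Select * from Win32_ComputerSystem where DomainRole = 5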

Typically that’s all that’s needed. Keep in mind, the above was done on a 2012 R2 based domain controller/GPMC. If you use older versions you might have other values for certain settings; on 2012 R2 they are supposed to match the current recommendations. But that’s not the point of this post. For the above to work, you should make sure that the NTP client on ALL clients, servers and domain controllers OTHER than the PDC is set to NT5DS:

w32tm /query /configuration

image

Once the above is all set, the following logic should be active. Put simply, in a single-domain, single-forest topology:

  • The PDC domain controller syncs from an internet/external source
  • The domain controllers sync from the PDC domain controller
  • The clients/member servers sync from A domain controller

You can verify this by executing w32tm /query /source:

On my PDC (DC001), on a DC (DC002) and on a member server (hosted in Azure):

time1

=> VM IC Time Synchronization Provider

On my DC (DC003)(hosted on premises on VMware):

time2

=> The PDC domain controller

On my member server (hosted on premises on VMware):

time3

=> A domain controller

As you can see, that’s a bit weird. What is that VM IC Time Synchronization Provider? If I’m not mistaken, it’s a component that gets installed with Windows and is capable of interacting with the hypervisor (e.g. on-premises Hyper-V or Azure Hyper-V). As far as I can tell, VMware guests ignore it. Basically it’s a component that helps the guest sync its time with the physical host it runs on. Now you can imagine that if guests run on different hosts, time might slowly start to drift. In order to mitigate this, we need to ensure the time is properly synchronized using the domain hierarchy.

Luckily it seems we can easily disable this functionality: we can simply set the Enabled registry value to 0 for this provider. The good news: going from 0 to 1 seems to require a Windows Time service restart, but I did some tests and going from 1 to 0 seems to become effective after a small period of time. The good news, part 2: setting it to 0 doesn’t seem to have side effects for on-premises VMs either.

In my case I opted to use group policy preferences for this:

time4

The registry path: HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider; set the value Enabled to 0.
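If you prefer PowerShell over group policy preferences, the same change boils down to:

# Disable the Hyper-V time sync provider (the same value the GPP sets)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider' -Name Enabled -Value 0 -Type DWord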

And now we can repeat our tests again:

On my PDC (hosted in Azure):

time5

On my DC (hosted in Azure):

time6

On a member server (hosted in Azure):

time7

Summary

I’ll try to validate this with some people, and I’ll definitely update this post if I’m proven wrong, but as far as I can tell: whenever you host virtual machines in Azure that are part of a Windows Active Directory domain, make sure to disable the VM IC Time Provider component.

Imho this kind of information is definitely something that should be added to MSDN: Guidelines for Deploying Windows Server Active Directory on Azure Virtual Machines or Azure.microsoft.com: Install a replica Active Directory domain controller in an Azure virtual network



Protecting a Domain Controller in Azure with Microsoft Antimalware

Published on Wednesday, June 3, 2015

I’m getting more and more involved with customers using Azure to host some VMs in an IaaS scenario. In some cases they like to have a Domain Controller from their corporate domain in Azure. I think it’s a best practice to have some form of malware protection installed. Some customers opt to use their on-premises solution, others opt to use the free Microsoft Antimalware solution. The latter comes as an extension which you can add when creating a virtual machine, or just add afterwards. One of the drawbacks is that there’s no central management: you push it out to each machine and that’s it.

Both the old and the new portal allow you to specify this during machine creation:

Old portal wizard:

image

New portal wizard:

image

However, the new portal allows you to specify additional parameters:

image

As you can see you can also specify exclusions. For certain workloads (like SQL) this is pretty important. From past experience I know that composing the exclusions for a given application is pretty tedious work: you have to go through various articles and compose your list. I took a look at the software installed on an Azure VM and noticed it was called System Center Endpoint Protection.

image

Next I went ahead and looked in the registry:

image

The easiest way to configure those exclusion settings is through PowerShell. The Set-AzureVMMicrosoftAntimalwareExtension cmdlet has a parameter called AntimalwareConfigFile that accepts both an XML and a JSON file. Initially I thought I’d just take the XML files from a System Center Endpoint Protection implementation and be done with it. Quickly I found out that the format for this XML file is different from the templates SCEP uses. So I thought I’d do some quick find and replace. But no matter what I tried, issues kept popping up inside the guest and the XML file failed to parse successfully. This guide explains it pretty well, but I failed nonetheless: Microsoft Antimalware for Azure Cloud Services and Virtual Machines

I preferred XML, as that format allows for comment tags, which makes it pretty easy to document certain exclusions. Now I had to resort to JSON, which is just a bunch of text in brackets/colons. Here are some sample config files based upon the files from SCEP:

A Regular Server

{
    "AntimalwareEnabled": true,
    "RealtimeProtectionEnabled": true,
    "ScheduledScanSettings": {
        "isEnabled": false,
        "day": 1,
        "time": 180,
        "scanType": "Full"
    },
    "Exclusions": {
        "Extensions": "",
        "Paths": "%allusersprofile%\\NTUser.pol;%systemroot%\\system32\\GroupPolicy\\Machine\\registry.pol;%windir%\\Security\\database\\*.chk;%windir%\\Security\\database\\*.edb;%windir%\\Security\\database\\*.jrs;%windir%\\Security\\database\\*.log;%windir%\\Security\\database\\*.sdb;%windir%\\SoftwareDistribution\\Datastore\\Datastore.edb;%windir%\\SoftwareDistribution\\Datastore\\Logs\\edb.chk;%windir%\\SoftwareDistribution\\Datastore\\Logs\\edb*.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Edbres00001.jrs;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Edbres00002.jrs;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Res1.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Res2.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\tmp.edb",
        "Processes": ""
    }
}

A SQL Server

{
    "AntimalwareEnabled": true,
    "RealtimeProtectionEnabled": true,
    "ScheduledScanSettings": {
        "isEnabled": false,
        "day": 1,
        "time": 180,
        "scanType": "Full"
    },
    "Exclusions": {
        "Extensions": "",
        "Paths": "%allusersprofile%\\NTUser.pol;%systemroot%\\system32\\GroupPolicy\\Machine\\registry.pol;%windir%\\Security\\database\\*.chk;%windir%\\Security\\database\\*.edb;%windir%\\Security\\database\\*.jrs;%windir%\\Security\\database\\*.log;%windir%\\Security\\database\\*.sdb;%windir%\\SoftwareDistribution\\Datastore\\Datastore.edb;%windir%\\SoftwareDistribution\\Datastore\\Logs\\edb.chk;%windir%\\SoftwareDistribution\\Datastore\\Logs\\edb*.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Edbres00001.jrs;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Edbres00002.jrs;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Res1.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Res2.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\tmp.edb",
        "Processes": "%ProgramFiles%\\Microsoft SQL Server\\MSSQL10.MSSQLSERVER\\MSSQL\\Binn\\SQLServr.exe"
    }
}

This one is almost identical to the server one, but here we exclude the SQLServr.exe process. The path to this executable might be different in your environment!
A Domain Controller

{
    "AntimalwareEnabled": true,
    "RealtimeProtectionEnabled": true,
    "ScheduledScanSettings": {
        "isEnabled": false,
        "day": 1,
        "time": 180,
        "scanType": "Full"
    },
    "Exclusions": {
        "Extensions": "",
        "Paths": "%allusersprofile%\\NTUser.pol;%systemroot%\\system32\\GroupPolicy\\Machine\\registry.pol;%windir%\\Security\\database\\*.chk;%windir%\\Security\\database\\*.edb;%windir%\\Security\\database\\*.jrs;%windir%\\Security\\database\\*.log;%windir%\\Security\\database\\*.sdb;%windir%\\SoftwareDistribution\\Datastore\\Datastore.edb;%windir%\\SoftwareDistribution\\Datastore\\Logs\\edb.chk;%windir%\\SoftwareDistribution\\Datastore\\Logs\\edb*.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Edbres00001.jrs;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Edbres00002.jrs;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Res1.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Res2.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\tmp.edb;E:\\Windows\\ntds\\ntds.dit;E:\\Windows\\ntds\\EDB*.log;E:\\Windows\\ntds\\Edbres*.jrs;E:\\Windows\\ntds\\EDB.chk;E:\\Windows\\ntds\\TEMP.edb;E:\\Windows\\ntds\\*.pat;E:\\Windows\\SYSVOL\\domain\\DO_NOT_REMOVE_NtFrs_PreInstall_Directory;E:\\Windows\\SYSVOL\\staging;E:\\Windows\\SYSVOL\\staging areas;E:\\Windows\\SYSVOL\\sysvol;%systemroot%\\System32\\Dns\\*.log;%systemroot%\\System32\\Dns\\*.dns;%systemroot%\\System32\\Dns\\boot",
        "Processes": "%systemroot%\\System32\\ntfrs.exe;%systemroot%\\System32\\dfsr.exe;%systemroot%\\System32\\dfsrs.exe"
    }
}

Again a lot of familiar exclusions from the server template, but also specific exclusions for NTDS and DNS related files. Remark: one of the best practices for installing domain controllers in Azure is to relocate the AD database/log files and sysvol to another disk with caching set to none. So the above exclusions might be wrong for your environment! Adjust the paths above to the drive actually containing your AD files!

Special remark: the SCEP templates have a bug where they add %systemroot%\\system32\\GroupPolicy\\Registry.pol, which in fact should be %systemroot%\\system32\\GroupPolicy\\Machine\\registry.pol. I’ve given an example of that issue here: Setspn.blogspot.com: Corrupt Local GPO Files

The templates above are in JSON format. I saved the domain controller one as MicrosoftAntiMalware_DC.json and applied it like this:

$vm = Get-AzureVM -ServiceName "CoreInfra" -Name "SRVDC01"
$vm | Set-AzureVMMicrosoftAntimalwareExtension -AntimalwareConfigFile C:\Users\Thomas\Documenten\Work\MicrosoftAntiMalware_DC.json | Update-AzureVM

Now in the registry on the VM we can verify our extensions are applied:

reg3
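The old Azure module should also let you read the applied configuration back; a hedged check, assuming your module version ships the matching Get- cmdlet:

Get-AzureVM -ServiceName "CoreInfra" -Name "SRVDC01" | Get-AzureVMMicrosoftAntimalwareExtension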



Configure Windows Logon With An Electronic Identity Card (EID)

Published on Wednesday, October 22, 2014

Here in Belgium people have been receiving an Electronic Identity Card (EID) for years now. Every once in a while I have a customer who asks me whether this card can be used to log on to workstations. That would mean a form of strong authentication is applied. The post below describes the necessary steps to make this possible. It was written using a Belgian EID and the Windows Technical Preview (Threshold) for both client and server.

In my lab I kept the infrastructure to a bare minimum.

  • WS10-DC: domain controller for threshold.local
  • WS10-CA2: certificate authority (enterprise CA)
  • W10-Client: client

The Domain Controller(s) Configuration

Domain Controller Certificate:

You might wonder why I included a certificate authority in this demo. Users will log on using their EID, and those cards come with certificates that have nothing to do with your internal PKI. However, in order for domain controllers to be able to authenticate users with a smart card, they need a valid certificate as well. If you fail to meet this requirement, your users will receive an error:

image

In words: Signing in with a smart card isn’t supported for your account. For more info, contact your administrator.

And your domain controllers will log these errors:

ErrorClientClue1

In words: The Key Distribution Center (KDC) cannot find a suitable certificate to use for smart card logons, or the KDC certificate could not be verified. Smart card logon may not function correctly if this problem is not resolved. To correct this problem, either verify the existing KDC certificate using certutil.exe or enroll for a new KDC certificate.

And

ErrorClientClue2

In words: This event indicates an attempt was made to use smartcard logon, but the KDC is unable to use the PKINIT protocol because it is missing a suitable certificate.

In order to give the domain controller a certificate that can be used to authenticate smart card users, we will leverage the Active Directory Certificate Services (AD CS) role on the WS10-CA2 server. This server is installed as an enterprise CA using more or less default values. Once the AD CS role is installed, your domain controller should automatically request a certificate based upon the “Domain Controller” certificate template. This is a V1 template. A domain controller is more or less hardcoded to automatically request a certificate based upon this template.

image 

In my lab this certificate was good enough to let my user authenticate using his EID. After restarting the KDC service and performing the first authentication, the following event was logged though:

image

In words: The Key Distribution Center (KDC) uses a certificate without KDC Extended Key Usage (EKU) which can result in authentication failures for device certificate logon and smart card logon from non-domain-joined devices. Enrollment of a KDC certificate with KDC EKU (Kerberos Authentication template) is required to remove this warning.

Besides the Domain Controller template there are also the more recent Domain Controller Authentication and Kerberos Authentication templates, which depend on auto-enrollment being configured.

Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies

image

After waiting a bit (gpupdate and/or certutil -pulse might speed things up), we got our new certificates:

image

You can see that the original domain controller certificate is gone, replaced by its more recent counterparts. After testing we can confirm that the warning is no longer logged in the event log. That covers the certificate the domain controller requires; we’ll need to add a few more settings on the domain controllers for EID logons to work.

Domain Controller Settings

Below HKLM\SYSTEM\CurrentControlSet\Services\Kdc we’ll create two registry values:

  • DWORD SCLogonEKUNotRequired 1
  • DWORD UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors 1

Strictly speaking, the last one shouldn’t be necessary if your domain controller can reach the internet, or at least the URLs where the CRLs used in the EIDs are hosted. If you use this registry value, make sure to remove the name mapping (more on that later) or disable the user when an EID is stolen or lost. An easy way to push these registry values is group policy preferences.
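A quick sketch if you want to set them directly instead of through group policy preferences:

$kdc = 'HKLM:\SYSTEM\CurrentControlSet\Services\Kdc'
New-ItemProperty -Path $kdc -Name SCLogonEKUNotRequired -Value 1 -PropertyType DWord -Force
New-ItemProperty -Path $kdc -Name UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors -Value 1 -PropertyType DWord -Force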

Domain Controller Trusted Certificate Authorities

In order for the domain controller to accept the EID of the user, the domain controller has to trust the full path in the issued certificate. Here’s my EID as an example:

image

We’ll add the Belgium Root CA2 certificate to the Trusted Root Certificate Authorities on the domain controller:

Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies > Trusted Root Certification Authorities

image

And the Citizen CA to the Trusted Intermediate Certificate Authorities on the domain controller:

Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies > Intermediate Certification Authorities

image

Now this is where the first drawback of using EIDs as smart cards comes in: there are many Citizen CAs to add and trust… Each month, sometimes more, sometimes less, a new Citizen CA is issued and used to sign new EID certificates. You can find them all here: http://certs.eid.belgium.be/. So instead of using a GPO to distribute them, scripting a regular download and adding them to the local certificate stores might be a better approach.
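Such a script would essentially download the certificate files from that site and, per file, run something like this on the domain controllers (file names are examples):

# The Belgium Root CA goes to the trusted root store (once), each Citizen CA to the intermediate store
certutil -addstore Root .\belgiumrca2.crt
certutil -addstore CA .\citizen_ca.crt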

The Client Configuration

Settings

For starters we’ll configure the following registry keys:

Below HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters we’ll create two registry values:

  • DWORD CRLTimeoutPeriod 1
  • DWORD UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors 1

Again, if your client is capable of reaching the internet you should not need these. I have to admit that I’m not entirely sure how the client will react when a forward proxy is in use. After all, the SYSTEM account doesn’t always know what proxy to use, and the proxy might require authentication.

Besides the registry values, there are also some regular group policy settings to configure. In some articles you’ll probably see these settings being pushed out as registry keys as well, but I prefer to use the “proper” settings as they are available anyhow.

Computer Settings > Policies > Administrative Templates > Windows Components > Smart Cards

  • Allow certificates with no extended key usage certificate attribute: Enabled
    • This policy setting lets you allow certificates without an Extended Key Usage (EKU) set to be used for logon.
  • Allow signature keys valid for Logon: Enabled
    • This policy setting lets you allow signature key-based certificates to be enumerated and available for logon.

These two are required so that the EID certificate can be used. As you can see, it has a usage attribute of Digital Signature:

UsageAttr

In some other guides you might also find these Smart Card settings enabled:

  • Force the reading of all certificates on the smart card
  • Turn on certificate propagation from smart card
  • Turn on root certificate propagation from smart card

But my tests worked fine without these.

Drivers

Out of the box Windows will not be able to use your EID. If you don’t install the required drivers you’ll get an error like this:

SmartCardErrorNoDrivers

You can download the drivers here: eid.belgium.be. On the Windows 10 preview I got an error during the installation, but that probably had to do with the EID viewer software. The drivers seem to function just fine.

EidDriverError

Active Directory User Configuration

As these certificates are issued by the government, they don’t contain any specific information that allows Active Directory to find out which user should be authenticated. In order to resolve that, we can add a name mapping to a user. And this is the second drawback: if you want to put EID authentication in place, you’ll have to have some sort of process or tool that allows users to link their EID to their Active Directory user account. The helpdesk could do this for them, or you could write a custom tool that allows users to do it themselves.

In order to do it manually:

First we need the certificate from the EID. You can use Internet Explorer > Internet Options > Content > Certificates

EID

You should see two certificates. The one you want is the one with Authentication in the Issued To field. Use the Export… button to save it to a file.

Open Active Directory Users and Computers > View > Advanced Features

NameMapping1

Locate the user the EID belongs to > Right-Click > Name Mappings…

NameMapping2

Add an X.509 Certificate

NameMapping3

Browse to the exported copy of the Authentication certificate that can be found on the EID

NameMapping35

Click OK

NameMapping4

Testing the authentication

You should now be able to log on to a workstation with the given EID, either by clicking Other user and clicking the smart card icon

EIDLogon

Or, if the client has remembered you from earlier logons, you can choose smart card below that entry.

EIDLogon2

An easy way to see whether a user logged on using a smart card or a username/password is to query the user’s group memberships on the client. When users log on with a smart card they get the This Organization Certificate group SID added to their logon token. This is a well-known group (S-1-5-65-1) that was introduced with Windows 7/Windows 2008 R2.

ThisOrgCertWhoami
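Or, as a quick check from a command prompt:

whoami /groups | findstr /i "S-1-5-65-1"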

Forcing smart card authentication

Now all of the above allows a user to authenticate using smart cards, but it doesn’t force the user to do so. Username/password will still be accepted by the workstations. If you want to force smart card logon there are two possibilities, each with their own drawbacks.

1. On the user level:

There’s a property Smart card is required for interactive logon that you can check on the user object in Active Directory. Once this is checked, the user will only be able to log on using a smart card. There’s one major drawback though: once you click Apply, this will set the password of that user to a random value, and password policies will no longer apply for that user. That means that if you’ve got some applications that are integrated with Active Directory, but do so by asking for credentials in a username/password form, your user will not be able to log on as they don’t know the password… If you configure this setting on the user, you have to make sure all applications are available through Kerberos/NTLM SSO. If you were to use Exchange ActiveSync, for instance, you would have to change the authentication scheme from username/password to certificate-based. So I’m not really sure enforcing this at the user level is a real option. This option seems more feasible for protecting high-privilege accounts.

RequireSm

2. On the workstation level:

There’s a group policy setting that can be configured on the computer level that enforces all interactive logons to require a smart card. It can be found under computer settings > Policies > Windows Settings > Security Settings > Local Policies > Security Options > Interactive logon: Require smart card

 InterActL

While you’re there, also look at Interactive logon: Smart card removal behavior. It allows you to configure a workstation to lock when a smart card is removed. If you configure this one, make sure to also configure the Smart Card Removal Policy service to be started on your clients. This service is stopped and set to manual by default.

Now the bad news. Just like with the first option, there’s also a drawback. This one could be less critical for some organisations, but it might require people to operate in a slightly different way. Once this setting is enabled, all interactive logons require a smart card:

  • Ctrl-alt-del logon like a regular user
  • Remote Desktop to this client
  • Right-click run as administrator (in case the user is not an administrator himself) / run as different user

For instance, right-clicking notepad and choosing run as different user will result in the following error if you try to provide a username/password:

AccountRest 

In words: Account restrictions are preventing this user from signing in. For example: blank passwords aren’t allowed, sign-in times are limited, or a policy restriction has been enforced. For us this is an issue, as our helpdesk often uses remote assistance (built into the OS) to help users. From time to time they have to provide their administrative account in order to perform certain actions. As the user’s smart card is inserted, the helpdesk admin cannot insert his own EID; that would require an additional smart card reader. And besides that: a lot of the helpdesk tasks are done remotely, and that means the EID is in the wrong client… There seem to be third-party solutions that tackle this particular issue: redirecting smart cards to the PC you’re offering remote assistance on.

Now there’s a possible workaround for this. The policy we configure in fact sets the following registry value to 1:

MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\System\ScForceOption

Using remote registry you could change it to 0 and then perform your run as different user again. Changing it to 0 sets Interactive logon: Require smart card to disabled, effective immediately. Obviously this isn’t quite elegant, but you could create a small script/utility for it…
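A rough sketch of that remote flip; it assumes the Remote Registry service is running on the client, and the computer name is an example:

$base = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', 'CLIENT01')
$key = $base.OpenSubKey('Software\Microsoft\Windows\CurrentVersion\Policies\System', $true)
$key.SetValue('ScForceOption', 0, [Microsoft.Win32.RegistryValueKind]::DWord)
# ... perform the run as different user, then set the value back to 1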

Administrative Accounts (or how to link a smart card to two users)

If you use the same certificate (EID) in the name mapping of two users in Active Directory, your user will fail to log in:

DoubleMapping

In words: Your credentials could not be verified. The reason is quite simple. Your workstation is presenting a certificate to Active Directory, but Active Directory has two principals (users) that map to that certificate. Now which user does the workstation want?

Name hints to the rescue! Let’s add the following GPO setting to our clients:

Computer Settings > Policies > Administrative Templates > Windows Components > Smart Cards

  • Allow user name hint: enabled

After enabling this setting there’s an optional field called Username hint below the prompt for the PIN.

Hint

In this username hint field the person trying to log on using a smart card can specify which account name to use. In the following example I’ll be logging on with my thomas_admin account:

HintAdmin

NTauth Certificate Store

Whenever you read up on the smart card logon subject you’ll see the NTauth certificate store being mentioned from time to time. It seems to be involved in some way, but it’s still not entirely clear to me. All I can say is that in my setup, using an AD integrated CA for the domain controller certificates, I did not have to configure/add any certificates to the NTauth store. Not the Belgian Root CA, not the Citizen CA. My internal CA was in it, of course.

I did some tests and, in my experience, the CA that issued your domain controllers’ certificates has to be in the NTAuth store on both clients and domain controllers. If you remove that certificate you’ll be greeted with an error like this:

NtAuth

In words: Signing in with a smart card isn’t supported for your account. For more info, contact your administrator. And on the domain controller the same errors are logged as the ones from the beginning of this article.

Some useful commands to manipulate the NTauth store locally on a client/server:

  • Add a certificate manually: certutil -enterprise -addstore ntAuth .\ThresholdCA.cer
  • View the store: certutil -enterprise -viewstore ntAuth
  • Delete a certificate: certutil -enterprise -viewdelstore ntAuth

Keep in mind that the NTauth store exists both locally on the clients/servers and in Active Directory. An easy way to view/manipulate the NTauth store in Active Directory is the pkiview.msc management console, which you typically find on a CA. Right-click the root and choose Manage AD Containers to view the store.

A second important fact regarding the NTauth store: whilst you might see the required CA certificate in the store in AD, your clients and servers will only download the content of the AD NTauth store IF they have auto-enrollment configured!

Summary:

There are definitely some drawbacks to using EID in a corporate environment:

  • No management software to link the certificates to the AD users. Yes, there’s Active Directory Users and Computers, but you’ll have to ask the users to either visit your helpdesk or email their certificate. Depending on the number of users in your organisation this might be a hell of a task. A custom tool might be a way to solve this.
  • Regular maintenance: as described, quite regularly a new Citizen CA (Subordinate Certificate Authority) is issued. You need to ensure your domain controllers have this CA in their trusted intermediate authorities store. This can be done through GPO, but this particular setting seems hard to automate. You might be better off with a script that performs this task directly on your domain controllers.
  • Helpdesk users will have to face the added complexity if the require smart card setting is enabled.
  • If an EID is stolen/lost you might have to temporarily allow normal logons for that user. An alternative is to have a batch of smart cards that you can issue yourself. An example vendor for such smart cards is Gemalto.
  • Another point that I didn’t have the chance to test: what about the users’ passwords? If they can’t use them to log on, but the regular password policies still apply, how will they be notified of the expiration? Or even better, how will they change it? Some applications might depend on the username/password to log on.

As always, feedback is welcome!