
AX Enterprise Portal Webparts Only Deployment

Published on Tuesday, January 28, 2014

Lately a nice challenge was presented to me: make the Dynamics AX web parts work on a SharePoint site other than the actual Enterprise Portal. As per TechNet (Deploy Microsoft Dynamics AX Web parts to a SharePoint site [AX 2012]) this is a valid scenario. In our case we would like to use the AX report viewer web part to render some reports in an extranet scenario. One of the steps to enable this is to install the AX Portal components without checking the "create site" option in the setup and, obviously, to target the site you wish to install the web parts on. During the AX Portal components installation I was greeted with an error:

2013-11-14 13:07:45Z    Bezig met invoeren van functie ConfigureAuthenticationMode
2013-11-14 13:07:45Z    An error occurred during setup of Enterprise Portal (EP).
2013-11-14 13:07:45Z   Reason: The given key was not present in the dictionary.
2013-11-14 13:07:45Z    Registering tracing manifest file "D:\Program Files\Microsoft Dynamics AX\60\Server\Common\TraceProviderCrimson.man".
2013-11-14 13:07:45Z    WEvtUtil.exe install-manifest "C:\Users\lab admin thomas\AppData\Local\Temp\3\tmp4243.tmp"
2013-11-14 13:07:45Z        **** Warning: Publisher {8e410b1f-eb34-4417-be16-478a22c98916} is installed on
2013-11-14 13:07:45Z        the system. Only new values would be added. If you want to update previous
2013-11-14 13:07:45Z        settings, uninstall the manifest first.

Ok… Fair enough… Which key is being looked for? In which dictionary? I could guess it was trying to find a match for a piece of information in a list (dictionary) and that didn't go so well. Searching the web didn't give me any clues. So time to start the reverse engineering again… The DLL I analyzed was Microsoft.Dynamics.Framework.Deployment.Portal.dll. I used ILSpy to reverse engineer it.

Here’s a screenshot of the relevant code I found. I used the information “bezig met invoeren van functie ConfigureAuthenticationMode” (translated: “entering function ConfigureAuthenticationMode”) to get to this point.

[Screenshot: the decompiled ConfigureAuthenticationMode method in ILSpy]

I didn’t see any dictionaries being accessed, but following the call to GetSPIisSettings led me to the following:

[Screenshot: the decompiled GetSPIisSettings method]

And that correlates to the authentication provider configuration (per zone) in SharePoint Central Administration. You can find this by going to the web application management section: select your web application and choose Manage Authentication Providers.

[Screenshot: the Authentication Providers dialog in Central Administration]

The reason our site was extended is that we had users authenticating using claims issued by an ADFS instance. So we had one web application configured with two authentication providers:

  • Default: ADFS tokens (user access)
  • Custom: Windows Authentication (access for the crawling process)

We were aware of the fact that the site couldn’t be extended, so we (temporarily) un-extended the site to leave only Windows Authentication active. However, as you can see in the code snippet above, the code really expects the “Default” zone to be populated…

Summary:

If you are trying to install the AX webparts on a SharePoint site you have to make sure the following prerequisites are OK:

  • The site cannot be extended
  • The site has to have an authentication provider configured on the Default zone
  • The service account (application pool identity) for the web application should be the BCP account. If you don’t do this, the setup will do it for you. It doesn’t sound like best practice, but in the AX world it seems to be a common requirement to run a lot of code under the context of the BCP account…
  • Make sure the site is available over Windows Authentication. This seems to be necessary in order to successfully register the site (your SharePoint site hosting the webparts) within the sites section of AX. If you don’t do this your site will not be authorized to make requests to AX.
  • If you want your web parts to be available on a site that has users authenticating by claims, you’ll have to register those users as “claims users” within AX.

Once you have everything installed and configured within AX, you’re free to extend or modify the authentication providers again.

Good Luck!


SQL: Delete a Large Amount of Records


I’ve got a setup where a UAG array logs to a remote SQL Server. I’m still wondering how other people handle the database size. I’ve got a very small SQL Agent job which runs once a week and deletes all records older than a month. The database is still pretty large: somewhere around 150 GB. A while ago the job seemed to have problems finishing successfully. We also saw the log file of the database grow to a worrying size: 160 GB. FYI: the database was in simple recovery mode.

Each time I started the job I could see the log file filling up from 0 GB all the way to 160 GB, at which point it stopped because we had set 160 GB as a fixed limit. We did this (temporarily) to protect other log files on that volume. Here’s the SQL query used in the cleanup job:

DELETE
FROM FirewallLog
WHERE logTime < DATEADD(MONTH, -1, GETDATE())

As you can see it’s a very simple query. The problem lies in the fact that SQL Server tries to perform this as one transaction. I hope I get the terminology right, btw. A SQL database in simple recovery mode should reuse log space quite fast. If I read correctly, about every minute a checkpoint is issued and the server will start overwriting/reusing previously written parts of the log file. Now the problem with my query is that it’s seen as one large transaction and thus needs to be written away entirely in the log file. Hence the space is not reused during the execution. Here's a script which I found online and which does the job way more gently. In my example WebProxyLog is the name of the table I’m targeting.

DECLARE @continue INT
DECLARE @rowcount INT
SET @continue = 1

WHILE @continue = 1
BEGIN
    --PRINT GETDATE()
    SET ROWCOUNT 10000

    BEGIN TRANSACTION
    DELETE FROM WebProxyLog WHERE logTime < DATEADD(MONTH, -1, GETDATE())
    SET @rowcount = @@rowcount
    COMMIT

    --PRINT GETDATE()
    IF @rowcount = 0
    BEGIN
        SET @continue = 0
    END
END

This script will have the same outcome, but it will delete records in chunks of 10,000 at a time. This way each transaction is limited in size and SQL Server can benefit from the checkpoints being issued and can thus reuse log space. Using this approach my log space usage was somewhere between 0 and 7 GB during the execution of this task. The ideal value for the maximum number of records deleted at once might differ depending on your situation. I tend to execute this during calmer moments of the day, so a bit of additional load is not that worrying.
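The batching idea isn’t specific to T-SQL. Here’s a minimal sketch of the same loop in Python against an in-memory SQLite database (purely illustrative — the table name mirrors the example above and the cutoff date is made up): each iteration deletes a limited batch and commits, so the journal never has to hold the whole delete as a single transaction.

```python
import sqlite3

# Illustrative only: an in-memory SQLite table standing in for WebProxyLog.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE WebProxyLog (id INTEGER PRIMARY KEY, logTime TEXT)")
conn.executemany("INSERT INTO WebProxyLog (logTime) VALUES (?)",
                 [("2013-12-01",)] * 25000)
conn.commit()

BATCH = 10000
while True:
    # Delete at most BATCH old rows, then commit so the journal/log
    # space can be reused before the next iteration.
    cur = conn.execute(
        "DELETE FROM WebProxyLog WHERE id IN ("
        "SELECT id FROM WebProxyLog WHERE logTime < '2014-01-01' LIMIT ?)",
        (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break

remaining = conn.execute("SELECT COUNT(*) FROM WebProxyLog").fetchone()[0]
print(remaining)  # 0: all old records removed, one small batch at a time
```

The shape is identical to the T-SQL loop: delete a bounded batch, commit, repeat until a pass deletes nothing.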

Bonus tip:  you can easily poll for the free space in the log files using this statement:

DBCC SQLPERF(LOGSPACE);

GO


Quick Tip: Multiple IPs on an Adapter and Firewalls

Published on Friday, January 3, 2014

Typically a server has one network interface with one IP on it, especially in virtualized environments. However, in certain scenarios, like web servers, multiple IPs can be bound to one network interface. When configuring firewalls external to the host, e.g. a hardware device shielding the server segment from other segments, people often wonder which address the server is going to use for outgoing traffic. People tend to think that the first address on the adapter is the one used for all outgoing traffic. Perhaps that was true for some earlier versions of Windows, but it seems that somewhere along the line this has shifted:

It seems that the server verifies which address has the longest matching prefix with the gateway configured on the adapter.

You can read the details here: http://blogs.technet.com/b/networking/archive/2009/04/25/source-ip-address-selection-on-a-multi-homed-windows-computer.aspx

The example the blog uses:

There’s a server with addresses 192.168.1.14 and 192.168.1.68 (gateway: 192.168.1.127). The server will use the 192.168.1.68 address because it has the longest matching prefix. To see this more clearly, consider the IP addresses in binary:

  • 11000000 10101000 00000001 00001110 = 192.168.1.14 (Bits matching the gateway = 25)
  • 11000000 10101000 00000001 01000100 = 192.168.1.68 (Bits matching the gateway = 26)
  • 11000000 10101000 00000001 01111111 = 192.168.1.127

The 192.168.1.68 address has more matching high order bits with the gateway address 192.168.1.127. Therefore, it is used for off-link communication.
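You can check this bit-counting yourself. Here’s a small Python sketch (addresses taken from the example above) that counts the high-order bits each candidate source address shares with the gateway:

```python
import ipaddress

def matching_prefix_bits(addr: str, gateway: str) -> int:
    """Number of leading bits an IPv4 address shares with the gateway."""
    diff = int(ipaddress.IPv4Address(addr)) ^ int(ipaddress.IPv4Address(gateway))
    # XOR leaves zeros where the bits match; the bit length of the
    # remainder tells us where the first mismatch occurs.
    return 32 - diff.bit_length()

gateway = "192.168.1.127"
for candidate in ("192.168.1.14", "192.168.1.68"):
    print(candidate, "->", matching_prefix_bits(candidate, gateway))
# 192.168.1.14 -> 25, 192.168.1.68 -> 26, so .68 wins the selection
```

This reproduces the 25 vs. 26 matching bits from the bullet list above.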

In the above example you could force the 192.168.1.14 address by using the SkipAsSource parameter you can pass along with netsh. In order to use SkipAsSource, we have to add the additional address from the command line:

  • Netsh int ipv4 add address <Interface Name> <ip address> <netmask> skipassource=true

In order to verify this we can execute the following command:

  • Netsh int ipv4 show ipaddresses level=verbose


Generate Self Signed Certificate for Demo Purposes


From time to time you might require a certificate and you want it fast. Mostly you see openssl commands flying around to get this job done. But recently I came across the following information and it’s actually pretty easy to do with certreq.exe as well!

Cert.txt content:

[NewRequest]
; At least one value must be set in this section
Subject = "CN=sts.realdolmen.com"
KeyLength = 2048
Exportable = true
MachineKeySet = true
FriendlyName = "ADFS"
ValidityPeriodUnits = 3
ValidityPeriod = Years
RequestType = Cert
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
KeyUsage = 0xa0
[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication

The command:

certreq.exe -new .\cert.txt

A popup will appear asking you to save your certificate to a location of choice. And voila:

[Screenshot: the generated self-signed certificate]


SharePoint Configure Super Accounts


This post will try to explain how you can easily configure a SharePoint web application’s super user and super reader accounts. SharePoint uses these accounts for its caching system. Out of the box, system accounts are used for this and you might periodically get a warning in your event log. However, if you get this part wrong, all of your users might end up with an access denied message.

1. Here’s how you can do it for a claims based Web application that’s configured with a claims provider as authentication provider.

$webappurl = "https://portal.contoso.com"
###
### encode users
###
$mgr = Get-SPClaimProviderManager
$tp = Get-SPTrustedIdentityTokenIssuer -Identity "CONTOSO ADFS Provider"
#set super user to windows account (claims based)
$superuser = "S_SPS_SU@CONTOSO.COM"
$superuserclaim = New-SPClaimsPrincipal -ClaimValue $superuser -ClaimType "http://schemas.xmlsoap.org/claims/UPN" -TrustedIdentityTokenIssuer $tp
$superuserclaimstring = $mgr.EncodeClaim($superuserclaim)

#set read user to windows account (claims based)
$readuser = "S_SPS_SR@CONTOSO.COM"
$readuserclaim = New-SPClaimsPrincipal -ClaimValue $readuser -ClaimType "http://schemas.xmlsoap.org/claims/UPN" -TrustedIdentityTokenIssuer $tp
$readuserclaimstring = $mgr.EncodeClaim($readuserclaim)

###
### web policies
###
$webApp = Get-SPWebApplication $webappurl

#SuperUser
$policy = $webApp.Policies.Add($superuserclaimstring, $superuser)
$policyRole = $webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullControl)
$policy.PolicyRoleBindings.Add($policyRole)
$webApp.Update()
#ReadUser
$policy = $webApp.Policies.Add($readuserclaimstring, $readuser)
$policyRole = $webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullRead)
$policy.PolicyRoleBindings.Add($policyRole)
$webApp.Update()

###
### web properties
###

#$webApp = Get-SPWebApplication $webappurl
$webApp.Properties["portalsuperuseraccount"] = $superuserclaimstring
$webApp.Properties["portalsuperreaderaccount"] = $readuserclaimstring
$webApp.update()

2. Here’s how you can do it for a claims based Web application that’s configured with Windows authentication.

$webappurl = "https://portal.contoso.com"
###
### encode users
###
$mgr = Get-SPClaimProviderManager
#set super user to windows account (claims based)
$superuser = "CONTOSO\S_SPS_SU"
$superuserclaim = New-SPClaimsPrincipal -identity $superuser -IdentityType "WindowsSamAccountName"
$superuserclaimstring = $mgr.EncodeClaim($superuserclaim)

#set read user to windows account (claims based)
$readuser = "CONTOSO\S_SPS_SR"
$readuserclaim = New-SPClaimsPrincipal -identity $readuser -IdentityType "WindowsSamAccountName"
$readuserclaimstring = $mgr.EncodeClaim($readuserclaim)

###
### web policies
###
$webApp = Get-SPWebApplication $webappurl

#SuperUser
$policy = $webApp.Policies.Add($superuserclaimstring, $superuser)
$policyRole = $webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullControl)
$policy.PolicyRoleBindings.Add($policyRole)
$webApp.Update()
#ReadUser
$policy = $webApp.Policies.Add($readuserclaimstring, $readuser)
$policyRole = $webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullRead)
$policy.PolicyRoleBindings.Add($policyRole)
$webApp.Update()

###
### web properties
###

#$webApp = Get-SPWebApplication $webappurl
$webApp.Properties["portalsuperuseraccount"] = $superuserclaimstring
$webApp.Properties["portalsuperreaderaccount"] = $readuserclaimstring
$webApp.update()

3. And here’s for a SharePoint web application that is in classic (windows) authentication mode:

#for a Windows site:
$webappurl = "https://portal.contoso.com"
#Windows uses the domain\account notation
$superuser = "CONTOSO\S_SPS_SU"
$readuser = "CONTOSO\S_SPS_SR"

#add the policies
$webApp = Get-SPWebApplication $webappurl

$policy = $webApp.Policies.Add($superuser, $superuser)
$policyRole = $webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullControl)
$policy.PolicyRoleBindings.Add($policyRole)
$webApp.Update()

$policy = $webApp.Policies.Add($readuser, $readuser)
$policyRole = $webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullRead)
$policy.PolicyRoleBindings.Add($policyRole)
$webApp.Update()

#web properties
$webApp.Properties["portalsuperuseraccount"] = $superuser
$webApp.Properties["portalsuperreaderaccount"] = $readuser
$webApp.Update()

Bonus: here’s how to encode a group instead of a user. Not useful for the superuser/superreader account, but it might come in handy if you want to configure user policies.

$groupnameClaims = "GG_SPS_ADMINS"
#Windows uses the domain\group notation
$groupnameWindows = "CONTOSO\GG_SPS_ADMINS"

$mgr = Get-SPClaimProviderManager
$tp = Get-SPTrustedIdentityTokenIssuer -Identity "CONTOSO ADFS Provider"

#get the string for users authenticating over claims
$claim = New-SPClaimsPrincipal -ClaimValue $groupnameClaims -ClaimType "http://schemas.xmlsoap.org/claims/Group" -TrustedIdentityTokenIssuer $tp
$claimstr = $mgr.EncodeClaim($claim)

#get the string for users authenticating over classic windows
$windowsprincipal = New-SPClaimsPrincipal -identity $groupnameWindows -IdentityType "WindowsSamAccountName"
$windowsstr = $mgr.EncodeClaim($windowsprincipal)


The Processing of Group Policy Failed: logged on user session…


Really weird one. A while ago we were notified by SCOM (System Center Operations Manager) that one of our domain controllers had issues processing group policies. The event in the event log:

[Screenshot: the Group Policy processing error in the event log]

The actual error: The processing of Group Policy failed. Windows attempted to read the file \\contoso.com\sysvol\contoso.com\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}\gpt.ini from a domain controller and was not successful. Group Policy settings may not be applied until this event is resolved. This issue may be transient and could be caused by one or more of the following:

a) Name Resolution/Network Connectivity to the current domain controller.

b) File Replication Service Latency (a file created on another domain controller has not replicated to the current domain controller).

c) The Distributed File System (DFS) client has been disabled.

First thing I checked was whether each of the domain controllers actually did have the gpt.ini file for that specific GPO:

PS C:\Users\thomas> Get-ADDomainController -filter * |% {gci \\$($_.name)\sysvol\contoso.com\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}}

This showed me that indeed all domain controllers had that file present. Somewhere online I found the following suggestion:

C:\Windows\system32>\\machine_with_management_tools_installed\c$\windows\system32\dfsutil.exe /spcflush

And that seemed to stop the error from returning. But after a few minutes I had a little “doh, I’ve seen this before” moment. The real cause (and solution) was to log off any old remote desktop sessions on that server which had been left open for a considerable amount of time. So here’s a post for myself, hoping this little bit of knowledge will stick. So whilst the actual error might sound quite scary, there’s no real impact to your end users or services.