Friday, August 30, 2013

Sharepoint error: The Module DLL 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\isapi\spnativerequestmodule.dll' could not be loaded due to a configuration problem.

Last week I deployed a sample application to my test IIS environment. But when I tried to browse to the site I got a 503 Service Unavailable Error. When I looked inside IIS, I noticed that the application pool had a problem starting up.

The event log gives more insight into the issue:

The Module DLL 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\isapi\spnativerequestmodule.dll' could not be loaded due to a configuration problem. The current configuration only supports loading images built for a x86 processor architecture. The data field contains the error number. To learn more about this issue, including how to troubleshooting this kind of processor architecture mismatch error, see http://go.microsoft.com/fwlink/?LinkId=29349.

‘Web Server Extensions’? That rings a bell: I have a SharePoint 2013 installation on the same server. But why does IIS try to load a SharePoint DLL inside my own site? The website I deployed has to run as a 32-bit process, something SharePoint 2013 doesn’t seem to like. The finger points to a new ISAPI module in SharePoint 2013 that is stopping our 32-bit site from loading.

It looks like there is a bug in SharePoint where a global module isn’t registered with a precondition that makes it load only for 64-bit application pools. Let’s have a look at the global modules by executing the following command:

appcmd.exe list config /section:globalModules

And indeed we find a reference to the DLL mentioned inside the event log without any kind of precondition:

<add name="SPNativeRequestModule" image="C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\isapi\spnativerequestmodule.dll"  />         

Let’s fix this by applying the following command:

appcmd.exe set config -section:system.webServer/globalModules /[name='SPNativeRequestModule'].preCondition:integratedMode,bitness64

This changes the configuration to the following:

<add name="SPNativeRequestModule" image="C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\isapi\spnativerequestmodule.dll" preCondition="integratedMode,bitness64" />         

And now our 32bit application loads without any issues…

Thursday, August 29, 2013

Visual Studio: Speed up your build time

One of the options that people overlook in Visual Studio is the ability to build multiple projects in parallel. This can give you a huge improvement in build time, so it’s certainly worth adjusting this setting.

You can find this option under Tools –> Options –> Projects and Solutions –> Build and Run.

Wednesday, August 28, 2013

Impress your colleagues with your knowledge about... the ExceptionDispatchInfo class

Sometimes when working with .NET you discover hidden gems. Some of them are very useful; for others it’s a little harder to find a good way to benefit from their functionality. One of those hidden gems that I discovered some days ago is the ExceptionDispatchInfo class. This class allows you to capture an exception that occurred in one place (e.g. a thread) and then rethrow it in another place (e.g. another thread). That doesn’t seem special at first, but what makes it different is that it allows you to maintain the full fidelity of the original stack trace and exception information.

How do you use this class?

Start by capturing an exception in one place using ExceptionDispatchInfo.Capture to create an ExceptionDispatchInfo instance. This instance can then be transferred from place to place.

Next, you can inspect the captured exception using the SourceException property; and finally, you rethrow the captured exception using the Throw method.
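
Put together, the capture/inspect/rethrow flow looks like this. A minimal sketch; the exception message and the Task-based setup are just for illustration:

```csharp
using System;
using System.Runtime.ExceptionServices;
using System.Threading.Tasks;

ExceptionDispatchInfo captured = null;

// Capture the exception where it occurs (e.g. on a worker thread)...
Task.Run(() =>
{
    try
    {
        throw new InvalidOperationException("Something went wrong");
    }
    catch (Exception ex)
    {
        captured = ExceptionDispatchInfo.Capture(ex);
    }
}).Wait();

// ...inspect it via the SourceException property...
Console.WriteLine(captured.SourceException.Message);

// ...and rethrow it on this thread; the stack trace still points
// at the original throw site inside the task.
try
{
    captured.Throw();
}
catch (InvalidOperationException ex)
{
    Console.WriteLine(ex.StackTrace);
}
```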

Remark: Microsoft is using this class in the Task Parallel Library to capture and rethrow exceptions.

Tuesday, August 27, 2013

The one question a manager needs to ask

Recently I started reading some of the books by Scott Berkun. Most of these books talk about management. Not that I have short-term plans to become a manager, but I’m trying to understand what's going on in my boss’s head.

One of the quotes that Scott mentions is that a manager only needs to ask one simple question to every person in a team:

“What can I(as your manager) do to help you do your best work?”

I always knew that a manager’s job was easy. ;-) But I do agree with this statement: management is about empowerment and bringing out the best in your people. I don’t like the ‘command-and-control’ style of managing.

What do you think a manager should focus on?


Monday, August 26, 2013

Technical Interviewing–You’re Doing it Wrong

As a consultant I have experience on both sides of the interviewing process; I have been the interviewee and the interviewer on multiple occasions. One of the things I noticed is that no 2 interviews are the same and everyone has their own way of working.

What I always try to do is take some of the stress away when I’m interviewing someone. No idea if I’m successful at it, but I do my best. When people are put under stress, they suddenly can’t remember some of the easiest stuff. Making an interview a stressful experience will not give you good insight into the quality of a candidate.

If you want to improve your skills as an interviewer, have a look at this interesting (and funny) Channel 9 video:

Friday, August 23, 2013

Stand alone installer for the TFS object model

Microsoft has made it possible from the beginning to extend TFS or do some basic programming against a TFS server by using the TFS object model. The only problem (next to the lack of good documentation) is that you had to install Visual Studio or Team Explorer to be able to use it.

In TFS 2012, Microsoft finally created a stand-alone installer that just installs the programmability components – making for a faster, smaller and less impactful install.

You can download the installer here: http://visualstudiogallery.msdn.microsoft.com/f30e5cc7-036e-449c-a541-d522299445aa

Remark: you still need proper licenses/CALs.

ASP.NET MVC 4 Bug: HttpParseException: Unexpected "this" keyword after "@" character.

After upgrading an ASP.NET MVC 3 project to ASP.NET MVC 4, my page failed to load with the following error message:

HttpParseException: Unexpected "this" keyword after "@" character. 

Here is the failing code block. The exception occurs on line 5:
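
The original snippet was not preserved in this post, so here is a representative Razor block that triggers the same parse error in Razor v2 (any `@this.Member` expression will do; VirtualPath is just one example of a page property):

```cshtml
@* Using 'this' directly after '@' fails to parse in Razor v2: *@
<p>Current page: @this.VirtualPath</p>
```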

This seems to be a bug in Razor V2, as mentioned here: http://aspnetwebstack.codeplex.com/workitem/458. The ASP.NET team decided not to fix it, so you have to use a workaround.

By adding some extra ‘()’ the error goes away:
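
An illustrative example (not the original code from the post) of the parenthesized workaround:

```cshtml
@* Wrapping the expression in parentheses works around the parser bug: *@
<p>Current page: @(this.VirtualPath)</p>
```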

Wednesday, August 21, 2013

ASP.NET MVC: [A]System.Web.WebPages.Razor.Configuration.HostSection cannot be cast to [B]System.Web.WebPages.Razor.Configuration.HostSection.

After upgrading an ASP.NET MVC 3 application to ASP.NET MVC 4, I got the following error message back:

[A]System.Web.WebPages.Razor.Configuration.HostSection cannot be cast to [B]System.Web.WebPages.Razor.Configuration.HostSection. Type A originates from 'System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' in the context 'Default' at location 'C:\Windows\Microsoft.Net\assembly\GAC_MSIL\System.Web.WebPages.Razor\v4.0_1.0.0.0__31bf3856ad364e35\System.Web.WebPages.Razor.dll'. Type B originates from 'System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' in the context 'Default' at location 'C:\Windows\Microsoft.Net\assembly\GAC_MSIL\System.Web.WebPages.Razor\v4.0_2.0.0.0__31bf3856ad364e35\System.Web.WebPages.Razor.dll'.


To solve it, double check if the following values are set correctly:

  • In the root web.config, check that the webpages:Version is set to 2.0.0.0:

<appSettings>
    <add key="webpages:Version" value="2.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="PreserveLoginUrl" value="true" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
</appSettings>

  • In the views web.config, check that the Razor version is also set to 2.0.0.0:

<configSections>
    <sectionGroup name="system.web.webPages.razor" type="System.Web.WebPages.Razor.Configuration.RazorWebSectionGroup, System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
      <section name="host" type="System.Web.WebPages.Razor.Configuration.HostSection, System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
      <section name="pages" type="System.Web.WebPages.Razor.Configuration.RazorPagesSection, System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
    </sectionGroup>
  </configSections>

Tuesday, August 20, 2013

curl: (1) Protocol 'http not supported or disabled in libcurl

To test a REST API I’m building, I decided to use curl.

From the website:

“curl is a command line tool for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, kerberos...), file transfer resume, proxy tunneling and a busload of other useful tricks.”

So after downloading the curl version for Windows, I opened a command prompt and browsed to the folder where I extracted the zip file.

I started with a simple command to invoke a PUT request to my API:

curl -XPUT 'http://localhost:9200/api/twittervnext/tweet'

Woops, first try and already an error:

curl: (1) Protocol 'http not supported or disabled in libcurl

The reason for this error is kind of stupid: the Windows command prompt doesn’t treat single quotes as quoting characters, so the leading ' ends up as part of the protocol. Use double quotes (or none at all) instead:

curl -XPUT http://localhost:9200/api/twittervnext/tweet

Monday, August 19, 2013

Forget the long planning sessions and work break down: use Blink estimation

One of the less fun parts of my job is that I spend a lot of time doing estimations. As with all estimates, sometimes they are quite accurate and sometimes they are horribly wrong. Either way it costs us time and money to produce them, and in the end they are… just estimates.

Most of the time I use a work breakdown to create my estimates. This approach has 3 big disadvantages (in my opinion):

  • It’s very time-consuming
  • It leads to overestimating. Typically the more depth you add to the work breakdown, the higher the estimation will be.
  • It doesn’t allow you to manage uncertainty. In the end you still have to use some gut feeling to apply a factor depending on the level of uncertainty or risk.

Last week I was reading about an alternative approach: Blink estimation.

The idea of Blink estimation is that if you bring the right mix of people with sufficient experience into the room, you can reasonably assess how a project is similar to, and different from, previous ones, and how much you should reasonably invest to realize the business impact associated with it. If everyone’s gut feeling is that the project can be done in 5 months with 4 people, then chances are that we will indeed succeed in realizing it in 5 months with 4 people.

There are a small number of “rules” to increase the likelihood of it working. There are three things you need for blink estimation:

Experts estimating

Blink estimation is a comparison exercise that draws on the context you’ve built up becoming an expert, so you need a context of many previous projects. Gather people from different disciplines: testers, analysts, project managers, programmers, architects, operations and support engineers, among others. Choose disciplines that are likely to have overlapping but distinct areas of expertise. First discuss the potential risks and pitfalls. When people are moving towards a decision point, ask everyone to estimate size and duration; if there are large variances you can explore the assumptions behind the outliers.

Expert messenger

The expert messenger is about knowing how to frame the output of the exercise so it makes sense to the customer. You need to be comfortable describing how experts rely on instinct and intuition, and defending that as a legitimate basis for estimation.

Expert customer

Having an expert customer is probably the hardest part. You need a customer that understands the concept and is convinced that it can work. Spend some time and effort educating your customer or client. Once they are on board with the idea of trusting the delivery team’s experience, lots of things become a lot smoother.

Read the full article by Dan North and get inspired!

Friday, August 16, 2013

WIF: IsAuthenticated is false on a ClaimsIdentity

Let’s end this week with one last WIF-related post. When you create a new ClaimsIdentity in your code and check the IsAuthenticated property, it will return false by default. This is a breaking change compared to the previous identity types. The idea is that in a claims model your users can be anonymous even though they have some claims (e.g. an age claim).

If you want your IsAuthenticated property to return true, you need to set an authentication type when instantiating the ClaimsIdentity object:
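
For example (the claim value and the authentication type name below are arbitrary; any non-empty authentication type will do):

```csharp
using System;
using System.Security.Claims;

var claims = new[] { new Claim(ClaimTypes.Name, "wesley") };

// Without an authentication type the identity stays anonymous.
var anonymous = new ClaimsIdentity(claims);
Console.WriteLine(anonymous.IsAuthenticated);      // False

// Passing a non-empty authentication type marks the identity
// as authenticated.
var authenticated = new ClaimsIdentity(claims, "CustomAuthenticationType");
Console.WriteLine(authenticated.IsAuthenticated);  // True
```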

Thursday, August 15, 2013

WCF: Enable the WIF security integration

WCF has its own security model built in. This security model has some overlap with what WIF (Windows Identity Foundation) has to offer. Starting from .NET 4.5 you can replace the WCF security pipeline with a WIF equivalent. This means that we can start using classes like ClaimsAuthenticationManager and ClaimsAuthorizationManager to manage claims-based security in our WCF service.

To enable this new model, you have to set the UseIdentityConfiguration property on the ServiceCredentials behavior to true. This tells WCF to get its token handling configuration from the <system.identityModel> configuration section. Internally this replaces the standard ServiceCredentials with FederatedServiceCredentials.
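
In configuration this boils down to a single attribute on the serviceCredentials behavior (a sketch; the behavior name is illustrative):

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="wifBehavior">
      <serviceCredentials useIdentityConfiguration="true" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```

When self-hosting, the equivalent in code is setting host.Credentials.UseIdentityConfiguration = true before opening the ServiceHost.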

Wednesday, August 14, 2013

WCF: Thread.CurrentPrincipal is null when using the WIF pipeline in .NET 4.5

When using the new and improved WIF functionality of .NET 4.5 in a WCF service, I noticed that although OperationContext.Current.ClaimsPrincipal was set correctly, Thread.CurrentPrincipal was not. To tell WCF to put the ClaimsPrincipal coming from the token handler on Thread.CurrentPrincipal, you have to add the following service behavior to your configuration:
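
A sketch of that behavior; the behavior name is illustrative, the relevant part is setting the principal permission mode to Always:

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="wifBehavior">
      <serviceAuthorization principalPermissionMode="Always" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```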

The end result is a ClaimsPrincipal containing the username, authentication method and authentication instant claims. Also the claims transformation/validation/authorization pipeline will be called if configured.

Tuesday, August 13, 2013

The configuration section 'system.identityModel' cannot be read because it is missing a section declaration

If you see the following error message in your WIF enabled application:

The configuration section 'system.identityModel' cannot be read because it is missing a section declaration.

Don’t search too far for the solution; the issue is exactly what the error message says. You probably forgot to add the configuration section and namespace declaration under the <configSections> element in your app.config or web.config:
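
A sketch of the declarations as they ship with WIF in .NET 4.5 (include the services section only if you also use System.IdentityModel.Services):

```xml
<configSections>
  <section name="system.identityModel"
           type="System.IdentityModel.Configuration.SystemIdentityModelSection, System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
  <section name="system.identityModel.services"
           type="System.IdentityModel.Services.Configuration.SystemIdentityModelServicesSection, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
</configSections>
```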

Who says error messages can’t be descriptive? :-)

Monday, August 12, 2013

Configuring a sliding expiration on the System.Runtime.Caching.MemoryCache

One of the most useful caching configurations I use is a sliding expiration. This means that as long as an item in the cache is requested frequently, the item remains in the cache. This gives you a good balance between cache size and performance.

To configure this on the .NET MemoryCache you can use the following code:
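
A minimal sketch (the cache key, value and the 20-minute window are arbitrary):

```csharp
using System;
using System.Runtime.Caching;

var cache = MemoryCache.Default;

var policy = new CacheItemPolicy
{
    // The item is evicted only after 20 minutes without access;
    // every read resets the clock.
    SlidingExpiration = TimeSpan.FromMinutes(20)
};

cache.Set("customer:42", "some cached value", policy);
Console.WriteLine(cache.Get("customer:42"));
```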

Friday, August 9, 2013

Error: TF400534: Package (tfs_objectmodel_x64) caching failed with the following status: 0x80070001

If you see a TFS installation (or the installation of an update) fail with the following error message:

Error: TF400534: Package (tfs_objectmodel_x64) caching failed with the following status: 0x80070001

it probably means that your installation media is corrupt. Just download it again and restart the installation procedure.


Thursday, August 8, 2013

Simplify the debugging of Single Page Applications: disable the browser cache

Browsers are really good at caching your data. Most of the time this is exactly what you need, until you are building a Single Page Application and continuously changing the HTML, CSS and JavaScript files it uses.

To simplify the debugging process you can (temporarily) disable the browser cache:

  • In Internet Explorer:
    • Open the Developer Tools(F12)
    • Go to the Cache menu item and enable the Always refresh from server option.


  • In Chrome:
    • Open the Developer Tools (F12)
    • Go to the Settings by clicking on the small gear icon in the right corner.
    • On the Settings page, check the checkbox next to Disable cache (while DevTools is open).

  • In Firefox:
    • Open Firebug (F12)
    • Go to the Net menu item and click on the small arrow next to it. Select the Disable Browser Cache option.

Wednesday, August 7, 2013

Clearing the System.Runtime.Caching.MemoryCache

In one of the applications I’m building I wanted to abstract away the usage of the .NET MemoryCache behind a nice clean interface:

Implementing most parts of this interface wasn’t hard, but I struggled with the Clear method. There is no easy way to just clear all items from the MemoryCache. I tried the Trim() method, but it didn’t have the desired effect. I was able to solve it by disposing the existing cache instance and creating a new one behind the scenes. Here is the complete implementation:
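
The original snippets from this post were not preserved, so the interface and member names below are an assumption; the dispose-and-recreate trick in Clear is the part described above:

```csharp
using System;
using System.Runtime.Caching;

// Illustrative interface; the original member names were not preserved.
public interface ICache
{
    void Add(string key, object value);
    object Get(string key);
    void Remove(string key);
    void Clear();
}

public class InMemoryCache : ICache
{
    private MemoryCache _cache = new MemoryCache("InMemoryCache");
    private readonly object _lock = new object();

    public void Add(string key, object value)
    {
        _cache.Set(key, value, new CacheItemPolicy());
    }

    public object Get(string key)
    {
        return _cache.Get(key);
    }

    public void Remove(string key)
    {
        _cache.Remove(key);
    }

    // MemoryCache has no built-in 'remove everything' operation, so
    // swap in a fresh instance and dispose the old one.
    public void Clear()
    {
        lock (_lock)
        {
            var old = _cache;
            _cache = new MemoryCache("InMemoryCache");
            old.Dispose();
        }
    }
}
```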

Tuesday, August 6, 2013

Enterprise Library 6: InvalidOperationException - Database provider factory not set for the static DatabaseFactory. Set a provider factory invoking the DatabaseFactory.SetProviderFactory method or by specifying custom mappings by calling the DatabaseFactory.SetDatabases method

Last week I finally had some time to have a look at Enterprise Library version 6. The Microsoft Patterns and Practices team decided to remove the dependency on Unity, so creating an EntLib object should now be done using one of the available factory classes.

In this case I tried to create an instance of a Database object:

Database db = DatabaseFactory.CreateDatabase("Northwind");

Life is never that simple, so it failed with the following exception message:

“Database provider factory not set for the static DatabaseFactory. Set a provider factory invoking the DatabaseFactory.SetProviderFactory method or by specifying custom mappings by calling the DatabaseFactory.SetDatabases method.”

With Enterprise Library 6 we first have to specify how the application block should load its configuration information. For the Database Application Block you can use the SetDatabaseProviderFactory method:

DatabaseFactory.SetDatabaseProviderFactory(new DatabaseProviderFactory());

Probably even better is to avoid the static DatabaseFactory altogether and use the DatabaseProviderFactory class directly:

DatabaseProviderFactory factory = new DatabaseProviderFactory();
Database db = factory.Create("Northwind");

Monday, August 5, 2013

Attempt by security transparent method ‘SomeMethod’ to access security critical method ‘SomeOtherMethod’ failed.

Last week I was extending the Enterprise Library Logging Application Block to include some extra information when logging data. I wanted to validate incoming parameters, so I added a reference to a helper library of mine that contains a set of validation extension methods.

Everything compiled nicely but when I tried to run the application it failed with the following error message:

“Attempt by security transparent method ‘FormattedDatabaseTraceListener.ctor()’ to access security critical method ‘StringExtensions.IsEmpty(System.String) failed.’”


This error is caused by our good old friend CAS (Code Access Security). Most people never used CAS because it was way too complex to debug issues. In .NET 4.0 it got replaced by a much simpler security model (but I don’t think anyone will give it a second try). In this case, however, the Microsoft Patterns and Practices team sets a good example and is using it in the Enterprise Library codebase.

If we look inside the AssemblyInfo.cs file, we see that the assembly is marked with the SecurityTransparent attribute. This means that the code is transparent: the entire assembly will not do anything privileged or unsafe.

[assembly: SecurityTransparent]

The problem is that the moment you start adding these security attributes, you also have to configure the CAS for any other assemblies you call.

In this case I don’t care about CAS, so I just removed the [assembly: SecurityTransparent] line from the AssemblyInfo.cs file.

Friday, August 2, 2013

Team Foundation Server 2012 Updates: should I install previous updates first?

Short answer: NO. :-)

Somewhat longer answer: starting from TFS 2012, Microsoft switched to a faster pace, releasing new functionality every 3 months through so-called ‘Updates’. There have been 3 updates for TFS 2012 so far. If you want to upgrade, just take the latest version; all previous updates will be applied as well.

Thursday, August 1, 2013

Team Foundation Server 2012 Build: The deployed status

When looking at the TFS 2012 Web Access, I noticed that there are 3 possible build states: ‘queued’, ‘completed’ and ‘deployed’.


‘queued’ and ‘completed’ I was aware of but ‘deployed’ I had never noticed before.

I found out that the ‘deployed’ value is only set when you are using the Azure integration. But if you really want to use it yourself, you can always set the value through PowerShell: