Saturday, September 19, 2009

Instantiating types with no public constructors

On some occasions you want to instantiate a type that has no public constructor. Why would you want to do this?

For example, imagine you have a customer class in your system. One of the rules you always try to follow is that an object can never be in an invalid state. For the customer class this means we always need at least a first name and a last name. How can we make sure that a customer always has these two values? That's easy.

By using a public constructor that asks you to provide these two values, and by exposing read-only properties for the first and last name, you're certain that your customer object is valid.

But when your data layer is fetching customers from the database, it's far easier to let it use a parameterless constructor than to make it call your constructor with two parameters (assuming you want to keep your data layer as generic as possible). So in that case you can add a second, protected constructor without any parameters. Nobody outside the class can call this constructor directly, so developers will always use the constructor with the first name and last name parameters.

Anyway, how can you get this done? One of the overloads of the Activator.CreateInstance method makes this possible.
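As a minimal sketch (reusing the Customer example from above; the property names are my own assumptions), the CreateInstance(Type, bool) overload with the nonPublic flag set to true will happily call the protected parameterless constructor:

```csharp
using System;

public class Customer
{
    // The only public constructor enforces a valid state.
    public Customer(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    // Protected parameterless constructor for the data layer.
    protected Customer() { }

    public string FirstName { get; private set; }
    public string LastName { get; private set; }
}

class Program
{
    static void Main()
    {
        // Passing 'true' for the nonPublic parameter allows Activator
        // to use the protected parameterless constructor.
        var customer = (Customer)Activator.CreateInstance(typeof(Customer), true);
        Console.WriteLine(customer != null); // prints True
    }
}
```

Note that `new Customer()` would not compile here; only reflection-based code like Activator.CreateInstance (or a derived class) can reach the protected constructor.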

.NET 4.0 Goodies: Linq to Filesystem

One of the new additions in .NET 4.0 is the set of enumerable file and directory APIs. Until now this has really been a problem: when you have a large number of files or folders, loading them all into an array at once is a bad idea. The new APIs return an IEnumerable<> so that you can work with one item at a time. And as we all know, LINQ and IEnumerable<> are best friends.
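A quick sketch of the idea (the temp folder is used only so the sample runs anywhere; the filter is made up for illustration):

```csharp
using System;
using System.IO;
using System.Linq;

class EnumerateDemo
{
    static void Main()
    {
        // Directory.EnumerateFiles streams results one at a time,
        // unlike Directory.GetFiles, which builds the whole array first.
        string folder = Path.GetTempPath();

        var firstFive = Directory
            .EnumerateFiles(folder, "*.*", SearchOption.TopDirectoryOnly)
            .Where(f => new FileInfo(f).Length > 0) // only non-empty files
            .Take(5);                               // enumeration stops after five hits

        foreach (var file in firstFive)
            Console.WriteLine(file);
    }
}
```

Because the query is lazy, Take(5) means the directory is only enumerated far enough to find five matches, no matter how many files the folder contains.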

I like it!

Impress your colleagues with your knowledge about... the ICustomTypeDescriptor interface.

Sometimes when working with C# you discover some hidden gems. Some of them are very useful, others are a little harder to benefit from. One of those hidden gems that I discovered a few days ago is the ICustomTypeDescriptor interface.

This interface lives in an obscure part of the System.ComponentModel namespace, where a lot of other interesting components can be found. As I had no clue about the possible usage of this interface, I started to play around with it. It turns out that the ICustomTypeDescriptor interface can be used to craft, at run time, your own pseudo-properties that wrap each of an object's fields. What does this mean?

For example, it makes it possible to display and edit fields in the WinForms PropertyGrid control. This feature also works in a databinding context. When binding an object to a Windows Forms or ASP.NET control, the framework uses reflection to get the bindable properties of the object. However, when an object implements the ICustomTypeDescriptor interface, the framework gets the object's properties from the ICustomTypeDescriptor.GetProperties() method. So the solution is to include all the properties we need to display.

In this way it allows you to bind not only to the properties but also to the fields of an object. You fool the databinding system into thinking that your object has some extra properties available. You can even create properties that don't exist at all!
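Here is a minimal sketch of the field-wrapping trick (the Person class and its Name field are made up for illustration). Deriving from the CustomTypeDescriptor helper class saves you from implementing all twelve ICustomTypeDescriptor members by hand; note that for the databinding scenario described above, the bound object itself has to implement the interface or register a TypeDescriptionProvider.

```csharp
using System;
using System.ComponentModel;
using System.Linq;
using System.Reflection;

// A PropertyDescriptor that exposes a public field as if it were a property.
class FieldPropertyDescriptor : PropertyDescriptor
{
    private readonly FieldInfo field;

    public FieldPropertyDescriptor(FieldInfo field) : base(field.Name, null)
    {
        this.field = field;
    }

    public override Type ComponentType { get { return field.DeclaringType; } }
    public override Type PropertyType { get { return field.FieldType; } }
    public override bool IsReadOnly { get { return false; } }
    public override bool CanResetValue(object component) { return false; }
    public override void ResetValue(object component) { }
    public override bool ShouldSerializeValue(object component) { return false; }
    public override object GetValue(object component) { return field.GetValue(component); }
    public override void SetValue(object component, object value) { field.SetValue(component, value); }
}

// Wraps an instance and reports its public fields as pseudo-properties.
class FieldsAsProperties : CustomTypeDescriptor
{
    private readonly object instance;

    public FieldsAsProperties(object instance) { this.instance = instance; }

    public override PropertyDescriptorCollection GetProperties()
    {
        var descriptors = instance.GetType()
            .GetFields(BindingFlags.Public | BindingFlags.Instance)
            .Select(f => (PropertyDescriptor)new FieldPropertyDescriptor(f))
            .ToArray();
        return new PropertyDescriptorCollection(descriptors);
    }
}

class Person { public string Name; }

class Program
{
    static void Main()
    {
        var person = new Person { Name = "Alice" };
        var descriptor = new FieldsAsProperties(person);
        foreach (PropertyDescriptor prop in descriptor.GetProperties())
            Console.WriteLine("{0} = {1}", prop.Name, prop.GetValue(person)); // Name = Alice
    }
}
```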

I don't immediately see a situation where I can use this interface, but I'm sure I'll find something ;-). In the meantime you can use it to impress your colleagues with your knowledge of C#.

If you want to know more about it, I suggest reading the ICustomTypeDescriptor, Part 1 and ICustomTypeDescriptor, Part 2 articles in MSDN Magazine.

Wednesday, September 16, 2009

Design Fundamentals for Developers

You'll have to admit: most developers are not very good at creating fancy user interfaces. If you want to improve your design skills, have a look at the Design Fundamentals for Developers session from Mix 2009.

I almost forgot that C# supports pointers

As I was working on a new feature today, I noticed the following when creating a new string instance:

What's that strange '*' again? It's the syntax to use a pointer, of course! I almost forgot that C# supports pointers. (Those d*mned managed languages!)

If you can't remember what a pointer is: it's an address reference to a memory location. In .NET the garbage collector (GC) manages the memory. This means that whenever the GC cleans up memory, it can relocate the data, and as a consequence those address references can change. If you don't want that, the fixed keyword prevents this relocation.

This sample shows you how to use pointers in your code:

using System;

class Test
{
    unsafe static void WriteLocations()
    {
        int* p;
        int i;

        i = 10;
        p = &i;

        // Print the address stored in the pointer
        string addr = Convert.ToString((int)p);
        Console.WriteLine(addr);

        // Dereference the pointer to read the value of i
        Console.WriteLine(*p);

        // Change the value of i through the pointer
        *p = 100;
        Console.WriteLine(i);

        // Take the address of i, dereference it, and add 15
        i = *(&i) + 15;
        Console.WriteLine(i);
    }

    static void Main()
    {
        WriteLocations();
    }
}

Don't forget that using pointers makes your code unsafe, so you'll have to compile the assembly with the /unsafe compiler option. As a general rule, avoid the direct use of pointers; there is a good reason why we have managed languages in the first place.
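For completeness, here is a small sketch of the fixed keyword mentioned above (again, compile with /unsafe; the method name is my own). It pins a managed array so the GC cannot relocate it while a pointer into it is alive:

```csharp
using System;

class FixedDemo
{
    // The fixed statement pins the array for the duration of the block,
    // so the garbage collector cannot move it while we hold a pointer into it.
    public unsafe static int SumWithPointer(int[] numbers)
    {
        int sum = 0;
        fixed (int* p = numbers)
        {
            for (int i = 0; i < numbers.Length; i++)
                sum += *(p + i); // pointer arithmetic over the pinned array
        }
        return sum;
    }

    static void Main()
    {
        Console.WriteLine(FixedDemo.SumWithPointer(new[] { 1, 2, 3, 4 })); // prints 10
    }
}
```

Once the fixed block ends, the array is unpinned again; keeping objects pinned longer than necessary hurts the garbage collector.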

Help! SQL Server is eating my memory.

Some of my team members were complaining that Team Foundation Server was really slow. As the assigned 'TFS guru', I was the ultimate victim to look into the issue.
As I logged on to the TFS server (and waited a looooong time), I noticed in Task Manager that CPU usage was rather normal, but all the memory was in use and an enormous page file was eating up the hard disk space. After some sorting magic on the Mem Usage column I saw that sqlservr.exe was using all the memory. More than 2 GB of memory usage seemed a little bit over the edge.
There were two things that helped me investigate the issue.

The first is the following pair of commands: DBCC MemoryStatus and DBCC MemUsage.
Second, you can have a look at the huge number of SQL Server performance counters that can help you find the problem.

But these didn't help me to pinpoint the exact problem.

One of my colleagues suggested that SQL Server was probably using all this memory because it prefers caching data in memory over doing a lot of disk I/O (which sounded logical). So I opened SQL Server Management Studio and had a look at the properties of the database server.
I browsed to the Memory page, where I limited the 'Maximum server memory' setting, and everything worked smoothly again.

The application-specific permission settings do not grant Local Activation permission for the COM Server application with CLSID...

As I arrived at work today, I was confronted with some Team Foundation Server error messages in my mailbox. There are better ways to start a working day, so I opened the event log of our TFS server and found a repeating list of the following error message:


The application-specific permission settings do not grant Local Activation permission for the COM Server application with CLSID {61738644-F196-11D0-9953-00C04FD919C1} to the user <serverName>\tfsservices SID (S-1-5-21-<serviceSID>).

No idea what caused this error (all tips are welcome).
You can solve it by executing the following actions:
  • Copy the GUID following the CLSID above, and let's dive into the Windows Registry. So fire up RegEdit.
  • In the menu choose Edit --> Find and paste in the GUID. It'll stop at the application entry. In the right-side pane you'll see the application name. In this example, it was the IIS WAMREG admin service that popped up.
  • Now open Component Services (typically, on the server: Start --> Administrative Tools --> Component Services), expand Component Services, Computers, My Computer, DCOM Config. Scroll down and find the application (IIS WAMREG in this case). Right-click --> Properties and select the Security tab. In the first block, Launch and Activation Permissions, ensure that the Customize radio button is selected and click Edit. Now add your tfsservices account, giving it Local Launch and Local Activation permission, and, if required, Remote Launch / Remote Activation permission.
  • Restart IIS and continue on.

All lights are green again ...

Monday, September 14, 2009

Truncate database logs

If you create a SQL Server database, by default it's configured to use the 'Full' recovery model. As a consequence, your transaction logs are only cleaned up after a full backup. So if your database is not backed up (hopefully it is), your log files keep growing and growing.
You can always clean up the log files yourself. First check for any open transactions by using the following command (note that DBCC OPENTRAN takes the database name, not the log file name):
DBCC OPENTRAN(<DatabaseName>)
If there are no running transactions, you can safely run following commands:
USE DatabaseName
GO
DBCC SHRINKFILE(<TransactionLogName>, 1)
BACKUP LOG <DatabaseName> WITH TRUNCATE_ONLY
DBCC SHRINKFILE(<TransactionLogName>, 1)
GO

This will shrink the logs to the minimum size possible.

Friday, September 11, 2009

Caliburn - The mighty sword is almost RTM

I do like WPF and Silverlight. And tools built on top of them, like Prism and Caliburn, make them even better. If you don't know what Caliburn is: it can best be described as a lightweight framework that helps you implement some important UI patterns on top of WPF and Silverlight. It makes your life a lot easier and your boss happy, because you'll be a lot more productive.

So it's good to see that Caliburn finally made it to Release Candidate 3. Let's hope we get a first release soon.

Go grab the bits!

Monday, September 7, 2009

Scott Guthrie about VS 2010 and .NET 4

Scott Guthrie, one of the all-time Microsoft gurus, announced a series of blog posts about the upcoming release of Visual Studio 2010 and the .NET 4.0 framework.
A must read!

Saturday, September 5, 2009

Will MS Test always be the little brother of NUnit?

As the release of Visual Studio 2010 is coming closer, I'm looking more and more at the new and exciting stuff that will be provided. Features like the historical debugger, automated UI testing, and so on will make my life as a developer much easier.
But as a big believer in TDD, I spend a lot of time writing unit tests. I was always a little disappointed that Visual Studio's unit testing possibilities were rather limited, especially compared to the rich API provided by open source alternatives like NUnit and xUnit. While reading about the upcoming release of VS2010, I understood that a big investment had been made in improving the testing capabilities. Excited by this news, I started to play around with Beta 1, only to discover that nothing has changed in the unit testing space :-(

Am I missing something? I think Microsoft is missing a big opportunity here, especially since people inside Microsoft are advocating unit testing themselves.

Thursday, September 3, 2009

NHibernate - Embedded Resource


Although I've been using NHibernate for years, I keep making the same mistake. If you're using the default XML mappings to map your domain model to your database, you'll end up with a lot of .hbm.xml files.

So you start up your application, initialize the SessionFactory, execute your first HQL query, and then... oops:

NHibernate.QueryException: in expected: (possibly an invalid or unmapped class name was used in the query)

What happened? I forgot to change the Build Action of the .hbm.xml files to "Embedded Resource". This error can be very annoying if you start looking in the wrong places. So if you get an error after adding new mapping files, always start by checking your build actions. It could save you a lot of time.
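A quick way to check the build actions from code is to list what actually got embedded in the assembly (MyDomainType in the comment is a placeholder for any type in your domain assembly):

```csharp
using System;
using System.Linq;
using System.Reflection;

class MappingCheck
{
    static void Main()
    {
        // Lists every .hbm.xml file that was actually embedded in the assembly.
        // If one of your mapping files is missing from this list, its Build
        // Action is not set to "Embedded Resource".
        var assembly = Assembly.GetExecutingAssembly(); // or typeof(MyDomainType).Assembly

        var mappings = assembly.GetManifestResourceNames()
                               .Where(name => name.EndsWith(".hbm.xml"));

        foreach (var resource in mappings)
            Console.WriteLine(resource);
    }
}
```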

Of course, nowadays you no longer need to be confronted with this type of error, because you have two alternatives available: attribute-based mapping and Fluent NHibernate.