Posts

Showing posts from August, 2024

ADFS claim rules - Lessons learned

ADFS has the concept of claim rules, which allow you to enumerate, add, delete, and modify claims. This is useful when, for example, you want to introduce extra claims (based on data in a database or AD) or transform incoming claims. I wrote about claim rules before, but I want to give a heads-up about some of the lessons I learned along the way. Lesson 1 - Claim rules can be configured at 2 levels. There are 2 locations in ADFS where you can configure claim rules. The first one is at the level of the relying party. The second is at the level of the claims provider. If you don’t know what either is, here is a short explanation: in Active Directory Federation Services (ADFS), a claims provider is the entity that authenticates users and issues claims about them. This can be Active Directory but also another IP-STS. A relying party is an application or service that relies on the claims provided by the claims provider to make authorization decisions. Essentially, the claims provider verifies…
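For readers who have never seen one, claim rules are written in the ADFS claims rule language. The snippet below is a minimal, illustrative transform rule; the claim types are just placeholders and not necessarily the ones discussed in the post:

    c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
        => issue(Type = "http://schemas.xmlsoap.org/claims/CommonName", Value = c.Value);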

Log Parser–Parse IIS Logs with custom fields–Part II

Yesterday I talked about Log Parser and Log Parser Studio as tools to help parse files (in my case, IIS log files). I was struggling to find a way to parse IIS log files with custom fields in Log Parser Studio. I solved it by calling Log Parser directly. However, after writing my post yesterday I continued experimenting a little bit and noticed that when I changed the log type from IISW3C to W3C, Log Parser Studio was able to process the files and return results. I don’t know why I didn’t try this before, but the good news is that it works!
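To give an idea of what such a query looks like, here is a hypothetical Log Parser Studio query (with the log type set to W3C instead of IISW3C, as described above), assuming the standard '[LOGFILEPATH]' token that Log Parser Studio replaces with the selected log files:

    SELECT TOP 10 cs-uri-stem AS Url, COUNT(*) AS Hits
    FROM '[LOGFILEPATH]'
    GROUP BY cs-uri-stem
    ORDER BY Hits DESC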

Log Parser–Parse IIS Logs with custom fields

Although it is a really old tool, I still use Log Parser from time to time to parse log files. It is really fast and can parse large amounts of data in a short time. And if you are afraid of the complexity of the tool, you can first try Log Parser Studio, which simplifies the usage of Log Parser. My main use case is to parse IIS logs and extract useful information from them. However, recently Log Parser no longer processed the IIS log files and the results remained empty. What is going on?! Let’s find out… The first thing I noticed that was different compared to before is that the IIS log files had a slightly different naming format; a ‘_x’ was added to every log file name. Inside the documentation I found the following explanation: Once custom fields have been configured, IIS will create new text log files with "_x" appended to the file name to indicate that the file contains custom fields. And indeed, when I checked the log file I could see that a few ‘crypt-‘ fields were added…
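The excerpt stops before the workaround, but as an illustration of what a direct Log Parser invocation against such files can look like (path, file name pattern and query are placeholders; the generic W3C input format is used here instead of IISW3C):

    LogParser.exe -i:W3C -o:CSV "SELECT date, time, cs-uri-stem, sc-status FROM C:\inetpub\logs\LogFiles\W3SVC1\u_ex*_x.log"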

Error upgrading Podman

If you are using Podman Desktop, you are in fact using a combination of 2 tools. One is Podman, an open source container, pod, and container image management engine. The other is Podman Desktop itself, which is a graphical layer on top of Podman (and other container engines). This means that you can use different versions of Podman in combination with different versions of Podman Desktop. For example, as you can see in the screenshot below, I’m using version 5.2.0 of Podman whereas my Podman Desktop version is v1.12.0. Today, after upgrading Podman, Podman Desktop started to return the following error message when I tried to initialize the Podman virtual machine: Error: vm "podman-machine-default" already exists on hypervisor. I tried to fix it by removing the virtual machine distribution from WSL: wsl --unregister podman-machine-default. When I tried to initialize the Podman virtual machine…
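The excerpt is cut off before the actual resolution, but as a rough sketch of the kind of cleanup that is usually involved, a stale machine can also be removed and recreated through Podman itself rather than through WSL (the machine name shown is the default one):

    podman machine stop podman-machine-default
    podman machine rm podman-machine-default
    podman machine init
    podman machine start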

Visual Studio–Cannot run when setup is in progress

This morning when I tried to open up Visual Studio, it refused with the following error message: Strange! I had updated Visual Studio some time before and I was quite certain I had used it a few times since then. I started looking in Task Manager to see if I could identify a process that could be related to a Visual Studio update. Unfortunately I couldn’t find an obvious candidate. In the end I grabbed another coffee and after about 10 minutes, I was able to start Visual Studio without a problem. Problem solved?!

.NET 8– Improved build output

Before .NET 8, when you built an application from the command line, the output looked like this: Starting with .NET 8, a new terminal logger is available that has the following improvements compared to the default console logger: better use of colors, display of the execution time, better indication of the build status, improved grouping of warnings and errors, and a hyperlink to the output file if the build succeeds. Unfortunately this new terminal logger is not enabled by default. You need to use the ‘--tl’ flag to enable it. There is also an option to always use this new terminal logger by setting the MSBUILDTERMINALLOGGER environment variable to any of the following values: true (always use the new terminal logger), false (never use the new terminal logger), or auto (use the new terminal logger when supported by your console). More information: dotnet build command - .NET CLI | Microsoft Learn
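As a quick illustration of both options described above (the PowerShell line is just one way to set the environment variable):

    # opt in for a single build
    dotnet build --tl

    # or opt in via the environment variable (PowerShell example)
    $env:MSBUILDTERMINALLOGGER = "auto"
    dotnet build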

Combining Semantic Kernel with Podman AI Labs

Yesterday I talked about Podman AI Labs as an alternative to Ollama to run your Large Language Models locally. Among the list of features I noticed the following one: Mmh, an OpenAI compatible API… That made me wonder if I could use Semantic Kernel to talk to the local service. Let’s give it a try… I first add the minimal amount of code to use Semantic Kernel. Compared to the same code using Ollama there are only 2 important things to notice: I adapted the URI to match the service URI running inside Podman, and I could set the ModelId to any value I want as the endpoint only hosts one specific model (granite in this example). And just to prove that it really works, here are the results I got back: This is again a great example of how the abstraction that Semantic Kernel offers simplifies interacting with multiple LLMs. Nice! IMPORTANT: I first tried to get it working with the latest prerelease of Semantic Kernel (1.18.0-rc). However, when I used that version…
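The post's actual code is not included in this excerpt, so here is a minimal sketch of what such a setup can look like. The endpoint URL and model id are placeholders, and the AddOpenAIChatCompletion overload that accepts a custom endpoint is marked experimental in recent Semantic Kernel versions, hence the pragma:

    #pragma warning disable SKEXP0010 // custom (OpenAI-compatible) endpoints are experimental

    using Microsoft.SemanticKernel;

    // Point the OpenAI connector at the local OpenAI-compatible endpoint
    // exposed by Podman AI Lab (URL and model id are placeholders).
    var kernel = Kernel.CreateBuilder()
        .AddOpenAIChatCompletion(
            modelId: "granite",                            // any value works, the endpoint hosts a single model
            endpoint: new Uri("http://localhost:35000/v1"),
            apiKey: "not-used")                            // the local service does not validate the key
        .Build();

    var result = await kernel.InvokePromptAsync("Explain in one sentence what Podman AI Lab is.");
    Console.WriteLine(result);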

Run LLMs locally using Podman AI Lab

So far I’ve always used Ollama as a way to run LLMs locally on my development machine. However, recently I discovered the Podman AI Lab extension as an alternative solution to work with Large Language Models on your local machine. In this post I’ll share my experience trying out the Podman AI Lab extension. Remark: I assume you already have Podman Desktop up and running on your machine. (If not, check one of my previous posts: Kubernetes–Setup a local cluster through Podman Desktop) Installation: you need to have at least the following versions to get started: Podman Desktop 1.8.0+ and Podman 4.9.0+. If the prerequisites are ok, installing the extension should be really easy. Based on the documentation it should be sufficient to click on the installation link to get going. Unfortunately this didn’t work on my machine and I got the following error message instead: Turns out that the update had failed for an unknown reason and that I was still running an older P…

.NET Conf–Focus on AI

Hi there! Free up your agenda, as today a new edition of .NET Conf is happening. I think you can already guess based on the title that the focus this time will be on AI. So if you are a .NET developer and you didn’t have time yet to dive into the possibilities that .NET has to offer to build smarter, AI-infused applications, this day is for you. The day starts with a keynote session by Scott Hanselman and Maria Naggaga Nakanwagi where they will discuss the basics of AI and LLMs, and provide an overview of the latest advancements in AI. Some of the other sessions I’m looking forward to: Better Together: .NET Aspire and Semantic Kernel by Steve Sanderson & Matthew Bolanos; RAG on Your Data with .NET, AI, and Azure SQL by Davide Mauri; and H&R Block: Lessons Learned from Applying Generative AI to Apps with .NET and Azure by Vin Kamat. Of course there are more sessions to watch, so there will certainly be something you like. More information: Announcing .NET Con…

Use dotnet pack with nuspec file

My understanding has always been that when packaging a library in a NuGet package, I had 2 options available: option 1 - using a nuspec file with the nuget pack command, or option 2 - using a csproj file with the dotnet pack command. Turns out I was wrong. So I want to use this post to explain where I was wrong. But before I do that, let me dive into the 2 options above… Using a nuspec file with nuget pack: one option you have is to put all the metadata for your NuGet package inside a nuspec file. Here is what such a file can look like: To transform this nuspec file into a valid NuGet package, use the following command: nuget pack example.nuspec Using a csproj file with dotnet pack: a second option is to put all the metadata inside your csproj file: Now we can transform this into a valid NuGet package using the following command: dotnet pack example.csproj Where was I wrong? Both of the options above work, but first of all I thought that these were the only options (my first…
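The excerpt cuts off before the explanation, but the title already gives away the direction: dotnet pack can also consume a nuspec file. A hedged sketch of how that is typically wired up via MSBuild properties (file names are placeholders):

    dotnet pack example.csproj /p:NuspecFile=example.nuspec /p:NuspecBasePath=.

The same properties (NuspecFile, NuspecBasePath, NuspecProperties) can also be set inside the csproj itself instead of on the command line.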

SqlException - Transaction (Process ID XX) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

I’m currently migrating data between 2 systems. Therefore I built a small migration tool (using the great TPL Dataflow library). While everything worked fine during development, I noticed that the migration failed on production with the following exception: 'Microsoft.Data.SqlClient.SqlException' in System.Private.CoreLib.dll ("Transaction (Process ID 153) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.") Before I show you how I fixed the problem, let me first give you some hints on how to investigate this issue. Investigating a deadlock: the first thing I did was open up SQL Server Management Studio and check the ‘Resource locking statistics by object’ report (check this link if you don’t know where to find this report). In the report above I could see that both the doc.Document and doc.DocumentInfo tables were recently locked. That made sense…
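The excerpt stops before the fix, but the error message itself already hints at one common mitigation: rerunning the transaction that was chosen as the victim. A rough, hypothetical sketch of such a retry helper with Microsoft.Data.SqlClient (1205 is SQL Server's deadlock error number):

    using Microsoft.Data.SqlClient;

    // Hypothetical helper: rerun an operation a few times when SQL Server
    // picks it as the deadlock victim (SqlException.Number == 1205).
    static async Task ExecuteWithDeadlockRetryAsync(Func<Task> operation, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                await operation();
                return;
            }
            catch (SqlException ex) when (ex.Number == 1205 && attempt < maxAttempts)
            {
                // Deadlock victim: back off briefly, then rerun the transaction.
                await Task.Delay(TimeSpan.FromMilliseconds(200 * attempt));
            }
        }
    }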

HotChocolate 13 - Unable to infer or resolve a schema type from the type reference `@CostDirective`

Yesterday I talked about measuring and monitoring the operational complexity of your GraphQL operations. I talked specifically about the custom HotChocolate cost directive as a way to assign a complexity to a field. However, after upgrading an ASP.NET Core GraphQL API to HotChocolate 13, the GraphQL schema could no longer be loaded. Instead I got the following error message back: HotChocolate.SchemaException: For more details look at the `Errors` property. Unable to infer or resolve a schema type from the type reference `@CostDirective`. (IRD3.API.GraphQL.LandbouwerType) at HotChocolate.Configuration.TypeInitializer.DiscoverTypes() at HotChocolate.Configuration.TypeInitializer.Initialize() at HotChocolate.SchemaBuilder.Setup.InitializeTypes(SchemaBuilder builder, IDescriptorContext context, IReadOnlyList`1 types) at HotChocolate.SchemaBuilder.Setup.Create(SchemaBuilder builder, LazySchema lazySchema, IDescriptorContext context) at HotChocolate.SchemaBuild…

GraphQL–Always monitor the operational complexity

Today GraphQL is a mature alternative for building APIs. Many developers have discovered its flexibility, expressiveness, and efficiency. However, with this flexibility comes the challenge of managing and tracking operational complexity, especially as APIs scale. Without proper monitoring and optimization, GraphQL queries can become performance bottlenecks, leading to slow response times, server overload, and a suboptimal user experience. Therefore it is important to monitor the operational complexity of the queries sent to your API endpoint. What is Operational Complexity in GraphQL? Operational complexity in GraphQL refers to the computational and resource costs associated with executing a query. Unlike REST, where each endpoint is typically associated with a fixed cost, GraphQL's flexible nature means that the cost can vary dramatically depending on the structure of the query. For instance, a simple query fetching a list of users might be relatively cheap. However, i…
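To make this concrete, here is a rough sketch of how an operation complexity limit can be enabled in a HotChocolate-based ASP.NET Core API; the query type and the limit of 1500 are placeholders, and the exact option names may differ between HotChocolate versions:

    var builder = WebApplication.CreateBuilder(args);

    builder.Services
        .AddGraphQLServer()
        .AddQueryType<Query>()                // placeholder root query type
        .ModifyRequestOptions(options =>
        {
            // Reject operations whose calculated complexity exceeds the limit.
            options.Complexity.Enable = true;
            options.Complexity.MaximumAllowed = 1500;
        });

    var app = builder.Build();
    app.MapGraphQL();
    app.Run();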

BinaryFormatter serialization and deserialization are disabled within this application

Knowing that support for .NET 6 will end soon (I’m writing this post in August 2024, support ends in November 2024), I’m helping my customers move to .NET 8. UPDATE: After writing this article, Microsoft created a blog post with more details about the removal of the BinaryFormatter in .NET 9. Although Microsoft puts a lot of effort into guaranteeing backwards compatibility, we still encountered some problems. In one (older) application where we were using (Fluent)NHibernate, we got the following error after upgrading: FluentNHibernate.Cfg.FluentConfigurationException: An invalid or incomplete configuration was used while creating a SessionFactory. Check PotentialReasons collection, and InnerException for more detail. ---> System.NotSupportedException: BinaryFormatter serialization and deserialization are disabled within this application. See https://aka.ms/binaryformatter for more information. at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Serializ…
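The excerpt ends before the resolution, but for context: this behaviour is controlled by a runtime switch, and as a stopgap (not a long-term fix, given the removal of BinaryFormatter in .NET 9) it can be re-enabled in the project file. A sketch:

    <!-- csproj: temporarily re-enable BinaryFormatter support (security risk, use only as a migration stopgap) -->
    <PropertyGroup>
      <EnableUnsafeBinaryFormatterSerialization>true</EnableUnsafeBinaryFormatterSerialization>
    </PropertyGroup>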

C# 12- Type Aliasing

During a code review I noticed a file I had not seen before in this application: a GlobalUsings.cs file. When opening the file I noticed it had a combination of global using statements and type aliases. Turns out that this is an emerging pattern in .NET applications where developers define a GlobalUsings.cs file to encapsulate all (or most) using directives into a single file. If you have no clue what these 2 language features are, here is a short summary for you. Global Usings: global usings were introduced in C# 10 and allow you to declare a namespace once in your application and make it available everywhere. So if we want to use a specific namespace, we need to add the following line to any source file: Remark: As I mentioned in the introduction, I would recommend centralizing these global usings in one file instead of spreading them out over multiple files in your codebase. What we can also do, instead of declaring this inside a source file, is t…
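As an illustration of what such a file can contain (the namespaces and alias names below are made up, not the ones from the reviewed codebase):

    // GlobalUsings.cs - hypothetical example combining both features

    // Global usings (C# 10): available in every file of the project.
    global using System.Text;
    global using System.Text.Json;

    // Type aliases (C# 12 can alias any type, including tuples and generics),
    // declared globally so the alias is available everywhere too.
    global using Coordinates = (double Latitude, double Longitude);
    global using UserLookup = System.Collections.Generic.Dictionary<string, System.Guid>;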

You don’t have a platform if it doesn’t have self service

Although the concept is not new (it was introduced in the Thoughtworks technology radar in 2017), I see a recent growth of platform teams at my customers. This can probably be partially explained by the success of the great Team Topologies book that can be found on the bookshelf of almost every IT manager today. What are platform teams? Platform teams are specialized groups within an organization that focus on building and maintaining the foundational technology and infrastructure that other development teams use to create applications. Their primary goal is to provide reusable tools, frameworks, and services that streamline the development process and enable feature teams to focus on delivering business value without worrying about underlying technical complexities. Platform teams typically handle: Infrastructure Management: Setting up and maintaining cloud services, CI/CD pipelines, monitoring tools, and other foundational infrastructure. Developer Tools: Cr…