Posts

Showing posts from October, 2024

Web Deploy error - Source does not support parameter called 'IIS Web Application Name'.

At one of my customers, everything is still hosted on premises on multiple IIS web servers. To deploy web applications, we are using Web Deploy. This works quite nicely and allows us to deploy web applications in an automated way. Last week, a colleague contacted me after configuring the deployment pipeline in Azure DevOps. When the pipeline tried to deploy the application, it failed with the following error message: "System.Exception: Error: Source does not support parameter called 'IIS Web Application Name'. Must be one of (Environment)" Here is a more complete build log for extra context: Starting deployment of IIS Web Deploy Package : \\<servername>\DevDrop\BOSS.Intern.Web.zip Performing deployment in parallel on all the machines. Deployment started for machine: <servername> with port
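This error usually indicates that the package was created without a parameter declaration for 'IIS Web Application Name', for example when the folder was zipped by hand instead of being produced by MSBuild's web publish pipeline. As a hedged sketch (the project name and share path are taken from the log above; the MSBuild properties are the standard Web Publish ones), regenerating the package this way makes MSBuild emit the accompanying SetParameters file that the deploy task expects:

```shell
REM Rebuild the web application as a proper Web Deploy package so that
REM the generated BOSS.Intern.Web.SetParameters.xml declares the
REM 'IIS Web Application Name' parameter expected by the deploy task.
msbuild BOSS.Intern.Web.csproj ^
  /p:DeployOnBuild=true ^
  /p:WebPublishMethod=Package ^
  /p:PackageAsSingleFile=true ^
  /p:PackageLocation="\\<servername>\DevDrop\BOSS.Intern.Web.zip"
```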

MassTransit ConsumerDefinitions vs EndpointConfiguration - Understanding the Differences

In message-driven systems, configuring consumers correctly is key to building maintainable, scalable, and flexible systems. In MassTransit, two common approaches are ConsumerDefinitions and endpoint configuration. While both serve the purpose of defining how consumers work within the system, they differ in flexibility, separation of concerns, and implementation details. In this post, we’ll explore the differences and best practices for using them. MassTransit Consumers: The Basics Before diving into the comparison, let’s briefly cover what a consumer is in MassTransit. A consumer is a class responsible for handling incoming messages from a message broker (such as RabbitMQ or Azure Service Bus). Consumers are central to the processing pipeline and play a key role in event-driven architectures. Here is a simple example: Consumer Configuration The question now is how this consumer gets wired to the underlying transport mechanism. As mentioned
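As a minimal sketch of such a consumer: in MassTransit a consumer implements `IConsumer<T>` for a message type; the `OrderSubmitted` record below is a hypothetical contract used purely for illustration.

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;

// Hypothetical message contract, for illustration only.
public record OrderSubmitted(Guid OrderId);

// A consumer handles all incoming messages of a given type.
public class OrderSubmittedConsumer : IConsumer<OrderSubmitted>
{
    public Task Consume(ConsumeContext<OrderSubmitted> context)
    {
        // context.Message carries the deserialized message payload.
        Console.WriteLine($"Processing order {context.Message.OrderId}");
        return Task.CompletedTask;
    }
}
```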

Azure Monitor Log Analytics–Identify high memory usage

Last week we had a production issue at one of my customers where a server went offline due to high memory usage. So far the bad news. The good news is that we had Azure Application Insights monitoring in place, so we could easily confirm that high memory usage was causing the issue, as our Application Insights logs showed a long list of OutOfMemoryExceptions. However, as a separate Application Insights instance was used per application, we couldn’t easily pinpoint which application was the main culprit. Remark: Unfortunately it isn’t possible to show multiple resources on the Metrics tab, so that is not an option (you can upvote the feature if you like it): I could go through every Application Insights resource one by one, but that wouldn’t be very efficient. Therefore I decided to turn to KQL and write a query on top of the Log Analytics workspace where all the data was consolidated. Here is the query I used in the end: And here is what the result looked like when I exec
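As a hedged sketch of such a query (in the workspace-based Application Insights schema, performance counters land in the AppPerformanceCounters table; the exact counter name and columns may differ in your environment), aggregating memory usage per application role looks roughly like this:

```kql
AppPerformanceCounters
| where TimeGenerated > ago(1d)
| where Name == "Private Bytes"
// AppRoleName identifies which application emitted the counter
| summarize avgMemory = avg(Value), maxMemory = max(Value)
    by AppRoleName, bin(TimeGenerated, 15m)
| order by maxMemory desc
```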

Semantic Kernel–Giving the new Ollama connector a try

As Semantic Kernel can work with any OpenAI-compatible endpoint, and Ollama exposes its language models through an OpenAI-compatible API, combining the two was always possible. However, not all features of Ollama were accessible through Semantic Kernel. With the recent release of a dedicated Ollama connector for Semantic Kernel, we can start using some of the more advanced Semantic Kernel features directly against Ollama-deployed models. The new connector uses OllamaSharp (I talked about it in this post), so you can directly access that library if needed. Giving the new connector a try… Create a new console application and add the Microsoft.SemanticKernel.Connectors.Ollama NuGet package: dotnet add package Microsoft.SemanticKernel.Connectors.Ollama --version 1.21.1-alpha Now instead of creating a Semantic Kernel instance, we can directly create an OllamaChatCompletionService instance: The remaining part of the code remains the same as with the default
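As a sketch of what that can look like (the model id and the local endpoint are assumptions for illustration; the package is still in alpha, so check the connector's documentation for the exact constructor overloads):

```csharp
using System;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.Ollama;

// Create the chat completion service directly, without building
// a full Semantic Kernel instance first.
var chatService = new OllamaChatCompletionService(
    modelId: "llama3.1",                          // assumed local model
    endpoint: new Uri("http://localhost:11434")); // default Ollama port

var history = new ChatHistory();
history.AddUserMessage("Why is the sky blue?");

var reply = await chatService.GetChatMessageContentAsync(history);
Console.WriteLine(reply.Content);
```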

Azure DevOps–Update to the v3.x build agents

A colleague trying to add SonarQube integration to his build pipeline in Azure DevOps contacted me because he got an error in his build pipeline. This is the error he got: ##[error]No agent found in pool default which satisfies the following demand: Agent.Version. All demands: msbuild, visualstudio, Agent.Version -gtVersion 3.218.0 The problem was caused by the SonarQubePrepare task he had just added to his build pipeline. It still worked with an older SonarQubePrepare version; however, that version of the task is marked as deprecated: ##[warning]This task is deprecated. Please upgrade to the latest version. For more information, refer to https://docs.sonarsource.com/sonarqube/latest/analyzing-source-code/scanners/sonarqube-extension-for-azure-devops/ Updating a build agent I thought the fix would be easy: just update the outdated build agent(s). To do so, go to Project Settings –> Agent Pools. Click on the Agent pool which contains the agent you want to update:  On the
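Besides updating through the portal, a self-hosted agent can also be moved to a 3.x release by hand. As a hedged sketch for a Windows agent (organization name, pool name, and the exact release version are illustrative, and the commands are run from the agent's install folder):

```shell
REM Unconfigure the old agent first:
.\config.cmd remove

REM Download and extract a 3.x agent release from the Azure DevOps
REM "Download agent" dialog, then reconfigure it against the same pool:
.\config.cmd --url https://dev.azure.com/<organization> --pool default
```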