
Posts

Apply effective naming conventions in Azure using the Azure Naming Tool

Everyone who works in software development knows this: naming things is hard. And when you need to create a lot of resources in Azure, naming things there can be even harder. If you recognize this struggle, I have some good news for you. With the help of the Azure Naming Tool, applying a good and consistent naming strategy becomes a lot easier.

Why naming conventions matter

Consistency and Clarity: Consistent naming conventions provide clarity, making it easier for team members to understand the purpose and function of each resource. This reduces confusion and enhances productivity.

Simplified Management: A structured naming convention simplifies resource management by grouping related resources together and enabling straightforward identification. This is particularly useful in large-scale environments where numerous resources are deployed.

Enhanced Security and Compliance: Proper naming conventions can help ensure compliance with security policies and regulatory requi...
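To make this concrete, a common pattern combines a resource-type abbreviation, workload, environment, region, and instance number into one name. Here is a minimal sketch of such a name builder; the pattern and all component values are illustrative, not output from the Azure Naming Tool itself:

```csharp
// Illustrative only: builds a name following the assumed pattern
// <type abbreviation>-<workload>-<environment>-<region>-<instance>
string BuildResourceName(string type, string workload, string env, string region, int instance) =>
    $"{type}-{workload}-{env}-{region}-{instance:000}".ToLowerInvariant();

Console.WriteLine(BuildResourceName("rg", "payments", "prod", "weu", 1));
// Output: rg-payments-prod-weu-001
```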
Recent posts

Why the DeepSeek R1 Model is good news for all of us

The introduction of the DeepSeek R1 model has sent shockwaves through the AI industry, challenging established norms and redefining the economics of AI development. This model has demonstrated that we can train AI models more cost-effectively and in an environmentally friendly manner, without sacrificing performance. By leveraging innovative techniques, DeepSeek has shown that it's possible to achieve remarkable results without the exorbitant costs and environmental impact typically associated with AI training. I think this is good news for all of us. As a big believer in the advantages that LLMs have to offer, I always feel somewhat uncomfortable knowing the environmental impact that these models have, both during training and execution. DeepSeek has shown us that a different path is possible, providing a better balance between productivity and (environmental) cost. My hope is that other AI players will now re-evaluate how to move forward and start applying the same techniques ...

Explore and test local models using Ollama and OpenWebUI

If you are following my blog, you probably noticed that I'm experimenting a lot with Large Language Models locally. I typically expose these LLMs locally through Ollama and use either Semantic Kernel or the API directly to test and interact with these models. Recently I discovered OpenWebUI, an open-source web interface designed primarily for interacting with AI language models. It offers a clean, intuitive interface that makes it easy to have conversations with AI models while providing advanced features for developers and power users. Some of the key features of OpenWebUI are:

OpenAI API Integration: Effortlessly integrate OpenAI-compatible APIs for versatile conversations alongside Ollama models.

Granular Permissions and User Groups: Create detailed user roles and permissions for a secure and customized user environment.

Full Markdown and LaTeX Support: Comprehensive Markdown and LaTeX capabilities for enriched interaction. Mod...
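If you want to try it yourself, the OpenWebUI README describes running it as a container alongside a local Ollama installation; at the time of writing the suggested command looks along these lines (double-check the project's docs for the current image tag and flags):

```
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

After that, OpenWebUI is reachable on http://localhost:3000 and can talk to the Ollama endpoint running on the host.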

Impersonation in ASP.NET Core

A colleague reached out to me with a specific scenario he had to tackle. He had an IIS-hosted ASP.NET Core application using Windows Authentication where he had to execute a specific action on behalf of the current user. The typical way to do this is through impersonation. In this post I'll explain what impersonation is and how you can get it working in ASP.NET Core.

What is impersonation?

Impersonation is a security feature that allows an application to execute code under the context of a different user identity. This capability is often used when an application needs to access resources, such as files, databases, or network services, with permissions different from those of the current user or the application's process. By using impersonation, the application temporarily assumes the identity of another user to perform specific actions, reverting to its original identity afterward. Some common use cases for impersonation are: Accessing network shares or file sys...
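As a sketch of what this looks like in code: with Windows Authentication, the authenticated user's WindowsIdentity is available on the HttpContext, and WindowsIdentity.RunImpersonatedAsync executes a delegate under that identity. A minimal example, assuming an IIS-hosted minimal API with Windows Authentication already configured (error handling omitted, share path illustrative):

```csharp
using System.Security.Principal;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/report", async (HttpContext context) =>
{
    // With Windows Authentication, the authenticated user is a WindowsIdentity
    var windowsIdentity = (WindowsIdentity)context.User.Identity!;

    // Run the delegate under the caller's access token; the original
    // process identity is restored automatically when it completes
    return await WindowsIdentity.RunImpersonatedAsync(windowsIdentity.AccessToken, async () =>
        await File.ReadAllTextAsync(@"\\server\share\report.txt")); // path is illustrative
});

app.Run();
```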

Join the GitHub Copilot Bootcamp

Enhance your AI programming skills and take your abilities to the next level! Starting this week, from February 4th to 13th, Microsoft is hosting a series of four live classes designed to teach you tips and best practices for using GitHub Copilot. So if you really want to get the most out of GitHub Copilot, don't hesitate to register.

Here is the agenda for the English sessions:

February 4, 2025: Prompt Engineering with GitHub Copilot
February 6, 2025: Building an AI Web Application with Python and Flask
February 11, 2025: Productivity with GitHub Copilot: Docs and Unit Tests
February 13, 2025: Collaboration and Deploy with GitHub Copilot

Hope to see you all there!

More information: GitHub Copilot Global Bootcamp | Microsoft Community Hub

Azure DevOps Pipelines–Ignore build trigger for specific paths

Last week a colleague complained to me that he had to wait for an available build agent on our Azure DevOps build server. The reason that all build agents were busy was that I was making a lot of changes to the documentation (also stored in an Azure DevOps Git repository), causing a lot of builds to be triggered and resulting in long wait queues. As changing only the markdown files in the wiki doesn't require a build, he asked me to configure the build trigger to ignore documentation changes. Let me show you how to configure this…

Classic Pipelines

To configure this for the ‘classic’ pipelines in Azure DevOps, you need to go to the Triggers tab for your pipeline. There you’ll find a Path filters section on the right where you can specify specific paths to include or exclude. In our example, we want to exclude everything inside the ‘SOFACore/Docs’ folder.

YAML Pipelines

If you are using a YAML pipeline, you need to edit the trigger section and add a path e...
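For reference, such a trigger with a path exclusion looks something like this in YAML (a sketch: the branch filter is illustrative, and the excluded folder is the one from the example above):

```yaml
trigger:
  branches:
    include:
      - main
  paths:
    exclude:
      - SOFACore/Docs
```

With this in place, pushes that only touch files under SOFACore/Docs no longer queue a build.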

Podman–Accessing the host from inside a container

Yesterday I showed how to run STRIDE GPT, an AI-based threat modelling tool, locally using Docker. I demonstrated how I used a local language model through Ollama running on the same machine as Docker Desktop. To be able to access the Ollama endpoint from inside the Docker container, I had to use host.docker.internal, as you can see in the .env file from that post. A colleague asked me: what if you are using Podman instead of Docker? Will host.docker.internal still work? The short answer is no. Luckily, this doesn’t have to be the end of this post, as an alternative exists for Podman. Instead of using host.docker.internal, you need to use host.containers.internal.
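So under Podman the same .env entry would look something like this (the variable name is illustrative, check the STRIDE GPT docs for the actual one; 11434 is Ollama's default port):

```
# .env used by the container (variable name illustrative)
OLLAMA_ENDPOINT=http://host.containers.internal:11434
```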