
Posts

Showing posts from September, 2024

Becoming a professional software developer: Identity over skills

In my experience as a software architect working with developers, I’ve seen a common struggle: the inability to consistently apply good practices, like unit testing, refactoring, or writing clean code, especially when deadlines loom. At the start of a project, everyone’s committed to following best practices—writing tests, maintaining code quality, and ensuring scalability. But as the pressure of deadlines kicks in, those good intentions often get thrown overboard in favor of quick fixes and shortcuts. What I’ve realized is that the real issue isn’t a lack of skill or knowledge; it’s a mindset problem. Developers may know what the best practices are, but they don’t always see themselves as the type of developer who religiously follows them, no matter the circumstances. This ties into an insight I gained from reading Atomic Habits by James Clear: True change happens not when we aim to achieve specific goals, but when we shift our identity. For developers, this means moving

Github Actions–Deprecation warnings

Today I had to tweak some older GitHub Actions workflows. When I took a look at the workflow output, I noticed the following warnings: Here is a simplified version of the workflow I was using: The fix was easy: I had to update the action steps to use the latest versions (v4):
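The warnings and the workflow itself are not part of this excerpt; as a rough sketch of the kind of change involved, assuming the deprecated steps were older actions/checkout and actions/upload-artifact versions (the real workflow may have used different actions):

```yaml
# Hypothetical workflow illustrating the fix: bump the action versions to v4.
name: Build

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Before: actions/checkout@v3 (triggers a deprecation warning)
      - uses: actions/checkout@v4
      # Before: actions/upload-artifact@v3 (triggers a deprecation warning)
      - uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: ./output
```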

Semantic Kernel - Multi agent systems

Yesterday I talked about the new agent abstraction in Semantic Kernel and how it can simplify the steps required to build your own AI agent. But what could be better than having one agent? Multiple agents, of course! And that is exactly what was recently introduced as a preview in Semantic Kernel. As explained in this blog post, there are multiple ways that multiple agents can work together. The simplest way is a group chat where multiple agents can talk back and forth with each other. To avoid these agents getting stuck in a loop, this is combined with a custom termination strategy that specifies when the conversation is over. Here is a small example. I start with the default Semantic Kernel configuration to create a kernel instance: Now I define the instructions for the different agents and create them: Remark: Notice that I can use different kernels with different models if I want to. To make sure that the conversation is ended I need to specify a TerminationStr
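The code is cut off in this excerpt; below is a rough sketch of the pattern described, not the post's exact code. It uses the preview Agent Framework types (ChatCompletionAgent, AgentGroupChat, TerminationStrategy), which may still change between versions, and the agent names, model and instructions are invented for illustration:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Agents.Chat;
using Microsoft.SemanticKernel.ChatCompletion;

// The agent types are experimental previews; you may need to suppress the
// SKEXP diagnostics (for example via <NoWarn> in the project file) to compile.

// One kernel is reused here, but each agent could get its own kernel/model.
Kernel kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o", Environment.GetEnvironmentVariable("OPENAI_API_KEY")!)
    .Build();

ChatCompletionAgent writer = new()
{
    Name = "Writer",
    Instructions = "You write a short product description based on the user's request.",
    Kernel = kernel,
};

ChatCompletionAgent reviewer = new()
{
    Name = "Reviewer",
    Instructions = "You review the writer's text and reply with 'approved' once it is good enough.",
    Kernel = kernel,
};

// Group chat in which the agents talk back and forth until the termination
// strategy decides the conversation is over (or the iteration cap is hit).
AgentGroupChat chat = new(writer, reviewer)
{
    ExecutionSettings = new()
    {
        TerminationStrategy = new ApprovalTerminationStrategy
        {
            Agents = [reviewer],      // only the reviewer can end the conversation
            MaximumIterations = 6,    // safety net against endless loops
        },
    },
};

chat.AddChatMessage(new ChatMessageContent(AuthorRole.User, "Describe a coffee mug for developers."));

await foreach (ChatMessageContent message in chat.InvokeAsync())
{
    Console.WriteLine($"{message.AuthorName}: {message.Content}");
}

// Custom termination strategy: the chat ends as soon as the last message contains 'approved'.
class ApprovalTerminationStrategy : TerminationStrategy
{
    protected override Task<bool> ShouldAgentTerminateAsync(
        Agent agent, IReadOnlyList<ChatMessageContent> history, CancellationToken cancellationToken)
        => Task.FromResult(history[^1].Content?.Contains("approved", StringComparison.OrdinalIgnoreCase) ?? false);
}
```

The key idea is that the termination strategy inspects the chat history after each agent turn and decides whether the back-and-forth should stop.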

Semantic Kernel–Agent Framework

In this post I show you the recently introduced Semantic Kernel agents feature and how it simplifies building your own AI agents. But maybe I should start with a short recap of Semantic Kernel. On the documentation pages, Semantic Kernel is described like this: Semantic Kernel is a lightweight, open-source development kit that lets you easily build AI agents and integrate the latest AI models into your C#, Python, or Java codebase. It serves as an efficient middleware that enables rapid delivery of enterprise-grade solutions. It gives you all the building blocks required to build your own agent: a chat completion model, a plugin system, a planner, and more. However, until recently you had to bring all these building blocks together yourself. Here is a small code snippet I copied from an existing project: There are a lot of things going on in the code above, and if you have a hard time understanding all of it, I have some good news for you. Starting with the Python (1.6.0) and
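The snippet from the existing project is not included in this excerpt; as an illustration of how much smaller the agent-based version is, here is a rough single-agent sketch based on the preview ChatCompletionAgent API (the model name and instructions are placeholders, and the preview API may have changed since):

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.ChatCompletion;

// Classic setup: a kernel with a chat completion model registered.
Kernel kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o", Environment.GetEnvironmentVariable("OPENAI_API_KEY")!)
    .Build();

// The agent abstraction bundles the model, instructions and (optionally) plugins
// behind a single object, instead of wiring them together manually.
ChatCompletionAgent agent = new()
{
    Name = "Assistant",
    Instructions = "You are a friendly assistant that answers questions about Semantic Kernel.",
    Kernel = kernel,
};

// Talk to the agent through a regular chat history.
ChatHistory history = new();
history.AddUserMessage("Explain in one sentence what Semantic Kernel is.");

await foreach (ChatMessageContent response in agent.InvokeAsync(history))
{
    Console.WriteLine(response.Content);
}
```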

Azure Charts–Deprecations

I talked briefly about Azure Charts before, as it is a great way to see how the Microsoft Azure platform is evolving. Over the last two weeks I got a lot of deprecation warnings in my mailbox. This brought me back to the Azure Charts website, as there is a specific Azure Deprecations Board showing a timeline of upcoming service and capability deprecations. One to bookmark! More information Azure Charts–Help! Azure is evolving too fast… (bartwullems.blogspot.com)

Github Actions–Resource not accessible by integration

When trying to run the release pipeline I showed you last Thursday, it originally failed with the following error message: Error: Resource not accessible by integration Here is the original GitHub Actions workflow I was using: The error above was caused by the fact that the GitHub Actions workflow couldn't access the releases inside GitHub. There are multiple ways to fix this: create a separate PAT (Personal Access Token) with the specific rights needed to create a release, override the default permissions in the workflow file itself, or update the permissions in the GitHub Actions settings. I decided to go for the second option and updated the workflow file to include the specific permission needed: Remark: It can take some time to identify the right permission you need, so certainly check out the documentation or ask GitHub Copilot for help. More information Controlling permissions for GITHUB_TOKEN - GitHub Docs Github–Create a new release–The automated ver
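The updated workflow is not shown in this excerpt; here is a minimal sketch of what the second option looks like, assuming the release step only needs write access to the repository contents:

```yaml
# Hypothetical top of the workflow file after adding the permissions block.
name: Release

on:
  push:
    tags:
      - 'v*.*.*'

# Grant the GITHUB_TOKEN write access to repository contents,
# which is what creating a release requires.
permissions:
  contents: write
```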

Github Actions - Change the workflow name

A short post this Friday. After creating a new GitHub Actions workflow I noticed that the name shown was just the filename: I couldn't start my weekend without fixing this first. I opened up the YAML file and explicitly set a name: When I came back to the list of GitHub Actions workflows after committing the change, the list was updated and showed a nice name instead of the filename: Me happy!
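For reference, the change is just the top-level name key in the workflow file (the name below is a made-up example):

```yaml
# Without an explicit name, GitHub shows the file name (e.g. build.yml).
# The top-level 'name' key gives the workflow a friendly display name.
name: Build and test

on: [push]
```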

Github–Create a new release–The automated version

Yesterday I showed you how to create a new release in GitHub manually. This was a good starting point, as it introduced the different elements of a release and the options we have. Today let us automate the process of creating a release on GitHub using GitHub Actions. Create a new release using GitHub Actions Go to Actions inside GitHub and click on New workflow. We’ll not use an existing template but instead choose to set up the workflow ourselves: Paste the following YAML content in the editor screen. I’ll explain it afterwards… Explanation of the workflow: name: This is the name of your workflow. You can name it something like "Release". on: Specifies the event that will trigger the workflow. In this case, it’s triggered by a push to a tag that matches the pattern v*.*.* (e.g., v1.0.0 ). jobs: This defines the job that will be run by the workflow. runs-on: Specifies the environment where the job will ru
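The YAML content itself is not part of this excerpt; here is a minimal sketch that matches the explanation above (tag trigger, one job) and uses the preinstalled GitHub CLI to create the release. The original post may well use a different release action:

```yaml
# Minimal release workflow sketch: runs when a tag like v1.0.0 is pushed
# and creates a GitHub release for that tag.
name: Release

on:
  push:
    tags:
      - 'v*.*.*'

permissions:
  contents: write   # needed to create the release

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Create GitHub release
        run: gh release create "$GITHUB_REF_NAME" --generate-notes
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```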

Github- Create a new release–The manual approach

Being new to GitHub I decided to write a few posts on how to create a new 'release' in GitHub. I'll start with a 100% manual approach (this post) and will continue with other posts showing a more and more automated process. But today we'll start simple and focus on creating a release by hand. This allows me to introduce the different elements that we can configure when using releases inside GitHub, knowledge that will be useful later when we go the automated route. Let’s dive in… Creating a new release (manually) Before you can create a release, make sure you are signed in to your GitHub account. Navigate to the repository where you want to create the release. Once you’re in the repository, click on Releases to the right of the list of files. On the Releases page, you’ll see a button labeled “ Draft a new release .” Click on it to start creating your new release. The first thing you need to do is to assign a tag to your release. A tag is usuall
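As a small aside that is not part of the post itself: if the tag does not exist yet, it can also be created locally and pushed to GitHub before drafting the release:

```bash
# Create an annotated tag for the commit you want to release
git tag -a v1.0.0 -m "First release"

# Push the tag to GitHub so it can be selected when drafting the release
git push origin v1.0.0
```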

Operational limits when using Azure DevOps

Azure DevOps is the SaaS (Software-as-a-Service) version of Azure DevOps Server (the on-premises alternative). As with most SaaS solutions, certain limits apply. Some are operational limits placed on work tracking operations and some are object limits. In addition to the specified hard limits on specific objects, some practical limits apply. All of these limits are documented, but only operational limits, like pipeline usage and top commands, can be monitored directly in Azure DevOps through the Usage tab. Object limits, such as the number of projects, dashboards, or teams, were not traceable so far. With the latest update you can now see the object limits both at the project level: go to Project Settings and scroll down on the Overview tab. And at the organization level: go to Organization Settings and scroll down on the Overview tab. Nice! More information What is Azure DevOps? - Azure DevOps | Microsoft Learn

HotChocolate 13 - Schema comparison

One of the main design principles we follow when building a GraphQL API is to maximize backwards compatibility. To avoid unexpected breaking changes, we have integrated GraphQL Inspector into our build pipeline. After upgrading to HotChocolate 13, the schema comparison started to fail: It complains that the defer and stream directives are no longer there. And indeed, if we open up the schema before the upgrade: And compare it to the schema after the upgrade: We can see that the two directives are no longer there. We can easily fix this by re-enabling both directives in our GraphQL configuration code: Remark: If you are paying attention, you can see that there was another breaking change regarding the cost directive. More about that in the following post. More information GraphQL Inspector (bartwullems.blogspot.com) Migrate Hot Chocolate from 12 to 13 - Hot Chocolate - ChilliCream GraphQL Platform
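The configuration change is not shown in this excerpt; based on the Hot Chocolate 12-to-13 migration guide, re-enabling the two directives looks roughly like this (the exact option names may differ per version, and the Query type here is just a placeholder):

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    // In Hot Chocolate 13 the incremental delivery directives became opt-in,
    // so they have to be switched back on explicitly to keep the schema stable.
    .ModifyOptions(options =>
    {
        options.EnableDefer = true;   // brings the @defer directive back
        options.EnableStream = true;  // brings the @stream directive back
    });

var app = builder.Build();
app.MapGraphQL();
app.Run();

public class Query
{
    public string Hello() => "world";
}
```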