I'm working on a (small) project where I'm using Microsoft Semantic Kernel. Although it makes it really easy to build your own copilots and AI agents, it is not that easy to understand what is going on (or what is going wrong).
Just in case you have no clue what Semantic Kernel is:
Semantic Kernel is an open-source SDK that lets you easily build agents that can call your existing code. It is highly extensible and you can use Semantic Kernel with models from OpenAI, Azure OpenAI, Hugging Face, and more. It can be used in C#, Python and Java code.
After adding the Microsoft.SemanticKernel.Core and Microsoft.SemanticKernel.Connectors.OpenAI NuGet packages, the simplest implementation looks like this:
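A minimal sketch of such an implementation, assuming Semantic Kernel 1.x; the model id and the environment variable used for the API key are placeholders, not values from this project:

```csharp
using Microsoft.SemanticKernel;

// Build a kernel with an OpenAI chat completion service.
// "gpt-4o" and the OPENAI_API_KEY variable are placeholders —
// substitute your own model id and key management.
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
var kernel = builder.Build();

// Send a prompt directly to the LLM and print the response.
var result = await kernel.InvokePromptAsync("Why is the sky blue?");
Console.WriteLine(result);
```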
That’s easy, and there is not much magic going on, as you directly control the prompt that is sent to the LLM.
However, when you start to use more complex features like the planner, you no longer know exactly what is shared with the LLM. In that case it can be useful to enable debug logging. To do so, update the kernel configuration code to add logging:
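One possible way to wire this up, assuming Semantic Kernel 1.x and the Microsoft.Extensions.Logging.Debug package (which writes to the Visual Studio output window); Trace level is what surfaces the rendered prompts:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o", // placeholder model id
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

// Register logging on the kernel's service collection.
// AddDebug() sends log output to the debugger (output window);
// SetMinimumLevel(LogLevel.Trace) is required to see the
// prompts that Semantic Kernel sends to the LLM.
builder.Services.AddLogging(logging => logging
    .AddDebug()
    .SetMinimumLevel(LogLevel.Trace));

var kernel = builder.Build();
```

You could swap `AddDebug()` for `AddConsole()` (from Microsoft.Extensions.Logging.Console) if you prefer the logs in the terminal instead of the output window.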
If we now run our Semantic Kernel enabled application, we can see the prompt that is sent to the LLM in the output window:
Happy AI coding!