I was reading the following blog post when I thought: "How well (or badly) would GitHub Copilot handle the scenarios mentioned in the post?" This post is the answer to that question.
But before we dive in, I would suggest first reading the original post on the JetBrains blog: Critical Thinking in an AI-Powered World | The .NET Tools Blog (jetbrains.com).
Back?
Let’s get started!
I already created an xUnit test project targeting .NET 8 and pasted the first snippet used in the post into a test class.
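The snippet itself isn't reproduced here, so as an illustrative stand-in (the names and the expiration rule are mine, and the original post's snippet may differ), picture code under test that reads the clock directly:

```csharp
using System;
using Xunit;

// Illustrative stand-in: the code under test reads the clock directly,
// so the outcome depends on the real time at which the test runs.
public record Order(DateTime CreatedAt)
{
    public bool IsExpired()
    {
        var now = DateTime.UtcNow;   // the variable the prompt below asks to refactor
        return now - CreatedAt > TimeSpan.FromDays(30);
    }
}

public class OrderTests
{
    [Fact]
    public void Order_created_yesterday_is_not_expired()
    {
        var order = new Order(DateTime.UtcNow.AddDays(-1));
        Assert.False(order.IsExpired());
    }
}
```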
Now let's ask the same question, but this time to GitHub Copilot in Visual Studio instead of the JetBrains AI Assistant:
Suggest a way to refactor the variable `now` so that I can control the value without depending on `DateTime.UtcNow`
Here is the response I got:
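Paraphrased as a sketch rather than Copilot's literal output, the suggestion boils down to hiding the clock behind a hand-rolled interface:

```csharp
using System;

// Sketch of the suggested direction: introduce an abstraction over the clock
// so tests can supply a fixed time instead of DateTime.UtcNow.
public interface IDateTimeProvider
{
    DateTime UtcNow { get; }
}

public class SystemDateTimeProvider : IDateTimeProvider
{
    public DateTime UtcNow => DateTime.UtcNow;
}

public record Order(DateTime CreatedAt)
{
    public bool IsExpired(IDateTimeProvider clock)
    {
        var now = clock.UtcNow;      // 'now' is no longer tied to the system clock
        return now - CreatedAt > TimeSpan.FromDays(30);
    }
}
```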
Similar to the JetBrains AI Assistant, it suggests that I create my own abstraction in the form of an IDateTimeProvider interface. Too bad! Let's also mention the TimeProvider class and see if we get a better result:
Please use the ‘System.TimeProvider’ class found in .NET 8 and C# 12 instead
But again, the results turn out quite similar:
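For comparison, and explicitly not what Copilot produced for me, a refactoring on top of the built-in abstraction could look roughly like this; the FakeTimeProvider used in the test comes from the Microsoft.Extensions.Time.Testing NuGet package:

```csharp
using System;
using Microsoft.Extensions.Time.Testing; // provides FakeTimeProvider
using Xunit;

public record Order(DateTimeOffset CreatedAt)
{
    public bool IsExpired(TimeProvider clock)
    {
        var now = clock.GetUtcNow();  // injected, controllable clock
        return now - CreatedAt > TimeSpan.FromDays(30);
    }
}

public class OrderTests
{
    [Fact]
    public void Order_created_yesterday_is_not_expired()
    {
        var clock = new FakeTimeProvider(DateTimeOffset.Parse("2024-01-31T00:00:00Z"));
        var order = new Order(clock.GetUtcNow().AddDays(-1));

        Assert.False(order.IsExpired(clock));
    }
}
```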
So far, we cannot draw a different conclusion than the one in the original post.
Take 2!
OK, let's refocus our attention on the second part of the post and see how Copilot helps with the implementation of the CalculateFallTimeAndVelocity method.
Let's see what suggestion the system comes up with:
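In essence, and again as a sketch rather than the literal completion, the free-fall maths come down to a fall time of t = sqrt(2h / g) and an impact velocity of v = g * t:

```csharp
using System;

public static class Physics
{
    public static (double FallTime, double Velocity) CalculateFallTimeAndVelocity(double height)
    {
        // Free fall without air resistance: t = sqrt(2h / g) and v = g * t
        var fallTime = Math.Sqrt(2 * height / 9.81);
        var velocity = 9.81 * fallTime;
        return (fallTime, velocity);
    }
}
```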
Not bad either!
But let us improve our understanding of the magic values by using the following prompt:
Move all constants to descriptive variables.
I just apply the suggestion and the result looks like this:
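Roughly (sketched, not the verbatim output), the method ends up in this shape:

```csharp
using System;

public static class Physics
{
    // Acceleration due to gravity in m/s²
    private const double Gravity = 9.81;

    public static (double FallTime, double Velocity) CalculateFallTimeAndVelocity(double height)
    {
        // The 2 comes from the free-fall formula t = sqrt(2h / g)
        var fallTime = Math.Sqrt(2 * height / Gravity);
        var velocity = Gravity * fallTime;
        return (fallTime, velocity);
    }
}
```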
She (he?) didn't isolate the Factor value into its own constant, but with the comment in place I can understand the role of the 2 in the code above.
We continue with the next prompt:
Set the value of Gravity to Earth’s gravity up to four decimal places of precision
This gives us the following result:
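The gist of the suggestion (again sketched rather than quoted) is the standard gravity value:

```csharp
// Standard gravity at the Earth's surface, in m/s²
private const double Gravity = 9.80665;
```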
Too bad! Although the value itself is correct, it gives me a suggestion with 5 decimal places of precision.

We end with the last prompt:
Comment each line with valuable information that explains what’s happening
And this is our final result:
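Sketched out one last time, with illustrative line-by-line comments in place:

```csharp
using System;

public static class Physics
{
    // Standard gravity at the Earth's surface, in m/s²
    private const double Gravity = 9.80665;

    public static (double FallTime, double Velocity) CalculateFallTimeAndVelocity(double height)
    {
        // Rearranging h = ½ · g · t² gives the fall time t = sqrt(2h / g)
        var fallTime = Math.Sqrt(2 * height / Gravity);

        // The impact velocity follows from v = g · t
        var velocity = Gravity * fallTime;

        // Return both values as a named tuple
        return (fallTime, velocity);
    }
}
```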
Conclusion
Based on this, I can only agree with the conclusion of the original post, but applied to GitHub Copilot:
GitHub Copilot can help you solve a fascinating new set of problems but does not claim to be infallible. Since it uses models trained on human data, it can sometimes be wrong. That's why you should think critically about responses and always take steps to understand and verify the results of any LLM-based product.