It’s always a good signal to write about something when I get several emails saying “Have you seen this?” along with a link. I’ve received more of those emails this week about GitHub Copilot than for anything else I can remember recently.

If you’ve been reading for a while, you won’t be surprised to hear that I can see both the positive and negative sides of this technology. 😱

Before I start, it’s important to note that I don’t yet have access to Copilot. I signed up, but I’m still on the waiting list. Everything I write here comes from what I see on the marketing page. That’s not ideal, but it’s what I have!

The negative side of things is the obvious place to start. AI-produced code is going to be buggy, just as human-written code is buggy. However, Copilot doesn’t have the context of what you’re trying to achieve or why you’re trying to achieve it. It also doesn’t benefit from the clarifications and adjustments you naturally make as you progress through a problem.

Then, there’s a more subtle side of things. The consequences of AI making a wrong choice in a self-driving car are immediate and drastic. 💥 The results of a mistake in a generated function are neither immediate nor drastic, and they’ll be much easier to miss.

Since I haven’t had a chance to experience the product, part of my concern comes from how GitHub is marketing it. For example, I really couldn’t disagree more with this statement:

Whether you’re working in a new language or framework, or just learning to code, GitHub Copilot can help you find your way.

Surely learning a new language or framework is the last time you should use a tool like this. I think we can all agree that it’s good to understand the code you write, and without knowledge of the language or framework, you’ll miss plenty of mistakes. If GitHub can’t be sure that the code won’t contain biased, discriminatory, abusive, or offensive outputs, or someone’s personal data, will it really be well written? To get slightly sidetracked, I also saw a few people ask good questions about whether any of the training data was GPL licensed, because that would be a problem. 😬

It’s surely also a terrible way to learn to code. The code produced by Copilot comes with no information on why the AI wrote what it did. Is GitHub genuinely thinking about this as a teaching tool? 🙄

I have to stop myself here, or you’ll spend all day reading my negative take. It’s almost too easy to find problems with this product. Yet, when I think about it, I want to support what GitHub is doing here.

If not for experiments and research like this, how would we move software development forward? I’ll be surprised if Copilot ends up being more than a curiosity in its current form, but who knows whether some of the ideas and techniques that come out of this project could lead to significant advances in the field?

Ironically, I’d be more excited about this if it did a little less! 😂 If Copilot generated one line at a time rather than whole blocks of code, it’d feel like enhanced code completion rather than code generation. The slower pace would mean reviewing one line at a time instead of an entire block. Unfortunately, that doesn’t seem feasible, since the AI appears to output complete blocks of code. Still, it’s maybe one way to think about how AI could make our lives as developers easier in the future.

Now, where’s the Copilot for writing newsletters? 🤔

Dave Verwer  

And finally...

I 😍 this attention to detail.