Sponsored Link
Monitor stability across iOS, macOS, tvOS, and now watchOS
Every Apple Watch user expects a seamless experience between their phone and watch. With our new watchOS support, Bugsnag captures unhandled exceptions automatically, allowing you to gain actionable insights into stability and make data-driven decisions to prioritize and fix the bugs that matter. Take a look at our docs to learn more.
News
Apple announces biggest upgrade to App Store pricing
I don’t think we’ll ever see freeform pricing on the App Store, given how complex that would be with hundreds of currencies around the world, but this is the next best thing, covering everything from $0.29 to $10,000.
Ask Apple starts again on December 12
One last chance before the new year! I love the regular schedule of these events. ❤️
Tools
Coduo
This new tool from Ben Harraway is great. Pick either your whole screen or a specific Xcode project and grab a link that lets you share control of your screen with anyone who has a web browser. It’s super easy to get started with, and you get 100 hours of streaming for free, then a one-time cost unlocks unlimited use. 👍
Code
Prototyping SwiftUI interfaces with OpenAI's ChatGPT
Here’s a much less troubling use of ChatGPT from Moritz Philip Recke, because any mistakes it makes are instantly apparent. Its ability to generate code that works at all is remarkable, let alone code that can be refined into something resembling an actual app.
Soto and Swift Build Plugin experiments
Code generation during a Swift package build process is a powerful concept. As Adam Fowler explains in this post, replacing the entirety of soto with code generated at build time would be possible. That’s not what he’s doing here, but seeing people experiment with these features is fun.
Setting up a build tool plugin for a Swift package
Talking of build plugins, Toomas Vahter has written up how he approached building one.
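To make the concept in these two posts concrete, here’s a minimal sketch of what a build tool plugin generally looks like, independent of either post. All of the names here (`GenerateConstants`, `MyLibrary`, `constants-generator`) are illustrative assumptions, not taken from Toomas’s or Adam’s code:

```swift
// swift-tools-version: 5.7
// Hypothetical Package.swift: declares a build tool plugin target and a
// library target that opts in to it. SwiftPM runs the plugin before
// compiling the library's sources.
import PackageDescription

let package = Package(
    name: "MyLibrary",
    targets: [
        .plugin(
            name: "GenerateConstants",
            capability: .buildTool()
        ),
        .target(
            name: "MyLibrary",
            plugins: ["GenerateConstants"]
        ),
    ]
)
```

The plugin itself is a small type conforming to `BuildToolPlugin` that tells SwiftPM what command to run and which files it produces:

```swift
// Plugins/GenerateConstants/plugin.swift
import PackagePlugin

@main
struct GenerateConstants: BuildToolPlugin {
    func createBuildCommands(context: PluginContext, target: Target) throws -> [Command] {
        // Generated sources go in the plugin's work directory; SwiftPM
        // compiles anything listed in outputFiles into the target.
        let output = context.pluginWorkDirectory.appending("Constants.swift")
        return [
            .buildCommand(
                displayName: "Generating Constants.swift",
                executable: try context.tool(named: "constants-generator").path,
                arguments: [output.string],
                outputFiles: [output]
            )
        ]
    }
}
```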
SwiftUI view modifier for paid app features
I love the idea of this SwiftUI view modifier from Marin Todorov. It’s such a neat way to add a consistent bit of UI and behaviour to an app.
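For flavour, here’s a minimal sketch of the general pattern, not Marin’s actual implementation: a view modifier that applies a consistent “locked” treatment to any view representing a paid feature. The `unlocked` parameter is an assumption about how purchase state might be passed in:

```swift
import SwiftUI

// Hypothetical modifier: blurs and disables a view until the user
// has unlocked the feature, with a small "Pro" badge on top.
struct PaidFeature: ViewModifier {
    let isUnlocked: Bool

    func body(content: Content) -> some View {
        content
            .blur(radius: isUnlocked ? 0 : 4)
            .overlay {
                if !isUnlocked {
                    Label("Pro", systemImage: "lock.fill")
                        .padding(6)
                        .background(.thinMaterial, in: Capsule())
                }
            }
            .allowsHitTesting(isUnlocked)
    }
}

extension View {
    func paidFeature(unlocked: Bool) -> some View {
        modifier(PaidFeature(isUnlocked: unlocked))
    }
}

// Usage: ChartView().paidFeature(unlocked: store.hasPro)
```

Wrapping the behaviour in a modifier like this keeps the locked/unlocked presentation consistent across the whole app, which is exactly the appeal of the approach.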
Jobs
Senior Mobile Engineer @ Emerge Tools – At Emerge you'll help build the future of mobile development and contribute directly to products used by many of the biggest mobile companies in the world. An ideal candidate would be passionate about the intersection of operating systems, runtimes, and developer tools. – Remote (Anywhere)
Senior iOS Engineer @ Doximity – Doximity, the medical network used by over 80% of US clinicians, is hiring passionate iOS engineers (fully remote!). Come be part of an amazing product team + work on an app that is constantly evolving. Use your skills (Swift, TCA, Combine) to be an integral part of our growing telemed feature. – Remote (within US timezones)
iOS SDK Developer @ Stream – Do you want to work on an open-source chat SDK used by hundreds of high-profile companies and startups that impact billions of users? If you are a product-minded engineer and care about software quality, apply on the link below. – Remote (within European timezones) or on-site (Netherlands)
Freelance Interview Engineer (US Only) @ Karat – We're dedicated to improving access in tech. If you are too, join us as a Karat Interview Engineer. As such, you'll conduct technical interviews of developers like you on behalf of our hiring clients (including Duolingo, Indeed, and more) using the Karat Platform and its data-tested questions. – Remote (within US timezones)
Do you have any open positions at your company? You can post them for free over at iOS Dev Jobs. There’s really nothing to lose!
And finally...
Glorp! 🤖
Comment
Let me say this before I dive into this topic again: ChatGPT is already a remarkable piece of software, and I’ve lost count of how many times I’ve been amazed by screenshots of it in the last few days.
That said, again, I have a really uneasy feeling that I can’t shake.
The possibilities for this software are mind-blowing. It feels like we just jumped leaps and bounds from our conversations with voice assistants, where we ask a single question and listen to it read the opening paragraph of Wikipedia back to us. Yes, that’s simplifying it, but having conversations span several cycles back and forth with sensible and believable results is remarkable.
At the same time, I can’t help feeling like all the breathless praise of the last couple of weeks is very premature. I remember chatting with Kim Silverman in the WWDC labs in 2008 about speech synthesis and recognition. He talked about the early days of speech synthesis in the late 1970s and how developers quickly progressed to 90% of the way there and then spent the next 30 years getting to 95%. 😬 Using AI technology like ChatGPT, DALL·E, or Copilot often makes me think back to that conversation with Kim.
Self-driving car software has the same issue. Self-driving cars have seemed feasible for decades, yet here we are, 40+ years on, and they still feel “a few years away”. I bet someone said that back in the 1980s, too! I’m not saying there hasn’t been progress, just that it has moved more slowly than everyone expected.
Naturally, given how new it is, some of what ChatGPT comes up with is dead wrong, but where things get problematic is how confidently it presents its answers. Take the very first question Ben Thompson asked it as a perfect example: the answer he received was incorrect, yet delivered without a hint of uncertainty.
If you’ve seen the AlphaGo documentary, then you’ll remember the moments where the team can’t figure out why the AI is doing what it’s doing during a game. It’s not only about code, either. It’s the training data and model that are the problem. From what I can learn about how ChatGPT was trained, it almost certainly has that same problem. Here comes that deeply uneasy feeling again. Yes, people are fact-checking it and examining its output now, but how long before we blindly trust it? 😬
Finally, I’ve seen people suggest that Google is in trouble with this on the horizon. That may be true, but our use of a search engine is fundamentally different to how ChatGPT works. With search, we type a query and get back a set of results, but it’s our responsibility to figure out which results contain accurate, unbiased information. To think that any single training model could be impartial and accurate enough to replace that process seems impossible, or at least well beyond what we see here.
I’ll be the first to admit I’m not an expert on this subject, and I am sure people are working hard on the issues I have mentioned here. I see how quickly people are rushing to find ways this type of technology can integrate with everything we do, and I can’t shake that uneasy feeling. Or maybe I should lighten up and assume it’ll all be fine. 😬
Dave Verwer

Note: Thanks so much to Dave DeLong, Carter Jernigan, and Daniel Jalkut for helping me pinpoint who I spoke to at WWDC 2008 about speech synthesis! I may not have remembered his name well, but I have thought about that conversation we had a lot over the years.