Sponsored Link
Join the mobile team at Konrad
Whether you’re a seasoned developer or just getting started, we’re looking for lifelong learners who are passionate about mobile technology to join our iOS and Android development teams in Canada and Costa Rica. We build native Swift apps using the latest technologies for the most exciting companies across many industries. Apply today!
News
Transition to the App Store Connect API from the XML feed
Do you still have anything that relies on the old App Store Connect XML feeds? They’re going away in November, so it’s better to find out now than when they disappear for good!
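If you’re starting the migration, the replacement is a standard REST API. Here’s a minimal sketch of fetching your apps list with URLSession — the JWT signing step is omitted, and while /v1/apps is a real endpoint, treat this as a starting point rather than a complete client:

```swift
import Foundation

// Minimal sketch: list your apps via the App Store Connect API.
// Generating the ES256-signed JWT from your API key is omitted here;
// see Apple's documentation for the signing details.
func fetchApps(token: String) async throws -> Data {
    var request = URLRequest(url: URL(string: "https://api.appstoreconnect.apple.com/v1/apps")!)
    request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
    let (data, _) = try await URLSession.shared.data(for: request)
    return data // JSON:API-formatted response
}
```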
Tools
LaunchBuddy
Do you have a problem keeping track of the 26 indie apps you’ve released over the years? Yes, any task list app can track multiple projects, but how many come customised for the release-focused workflow that apps need, with pre-defined checklists for releasing an app, common first-launch issues, and an accessibility audit? I like this take on the problem from Florian Schweizer. 🚀
Oh, and it’s just as useful for one or two apps as it is for 26! 😂
Code
deep-codable
I’m sure we all agree that Codable was a significant step forward for Swift when it arrived with Swift 4 back in 2017, especially if you need to decode some JSON data from a web server. Remember how many JSON parsing libraries there were? 😳 That doesn’t mean it solves every case perfectly, though, and you might find Mike Lewis’s new package useful if you need to flatten out data from a deep hierarchy or only take one or two pieces of data from a large JSON object.
He wrote more about it on the Swift Forums if you want more context.
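To see the problem it solves, here’s what grabbing a single deeply nested value looks like with plain Codable — the JSON shape and type names are invented for illustration, and deep-codable’s pitch is that it collapses all these intermediate wrapper types:

```swift
import Foundation

// Plain Codable: extracting one value from a deep hierarchy means
// declaring an intermediate type for every level of nesting.
struct Response: Decodable {
    struct Payload: Decodable {
        struct User: Decodable {
            struct Profile: Decodable { let displayName: String }
            let profile: Profile
        }
        let user: User
    }
    let payload: Payload
}

let json = #"{"payload":{"user":{"profile":{"displayName":"Dave"}}}}"#
let response = try JSONDecoder().decode(Response.self, from: Data(json.utf8))
print(response.payload.user.profile.displayName) // "Dave"
```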
Drops
This package from Omar Albeik isn’t brand new, but it did catch my eye this week. It’s a wonderfully true-to-the-original re-creation of the toasts you see when you use an Apple Pencil with a modern iPad. I’ve not compared them side-by-side, but they look incredibly similar to me, and I’ve always liked their design.
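Trying it out looks roughly like this — a sketch based on my reading of the package’s README, so treat the exact initializer and method names as assumptions and check the repository before copying it:

```swift
import Drops

// Hypothetical usage based on the package README — verify the
// current API against the repository before relying on these names.
let drop = Drop(title: "Copied", subtitle: "Link copied to your clipboard")
Drops.show(drop)
```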
SwiftUI Split View Configuration
Split views were always a little tricky with SwiftUI before this year’s re-think of the navigation APIs. Did Apple manage to make everyone happy with the changes? Probably not, but they made Keith Harrison happy enough that he spent some time writing about them! 🚀
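If you haven’t tried the new API yet, the basic shape looks like this — a minimal sketch where the data and selection handling are invented for illustration, but NavigationSplitView and its column visibility binding are the real iOS 16/macOS 13 APIs:

```swift
import SwiftUI

// A minimal two-column split view using the new navigation APIs.
struct ContentView: View {
    @State private var visibility: NavigationSplitViewVisibility = .all
    @State private var selection: String?
    private let fruits = ["Apple", "Banana", "Cherry"]

    var body: some View {
        NavigationSplitView(columnVisibility: $visibility) {
            // Sidebar column with a selection binding driving the detail view.
            List(fruits, id: \.self, selection: $selection) { fruit in
                Text(fruit)
            }
            .navigationTitle("Fruits")
        } detail: {
            Text(selection ?? "Select a fruit")
        }
        .navigationSplitViewStyle(.balanced)
    }
}
```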
SwiftUI Field Notes: DocumentGroup
Talking of the new SwiftUI navigation APIs, Joseph Heck wrote up some notes on document-based apps using the new APIs. Again, this was a weak area in previous SwiftUI versions, so it’s great to see this quintessential macOS feature get more attention.
(I know, I know. It’s also on iOS and has been for a long time, but I’d say it’s a quintessential feature of macOS.)
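If you’ve not built a document-based app with SwiftUI before, the entry point is surprisingly small. Here’s a minimal plain-text sketch — the document and app type names are mine, but DocumentGroup, FileDocument, and TextEditor are the real APIs:

```swift
import SwiftUI
import UniformTypeIdentifiers

// A minimal plain-text document type.
struct TextFile: FileDocument {
    static var readableContentTypes: [UTType] { [.plainText] }
    var text: String

    init(text: String = "") { self.text = text }

    init(configuration: ReadConfiguration) throws {
        guard let data = configuration.file.regularFileContents,
              let string = String(data: data, encoding: .utf8)
        else { throw CocoaError(.fileReadCorruptFile) }
        text = string
    }

    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        FileWrapper(regularFileWithContents: Data(text.utf8))
    }
}

@main
struct NotesApp: App {
    var body: some Scene {
        // DocumentGroup provides New/Open/Save and the document browser
        // for free on both macOS and iOS.
        DocumentGroup(newDocument: TextFile()) { file in
            TextEditor(text: file.$document.text)
        }
    }
}
```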
Jobs
Senior iOS Software Engineer @ Cambridge Mobile Telematics – Cambridge Mobile Telematics (CMT) is on a mission to make the world’s roads and drivers safer. We measure driving quality, incentivize safer driving, assist users in crashes in real time, and improve safety for millions of drivers around the world every day. – Remote (within US timezones) or on-site (MA, United States)
Only one featured job this week? That means it’s the perfect time to get your company’s open positions noticed with a featured listing over at iOS Dev Jobs. What are you waiting for? 🚀
And finally...
How did we ever cope without this feature? 🌈
Comment
Did you see Steve Troughton-Smith’s experiments using Midjourney to generate sample album art for his Broadcasts app or to generate app icons this week?
It’s hard to look at Steve’s examples and some of the incredible results that DALL-E and similar projects are generating and not be impressed. At least, I was!
AI is all around us already. Every time you get in your car, your iPhone knows where you want to go. Everyday interactions with Alexa and Siri are all powered by AI, and that’s before considering the emerging future of bots that can mimic human conversation, GPT-3, and these image generation APIs.
But it set me thinking. I had an adverse reaction to GitHub Copilot, and I haven’t changed my mind since writing that, especially on the issues around licensing and attribution. Why wasn’t my gut feeling about these image generation APIs the same?
I think it’s probably something to do with how transformed the raw material appears. With Copilot, I can imagine the code it suggests coming directly from one of the sources. It’s hard to know how accurate that impression is, but that’s how it appears. With GPT-3 or the image generation APIs, it’s harder to make that connection. Maybe it’s because I’m less familiar with those training data sets? It’s no different, though. Thousands and thousands of human artists and authors created the training data these systems are built on.
I said I wanted to get on board with these new technologies when I first wrote about GitHub Copilot, and I feel the same way deep down. The world changes all the time, whether I like it or not. It’s been less than 50 years since a computer running at 80MHz with 8MB (not GB) of memory looked like this, and less than 100 years since this photo. I should probably get over myself and move with the inevitable progress these technologies will eventually bring.
The idea that a few giant companies will use this to make even more money from the work of countless thousands of unpaid humans who produced the training data doesn’t feel great, though. Let alone the legal issues that’ll inevitably land in courts over the next few years. They’re making it hard for me to take this grumpy old man hat off for now. 👴🏻
Dave Verwer