So Google had their I/O conference this week and you almost certainly saw the demo of Google Duplex, which has been everywhere since it happened. I don't want to get into it in too much detail here, but my thoughts can be summed up as follows:

  1. The assistant should probably identify itself at the start of the call by saying something like "This is the Google Assistant calling on behalf of ..." for a few reasons mentioned below.
  2. I don't have a problem with the ethics or concept of the technology for this specific use case. If a business doesn't want to receive this kind of call, it can implement a compatible online booking system and Google will use that instead of calling. Of course we need to be very careful with how this kind of technology is used in the future, but I don't think this is the start of the apocalypse just yet.
  3. Yes, the tone of the assistant was a little direct/short, but I wonder if that was intended to limit the types of responses the human will give. If I start a conversation by being direct, that sets the tone, and the assistant won't have to deal with the more complex responses of a less formal conversation. The same would happen if the assistant identified itself, as mentioned in point 1.
  4. This is going to go wrong in the real world all the time, and I'd really love to hear how it copes when that happens. At what point (if any) does it admit that it's a computer? Also, implementing point 1 would make this a lot easier.

I'm quite sure Google thought about all of these points, at length, but these were my initial takes on it.

Anyway, all of that has very little to do with iOS (or even mobile) development, so let's move on. As with WWDC, the main keynote is for the general public, and it's the developer keynote that contains the real news for us. Several things stood out to me:

  1. Google Assistant already has a huge amount of support for third-party apps, and much more was announced in this keynote. SiriKit started with a very limited set of possible integrations so that the supported domains would be well understood. Google started with a much more open API and is now trying to use the data gathered to better understand the domains. Two very different approaches.
  2. There's a comedy naming situation going on with the AR and ML frameworks, but those technologies were definitely the theme of almost everything Google talked about in the keynotes. Interestingly, ML Kit is also available on iOS, as it's part of Firebase.
  3. App Slices are really cool, and I can see them becoming a much bigger part of how users interact with apps. They feel like iOS app extensions, but again with a different approach.
  4. Android Studio (and other Linux apps) can now run on Chrome OS. THIS IS THE YEAR OF LINUX ON THE DESKTOP! 😂

Anyway, all of that is interesting to look at, but we're here to talk about iOS, so let's get on with those links.

Dave Verwer





Business and Marketing


And finally...