

It’s usually harder to do to admins, since they’re the ones who do the suspending.
UI designs are rarely exactly the same as the final product. Many tweaks happen after the design is implemented, and sometimes doing exactly what the design requires is too difficult or takes too many resources.
I’ve presented a few WWDC sessions, including two video sessions, though nothing as huge as the keynote or Platform State of the Union. I can answer most questions you have about the process.
The screens shown in WWDC sessions are usually screen captures from real devices. Development of the slide decks starts with a template deck that has the styles, fonts, and color themes for that year’s sessions. It includes slides that look like the latest devices, with precise rectangles the right size where screen captures will fit. As people develop their sessions they use these slides as placeholders for screenshots, animations and videos.
During development of the OSes, the code branches for what will become the first developer seed. Before WWDC, one build from this branch gets marked as ready for final screenshots/videos. The idea is that the UI is close enough to what will ship in the first developer seed that the OS and sessions will match.
Once that build is marked, the presenters take their screenshots and those get incorporated into the slides.
You wrote “It wasn’t just a screen recorder thing”. What makes you say that?
You asked about specialized software. Apple OS engineers have to use what are called “internal variants” of the OSes during development. These have special controls for all sorts of things. One fun thing to look for in WWDC sessions: the status bar almost always has the same details, with the same time, battery level, Wi-Fi signal strength, etc. These are real screenshots, but the people taking the videos used special overrides in the internal variants to force the status bar to show those values rather than the actual values. That makes things consistent. I think it avoids weird things like viewers being distracted by a demo device with a low battery.
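The internal variants themselves aren’t public, but the iOS Simulator ships a public analogue of those overrides: simctl’s status_bar subcommand. A quick illustration (the values below are the classic marketing defaults, not anything Apple mandates):

```
# Force a booted Simulator to show canned status bar values
xcrun simctl status_bar booted override \
    --time "9:41" \
    --dataNetwork wifi --wifiMode active --wifiBars 3 \
    --batteryState charged --batteryLevel 100

# Restore the real values
xcrun simctl status_bar booted clear
```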
Cats here, cats there,
Cats and kittens everywhere.
Hundreds of cats, thousands of cats,
Millions and billions and trillions of cats.
It’s sociopaths who lack empathy, and that leaves them behaving like capuchins. The one on the left is upset at the unfairness, but the one on the right doesn’t care at all; it just keeps taking its unfair advantage.
Part of that is the responsibility of the app developer, since they define the payload that appears in the APNs push message. It’s possible for them to design it such that the push message really just says “time to ping your app server because something changed”. That minimizes the amount of data exposed to Apple, and therefore to law enforcement.
For instance the MDM protocol uses APNS. It tells the device that it’s time to reach out to the MDM server for new commands. The body of the message does not contain the commands.
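To make that concrete, here’s a minimal sketch of the “ping your app server” pattern using a silent push. The empty payload and the delegate method are standard APNs/UIKit mechanics; the sync endpoint is hypothetical, and a real app would add auth plus the remote-notification background mode.

```swift
import UIKit

// A silent push carries no user-visible content. The APNs payload is just:
//   { "aps": { "content-available": 1 } }
// so all Apple's servers ever see is "wake this app" — the actual data
// travels over the app's own channel to its own server.
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didReceiveRemoteNotification userInfo: [AnyHashable: Any],
                     fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
        // Hypothetical sync endpoint; the push told us *that* something
        // changed, this request finds out *what* changed.
        guard let url = URL(string: "https://example.com/api/sync") else {
            completionHandler(.failed)
            return
        }
        URLSession.shared.dataTask(with: url) { data, _, error in
            guard let data = data, error == nil else {
                completionHandler(.failed)
                return
            }
            // Apply the changes locally; APNs never saw this payload.
            completionHandler(data.isEmpty ? .noData : .newData)
        }.resume()
    }
}
```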
That still necessarily reveals some metadata, like the fact that a message was sent to a device at a particular time. Often metadata is all that law enforcement wants for fishing expeditions. I think we should be pushing back on law enforcement’s use of broad requests (warrants?) for server data. We can and should minimize the data that servers hold, but there are limits: if servers can hold nothing, then we no longer have a functional Internet. Law enforcement shouldn’t feel entitled to all server data.
In market terms, bad news was already priced in. The fact that the steep drop wasn’t as bad as some analysts predicted means it was better news than expected, so the stock went up a bit.