Demangle Link Map Swift Symbols

I’ve been working on a command-line app to analyze the link map generated when we build our app and report on the size of each module and each file within that module. I also wanted to provide a means of showing information about the individual symbols within each file, but the names of the symbols are mangled:

_$sSo31AVCaptureVideoStabilizationModeVSHSCSH9hashValueSivgTW

There is a command-line demangling tool, swift demangle, that ships as part of the Swift toolchain, but it simply refused to spit out a value. I kept getting _ ---> _, which indicates it couldn’t understand the string.

Eventually, I discovered that you need to replace the leading “_” with a “\” backslash character. This is needed because “$” triggers variable expansion in the shell: left unescaped, the $s… portion of the name silently expands to an empty string, leaving the demangler with only a bare “_” to work with. (If the symbol starts with a bare “$”, the whole argument expands to nothing and swift demangle sits waiting for input on stdin until you press Ctrl+C.) So, the string becomes:

\$sSo31AVCaptureVideoStabilizationModeVSHSCSH9hashValueSivgTW

When fed into swift demangle, I finally got the output I was looking for:

protocol witness for Swift.Hashable.hashValue.getter : Swift.Int in conformance __C.AVCaptureVideoStabilizationMode : Swift.Hashable in __C_Synthesized

Certainly more readable, but an awfully long “name” for the symbol. I’ll figure that out next.
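
Since my analyzer is itself a command-line app, it can also skip the shell entirely and call the demangler through Process, which passes arguments directly with no variable expansion (so no escaping is required). Here is a minimal sketch of that idea using xcrun swift-demangle; this is my own approach, not something the tool prescribes:

import Foundation

/// Demangles a single Swift symbol by shelling out to `xcrun swift-demangle`.
/// Process passes arguments directly (no shell), so the "$" needs no escaping.
func demangle(_ symbol: String) -> String? {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/xcrun")
    process.arguments = ["swift-demangle", symbol]

    let pipe = Pipe()
    process.standardOutput = pipe

    do {
        try process.run()
        process.waitUntilExit()
    } catch {
        return nil
    }

    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    // Output is of the form "mangled ---> demangled"; return it as-is and let
    // the caller split on " ---> " if only the demangled half is needed.
    return String(data: data, encoding: .utf8)?
        .trimmingCharacters(in: .whitespacesAndNewlines)
}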

GitHub Actions: Using Older Simulators

GitHub Actions is a relatively recent addition to GitHub that allows you to run CI/CD tasks in response to repository changes. The service is free for public repositories and available on the paid tiers for private organizations.

One shortcoming I’ve encountered is that each version of Xcode only has its default shipping simulators available to it. Thus, if you want to run tests one or even two iOS versions back, you need to do some setup work. Thankfully, there is a solution less drastic than downloading the required simulator at the start of every CI run (which would slow things down terribly).

The solution comes from the fact that the default macOS environment has many previous versions of Xcode available. You can use symbolic links to make an older simulator appear to be installed when you run xcodebuild. This can be done as a step just before you begin build / test operations:

[gist https://gist.github.com/JoshuaSullivan/8a1455bc9813b5926235053e1de8c93b]

You can see a complete list of the software installed on the virtual macOS environments on this page: they maintain all current- and previous-generation versions of Xcode, as well as one version from two generations ago. To pick a particular simulator, you just need to modify the workflow step to refer to the correct version of Xcode and name the linked simulator runtime appropriately.

To use the simulator, you simply need to update the destination flag of your xcodebuild command to look something like this:

-destination 'platform=iOS Simulator,name=iPhone 11,OS=13.7'

The next time you run your workflow, you should see it link to the specified simulator and use it for your tests.

Combine: “Lazy” CurrentValueSubject

This is a technique suggested to me by Jordan Gustafson (@minnesota_gus) for making a CurrentValueSubject that doesn’t require an initial value. It should probably be developed into an actual type that conforms to Subject, but I haven’t figured that out yet.

[gist https://gist.github.com/JoshuaSullivan/0985e00b9828a70d25383fadb4cbf013]

This will produce the following output:

First subscription.
[Example] sending value: 1
sub1: 1
Second subscription.
sub2: 1
[Example] sending value: 2
sub2: 2
sub1: 2

As you can see, the first subscription did not receive a value until the first time doSomething() was called. Conversely, the second subscription, which was added after the first value had been sent, received a value immediately. Both subscriptions received all values thereafter.
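
For reference, here is a minimal sketch of one way to get this behavior; it may not match Jordan’s gist exactly, but the idea is to back the subject with an optional value and hide the initial nil behind compactMap:

import Combine

final class Example {
    // The subject starts with nil, so no meaningful initial value is required.
    private let subject = CurrentValueSubject<Int?, Never>(nil)

    // Subscribers only ever see real values; the initial nil is filtered out.
    var publisher: AnyPublisher<Int, Never> {
        subject.compactMap { $0 }.eraseToAnyPublisher()
    }

    private var counter = 0

    func doSomething() {
        counter += 1
        print("[Example] sending value: \(counter)")
        subject.send(counter)
    }
}

A subscriber attached before the first doSomething() call receives nothing, while a subscriber attached later immediately receives the most recent value, which is exactly the behavior shown above.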

CoreImage: New Filters in iOS 14.0

iOS 14.0 recently arrived and with it, 3 new CoreImage filters:

  1. Color Absolute Difference
  2. Color Threshold
  3. Color Threshold Otsu

Unfortunately, Apple has not yet updated the CoreImage documentation to include these new filters. Instead, I’ve created a Swift playground that enumerates all of the properties of a CIFilter. Here are the results for those 3 filters:


[gist https://gist.github.com/JoshuaSullivan/c3c167ac2b503efbfe6916a7405a0b98 file=”ColorAbsoluteDifference.txt”]


[gist https://gist.github.com/JoshuaSullivan/c3c167ac2b503efbfe6916a7405a0b98 file=”ColorThreshold.txt”]


[gist https://gist.github.com/JoshuaSullivan/c3c167ac2b503efbfe6916a7405a0b98 file=”ColorThresholdOtsu.txt”]
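
The playground is little more than a loop over CIFilter’s attributes dictionary; here is a minimal sketch of the idea, using the built-in name of the Color Threshold filter:

import CoreImage

// Dump every attribute Core Image reports for a filter, sorted by key.
// The same approach works for "CIColorAbsoluteDifference" and "CIColorThresholdOtsu".
if let filter = CIFilter(name: "CIColorThreshold") {
    for (key, value) in filter.attributes.sorted(by: { $0.key < $1.key }) {
        print("\(key): \(value)")
    }
}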


I have no idea what the difference is between the Color Threshold and the Color Threshold Otsu filters. I’ll follow up if I can produce a workable demo.

Mocking Static Method Protocols

I ran into an issue on my project where I wanted to add unit tests for my AnalyticsService facade class, which hooks up to Firebase Analytics behind the scenes. The issue is that the Firebase class uses only static method calls for interaction: Analytics.log("event_name")

I created an AnalyticsBackEnd protocol that declared a static method and added an extension to the Analytics class to conform to it:

[gist https://gist.github.com/JoshuaSullivan/ec730a453f0a1c24642cbd014229aee5 file=”AnalyticsBackEndProtocol.swift”]

The sticking point was the initializer for the AnalyticsService class: how do I say that I want to receive a class conforming to AnalyticsBackEnd without passing an instance of the class? The solution ended up looking like this:

[gist https://gist.github.com/JoshuaSullivan/ec730a453f0a1c24642cbd014229aee5 file=”AnalyticsService.swift”]

In this way, the AnalyticsService uses the Analytics class by default, but allows me to initialize it with the MockAnalyticsBackEnd class for unit testing purposes:

[gist https://gist.github.com/JoshuaSullivan/ec730a453f0a1c24642cbd014229aee5 file=”AnalyticsUnitTest.swift”]
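
Since the gists carry the real code, here is a condensed, self-contained sketch of the pattern; the Analytics class below is just a stand-in for Firebase’s, and the method names are simplified:

import Foundation

// Protocol describing the static logging call we depend on.
protocol AnalyticsBackEnd {
    static func log(_ eventName: String)
}

// Stand-in for Firebase's Analytics class; the real one already has a suitable
// static method, so conformance is just an empty extension.
final class Analytics {
    static func log(_ eventName: String) { /* forwards to Firebase */ }
}
extension Analytics: AnalyticsBackEnd {}

final class AnalyticsService {
    // Store the metatype (AnalyticsBackEnd.Type), not an instance, so static
    // methods can be called on whatever type was injected.
    private let backEnd: AnalyticsBackEnd.Type

    init(backEnd: AnalyticsBackEnd.Type = Analytics.self) {
        self.backEnd = backEnd
    }

    func track(_ eventName: String) {
        backEnd.log(eventName)
    }
}

// In unit tests, a mock back end records events instead of sending them.
final class MockAnalyticsBackEnd: AnalyticsBackEnd {
    static var loggedEvents: [String] = []
    static func log(_ eventName: String) { loggedEvents.append(eventName) }
}

The key is the AnalyticsBackEnd.Type metatype in the initializer: passing Analytics.self or MockAnalyticsBackEnd.self satisfies it without ever creating an instance.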

Core Image: Correcting Orientation

UIImage and UIImageView work together to ensure that photos are displayed in the correct orientation, based on the EXIF metadata for the image. However, Core Image does not respect orientation by default, and this can cause Core Image-backed rendering tasks (thumbnails, effects, etc.) to produce unexpected results.

Lots of the correction techniques available in answers on Stack Overflow and other programming forums are wildly inefficient, producing a second full-resolution copy of the source image. Other techniques are based on the UIKit coordinate system (origin in upper-left corner, positive y pointing down) rather than the Core Image coordinate system (origin in lower-left corner, positive y pointing up).

Fortunately, there is a simple solution built into Core Image as of iOS 11: orientationTransform(for:). When invoked on a CIImage instance, it returns a CGAffineTransform which will convert the image from its current orientation to the .up orientation. This transform can be combined with any other transform being performed, or applied on its own via the CIImage.transformed(by:) instance method. There is another form of the call, oriented(_:), which calculates and applies the transform for you, returning the resulting CIImage. See the documentation here.

Trying to correct orientation yourself is tedious and error-prone; these methods make it a snap and will save you many hours of work.
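
Here is a minimal sketch of both forms; it assumes you already have the image’s orientation as the type the method expects (more on that in the note below):

import CoreImage
import ImageIO

/// Returns an upright (.up) version of the image, given its EXIF orientation.
func uprightImage(from image: CIImage, orientation: CGImagePropertyOrientation) -> CIImage {
    // Form 1: fetch the transform and apply it yourself; useful when you are
    // already composing other transforms (scaling, cropping, etc.).
    let transform = image.orientationTransform(for: orientation)
    return image.transformed(by: transform)

    // Form 2: equivalent one-liner that calculates and applies the transform:
    // return image.oriented(orientation)
}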

Note: The method takes CGImagePropertyOrientation values, which are NOT the same as the UIImage.Orientation enum cases. I’ve made a helper extension for UIImage.Orientation to convert it into the appropriate value for the orientation transform method.

[gist https://gist.github.com/JoshuaSullivan/7b01141fe82c308519ee537d9120cc70 /]

The Pros and Cons of RxSwift…

I recently wrapped up a major app project for Google’s Cloud Next ’18 conference. We made extensive use of RxSwift throughout the project, centered around a couple of key objectives:

  1. Model-layer parity with Android – Design nearly-identical APIs for the view model and persistence layers on iOS and Android to help avoid the “iOS does one thing, but Android does another” class of bugs. Internally the implementations are quite different, but having their public interfaces documented in a Wiki allowed each platform to feel out the requirements for a particular object, then document it for the other team. This saved massive amounts of time over the course of the project and resulted in very few platform discrepancy bugs.

  2. React to schedule changes in real time – Google Cloud Next is a BIG conference, second only to Google I/O in terms of attendance. With limited seating at the various sessions, it is very important that attendees know exactly which sessions have available seats and receive important updates about their reserved sessions as soon as possible. To accomplish this, we used RxSwift to transform observations of the real-time session details coming out of our Firebase Cloud Firestore back end into data streams that could be easily bound to views in the user’s schedule.

This was my first production experience with Rx and, as such, I went through a considerable ramp-up period over the course of the first month. If you are working on your first Rx project, expect to lose about 50% of your productivity for the first month and 20% of it the second month as you come to grips with the different style of data flow that Rx enables and learn which tool in the Rx toolbox is appropriate for each data scenario. The upside is that techniques learned for Rx on one platform are broadly applicable to others (which was a big reason why we chose to work with it on this project).

Once I became more familiar with Rx, I started being able to model data transformations in my head and implement them with a bare minimum of fuss. This was gratifying, but it always felt like there were some sharp edges around the boundaries between Rx code and more traditional UIKit code governing things like user interaction. I’ve created a quick list of the major points to be aware of when considering RxSwift for your iOS project:

Pros:

  • Able to describe a common interface for model layer APIs between iOS and Android. This was the biggest win for us on this project, saving many dozens of hours of QA bug fixing time.
  • Avoid nested-closure hell that typifies complex asynchronous data transformations in Foundation/UIKit.

Cons:

  • Steep learning curve makes ramping new developers onto the project difficult (and toward the end of the project, completely impractical). This is the #1 reason you should consider avoiding RxSwift: when it’s crunch time, you won’t be able to add developers to the project unless they’re already Rx veterans.
  • Debugging Rx data transformations is horrible. When Rx is working as intended, it’s borderline magical. When it has a problem, the debugging process is considerably more difficult. Any breakpoint you hit within a data stream will present a 40+ entry backtrace stack with dozens of inscrutable internal Rx methods separating and obscuring the code you actually wrote.
  • Rx metastasizes throughout your code base. The entry and exit-points where Rx interacts with UIKit are awkward and difficult to parse. We often found ourselves saying “oh, well if this service’s method returned an Observable instead of a variable, we could do this particular transformation more easily…” and so Rx spreads to another class in your app.

In the end, we can’t say we dramatically cut development time by using RxSwift; it simply replaced one class of problems (maintaining cross-platform consistency) with another (figuring out how best to use RxSwift). We will be launching into the next phase of the project soon, updating the app for Google Cloud Next ’19. I’m sure I will have more to talk about once that effort has completed next year.

Getting Swift 4 KVO working…

Here are 2 common pitfalls to avoid when you’re trying to use Swift 4 Key-Value Observation for the first time:

Keep that Observation object!

Calling observe(_:options:changeHandler:) on your object returns an NSKeyValueObservation object. The observation will only continue for as long as that object exists! If you fail to store it in a persistent variable or array, it will be deallocated immediately and no observations will occur.

Always Specify Options!

The 2nd parameter of the observe() method has a default value which is unhelpfully just called default in the quick documentation. It is the equivalent of providing an empty option set, which means your change handler closure will be invoked when the value changes, but you will not get any information about the new or old value! Even if this is the behavior you want, it is better to explicitly specify an empty option set so that someone else looking at your code immediately knows not to expect a value in the change handler.
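
Both pitfalls are easy to see in a small example. The types here are hypothetical, but the pattern is what matters:

import Foundation

final class Downloader: NSObject {
    // Swift 4 KVO requires @objc dynamic properties on an NSObject subclass.
    @objc dynamic var progress: Double = 0
}

final class ProgressWatcher {
    let downloader = Downloader()

    // Pitfall 1: keep the observation object around. Without this stored
    // property, the NSKeyValueObservation would deallocate and observation would stop.
    private var observation: NSKeyValueObservation?

    func startObserving() {
        // Pitfall 2: explicitly ask for the values you want. With the default
        // (empty) options, change.newValue and change.oldValue are both nil.
        observation = downloader.observe(\.progress, options: [.new]) { _, change in
            guard let newValue = change.newValue else { return }
            print("Progress is now \(newValue)")
        }
    }
}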

Handling Drag & Drop Raw Photos

There is no shortage of tutorials on the topic of Drag & Drop, but I wanted to get into a particular special case that creators of apps which accept dropped images should be aware of. If your user is a photographer who shoots with an SLR and imports their images in camera raw format, those images will not be accepted for drop unless you handle them specially.

Specifically, Raw images conform to the kUTTypeRawImage UTI, which is not included in UIImage’s readableTypeIdentifiersForItemProvider. This is because UIImage cannot import raw images directly; they require a detour through Core Image. Luckily, this is pretty easy to accomplish, as Core Image has a simple way to handle Raw images.

Note: My project is using a UICollectionView as the drop target, so I’m working with the methods in the UICollectionViewDropDelegate protocol. Working with custom views should be broadly similar, but I haven’t tried it out yet.

Step 1: Getting the Raw Image

When a Raw image is dropped on an app, the image data is not delivered the way UIImage-compatible images are via UIDropSession’s loadObjects(ofClass:completion:) convenience method. Instead, a URL is provided to your app. However, if you attempt to read or copy the file at that URL, it will always fail as unavailable. You may be tempted to try NSItemProvider’s loadItem(forTypeIdentifier:options:completionHandler:) method, but it has awful ergonomics in Swift (you can’t implicitly coerce a protocol into a conforming type), and what’s more, even if you do force it to give you the URLs for the Raw images, they will all be unavailable and useless, possibly due to sandbox restrictions.

[gist https://gist.github.com/JoshuaSullivan/92941351244fbbea4ff2bc6ce4426f4f file=”load-images-wrong.swift”]

The correct way to do this is to iterate over the list of UIDragItems, filtering by those which have an NSItemProvider that responds affirmatively to hasItemConformingToTypeIdentifier(_:) for kUTTypeRawImage. You can then iterate over the filtered NSItemProviders and call loadFileRepresentation(forTypeIdentifier:completionHandler:) on each one:

[gist https://gist.github.com/JoshuaSullivan/92941351244fbbea4ff2bc6ce4426f4f file=”load-images-correct.swift”]

This method makes an accessible copy of the Raw image and returns the URL of the copy to the completion block. Why use this method and not, say, loadDataRepresentation(forTypeIdentifier:completionHandler:)? The answer is memory: Raw images tend to be very large (dozens of MB each) and if your user has dropped a bunch of them on your app, attempting to hold all of them in memory could cause the system to kill your app for eating up too much memory. Using the URL instead of the contents of the file consumes basically no memory until a specific image needs to be loaded for processing. In tests, I was able to drop 10+ Raw images onto my app for processing and never see the memory go above about 70MB, dropping back down to 20-30MB when processing completed.

Warning: The URLs provided by loadFileRepresentation only seem to be valid for the scope of the completion closure. You shouldn’t try to hold on to them and load them later, because it will fail. Instead, copy the file into your app’s sandbox (such as the Caches directory) and use the URL of the local copy to access the Raw image later.
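
Putting Step 1 together, here is a minimal sketch of the load-and-copy flow; the function shape and the Caches-directory destination are my own choices rather than the exact gist code:

import UIKit
import MobileCoreServices

/// Collects dropped Raw images, copying each into the Caches directory so the
/// URL remains valid after the load closure returns.
func loadDroppedRawImages(from coordinator: UICollectionViewDropCoordinator,
                          completion: @escaping (URL) -> Void) {
    let rawProviders = coordinator.items
        .map { $0.dragItem.itemProvider }
        .filter { $0.hasItemConformingToTypeIdentifier(kUTTypeRawImage as String) }

    for provider in rawProviders {
        _ = provider.loadFileRepresentation(forTypeIdentifier: kUTTypeRawImage as String) { url, _ in
            guard let url = url else { return }
            // The URL is only valid inside this closure, so copy the file into
            // our own sandbox before handing it off for conversion (Step 2).
            let destination = FileManager.default
                .urls(for: .cachesDirectory, in: .userDomainMask)[0]
                .appendingPathComponent(url.lastPathComponent)
            try? FileManager.default.copyItem(at: url, to: destination)
            completion(destination)
        }
    }
}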

Step 2: Converting the Raw Image

Now we have a local URL for the Raw images, but we still can’t do much with them since UIImage is unable to initialize with Raw image data. Thankfully, Apple has a simple and robust conversion mechanism based on Core Image. Instantiating a CIFilter using the init(imageURL:options:) method creates an instance of CIRawFilter, which can process all of the various Raw image formats that Apple officially supports. The filter’s outputImage property (a CIImage) can then be sent to a CIContext for rendering to a CGImage or used directly to apply filter effects and image adjustments:

[gist https://gist.github.com/JoshuaSullivan/92941351244fbbea4ff2bc6ce4426f4f file=”convert-raw-to-uiimage.swift”]

In practice, the first run of this conversion is pretty slow as the CIContext sets itself up, taking up to several seconds. Subsequent uses are very fast, requiring only a fraction of a second. At the end of this process, you have a UIImage that can be used just like a regular dropped image.
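
Condensed down, the conversion looks something like the sketch below; reusing a single CIContext is what keeps everything after the first run fast:

import UIKit
import CoreImage

// Create the context once; its setup is the slow part.
let rawContext = CIContext()

/// Converts a Raw file at a local URL into a UIImage.
func makeImage(fromRawAt url: URL) -> UIImage? {
    // CIFilter(imageURL:options:) hands back a RAW-processing filter for the file.
    let rawFilter = CIFilter(imageURL: url, options: nil)
    guard let output = rawFilter.outputImage,
          let cgImage = rawContext.createCGImage(output, from: output.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}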

ARKit + RGB Sampling

I’ve been working on an ARKit app to paint with “pixels” floating in space. When the ARSession invokes its delegate method for ARFrame capture, I want to capture the colors the camera sees at the detected feature points. I then create a simple 3D box at that point with the sampled color and can then pan around the pixel. This is pretty neat when you scan someone’s face and then have them leave.

Anyway, that whole “sample the color of the camera’s captured image at an arbitrary 2D coordinate” turned out to be a dramatically more difficult problem than I had anticipated. Obstacles include:

  • Image is in CVPixelBuffer format.
  • The pixel buffer is in YCbCr planar format (the camera’s raw format), not RGB.
  • Converting individual samples from YCbCr to RGB is non-trivial and involves doing matrix multiplication.
  • There are several different conversion matrices out there for handling different color spaces, just in case you wanted to convert an image captured off a VHS tape, I guess?
  • Apple’s Accelerate framework can do this conversion on the entire image very quickly, but the setup is quite complex and consists of invoking a chain of complex C functions. Once properly configured, it is spectacularly fast, converting an entire camera image in roughly 1/2 of a millisecond.
  • The Accelerate framework has not received much love since Apple’s switch to the unified documentation style last year: hundreds of functions appear nowhere in the documentation. The only way to figure out that they exist and how to use them is to browse the Accelerate header files, which are robustly commented.
  • Swift’s type safety is a big pain in the butt when you’re dealing with unsafe data structures like image buffers.

Setting up ARKit to display the “pixels” took about 2 hours (my first ARKit experiment and my first exposure to SceneKit). Getting the color samples to color the pixels took about 2 days. I don’t feel like this is a learning process your average ARKit developer particularly needs to repeat, so I’ve tidied up my solution and released it as a gist.

Check it out: CapturedImageSampler.swift

Usage: when your app receives a new ARFrame via the ARSession’s delegate callback, instantiate a new CapturedImageSampler with it. You are then free to query it for the color of a particular coordinate. I’m using scalar coordinates so that the sampling is scale-independent. If you want to find the color under a user’s tap, for instance, simply convert the x and y coordinates to scalars by dividing them by the screen width and screen height, respectively. When you’re done sampling (which must occur before the next frame arrives), simply discard the CapturedImageSampler by letting it go out of scope. Do not retain the sampler, use it asynchronously, or pass it between threads; it should not live longer than the ARFrame that created it.
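
For example, converting a tap into the scalar coordinates the sampler expects is just a couple of divisions:

import UIKit

/// Converts a touch location in a view into resolution-independent (0...1) coordinates.
func scalarCoordinates(for point: CGPoint, in view: UIView) -> (x: CGFloat, y: CGFloat) {
    return (x: point.x / view.bounds.width,
            y: point.y / view.bounds.height)
}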

A word of warning: this object is not at all thread-safe due to the private use of a shared static buffer. I chose this implementation for maximum performance, since a new buffer does not need to be allocated for every frame received from ARSession. However, if you get into a situation where 2 instances of CapturedImageSampler are simultaneously attempting to access the shared buffer you will have a very bad day. If you need to have a thread-safe version of this, I suggest you make the rawRGBBuffer property non-static and add a “release” method that frees up the buffer’s memory when you’re done with it. Failure to manage this process correctly will result in a catastrophic memory leak that will get your app terminated within a couple of seconds.