Using Sigmoid Functions to Create Data “Envelopes”

I had the challenge of plotting a sine wave of arbitrary frequency, but the beginning and ends of the plot had to fade in/out so that it was at 0 on both ends.

My first attempt was using a piecewise function:

  1. 0 <= x <= 0.2 : Sine wave segment rising from 0 to 1.
  2. 0.2 < x < 0.8 : Constant value of 1.
  3. 0.8 <= x <= 1 : Sine wave segment falling from 1 to 0.

However, this approach always produced a visible “seam” at the transition points between functions. I wanted a solution where there was no visual cue where the fade in/out ranges began.

I turned to Sigmoid functions, as they have the smooth curve I was looking for without needing to be clipped to keep them from oscillating back in the other direction. However, a single Sigmoid function only makes one traversal from 0 to 1 or from 1 to 0 (or between other values, depending on how you scale it). I realized that, since the functions asymptotically approach 1, two of them could be multiplied together to produce an envelope shape that rises smoothly from 0 to 1 at the start, stays at 1 through the middle, and falls smoothly from 1 to 0 at the end. I applied various tweaks to the equation to change the slope of the functions and position them within the 0…1 range (with everything outside that range being effectively 0).

This is the result.

A break-down of the key equation components:

  • The -60 term controls the slope of the curve. Larger magnitudes produce a steeper curve. In my case, I wanted each fade to complete over a width of about 0.2. There is no requirement that both Sigmoid functions use the same slope value, so if you wanted a faster rise and a slower fall, that’s fine.
  • The (x - 0.1) term shifts the leading edge of the “fade in” Sigmoid to the right so that it crosses the y-midpoint (0.5) at x = 0.1.
  • The “fade out” Sigmoid flips the function by using (0.9 - x) in place of (x - 0.1). This ensures that the function crosses the y-midpoint at x = 0.9.
  • There is nothing magical about the use of Euler’s number e as the base. You can use any base greater than 1, with values closer to 1 producing more gradual curves and larger values producing steeper ones. The standard sigmoid function just happens to be defined in terms of e, so I went with it.

Here’s the full expression in Swift:

(1 / (1 + pow(2.718281828459, -60 * (x - 0.1)))) * (1 / (1 + pow(2.718281828459, -60 * (0.9 - x))))
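To make the expression reusable, here is a minimal sketch that wraps it in a parameterized function and applies it to a sine wave. The function and parameter names are my own, and the defaults simply mirror the constants above (exp(x) is e raised to the x):

import Foundation

// One Sigmoid rising through fadeInMid, multiplied by another falling through fadeOutMid.
// steepness corresponds to the -60 term in the expression above.
func envelope(_ x: Double,
              steepness: Double = 60,
              fadeInMid: Double = 0.1,
              fadeOutMid: Double = 0.9) -> Double {
    let rise = 1 / (1 + exp(-steepness * (x - fadeInMid)))
    let fall = 1 / (1 + exp(-steepness * (fadeOutMid - x)))
    return rise * fall
}

// Applying the envelope to a sine wave sampled over 0...1:
let frequency = 8.0
let samples = stride(from: 0.0, through: 1.0, by: 0.01).map { x in
    envelope(x) * sin(2 * Double.pi * frequency * x)
}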

Demangle Link Map Swift Symbols

I’ve been working on a command-line app to analyze the link map generated when we build our app and report on the size of each module and each file within that module. I also wanted to provide a means of showing information about the individual symbols within each file, but the names of the symbols are mangled:

_$sSo31AVCaptureVideoStabilizationModeVSHSCSH9hashValueSivgTW

There is a command-line demangling tool that is part of the Swift distribution, but it simply refused to spit out a value. I kept getting a value of _ ---> _, which indicates it couldn’t understand the string.

Eventually, I discovered that you need to replace the leading “_” with a “\” backslash character. This is needed because the “$” symbol is special in the shell: everything after the “$” gets treated as a variable name and expands to nothing, which is why only a bare “_” was reaching the demangler. If you try to demangle a symbol with a leading, unescaped “$”, the shell passes nothing useful to the tool and it hangs waiting for input until you press Ctrl+C. So the string becomes:

\$sSo31AVCaptureVideoStabilizationModeVSHSCSH9hashValueSivgTW
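For reference, the full invocation ends up looking something like this (assuming the swift demangle subcommand from the Xcode toolchain is available in your terminal):

swift demangle \$sSo31AVCaptureVideoStabilizationModeVSHSCSH9hashValueSivgTW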

When fed into swift demangle, I finally got the output I was looking for:

protocol witness for Swift.Hashable.hashValue.getter : Swift.Int in conformance __C.AVCaptureVideoStabilizationMode : Swift.Hashable in __C_Synthesized

Certainly more readable, but an awfully long “name” for the symbol. I’ll figure that out next.

GitHub Actions: Using Older Simulators

GitHub Actions is a relatively recent addition to GitHub that allows you to run CI/CD tasks in response to repository changes. The service is free for public repositories and available on the paid tiers for private organizations.

One shortcoming I’ve encountered is that each version of Xcode only has its default shipping simulators available. Thus, if you want to run tests against iOS versions one or even two releases back, you need to do some setup work. Thankfully, there is a solution less drastic than downloading the required simulator at the start of every CI run (which would slow things down terribly).

The solution comes from the fact that the default macOS environment has many previous versions of Xcode available. You can use symbolic links to make an older simulator appear to be installed when you run xcodebuild. This can be done as a step just before you begin build / test operations:

[gist https://gist.github.com/JoshuaSullivan/8a1455bc9813b5926235053e1de8c93b]

You can see a complete list of the software installed on the virtual macOS environments on this page. The images maintain all current- and previous-generation versions of Xcode, as well as one version from two generations back. To pick a particular simulator, you just need to modify the workflow step to refer to the correct version of Xcode and name the linked simulator runtime appropriately.
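For illustration, the linking step boils down to something like the following. This is only a sketch; the Xcode version, runtime version, and exact paths are assumptions that depend on the runner image and the simulator you need:

sudo ln -s /Applications/Xcode_11.7.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime "/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 13.7.simruntime"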

To use the simulator, you simply need to update the destination flag of your xcodebuild command to look something like this:

-destination 'platform=iOS Simulator,name=iPhone 11,OS=13.7'

The next time you run your workflow, you should see it link to the specified simulator and use it for your tests.

Combine: “Lazy” CurrentValueSubject

This is a technique suggested to me by Jordan Gustafson (@minnesota_gus) for making a CurrentValueSubject that doesn’t require an initial value. It should probably be developed into an actual type that conforms to Subject, but I haven’t figured that out yet.

[gist https://gist.github.com/JoshuaSullivan/0985e00b9828a70d25383fadb4cbf013]

This will produce the following output:

First subscription.
[Example] sending value: 1
sub1: 1
Second subscription.
sub2: 1
[Example] sending value: 2
sub2: 2
sub1: 2

As you can see, the first subscription did not receive a value until the first time doSomething() was called. Conversely, the second subscription—which was added after the first value had been sent—received a value immediately. Both subscriptions receive all values thereafter.
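For readers who can’t load the gist, here is a minimal sketch of one way to get this behavior, using a CurrentValueSubject of an Optional plus compactMap. This is my reconstruction of the idea, not necessarily the gist’s exact code:

import Combine

// A "lazy" current-value publisher: subscribers receive nothing until the first
// real value is sent, after which late subscribers immediately get the latest value.
final class Example {
    private let subject = CurrentValueSubject<Int?, Never>(nil)
    private var counter = 0

    // Drops the initial nil and unwraps every subsequent value.
    var publisher: AnyPublisher<Int, Never> {
        subject.compactMap { $0 }.eraseToAnyPublisher()
    }

    func doSomething() {
        counter += 1
        print("[Example] sending value: \(counter)")
        subject.send(counter)
    }
}

// Usage mirroring the output above:
let example = Example()
var cancellables = Set<AnyCancellable>()

print("First subscription.")
example.publisher.sink { print("sub1: \($0)") }.store(in: &cancellables)
example.doSomething()

print("Second subscription.")
example.publisher.sink { print("sub2: \($0)") }.store(in: &cancellables)
example.doSomething()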

CoreImage: New Filters in iOS 14.0

iOS 14.0 recently arrived and with it, 3 new CoreImage filters:

  1. Color Absolute Difference
  2. Color Threshold
  3. Color Threshold Otsu

Unfortunately, Apple has not yet updated the CoreImage documentation to include these new filters. Instead, I’ve created a Swift playground that enumerates all of the properties of a CIFilter. Here are the results for those 3 filters:


[gist https://gist.github.com/JoshuaSullivan/c3c167ac2b503efbfe6916a7405a0b98 file=”ColorAbsoluteDifference.txt”]


[gist https://gist.github.com/JoshuaSullivan/c3c167ac2b503efbfe6916a7405a0b98 file=”ColorThreshold.txt”]


[gist https://gist.github.com/JoshuaSullivan/c3c167ac2b503efbfe6916a7405a0b98 file=”ColorThresholdOtsu.txt”]
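The enumeration itself is straightforward. Here is a minimal sketch of the approach (my own reconstruction, not the playground’s exact code):

import CoreImage

// The three iOS 14 additions discussed in this post.
let filterNames = ["CIColorAbsoluteDifference", "CIColorThreshold", "CIColorThresholdOtsu"]

for name in filterNames {
    guard let filter = CIFilter(name: name) else {
        print("\(name) is not available on this OS.")
        continue
    }
    print("=== \(name) ===")
    // The attributes dictionary describes every input, its class, and its default value.
    for (key, value) in filter.attributes {
        print("\(key): \(value)")
    }
}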


I have no idea what the difference is between the Color Threshold and the Color Threshold Otsu filters. I’ll follow up if I can produce a workable demo.

Mocking Static Method Protocols

I ran into an issue on my project where I wanted to add unit tests for my AnalyticsService facade class, which hooks up to Firebase Analytics behind the scenes. The issue is that the Firebase class uses only static method calls for interaction: Analytics.log(“event_name”)

I created an AnalyticsBackEnd protocol that declared a static method and added an extension to the Analytics class to conform to it:

[gist https://gist.github.com/JoshuaSullivan/ec730a453f0a1c24642cbd014229aee5 file=”AnalyticsBackEndProtocol.swift”]

The sticking point was the initializer for the AnalyticsService class: how do I say that I want to receive a class conforming to AnalyticsBackEnd without passing an instance of the class? The solution ended up looking like this:

[gist https://gist.github.com/JoshuaSullivan/ec730a453f0a1c24642cbd014229aee5 file=”AnalyticsService.swift”]

In this way, the AnalyticsService uses the real Analytics class by default, but allows me to initialize it with the MockAnalyticsBackEnd class for unit testing purposes:

[gist https://gist.github.com/JoshuaSullivan/ec730a453f0a1c24642cbd014229aee5 file=”AnalyticsUnitTest.swift”]
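For readers who can’t load the gists, the shape of the solution is to store the conforming type (a metatype) rather than an instance. Here is a minimal, self-contained sketch; the protocol and class names follow the post, but the bodies are my own reconstruction rather than the gists’ exact code:

// Back-end requirements are expressed as static methods, matching how Firebase Analytics is called.
protocol AnalyticsBackEnd {
    static func log(_ event: String)
}

final class AnalyticsService {
    // Store the conforming type itself, not an instance of it.
    private let backEnd: AnalyticsBackEnd.Type

    init(backEnd: AnalyticsBackEnd.Type) {
        self.backEnd = backEnd
    }

    func track(_ event: String) {
        backEnd.log(event)
    }
}

// Test double that records events instead of sending them anywhere.
final class MockAnalyticsBackEnd: AnalyticsBackEnd {
    static var loggedEvents: [String] = []
    static func log(_ event: String) {
        loggedEvents.append(event)
    }
}

// In production you would pass the real class (which conforms via an extension),
// e.g. AnalyticsService(backEnd: Analytics.self). In a unit test:
let service = AnalyticsService(backEnd: MockAnalyticsBackEnd.self)
service.track("test_event")
assert(MockAnalyticsBackEnd.loggedEvents == ["test_event"])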

Core Image: Correcting Orientation

UIImage and UIImageView work together to ensure that photos are displayed in the correct orientation, based on the EXIF metadata for the image. However, Core Image does not respect orientation by default, and this can cause Core Image-backed rendering tasks (thumbnails, effects, etc.) to produce unexpected results.

Many of the correction techniques offered in answers on Stack Overflow and other programming forums are wildly inefficient, producing a second full-resolution copy of the source image. Others are based on the UIKit coordinate system (origin in the upper-left corner, positive y pointing down) rather than the Core Image coordinate system (origin in the lower-left corner, positive y pointing up).

Fortunately, there is a simple solution built into Core Image as of iOS 11: orientationTransform(for:). When invoked on a CIImage instance, it returns a CGAffineTransform that will convert the image from its current orientation to the .up orientation. This transform can be combined with any other transforms you are performing, or applied on its own via the CIImage.transformed(by:) instance method. There is another form of the call that calculates and applies the transform in one step, returning the resulting CIImage. See the documentation here.

Trying to correct orientation yourself is tedious and error-prone; using these methods will make it a snap and save you many hours of work.

Note: The orientation values the method expects (CGImagePropertyOrientation) are NOT the same as the UIImage.Orientation enum values. I’ve made a helper extension for UIImage.Orientation to convert it into the appropriate case for the orientation transform method.

[gist https://gist.github.com/JoshuaSullivan/7b01141fe82c308519ee537d9120cc70 /]
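Putting the pieces together, usage might look like the following sketch. The mapping switch stands in for a helper like the one in the gist, and the function name is my own:

import UIKit
import CoreImage
import ImageIO

// Returns a CIImage whose pixels have been rotated/flipped into the .up orientation.
func orientedCIImage(from uiImage: UIImage) -> CIImage? {
    guard let ciImage = CIImage(image: uiImage) else { return nil }

    // Convert UIImage.Orientation to the CGImagePropertyOrientation that the CIImage API expects.
    let exifOrientation: CGImagePropertyOrientation
    switch uiImage.imageOrientation {
    case .up:            exifOrientation = .up
    case .upMirrored:    exifOrientation = .upMirrored
    case .down:          exifOrientation = .down
    case .downMirrored:  exifOrientation = .downMirrored
    case .left:          exifOrientation = .left
    case .leftMirrored:  exifOrientation = .leftMirrored
    case .right:         exifOrientation = .right
    case .rightMirrored: exifOrientation = .rightMirrored
    @unknown default:    exifOrientation = .up
    }

    // Either apply the transform explicitly...
    let transform = ciImage.orientationTransform(for: exifOrientation)
    return ciImage.transformed(by: transform)
    // ...or use the one-step form: ciImage.oriented(exifOrientation)
}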

The Pros and Cons of RxSwift…

I recently wrapped up a major app project for Google’s Cloud Next ’18 conference. We made extensive use of RxSwift throughout the project, centered around a couple of key objectives:

  1. Model-layer parity with Android – Design nearly-identical APIs for the view model and persistence layers on iOS and Android to help avoid the “iOS does one thing, but Android does another” class of bugs. Internally the implementations are quite different, but having their public interfaces documented in a Wiki allowed each platform to feel out the requirements for a particular object, then document it for the other team. This saved massive amounts of time over the course of the project and resulted in very few platform discrepancy bugs.

  2. React to schedule changes in real time – Google Cloud Next is a BIG conference; second only to Google I/O in terms of attendance. With limited seating at the various sessions, it is very important that attendees know exactly which sessions have available seats as well as receiving important updates about their reserved sessions ASAP. To accomplish this, we used RxSwift to transform observations of the real-time session details coming out of our Firebase Cloud Firestore back end into data streams that could be easily bound to views in the user’s schedule.

This was my first production experience with Rx and, as such, I faced a considerable ramp-up curve over the first month. If you are working on your first Rx project, expect to lose about 50% of your productivity in the first month and 20% in the second as you come to grips with the different style of data flow that Rx enables and learn which tool in the Rx toolbox is appropriate for each data scenario. The upside is that techniques learned for Rx on one platform are broadly applicable to others (which was a big reason we chose to work with it on this project).

Once I became more familiar with Rx, I started being able to model data transformations in my head and implement them with a bare minimum of fuss. This was gratifying, but it always felt like there were some sharp edges around the boundaries between Rx code and more traditional UIKit code governing things like user interaction. I’ve created a quick list of the major points to be aware of when considering RxSwift for your iOS project:

Pros:

  • Able to describe a common interface for model layer APIs between iOS and Android. This was the biggest win for us on this project, saving many dozens of hours of QA bug fixing time.
  • Avoid nested-closure hell that typifies complex asynchronous data transformations in Foundation/UIKit.

Cons:

  • Steep learning curve makes ramping new developers onto the project difficult (and toward the end of the project, completely impractical). This is the #1 reason you should consider avoiding RxSwift: when it’s crunch time, you won’t be able to add developers to the project unless they’re already Rx veterans.
  • Debugging Rx data transformations is horrible. When Rx is working as intended, it’s borderline magical. When it has a problem, the debugging process is considerably more difficult. Any breakpoint you hit within a data stream will present a 40+ entry backtrace stack with dozens of inscrutable internal Rx methods separating and obscuring the code you actually wrote.
  • Rx metastasizes throughout your code base. The entry and exit points where Rx interacts with UIKit are awkward and difficult to parse. We often found ourselves saying “oh, well if this service’s method returned an Observable instead of a plain value, we could do this particular transformation more easily…” and so Rx spreads to another class in your app.

In the end, we can’t say we dramatically cut development time by using RxSwift; it simply replaced one class of problems (maintaining cross-platform consistency) with another (figuring out how to best use RxSwift). We will be launching into the next phase of the project soon, updating the app for Google Cloud Next ’19. I’m sure I will have more to talk about once that effort has completed next year.

Getting Swift 4 KVO working…

Here are 2 common pitfalls to avoid when you’re trying to use Swift 4 Key-Value Observation for the first time:

Keep that Observation object!

Calling YourObject.observe(_:options:changeHandler:) returns an NSKeyValueObservation object. The observation will only continue for as long as that object exists! If you fail to store it in a persistent property or array, it will be deallocated immediately and no observations will occur.

Always Specify Options!

The second parameter of the observe() method has a default value, which the quick help documentation unhelpfully just calls default. It is the equivalent of providing an empty option set, which means your change handler closure will be invoked when the value changes, but you will not get any information about the new or old value! Even if this is the behavior you want, it is better to explicitly specify an empty option set so that someone else reading your code immediately knows not to expect a value in the change handler.
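Here is a minimal sketch illustrating both points; the class and property names are purely illustrative:

import Foundation

final class Downloader: NSObject {
    // Swift KVO requires an NSObject subclass and an @objc dynamic property.
    @objc dynamic var progress: Double = 0
}

final class ProgressWatcher {
    private let downloader = Downloader()

    // Pitfall 1: the observation must be stored, or it deallocates immediately.
    private var observation: NSKeyValueObservation?

    func startObserving() {
        // Pitfall 2: specify options explicitly so the change object actually carries values.
        observation = downloader.observe(\.progress, options: [.old, .new]) { _, change in
            print("progress changed from \(change.oldValue ?? 0) to \(change.newValue ?? 0)")
        }
    }

    func simulateProgress() {
        downloader.progress = 0.5
    }
}

let watcher = ProgressWatcher()
watcher.startObserving()
watcher.simulateProgress() // prints: progress changed from 0.0 to 0.5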

Handling Drag & Drop Raw Photos

There is no shortage of tutorials on the topic of Drag & Drop, but I wanted to get into a particular special case that creators of apps which accept dropped images should be aware of. If your user is a photographer who shoots with an SLR and imports their images in camera Raw format, those images will not be accepted for drop unless you handle them specially.

Specifically, Raw images conform to the kUTTypeRawImage UTI, which is not included in UIImage’s readableTypeIdentifiersForItemProvider. This is because UIImage cannot import Raw images directly; they require a detour through Core Image. Luckily, this is pretty easy to accomplish, as Core Image has a simple way to handle Raw images.

Note: My project uses a UICollectionView as the drop target, so I’m working with the methods in the UICollectionViewDropDelegate protocol. Working with custom views should be broadly similar, but I haven’t tried it out yet.

Step 1: Getting the Raw Image

When a Raw image is dropped on an app, the image data is not delivered the way UIImage-compatible images are via UIDropSession’s loadObjects(ofClass:completion:) convenience method. Instead, a URL is provided to your app. However, if you attempt to read or copy the file at that URL, it will always fail as unavailable. You may be tempted to use NSItemProvider’s loadItem(forTypeIdentifier:options:completionHandler:) method, but it has awful ergonomics in Swift (you can’t implicitly coerce a protocol into a conforming type), and even if you do force it to give you the URLs for the Raw images, they will all be unavailable and useless, possibly due to sandbox restrictions.

[gist https://gist.github.com/JoshuaSullivan/92941351244fbbea4ff2bc6ce4426f4f file=”load-images-wrong.swift”]

The correct way to do this is to iterate over the list of UIDragItems, filtering by those which have an NSItemProvider that responds affirmatively to hasItemConformingToTypeIdentifier(_:) for kUTTypeRawImage. You can then iterate over the filtered NSItemProviders and call loadFileRepresentation(forTypeIdentifier:completionHandler:) on each one:

[gist https://gist.github.com/JoshuaSullivan/92941351244fbbea4ff2bc6ce4426f4f file=”load-images-correct.swift”]

This method makes an accessible copy of the Raw image and returns the URL of the copy to the completion block. Why use this method and not, say, loadDataRepresentation(forTypeIdentifier:completionHandler:)? The answer is memory: Raw images tend to be very large (dozens of MB each), and if your user has dropped a bunch of them on your app, attempting to hold all of them in memory could cause the system to kill your app for using too much memory. Using the URL instead of the contents of the file consumes essentially no memory until a specific image needs to be loaded for processing. In tests, I was able to drop 10+ Raw images onto my app for processing and never saw memory go above about 70 MB, dropping back down to 20–30 MB when processing completed.

Warning: The URL provided by loadFileRepresentation only seems to be valid for the scope of the completion closure. You shouldn’t try to hold on to it and load the file later, because it will fail. Instead, copy the file into your app’s sandbox (such as the Caches directory) and use the URL of the local copy to access the Raw image later.
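Making that local copy can be as simple as the following sketch (the function name is my own):

import Foundation

// Copies the temporary file handed to loadFileRepresentation's completion handler
// into the Caches directory so it can be read again after the closure returns.
func cacheDroppedFile(at temporaryURL: URL) throws -> URL {
    let cachesURL = try FileManager.default.url(for: .cachesDirectory,
                                                in: .userDomainMask,
                                                appropriateFor: nil,
                                                create: true)
    let destinationURL = cachesURL.appendingPathComponent(temporaryURL.lastPathComponent)
    // Remove any stale copy left over from a previous drop before copying.
    try? FileManager.default.removeItem(at: destinationURL)
    try FileManager.default.copyItem(at: temporaryURL, to: destinationURL)
    return destinationURL
}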

Step 2: Converting the Raw Image

Now we have a local URL for the Raw images, but we still can’t do much with them since UIImage is unable to initialize with Raw image data. Thankfully, Apple has a simple and robust conversion mechanism based on Core Image. Instantiating a CIFilter using the init(imageURL:options:) method creates an instance of CIRawFilter, which can process all of the various Raw image formats that Apple officially supports. The filter’s outputImage property (a CIImage) can then be sent to a CIContext for rendering to a CGImage or used directly to apply filter effects and image adjustments:

[gist https://gist.github.com/JoshuaSullivan/92941351244fbbea4ff2bc6ce4426f4f file=”convert-raw-to-uiimage.swift”]

In practice, the first run of this conversion is pretty slow, taking up to several seconds as the CIContext sets itself up. Subsequent uses are very fast, requiring only a fraction of a second. At the end of this process, you have a UIImage that can be used just like any other dropped image.