2021-06-22: Shortcuts for Mac, coming soon to TimeStory

Shortcuts is finally coming to the Mac, and TimeStory will support it on day one. (Or as near as possible!)

I’m excited about this feature. I like the Shortcuts app in the Monterey beta so far, and I like that Apple now has a single, consistent automation story across their platforms. And thanks to iOS, it’s launching on the Mac with far more user and developer mindshare than Automator or AppleScript have had in a long time (if ever).

On WWDC Week

After the WWDC21 announcements, it was Shortcuts which prompted me to immediately grab the Monterey and Xcode betas. The “Meet Shortcuts for Mac” session was well done, and all I needed to get oriented. Despite my never having touched this API before, the first roughed-in shortcut or two came together fairly quickly. I was able to follow this up with a lab appointment where I had a fantastic conversation with a couple of Apple folks, including one of the original creators of the app. (Thanks, Ari. And as an aside, thanks, Apple; I signed up for four labs this year, never having done a WWDC lab before, and they were an incredibly helpful and motivational resource.)

On composable actions vs. features

I can sum up much of my enthusiasm for Shortcuts support by describing the other features I no longer have to build.

Need to find the next seven days’ worth of events in a TimeStory timeline, filtered for some tag, and add them to your Reminders? Need to import vacations or other events from a company calendar into your project plan? Want a hot key to quickly pop up a two-week timeline view, with a specific project open?

With robust automation support, these can all be done without any special features on my part, and I can respond to a customer request with a shortcut file rather than a promise to add it to my backlog. TimeStory doesn’t even need to request access to your calendar, reminders, or other private data, because it gets handed whatever data it needs.

On Shortcuts vs. AppleScript

I wanted to mention that TimeStory does have a basic suite of AppleScript commands. I built it mainly for my own use, but it can do a fair number of simple things. I’m not removing it. But the Shortcuts UI is a far better fit for simple data flows or sequences of steps than Script Editor. It’s more discoverable, easier to use, and better integrated into the system, and I’m much more comfortable with pointing users to Shortcuts than I would be with pointing them to AppleScript.

2021-06-08: WWDC21 Post-Day-One Thoughts

There was a lot in the keynote and SOTU, and a lot more in the release notes once we all downloaded the betas. Here are just a few of my personal highlights and notes, written this morning over coffee. (Overall, I’m pretty impressed by the updates, and eager to dig in to the new stuff.)

  • No new hardware. I stand here, money in hand, Apple!
  • TestFlight for Mac Apps… finally! (I can’t believe I forgot to put that on my wishlist.) I’ve never built proper beta-test support into TimeStory, so I’m eager to try it out.
  • Shortcuts for Mac is here. (I did have this one in my wishlist!) As hoped, they directly addressed AppleScript and Automator as well. I have sessions to watch, and work to do. I’m pretty excited about this.
  • Okay, that demo of Universal Control was just cool. I wonder if I have to do any specific work to allow dragging my data from one computer to another; my guess is no.
  • Very impressed by the serious leveling up of FaceTime. We’ve been using FaceTime to watch the Marvel TV shows together with our kids, so I’m eager to try out the new video-sharing in particular.
  • The new iPadOS multitasking UI looks great. I’d say 80% of my interactions with the current multitasking gestures are accidental, and 100% are frustrating. Visible and tappable buttons, and the slide-away access to the home screen, were both long overdue.
  • Foundation now gets a new AttributedString struct, corresponding to NSAttributedString, and SwiftUI Text supports it. I had recently been frustrated by this, which was why it was the one and only SwiftUI comment on my wishlist!
  • I’ve been following the Swift Concurrency evolution for some time now; none of it was a surprise, but I’m looking forward to the sessions and to see how it plays out in the APIs and sample code now that they’re released.
  • Xcode 13 now has a “Vim mode”. This got me unreasonably excited. (I’m typing this blog post in Vim right now.) Having tried the beta, well, it’s a great start, but it’s missing core Vi behaviors. The “.” command doesn’t repeat the last change, for example; the reason, presumably, is that the editor just isn’t structured around commands. This currently puts it in the uncanny valley of Vim emulation: good enough to get me into the flow, missing enough to kick me right back out again. I hope the Xcode team keeps iterating on this; what does work works well, and a fuller emulation would be amazing.
  • macOS Monterey is actually pretty big. But unlike the last several years, I don’t see anything that will require most Mac devs to do work just to stay in place. The work will all be in adding new things again, which is nice.

2021-06-03: WWDC21 Wishlist

WWDC is in a few days, as I write this. Joining in on an Apple-developer Internet tradition, I decided to chip in a very small and admittedly selfish wishlist.

  • I’d love to see new 16” MacBooks Pro in color. Maybe the normal boring choices plus one nice color, like recent iPhones Pro, or maybe even a full palette like the new iMacs (although that seems highly unlikely).
  • I’d love to see those new laptops announced during WWDC and available soon; my 2018 butterfly keyboard is, well, doing what those keyboardssss do.
  • It’s time for iPadOS to diverge more from iOS and give productivity apps more of the kind of built-in features and well-marked paths that we take for granted on the Mac. (This is especially on my mind as I’m bringing a Mac productivity app over to iPad.) Like many, I think Vidit Bhargava’s mockup nailed it.
  • It would be nice if SwiftUI acknowledged the existence of NSAttributedString. (That’s all the SwiftUI I’ll throw in here for now.)
  • It would be fantastic if Apple jumped off the annual macOS release train and switched to a year-round incremental update approach. They’ve climbed some big hills in the last few years, with the release of Catalyst and SwiftUI, with a UI overhaul, with big changes to OS and app security, with a new processor architecture; maybe it’s a good moment to pause.
  • Apple’s automation story is clearly centered on Shortcuts now, so I hope we finally get it on the Mac, but in a way which doesn’t ignore the fact that there are a lot of native Mac apps with existing scripting interfaces. I’ll leave the details vague, because I’m not sure how that looks!
  • In true daydream territory: we really need an update to the HelpBook spec, making it clear how to author offline help with the same features as Apple’s own apps, and a new cross-platform Help Viewer so I can deliver help in an iPad app with the same tools.

2021-04-10: Implementing More of SVG

Back in January 2020, I released TimeStory 1.7, which extended the set of built-in icons you could use to mark points in time on a timeline. The previous set of shapes had all been implemented as simple Core Graphics drawing code, but to make the process smoother, I switched to using SVGs for all new shapes.

This, of course, required an SVG rendering library[1]. So, naturally, I wrote one, named Yass. It was pretty minimal; I basically opened my SVGs up in a text editor, opened the SVG2 spec, and implemented exactly what was needed and no more.

Over the last month, I’ve been working towards TimeStory 2.5, with a major new (and often-requested) feature: custom event icons. You should be able to add any image you’d like, in any of several common formats, including SVGs. There are tons of free SVG icons all over the Web that people might want to use. So I set about grabbing a bunch of them and trying them out with my rendering library. And wow, did almost none of them work!

SVG is a very large spec to implement, even if you focus purely on drawing paths and shapes, and ignore things like animation and scriptability. There are a few predefined shapes, plus the ability to compose arbitrary paths from lines, Bézier curves, and arcs. These can all be grouped, transformed, and styled in a few different ways. The structure is all XML, but much of a typical SVG file is non-XML syntax embedded in attributes, including its own path-drawing commands and a bunch of CSS syntax. Most significantly to an implementor, many parts of SVG also have variant forms designed to make it more compact or easier to hand-author.

The net result is that you can draw the same shape, with the same style, in many different ways. And if you look at SVGs around the Web, you’ll find that you have to support quite a few of them—different authoring tools make different choices. For example:

  • Every path command has two forms: one for absolute coordinates, and one for coordinates relative to the prior end point.
  • When the same path command is repeated, you can omit its name, just stacking up sets of parameters.
  • When one Bézier curve follows another, an alternate command lets you choose to omit the first control point if it’s just a reflection of the prior curve’s last one.
  • CSS-based attributes tend to have many allowable syntaxes. For example, translate(1,0) and translate(1) and translate(1 0) and translateX(1) all mean the same thing, and are all found in the wild.
  • Many values can be given as XML attributes (<rect fill="red" ...>) or CSS inline styles (<rect style="fill:red; ..." ...>). The SVG2 spec recommends the latter, but it looks to me like the former is more common; it’s certainly easier to parse!
  • Path commands allow omission of any unnecessary whitespace. If you see “10-10” where you’re expecting two numbers, it means (10, -10), but if you see it where you’re expecting two Boolean flags and a number (as in the A command), it means (1, 0, -10). That surprised me; I’d initially used one token-splitting pass followed by a parsing pass, but you can’t split on tokens without knowing the command you’re parsing.
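That last point means the scanner has to be parameterized by the command it’s currently reading. A rough sketch of the idea (not Yass’s actual parser; the function name and its arcFlagsAt parameter are just for illustration, and exponent notation is ignored):

```swift
// Sketch: scan SVG path numbers, treating some parameter positions as
// one-character arc flags. Illustration only; exponents and compressed
// decimals ("1.5.5") are not handled.
func scanNumbers(_ s: String, arcFlagsAt flags: Set<Int> = []) -> [Double] {
    var out: [Double] = []
    var i = s.startIndex
    while i < s.endIndex {
        let ch = s[i]
        if ch == " " || ch == "," {          // separators
            i = s.index(after: i); continue
        }
        if flags.contains(out.count) {       // arc flags: exactly one digit
            out.append(ch == "1" ? 1 : 0)
            i = s.index(after: i); continue
        }
        var j = i                            // one signed decimal number
        if s[j] == "-" || s[j] == "+" { j = s.index(after: j) }
        while j < s.endIndex, s[j].isNumber || s[j] == "." {
            j = s.index(after: j)
        }
        if j == i { break }                  // not a number; stop here
        out.append(Double(String(s[i..<j])) ?? 0)
        i = j
    }
    return out
}

scanNumbers("10-10")                      // [10.0, -10.0]
scanNumbers("10-10", arcFlagsAt: [0, 1])  // [1.0, 0.0, -10.0]
```

The same input, two different token streams, purely because of which command is being parsed.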

I had also been totally missing a couple of path commands which I hadn’t needed at all at first. Of interest is the elliptical arc path segment command, which requests a rotated elliptical arc with a notably complex set of parameters, requiring a bit of math to map into the primitives offered by Core Graphics. (Everything else—lines, cubic curves, quadratic curves—maps pretty much directly.)
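A sketch of that arc math, following the endpoint-to-center conversion in the SVG implementation notes. The ArcSegment struct and all names here are mine, not Yass API; degenerate cases and flipped-coordinate handling are left out:

```swift
import CoreGraphics

struct ArcSegment {
    var from: CGPoint, to: CGPoint
    var rx: CGFloat, ry: CGFloat
    var rotation: CGFloat          // x-axis rotation, in radians
    var largeArc: Bool, sweep: Bool
}

func add(_ a: ArcSegment, to path: CGMutablePath) {
    var rx = abs(a.rx), ry = abs(a.ry)
    let cosP = cos(a.rotation), sinP = sin(a.rotation)

    // Step 1: express the midpoint-relative start point in the ellipse's frame.
    let dx = (a.from.x - a.to.x) / 2, dy = (a.from.y - a.to.y) / 2
    let x1 = cosP * dx + sinP * dy
    let y1 = -sinP * dx + cosP * dy

    // Step 2: scale the radii up if they're too small to span the endpoints.
    let lambda = (x1 * x1) / (rx * rx) + (y1 * y1) / (ry * ry)
    if lambda > 1 { rx *= sqrt(lambda); ry *= sqrt(lambda) }

    // Step 3: solve for the center, still in the rotated frame.
    let num = rx * rx * ry * ry - rx * rx * y1 * y1 - ry * ry * x1 * x1
    let den = rx * rx * y1 * y1 + ry * ry * x1 * x1
    let sign: CGFloat = (a.largeArc != a.sweep) ? 1 : -1
    let coef = sign * sqrt(max(0, num / den))
    let cx1 = coef * rx * y1 / ry
    let cy1 = -coef * ry * x1 / rx

    // Step 4: map the center back to user space; compute start and sweep angles.
    let cx = cosP * cx1 - sinP * cy1 + (a.from.x + a.to.x) / 2
    let cy = sinP * cx1 + cosP * cy1 + (a.from.y + a.to.y) / 2
    let theta1 = atan2((y1 - cy1) / ry, (x1 - cx1) / rx)
    var delta = atan2((-y1 - cy1) / ry, (-x1 - cx1) / rx) - theta1
    if !a.sweep && delta > 0 { delta -= 2 * .pi }
    if a.sweep && delta < 0 { delta += 2 * .pi }

    // Core Graphics only draws circular arcs, so draw a unit-circle arc and
    // let a transform translate, rotate, and stretch it into the ellipse.
    let t = CGAffineTransform(translationX: cx, y: cy)
        .rotated(by: a.rotation)
        .scaledBy(x: rx, y: ry)
    path.addArc(center: .zero, radius: 1, startAngle: theta1,
                endAngle: theta1 + delta, clockwise: delta < 0, transform: t)
}
```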

After some iteration, Yass now correctly renders every SVG I’ve tried from FontAwesome, from GitHub Octicons, from Material Design Icons, and a bunch I’ve tried from Web resources like Flaticon. It’s still far from implementing the full SVG and CSS specs, but it seems to me like the current set is a good match for what common authoring tools use. It was a fun project, with a very visual and satisfying payoff for each fix.

It looks solid enough to tag Yass at version 1.2 and build the TimeStory 2.5 release atop it. (I hope to have it out soon; it contains more changes than just this, of course.) The good news is, even though SVG is a large standard, it’s still possible to handle a lot of what’s out there with a fairly simple implementation.

  1. Since Xcode 12, you’ve been able to put SVGs into asset catalogs; I’ve never used it, as it requires a minimum of Catalina, and TimeStory still supports Mojave. In any case, I knew I was ultimately going to need to add user-imported SVGs, which makes asset catalogs irrelevant. 

2021-01-22: Smooth mouse movements, and a library to intercept mouse events

Smoothing mouse movements for a demo video

The problem statement: I was trying to capture a demo video of my Mac app. But I wanted the mouse movements in that video to look smooth and precise, not an easy job for a human.

I found cliclick, a command-line program which can smoothly move the mouse around and inject clicks and keystrokes, and installed it from Homebrew. You can write a single cliclick command which sequences mouse movements, clicks of different types, keypresses, and pauses, or you can build a small shell script to sequence multiple cliclick actions with other things. (It works great to start the script with a pause, so you can kick it off and then hide everything but the app you’re demoing.)

This meant that I needed a sequence of click coordinates on the screen. You can get these a few different ways: position the mouse and run cliclick p, for example, or run the Digital Color Meter app and enable Show Mouse Location (under the View menu).

Writing code to intercept mouse input

But this gave me an excuse to play around with Quartz Event Services, an API built into macOS which lets you monitor, filter, transform, or synthesize user input events—mouse actions, keyboard actions, and more. All I cared about was intercepting mouse actions and printing them out so I could compose my demo script.

The core API of interest, defined in Core Graphics, is:

// CGEvent.tapCreate
class func tapCreate(tap: CGEventTapLocation,
                     place: CGEventTapPlacement,
                     options: CGEventTapOptions,
                     eventsOfInterest: CGEventMask,
                     callback: CGEventTapCallBack,
                     userInfo: UnsafeMutableRawPointer?) -> CFMachPort?


This accepts a callback (C-style plain function pointer plus optional untyped user pointer) and a set of event types (defined by the enumeration CGEventType), and returns a Mach port object which you must add to a run loop. That run loop will then dispatch callbacks for every intercepted event.

(This tap will only be enabled if the user allowed it in System Preferences, under the Input Monitoring settings. The first time your program runs, macOS will prompt the user; if those settings change, your program must restart.)
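For reference, a bare-bones passive tap using the raw API on its own looks roughly like this; a sketch only, with error handling and the permission check omitted:

```swift
import CoreGraphics

// Passive, listen-only tap for left-mouse-up events.
let mask = CGEventMask(1 << CGEventType.leftMouseUp.rawValue)
guard let port = CGEvent.tapCreate(
    tap: .cgSessionEventTap,
    place: .headInsertEventTap,
    options: .listenOnly,
    eventsOfInterest: mask,
    callback: { _, _, event, _ in
        print("mouse up at \(event.location)")
        return Unmanaged.passUnretained(event)
    },
    userInfo: nil)
else { fatalError("tap refused (check Input Monitoring permission)") }

// The returned Mach port must be wired into a run loop to get callbacks.
let source = CFMachPortCreateRunLoopSource(nil, port, 0)
CFRunLoopAddSource(CFRunLoopGetCurrent(), source, .commonModes)
CFRunLoopRun()
```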

EventMonitor is a new Swift wrapper library I wrote to encapsulate this API safely and set up a passive event tap. It’s published as an SPM package, but it’s really just a single, small file with embedded documentation comments.

To use it, create an EventMonitor object, register one or more handler functions or closures by their desired event types, and call start(). It will set up the tap, validate that the tap is active (allowed by the user), route messages to your handlers, and clean up when deallocated.

Here’s a little command-line tool I made, based on that library. It sets up an EventMonitor, reporting each mouse-up’s coordinates as c:X,Y, which is how cliclick expects click events. I also detect a press of a number key and print out a little divider, so as I navigated the screen, I could delimit the output. (As a command-line tool, I needed to end it with a call to RunLoop.run, or nothing would happen; GUI apps would of course already have an active run loop).

import Foundation
import EventMonitor

let tap = EventMonitor()

tap.handle(.leftMouseUp) { event in
    // print each click in the c:X,Y form cliclick expects
    let p = event.location
    print("c:\(Int(p.x)),\(Int(p.y))")
}
tap.handleKeyDown { str, _ in
    if let first = str.first, first.isNumber {
        print("--- \(str) ---")
    }
}

try! tap.start()
print("event tap installed")

RunLoop.current.run()
Pretty simple. Event taps are much more powerful than this, of course, but this was a nice foray into an aspect of the Mac API I hadn’t touched before.

2020-12-30: AppKit notes: NSScrollView floating subviews

This is a collection of notes on using floating subviews in a scroll view within an AppKit Mac app. When you add a floating subview, it scrolls in sync with the document along one axis, while remaining fixed in position along the other axis, and it does so efficiently and with minimal code.

I wanted to publish these notes after adopting floating subviews earlier this year and finding some gaps in the documentation, some surprising behaviors, and not many good search results for my questions. (AppKit developers often run into this, especially for technologies which are so different from their UIKit equivalents.) The below notes are an attempt to capture, in one place, some key aspects of how floating subviews work and some things to watch out for.

Note that this will all make a lot more sense if you’re already familiar with NSScrollView; in particular, with how it works with NSClipView to implement scrolling over its document view (which is all flattened together in the much simpler UIKit model).


Among the views in a TimeStory document window are some which float along one axis; for example, the time index at the top, which scrolls left and right with the document body but stays at the same place vertically.

Back during 2.2, I switched some of these views to start using floating subviews; previously, I had been using bounds-change notifications to directly synchronize their layouts, and this switch simplified the code and improved scrolling performance. Worth doing, but with a few gotchas waiting; see “Caveats”, below.


  • A good but not very deep overview can be found back in WWDC13, when floating subviews were introduced.
  • In addFloatingSubview(_:for:), the supplied axis names the axis along which your floating subview does not scroll.
  • This will result in your NSScrollView first creating, if needed, a direct subview of private type _NSScrollViewFloatingSubviewsContainerView, as a sibling to and Z-ordered on top of its clip view (NSScrollView.contentView). Your floating subviews are added to this private container. You should ignore this container except when debugging, but it helps to understand how it works.
  • AppKit may create more than one such container, if you create these subviews over time. Don’t assume all your floating subviews have the same parent, and don’t assume that they don’t.
  • The floating-subview containers will synchronize their frame rect (position) with the clip view and their bounds origin’s coordinate along the non-floating axis with the clip view’s corresponding bounds origin coordinate. The other bounds origin coordinate will be set to zero. The view’s scale will always be 1.0; that is, its bounds size will always equal its frame size. This is important.
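In code, all of this reduces to a single call. A sketch (TimeIndexView and headerRect are stand-ins, not real API):

```swift
import AppKit

// Float a time-index header: .vertical names the axis it does NOT scroll
// along, so it stays pinned at the top while tracking horizontal scrolling.
let header = TimeIndexView(frame: headerRect)
scrollView.addFloatingSubview(header, for: .vertical)
```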

Caveats

  • You can’t really mix floating subviews with NSScrollView.magnification. If the magnification is anything but 1.0, then your clip view’s bounds size will be scaled relative to its frame size, but the floating subview container view’s bounds won’t. This means that the floating subviews don’t magnify, and that relative placement of floating subviews will not match relative placement within your document view. (I ultimately resolved this by doing my own math on subview placement and magnifying within the document view and floating views.)
  • After programmatically scrolling your main clip view, if the floating subviews are no longer aligned, calling NSScrollView.reflectScrolledClipView(_:) will synchronize them. I found this necessary in some cases.
  • It would be nice to be able to use NSView.backgroundFilters to easily apply a Core Image filter (such as a blur) under the floating subview and over the scrolled document view. This won’t work, since the document view and the floating view have sibling parents. Layer background filters work over ancestor layers or sibling layers, but not over “cousin” layers like this.

2020-04-13: "Yass", a new SVG (subset) library for Swift

I’ve just released a new, small Swift library for loading and rendering a useful subset of SVG on macOS or iOS. I named it “Yass”: “yet another Swift SVG [library]”.

That “useful subset” consists of icons, drawn out of common shapes and paths, with simple colors; no scripting, gradients, text, animation, etc. See example icons below. I built it a few months ago, adding it to version 1.7 of my Mac app TimeStory, which has used it since, and I thought others might find it useful.

Yass is:

  • Clean and “Swifty”: Yass mainly consists of a set of value types modeling SVG shapes, paths, and attributes, along with code to parse them from SVG and Core Graphics extensions to render them. See below for details.
  • Not based on the SVG DOM: that’s required for scripting or interop with other Web technologies, but I just didn’t really care about those.
  • Tiny: just a few files
  • Focused: implements just enough of an SVG subset to render a set of simple icons, created in and exported by Sketch. (More complex drawings, or other SVG generators, may use features not included here). See below for examples.
  • Able to accept colors and sizes when rendering, to allow stamping a common SVG asset in different places
  • Standalone and platform-independent: depending only on Foundation (for XML parsing) for loading and manipulating shape data, and Core Graphics for rendering (I currently build it into the main AppKit-based TimeStory app for Mac, an incomplete UIKit-based TimeStory app for iOS, and even an internal command-line tool)

Yass’s design

As mentioned above, Yass uses Swift value types to model SVG data; its model is built around a few enums defining the available elements, shapes, and path instructions, and a few structs packaging up bundles of attributes. Here are a couple of examples:

public enum SVGElement {
    indirect case svg(String?, SVGFragmentAttributes, SVGPresentationAttributes, [SVGElement])
    indirect case group(String?, SVGPresentationAttributes, [SVGElement])
    case graphic(String?, SVGPresentationAttributes, SVGGraphic)
}

public enum SVGGraphic {
    case path([SVGPathInstruction])

    case rect(CGRect)
    case circle(c: CGPoint, r: CGFloat)
    case ellipse(c: CGPoint, rx: CGFloat, ry: CGFloat)
    case line(p1: CGPoint, p2: CGPoint)
    case polyline([CGPoint])
    case polygon([CGPoint])
}

This means that the drawing and path-construction code can use switch statements to cover all bases. For example, this is from my CGContext extension:

public func svg_buildPath(_ graphic: SVGGraphic) {
    switch graphic {
    case .path(let instrs):
        // walk the instruction list; svg_add is a hypothetical stand-in
        // for the per-instruction switch, elided from this excerpt
        instrs.forEach { svg_add($0) }
    case .rect(let rect):
        addRect(rect)
    case .circle(c: let c, r: let r):
        addEllipse(in: CGRect(x: c.x - r, y: c.y - r,
                              width: r * 2, height: r * 2))
    case .ellipse(c: let c, rx: let rx, ry: let ry):
        addEllipse(in: CGRect(x: c.x - rx, y: c.y - ry,
                              width: rx * 2, height: ry * 2))
    case .line(p1: let p1, p2: let p2):
        move(to: p1)
        addLine(to: p2)
    case .polyline(let points):
        addLines(between: points)
    case .polygon(let points):
        addLines(between: points)
        closePath()   // polygons close their subpath; polylines don't
    }
}
Contrast this approach with an object-oriented or protocol-oriented approach, more common in other SVG libraries that I looked at, where each SVG element or shape exposes a “draw” or “addToPath” method and encapsulates its implementation.

Here, drawings are described by data structures which are independent of destination, and their types have no knowledge of Core Graphics (other than the use of CGRect, CGPoint, and CGFloat). The actual path-drawing code lives in one file, which uses Swift switch statements to pattern-match, destructure, and recurse. Now imagine extending this library to support SwiftUI by creating Path objects, or even Cairo for non-Apple platforms. Each of those would again be a simple, decoupled set of pattern-matching code.
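For instance, the SwiftUI version imagined above could look roughly like this; a hypothetical sketch, not part of Yass, with only a few cases shown:

```swift
import SwiftUI

// The same data model, dispatched by one more switch, this time
// building a SwiftUI Path instead of drawing into a CGContext.
func svg_path(_ graphic: SVGGraphic) -> Path {
    Path { p in
        switch graphic {
        case .rect(let rect):
            p.addRect(rect)
        case .line(p1: let p1, p2: let p2):
            p.move(to: p1)
            p.addLine(to: p2)
        case .polyline(let pts), .polygon(let pts):
            p.addLines(pts)
        default:
            break   // remaining shapes follow the same pattern
        }
    }
}
```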

Example shapes

Here is a screenshot of TimeStory’s shape picker. The shapes following the octagon are all rendered by Yass from embedded SVG assets. (The first few are older shapes, which I implemented with direct Core Graphics path-drawing code; I didn’t feel the need to delete that code.)

When you add one of these to your document, it is given a size and color based on other properties.

Screenshot of shape picker from TimeStory 2.0

The code

I use GitLab for all my development, private and public. Find Yass here. (I will probably set up a GitHub mirror at some point.) I have released it under the MIT license, so you can use this in your nonfree, closed-source apps, if you find it useful.

I’ve recently started using the Swift Package Manager to organize my own internal libraries. It’s quite nice, and Xcode’s integration with SPM works reasonably well. So Yass is packaged for SPM, with two targets, a static library and a dynamic framework; I use them both (the static library for my TimeStory CLI target, and the framework for my Mac and iOS targets).

(I don’t, at present, use CocoaPods or Carthage for anything, so no support for them out of the box.)

I’m happy to hear any input on this library, or just a quick note if you’re using it, but feel free to take it, modify it, and use it as you wish.

2019-12-31: TimeStory Year One

2019 was the year I switched to full-time independent software development and built TimeStory, my Mac app, with the help of my wife Hemi. I thought it would be fun to write a bit about this experience as the year comes to a close.

(Hemi also wrote her own 2019 reflection; there’s more context there about this, about her own job change this year, and more.)


The core idea behind TimeStory, for me, goes back to the several years I spent as an engineering manager starting in 2009. I was given a chance to switch to this management role during a time when we had a lot of interesting product work, a lot of remote people to work with, and a changing business environment. In those days, I often used Excel to map out upcoming work on a date grid, as people often do. This gets cumbersome to edit, and difficult to extract data from.

At one point, I fired up Xcode and prototyped a simple timeline sketching app for iOS. I didn’t get very far, and abandoned it. But the idea felt sound, and persisted. I wanted an app for sketching timelines. On the one hand, it shouldn’t just be a plain grid of cells or canvas of shapes; it should have a data model reflecting events in time. On the other hand, I didn’t care about having it keep me honest or help me with detailed project planning; I left that to the project managers.

Fast forward to 2018. Hemi was responsible for a large team which was building software across multiple global markets in parallel, and had many moving parts to keep track of. She was trying out various timeline-authoring tools, but hadn’t found one which checked off all of her boxes: among other things, something she could leave open all the time, that stayed fast and responsive even with large numbers of events, and that offered quick entry of data.

So we started talking about building something new. I was enjoying my consulting gig at this time, working with great people on some interesting problems, but it was basically a coding role on an existing system. This would be a chance to start from scratch and craft something new that I knew people could use, and it really sounded exciting. On top of that, I knew this was a great chance to build a native Mac app, something new to me. I had shipped some iOS and Apple Watch code, and had delivered Java code on the Mac, but never anything native.

Then, Hemi found a great new job. Our second and youngest kid would be heading to college in 2019. Everything felt ready. So we made the decision, and I let the contract end in December.


I started right after the winter holidays. I blocked out a couple of weeks to dig in, figure out what I needed to learn, and make plans.

My primary tool at this point was Workflowy; I’ve never treated it as a task manager, just as a great, smooth, outlining tool. My outline grew to many branches, including everything from UX flows, to links to other apps, to a basic architecture, to the core data model, to proposed names for the product. There was a lot of iteration and backtracking, which is cheap when you’re working by yourself and don’t risk wasting other people’s time.

I specified a data model and basic document format, and hand-created the first few documents.

Then I fired up Xcode. I created the data model types, the file I/O logic, and a bit more, and wrapped it up in a purely textual command-line interface. I wrapped that CLI behind some test scripts (in Bash) which validated file reading, file writing, and preservation of data between the two, and I was off to the races! I proceeded to write document layout logic and even Quartz-based document rendering, extending the CLI to be able to spit out PNG renderings of document files, before ever having a line of GUI code.

Each of these areas would be incrementally changed, refactored, or even partially rewritten throughout the year, but never underestimate the value of having something that works at all times to keep yourself moving and your design sense clear. And especially never underestimate the value of adopting automated tests early in your development cycle—they transform design assertions into source-controlled code. (Even if you don’t have CI in place, and have to run them manually.)

Integration tests, in particular, are very powerful. If you have an app where you can run a lot of end-to-end use cases without any GUI involved, and make stable assertions at the level of program inputs and outputs, you can quickly build a lot of valuable test collateral that survives GUI rework and even internal API refactorings. Those tests caught many mistakes over the year, have grown to include a suite of both good and malformed document files, and were the best investment I’ve made. After all, if I ship a bug in the UI, I have to push a patch, but if I ship a bug in the file-saving code, you lose data.

It wasn’t till February that I created a Mac GUI app target and started writing AppKit code. This was mainly an exercise in learning AppKit and wrapping up my existing core, which worked pretty well. The document object went into an NSDocument subclass, the document rendering code into an NSView subclass, and so on. I became initiated into the mysteries of NSScrollView and NSClipView, I learned to supply NSToolbar with items, I puzzled over whether NSCell was really deprecated, and so on.

And I like it. I actually think I like AppKit better than UIKit, despite all its legacy and complexity, though I’m not sure I could tell you why. Maybe because of the legacy—sometimes I just enjoy the feel of an API with real history to it. People have solved a lot of problems with that code.

(After 1.0, I also added a simple iOS (UIKit) target to my build, wrapping the same core logic in a minimal UI. I’ll talk more about my plans for that below, but I felt it was important to add that to my process so I could avoid adding non-portable logic to the core layer.)


We didn’t settle on the name until June. Throughout the whole process, I maintained an outline in Workflowy with proposed names we’d brainstormed, cross-checking domain names, Twitter handles, Google search results, Mac and iOS app store searches, and more.

“TimeStory” was Hemi’s invention. I came to love it. It evokes storytelling; it is a clever inversion of “story time”; it had an .app domain available; it’s easy to remember.

The App Store

I have no experience in directly selling software from my own site. I want to get there. But I’m not there yet. The Mac App Store’s 30% tax isn’t great, but for someone like me, the handling of worldwide payments, taxes, and sales support is worth a lot, and the discoverability (as noted below) is actually better than I’d expected.

Release and Sales

I released it in July, and got my first sale on day 1, from someone who found it via App Store search. Since then, there’s been a pretty steady rate of customers who found it that way. My keywords are obvious, and I don’t even know how to spell ASO, so there’s obviously at least some demand.

And I started getting feedback.

I got my first negative feedback, a pretty angry email, early on. But this customer had run into real problems.

I do most of my work on my MacBook and tend to use the built-in trackpad for everything. At that point, I had done too little testing with an external mouse; on macOS, mice cause the default transparent overlay scrollers to become actual persistent scroll bars, and my code didn’t handle that right, causing misaligned and cut-off layouts in some cases. Moreover, I relied on navigation gestures that are not obvious to non-trackpad users. It’s actually a bit embarrassing to write that I neglected mouse support on a Mac, but there you have it. The fixes were quick and, as such fixes usually do, led to a better product for everyone, as I improved the navigation controls and layout code overall.

I was, and remain, very grateful that this user took the time to write and tell me that my app looked incomplete and unprofessional rather than just delete it and one-star me in the App Store.

I also got happy emails and reviews. My first five-star review (alas, not on the US App Store) led to an email follow-up thread, some great feature requests, a better product, and a tremendous personal boost. I’m actually surprised how many positive email messages I’ve gotten, often with feature requests or concrete feedback. It’s the best part of the work. It’s made me think more about reaching out to the developers of the apps I love.

And I iterated. My goal this year was really a “soft launch”: launch with a good, useful, but simple product, and iterate as I accumulate user input, use cases, and Mac development expertise. I think I hit that target: the December version of TimeStory is vastly better than the July version was, my ticket backlog is full of great things that I know I can ship, and my confidence is high.

Side note: WWDC2019

Just as I was stabilizing for 1.0, WWDC happened. I had planned it as a low-coding week, so I could keep up with videos and blogs as they appeared, and boy did Apple deliver.

I had, of course, followed the “Marzipan” rumors all along, and Catalyst was pretty much as expected. It’s very cool, but it’s not the right tool for TimeStory, for reasons hinted at earlier. I want a standard, document-based Mac app, which just looks and feels very different from a document-based iOS app. Pragmatically, I do a lot of custom mouse and keyboard handling, which sounds possible but not necessarily easy in Catalyst. And, personally, any “cross-platform” solution always has a high bar to clear for me, just because, as a long-time Java developer, I know how much work there is in that last 20%.

SwiftUI took me by surprise. I’d missed any hint of those rumors. It looks like a fantastic start. I’ve started playing with it on the side; for smaller, more focused apps, it’s exciting. I love that they announced it on day one with bidirectional integration with UIKit and AppKit, so we can adopt it one view controller at a time. I don’t love how quickly I can crash the preview or how hard it is to implement basic macOS or iOS behaviors sometimes, but those things will get fixed.

(Coding in SwiftUI also really takes me back to early-2000s C++, when new template-based libraries were arriving faster than compilers could improve their diagnostics. One typo could produce pages of errors, sometimes pointing at lines unrelated to the mistake. But that will get fixed too.)


So, what’s next?

I need to start marketing more.

I want to deliver an iOS version or two. My plan has been to start with a read-only “TimeStory Viewer”, to help users who want to review or present a timeline, and as a first technical step towards a full-parity version (at least on iPad). I have working but incomplete iOS code now, but a good UX still needs a lot of work. It will also slow down future releases: once it’s out, I’ll need same-day releases of the iOS and macOS apps, or people will use new features on one platform and find their documents stranded, unable to be opened on the other.
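One common way to keep a document from being stranded *silently* is a version gate in the file header: a file written by a newer app records the minimum reader version it needs, and an older app refuses it with a clear error instead of mangling it. A hypothetical sketch, with names and fields of my own invention rather than TimeStory’s actual format:

```swift
import Foundation

// Hypothetical document header: the writer records both the format
// version it used and the oldest reader that can still open the file.
struct DocumentHeader: Codable {
    let formatVersion: Int     // version this file was written with
    let minReaderVersion: Int  // oldest app format version that can read it
}

enum OpenError: Error {
    case tooNew(needsFormatVersion: Int)
}

// The format version this build of the app understands (illustrative).
let appFormatVersion = 3

// Refuse files that demand a newer reader, rather than misreading them.
func checkReadable(_ header: DocumentHeader) throws {
    if header.minReaderVersion > appFormatVersion {
        throw OpenError.tooNew(needsFormatVersion: header.minReaderVersion)
    }
}
```

The `minReaderVersion` field is what lets a newer app add backward-compatible features without cutting off older readers: it only bumps when a change is genuinely unreadable by old code.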

There will be a 2.0. I’ve got a big list of features planned. Some of them will be transformative, I think; they will open the product to many new users and applications. The foundation is solid, and I know how to get them shipped.

Finally, TimeStory was designed, from day one, to be part of a larger system of software. I wanted it to be scriptable, to have solid importers and exporters, and to have its file format and data model in a form that I could reuse in multiple apps. 2020 is the year that I hope this starts to pay off. I hope to publish extensions and examples that tie it in to journaling, time-oriented data visualization, and other domains.

There’s a lot to do. I probably won’t get everything I want in 2020, but I like where I am, and I feel good about what’s to come.


[Photo: path with leaves and snow]