Michael's Blog

No, Apple Does Not Share Your FaceID Data

The notch full of sensors on iPhone X enables Face ID to capture accurate face data by projecting and analyzing over 30,000 invisible dots to create a depth map of your face and also captures an infrared image of your face. A portion of the A11 Bionic chip’s neural engine — protected within the Secure Enclave — transforms the depth map and infrared image into a mathematical representation and compares that representation to the enrolled facial data.

Meanwhile, from the same notch, third party developers can access a coarse 3D mesh geometry matching the size, shape, topology, and current facial expression of the user’s face.

These are 2 different things.

For more see Apple’s support article on Face ID and their developer documentation on ARKit Face Tracking.
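If you’re curious what that developer-facing data looks like in code, here’s a rough sketch using ARKit’s face tracking API (the class and the print statement are mine, for illustration only):

import ARKit
import UIKit

class FaceMeshViewController: UIViewController, ARSessionDelegate {

    let session = ARSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Face tracking needs the TrueDepth camera in that notch.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // The coarse mesh: vertices tracking the size, shape and current expression of the face.
            print("Face mesh updated, \(faceAnchor.geometry.vertices.count) vertices")
        }
    }
}

That mesh is all a third party app sees; none of the Face ID data described above is exposed to it.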

Time to Start Developing Apps for Apple Watch

Apple Watch is a huge success: we know they’ve sold a lot, and people love them. Apple has managed to make the most capable smartwatch the only smartwatch that regular people, not just early-adopter technology enthusiasts, actually want to wear: watch people, fashion people, getting-fit people, and even people with smaller wrists.

If you always want to be there for your users, it makes perfect sense to target Apple Watch. It’s the computer that is always there.

Apple Watch, The Computer That's Always There

I’ve a phone in my pocket most of the time, Alexa is always waiting for me in the kitchen and I spend hours every day in front of an old-fashioned PC, but it’s the watch that stays with me all day long and goes places I’d never dream of taking anything else. But more than just sometimes being the only computation available, it’s always the least intrusive computation available. Every notification I can glance at and ignore, every smart reply I can dispatch with a single tap, every app I can get in and out of in a second without taking out the phone and taking me out of a moment is a big win.

As successful as Apple Watch has been though, it has so far failed as an app platform. Apple Watch is built on the same technology that runs iPhone, and the same tools that developers use to make iPhone apps are used to make Apple Watch apps, so why are there so few Apple Watch apps, and why are so many really bad?

The original watch hardware was very limited, and app support even more so. Apps actually ran on your phone and were sort of beamed onto the watch’s screen. If you managed to find the apps on the terrible app honeycomb grid, they loaded really slowly and performed terribly. A lot of developers and users were instantly put off third party apps, but the watch got by with the excellent built-in notifications and fitness tracking functionality.

Each release of watchOS and every hardware revision has seen huge improvements to third party app support though: apps actually running natively on the watch, custom watch face complications, new capabilities, better performance, and better ways of discovering and launching apps. But the great new app platform imagined when the watch was first announced has yet to arrive, and many apps on your Apple Watch today likely still date back to the original release. I think we’re at a point now, with Apple Watch Series 3 and watchOS 4, where there’s a huge opportunity to reach users with compelling new experiences that are only possible on Apple Watch.

Why Can't We Just Pay for Free Unlimited iCloud Storage?

Over the past few years Apple has proven that they’re willing to try charging higher prices for iPhone. Just a couple of years ago the 6s Plus was priced from $749, a year later the 7 Plus was available from $769, and now the 8 Plus is on sale from $799. Meanwhile, the market has shown it’s happy to pay those prices and I suspect it will prove so once more with the impending $999 iPhone X.

What I’d like to see next year is for Apple to charge us even more money for phones that don’t cost them anything extra to produce, and here’s why.

The experience of figuring out that you might need an iCloud subscription, figuring out how much space you might need, paying for it, dealing with the inevitable failures to renew when your card expires or your balance is low, and getting warnings about backups failing is awful. I’d love to see Apple try to figure out the cost of providing all new iPhone users with unlimited (with an asterisk that says there are actually some limits) iCloud storage and build it into the price of the phone.

I pay Apple $35.88 for iCloud storage each year, I’d happily pay $99 more for the phone instead.

Audio Degapinator - The Poor Dev’s Smart Speed

I’ve been listening to podcasts with Overcast’s Smart Speed feature turned on for long enough to have saved 55 hours of not listening to the silences between every podcast host’s thoughts.

I decided to spend 1 of those hours today making my own very simple, very limited, but surprisingly effective AVAudioPlayer version of that feature. I’ll explain below how it works, but you can check out the full Swift iOS source (there’s not much to it) on GitHub: Audio Degapinator on GitHub.

AVAudioPlayer offers features for audio level metering:

/* metering */
    
open var isMeteringEnabled: Bool /* turns level metering on or off. default is off. */

open func updateMeters() /* call to refresh meter values */

open func peakPower(forChannel channelNumber: Int) -> Float /* returns peak power in decibels for a given channel */

open func averagePower(forChannel channelNumber: Int) -> Float /* returns average power in decibels for a given channel */

And for adjusting playback, including:

open var rate: Float /* See enableRate. The playback rate for the sound. 1.0 is normal, 0.5 is half speed, 2.0 is double speed. */

My code then (roughly sketched after this list):

  • turns metering on
  • updates meters with a timer
  • checks if there is currently silence playing using averagePower
    • increases the playback rate 2x until the silence ends
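A minimal sketch of that loop, assuming anything quieter than about -40 dB counts as silence and polling the meters ten times a second (the real thing on GitHub differs in the details):

import AVFoundation

final class Degapinator {

    private let player: AVAudioPlayer
    private var timer: Timer?
    private let silenceThreshold: Float = -40 // assumed cut-off, tune to taste

    init(url: URL) throws {
        player = try AVAudioPlayer(contentsOf: url)
        player.isMeteringEnabled = true // turn metering on
        player.enableRate = true        // allow the rate to change mid-playback
        player.prepareToPlay()
    }

    func play() {
        player.play()
        // Update the meters on a timer and check for silence.
        timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            self.player.updateMeters()
            let power = self.player.averagePower(forChannel: 0)
            // Speed through the silence, drop back to normal when the talking resumes.
            self.player.rate = power < self.silenceThreshold ? 2.0 : 1.0
        }
    }

    func stop() {
        timer?.invalidate()
        player.stop()
    }
}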

I tested using the latest episode of ATP and Debug episode 49. In both cases the silences were noticeably reduced and, to my ear, sounded completely natural. I listened to the entire episode of Debug and it had shaved off a little over 3 minutes by the end.

This was a fun little project, it’s the first time I’ve looked at anything related to audio playback on iOS in quite a while and it was super interesting … I fear I may just have to write my own podcast app now.

Following Humans

I used to follow blogs, almost exclusively tech or programming related blogs. There was a great RSS reader, Google Reader, that made it super easy. When that went away in 2013 I tried a bunch of alternative readers, some of them were ok but ultimately I decided to give up on consuming RSS feeds and switched to twitter. I’d already been using twitter for years, mostly just following real world friends that hardly tweeted and occasionally tweeting into the abyss, so changing how I used it wasn’t a big decision. My plan was to treat it just like a feed reader so I followed the accounts of several blogs that I was reading at the time and if a blog I wanted to keep up with didn’t have a dedicated twitter account for posting links as they went live, I followed the author instead. I’d changed how I consumed blogs - getting almost exactly the same content, just in a crappier interface. But a small part of what is good about Twitter had seeped in, which eventually would dramatically change what I read online entirely, and make twitter far more valuable for me than a mere feed reader.

It was those few annoying bloggers that didn’t have an account for their blogs. The tweets of those authors ended up being far more interesting than their blog posts alone, as they shared links to other people’s blogs, to news articles, comics, photos or their own 140 character thoughts. People I’d followed for their very narrow writing on a particular technical subject, it turned out, had things to say or share about other (sometimes interesting, sometimes baseball related) topics. Now the list of twitter accounts I follow is almost entirely made up of humans.

When I followed blogs, what I read every day never really changed. Now, from day to day, what I read can be dominated by entirely different subject matters, or not dominated by any subject in particular. I’ve learned so much that I otherwise wouldn’t have, and see and understand more of the world than I ever could have by reading proper news sources. Sometimes I end up not getting my techy blog fix from twitter at all, but that’s ok, I’ve a handful of my favourite blogs bookmarked that I visit on such occasions, and sometimes there are more important things going on in the world anyway.

Simulating Universal Gravitation with SpriteKit

Gravity in SpriteKit, as with Box2D (which it’s apparently built on) or any other 2D physics engine you’ve likely come across, is a single-planet sort of gravity. By that I mean that it applies a single force to all bodies in the simulation: basically, everything falls down. But what if you wanted a multiple-planet sort of gravity? Can that be achieved in SpriteKit?

The answer is yes, and it turns out it’s quite simple to get some fun and quite realistic results.

Screen capture

In this simple SpriteKit app, tapping on the screen creates a new ‘planet’.

So how’s it done?

Very simply, we turn off gravity as it normally applies in a SpriteKit scene and we apply Newton’s law of universal gravitation to all the nodes in the physics simulation on every tick.

F = G * m1 * m2 / r^2

That is, for every pair of nodes, apply a force to each one equal to the product of their masses (times the universal gravitational constant), divided by the square of the distance between them.
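In code, the scene’s update method can apply that to every pair of bodies each tick. A simplified sketch (the constant here is made up to keep the numbers screen-sized; the real project tweaks things further):

import SpriteKit

class GravityScene: SKScene {

    // A scaled-up G; the real constant is far too small to be useful at screen scale.
    let gravitationalConstant: CGFloat = 1.0

    override func didMove(to view: SKView) {
        // Switch off the usual "everything falls down" gravity.
        physicsWorld.gravity = .zero
    }

    override func update(_ currentTime: TimeInterval) {
        let planets = children.filter { $0.physicsBody != nil }

        // For every pair of nodes, apply equal and opposite attractive forces.
        for i in 0..<planets.count {
            for j in (i + 1)..<planets.count {
                let a = planets[i], b = planets[j]
                guard let bodyA = a.physicsBody, let bodyB = b.physicsBody else { continue }

                let dx = b.position.x - a.position.x
                let dy = b.position.y - a.position.y
                let distanceSquared = max(dx * dx + dy * dy, 1) // avoid dividing by zero
                let distance = sqrt(distanceSquared)

                // F = G * m1 * m2 / r^2, directed along the line between the two bodies.
                let magnitude = gravitationalConstant * bodyA.mass * bodyB.mass / distanceSquared
                let force = CGVector(dx: magnitude * dx / distance, dy: magnitude * dy / distance)

                bodyA.applyForce(force)
                bodyB.applyForce(CGVector(dx: -force.dx, dy: -force.dy))
            }
        }
    }
}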

All the code you need to recreate the scene above is available on github.com/mbehan/fgmmr. There are some tweaks to the above formula to make the numbers a bit easier to deal with (i.e. smaller) and to make creating stable systems a bit easier, but sticking exactly to the formula above and plugging in some realistic numbers, things work pretty much as you’d expect. The one surprise is that more often than not, planets in stable systems have orbits that trace the shape of a propeller rather than straightforward repeating elliptical orbits (play around with it and you’ll see what I mean). I’m not sure if this is a result of an error in my code, the kinds of numbers I’m using or something else; perhaps it’s how the force is applied by the physics engine.

In addition to simulating gravity, I’m also combining planets that pass close to each other, adding trails to trace their paths, and giving new planets a random colour. It all results in a surprisingly fun and addictive little toy, so even if you’re not interested in the code, just build and run it on your iPhone and enjoy!

Detecting Which Complication Launched Your WatchKit App

One of the joys of working with watchOS, much like it was working with iPhone OS many years ago, is the enforced simplicity. Free from worrying about the unending device combinations and configurations and the countless features and extension points of modern iOS, the constraints of a limited SDK focus your creativity. Simple, robust, yet still delightful interfaces flow from your fingertips, and designers’ designs are readily translated into working product.

Sadly though, we’re not content for long. Just like in the early days of iPhone OS, you soon find yourself wanting to do just a tiny bit more than Apple has made available, and so focus and delight make way for our more common friend, the ugly hack. Today’s feature that just couldn’t wait for a proper API is: detecting which watch face complication launched my app.

How It Works

When your app is launched in response to the user tapping a complication, the handleUserActivity method of your WKExtensionDelegate is called. You’re given a userInfo dictionary, and this is where we’d hope to find the details of which complication had launched us. Sadly there’s no CLKComplicationFamilyKey to let you know the user tapped the circular small rather than the utilitarian large to launch the app, but there is something we can use: CLKLaunchedTimelineEntryDateKey. This gives us the exact date and time that the complication was created. By remembering exactly when we created each complication, we can figure out which one resulted in the app being launched and act accordingly.

The Code

In 1, we create a singleton (no shameful hack is complete without one) to track when our various complications were made.

In 2, we set up the utilitarian large complication and store the creation date; just add more cases to the switch statement for the other complication families you support.

Finally, in 3, we check when the complication that launched the app was created, figure out which one it was, and launch the relevant interface controller.
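Roughly, the three pieces look something like this (a sketch with assumed names, apart from ComplicationTimeKeeper, and only the utilitarian large case filled in):

import ClockKit
import WatchKit

// 1. The singleton that remembers when each complication family's entry was created.
final class ComplicationTimeKeeper {
    static let shared = ComplicationTimeKeeper()
    var creationDates: [CLKComplicationFamily: Date] = [:]
}

// 2. In the CLKComplicationDataSource, store the creation date against the family.
func getCurrentTimelineEntry(for complication: CLKComplication,
                             withHandler handler: @escaping (CLKComplicationTimelineEntry?) -> Void) {
    switch complication.family {
    case .utilitarianLarge:
        let template = CLKComplicationTemplateUtilitarianLargeFlat()
        template.textProvider = CLKSimpleTextProvider(text: "Hello")
        let date = Date()
        ComplicationTimeKeeper.shared.creationDates[.utilitarianLarge] = date
        handler(CLKComplicationTimelineEntry(date: date, complicationTemplate: template))
    default:
        handler(nil)
    }
}

// 3. In the WKExtensionDelegate, match the launched entry's date back to a family.
func handleUserActivity(_ userInfo: [AnyHashable: Any]?) {
    guard let launchedDate = userInfo?[CLKLaunchedTimelineEntryDateKey] as? Date else { return }
    let family = ComplicationTimeKeeper.shared.creationDates.first { $0.value == launchedDate }?.key
    if family == .utilitarianLarge {
        // Launch the interface controller for this complication.
    }
}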

Limitations

The code above has a couple of limitations that you may need to work around. First, it doesn’t take Time Travel into account, so if your app supports that, each complication may have more than one corresponding datetime. Secondly (though in practice I haven’t seen this be an issue), I don’t see why two complications couldn’t have clashing datetimes; for that you could add a method to ComplicationTimeKeeper that returns the next unique date.

It’s Time For Complications

Apple made much of the value of complications at this year’s WWDC. Having originally not allowed you to make your own in watchOS 1, then allowing you in watchOS 2 but telling you it’s only for something super important that gets updates throughout the day, this year they told us we really need to have a complication even if it’s just an icon to launch your app. It seems they’ve noticed, as anyone who has worn Apple Watch for any reasonable amount of time will tell you, that complications are the best way to access the functionality of an app. But everything they talked about at WWDC was about having a complication, singular. You can support multiple complication families, but you can only have one of each and they are treated as different views of a single feature, showing more data when you’ve the room, but not really doing anything different.

Ideally, we’d have the ability to provide multiple complications for each complication family. If that were the case you could have a watch face with every complication slot filled by the same application, each showing something different (the built-in World Clock complication can already do this, but nothing else can) and, crucially, each performing a different function of your app when tapped. I wouldn’t be surprised if this is something that is eventually supported in WatchKit, but for now at least we can ugly hack our way to using different complication families to provide different functionality.

Should Apple Deprecate UILongPressGestureRecognizer?

The answer is yes.

  • For anywhere you currently require a long press, move to 3D Touch.
  • For anywhere you have different actions for both, make the long press action an option when 3D touching. (For example, organising icons on the home screen.)
  • Make an accessibility preference that makes a long press behave as a progressively more forceful 3D Touch.

Is This The Apple Car?

The Apple Car

No.

Still, it’s fun to speculate.

My guess is that you won’t go to a showroom to pick out your Apple Car, you won’t take one for a test drive and you won’t leave it charging in your driveway at night. Instead, you’ll just ask Siri for one when you’ve got someplace you need to be.

It’s not just another ride sharing service, nor a mere self-driving Uber that I’m imagining. I see a car personalised just for you, customised in-app while it’s on the way: selecting exterior colours and adornments to match your style, interior lighting to suit your mood, configuring a sporty or relaxed ride, with your playlist starting as you open the door, and all set to be driven by an overly enthusiastic t-shirt clad human driver (high fives available on request) or by you if you prefer.

If, as many speculate, the future of cars doesn’t involve owning your own or even driving all that much, the most interesting innovation in cars might be how to continue to allow people to express themselves (and show off their wealth or various other kinds of superiority) through their car. You wouldn’t be seen dead in some random could-be-a-Toyota Lyft - no, you’re picked up in the electric Apple Car (not the Sport) with rose gold trim, you drive yourself because you’re into that sort of thing, and you adorn the outside with retro geek stickers and funny gifs.

Basically it’s the automotive version of the messages app from iOS 10, app platform and all.

(Or, perhaps more likely, it’s just a regular car but a bit fancier and more expensive. The edges will be unapologetically chamfered, the range lacking compared to a Tesla but forgivable for some reason, and the biggest ovation of the introduction will be when Phil Schiller shows us how the wipers won’t stop halfway across the windshield if you turn them off in the rain.)

Marge Be Not Proud

A lot of people like to rationalise why they block ads on the web. It’s the trackers, the load times, the lack of a contract… Here’s why I block ads:

Because ads suck.

I don’t want to pay for the content I read online with my attention, and fortunately I don’t have to. I also fast forward through TV ads, turn off the car radio during breaks and skip podcast ads unless it’s also a toaster oven review.

Screen capture from The Simpsons episode: Marge Be Not Proud

I do whitelist some sites that I both read with regularity and that have less sucky ads. For some sites if they detected my blocker and asked for some money instead I’d happily give it, but for most I wouldn’t.

If all this means some websites I read have to die, I don’t mind much.

Cheating on Swift Substrings

If you found yourself needing to get a substring of a String in Swift before you got around to the relevant chapter of the book, you were probably left scratching your head for a bit. How could something so conceptually simple be so awkward to perform?

Here’s a great article explaining how to find a substring in Swift, from Natasha The Robot.

It turns out that Swift Strings are much cooler than your old fashioned strings from other languages, and Swift Ranges are even cooler still. But unless you’re using them frequently, I find that

str.substringWithRange(Range(start: (advance(str.endIndex, -1)), end: str.endIndex))

doesn’t exactly roll off the tongue.

So here’s my cheat, which is to not use String at all. Arrays in Swift are super simple to chop up using plain integer ranges, and a String is easily turned into an Array<Character>. Swift even lets you iterate over the contents of a String and access each Character in turn, but it doesn’t give you String subscripting.

So there are a couple of cheat options: implement subscript on String yourself, or, as I preferred, extend String to give you quick access to an Array representation of the String.

extension String {
    func asArray() -> [Character] {
        
        var array : [Character] = []
        for char in self {
            array.append(char)
        }
        return array
    }
}

You can then do fun stuff like this, which for me, reads very nicely.

let str = "Coma inducing corporate bollocks"
str.asArray().last // "s"
str.asArray()[10] // "i"
String(str.asArray()[2..<7]) // "ma in"

You don’t need to break out the big O notation to see this isn’t going to perform great: you’re iterating over the entire string every time you want to get a piece of it, then the array methods are going to go do it again, so use with caution!
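The other cheat, implementing subscript on String yourself, would look something like the following. The Int-based subscripts are my own additions (Swift deliberately doesn’t provide them) and the performance warning above applies just the same:

extension String {
    // Single character by integer position.
    subscript(i: Int) -> Character {
        return Array(self)[i]
    }

    // Substring by integer range.
    subscript(range: Range<Int>) -> String {
        return String(Array(self)[range])
    }
}

let str = "Coma inducing corporate bollocks"
str[10]     // "i"
str[2..<7]  // "ma in"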

Death By Date Format String

Recently I learned that you probably always want “yyyy” and not “YYYY”.

let dateFormatter = NSDateFormatter()
dateFormatter.dateFormat = "YYYY-MM-dd"

println(dateFormatter.stringFromDate(
   dateFormatter.dateFromString("2015-12-26")!))

This prints 2015-12-26. Obviously. So what about

println(dateFormatter.stringFromDate(
   dateFormatter.dateFromString("2015-12-27")!))

It prints 2016-12-27.

Note that the year is 2016.

I was fortunate1 enough to get assigned a production crash bug this week that, after a long day of head scratching, turned out to be caused by this.

Interestingly, the NSDate created with the format is the date I expected: it represents 27 December 2015, and it’s only getting a string from the date with that format that gives you the ‘wrong’ year. Similarly, an NSDate constructed in any other way that represents 27 December 2015 will behave the same.

The NSDateFormatter docs point you at Unicode Technical Standard Number 35 for the definitions of date format strings. I’ve looked at this before and I expect I’m not alone in having paid more attention to the day and month parts of the format. They’re usually what we’re interested in because the year is always the year; at most we might prefer 2 or 4 digits, but that’s about as interesting as it gets. I suspect what happens fairly often (and what probably happened with our bug) is that the developer guessed at YYYY as the year format, and when it appeared to work just fine, assumed it was correct.

The relevant part of that standard states that y is the year, but Y is

Year (in “Week of Year” based calendars) … May not always be the same value as calendar year.

And the problem is that, as far as I can tell, it almost always is the same as the calendar year. The last few days of the year are the only ones I’ve seen causing problems. If it differed more often, it would be spotted more easily, and perhaps I would have already known that YYYY was wrong and spotted it as the error right away.
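For the record, the same round trip with a lowercase year format behaves itself:

let dateFormatter = NSDateFormatter()
dateFormatter.dateFormat = "yyyy-MM-dd"

println(dateFormatter.stringFromDate(
   dateFormatter.dateFromString("2015-12-27")!))

This prints 2015-12-27, year intact.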

  1. If I consider it a learning opportunity and not an annoying time suck of a bug! 

Optional Optionals

So here’s a confusing sentence.

With Swift functions you can have optional parameters, you can also have parameters that are optionals, and you can have optional parameters that are optionals.

A rather confused looking Donald Rumsfeld

Not taking the time to think about the 3 different levels of optionality in function parameters had me scratching my head for a few minutes today, but all the options (sorry) are useful and it’s not at all confusing once you remember them.

My scenario was that I created a function that does some stuff and then executes a closure supplied by the caller. Something like

func doSomeStuff(thenDoThis:()->())

Which a caller would call like

doSomeStuff {
	// and then do this stuff
}

But I want to let the caller decide whether they want to supply the closure or not, so if they like they could just call the function and be done.

doSomeStuff()

So let’s make the closure optional. Easy, as with any type in Swift, we can mark it optional by including a ?

func doSomeStuff(thenDoThis:(()->())?)

So then if we go ahead and call

doSomeStuff() // Error: Missing argument for parameter #1 in call

But it was optional, so why the error? Well it wasn’t optional in the sense that I could leave it out, it’s just that it was an optional type which we are still expected to provide every time. As our type is an optional closure with no parameters and no return, we have to supply a closure with no parameters and no return or nil. So we’d actually have to call

doSomeStuff(nil) // this works fine but isn't what we want

So how do you create an optional parameter, one that a caller can decide to leave out? To do that you provide a default value to be used for that parameter, right in the function declaration.

func doSomeStuff(thenDoThis:()->() = defaultClosure)

This means that if the caller doesn’t supply a value for thenDoThis we’ll use defaultClosure instead (assuming defaultClosure is defined elsewhere as a ()->().) We can now happily call the following if we don’t want to supply a closure.

doSomeStuff() // yay!

The behaviour I was interested in though was that if I didn’t supply a closure, no closure would be executed at all, not that some other one I had to define would be called instead. Well, I could just make defaultClosure do nothing, or just have the default value be {} like

func doSomeStuff(thenDoThis:()->() = {})

Which is fine, and maybe even the preferred way, but you can also have an optional optional parameter, and have its default value be nil.

func doSomeStuff(thenDoThis:(()->())? = nil)

Now if the caller omits the closure, thenDoThis will be nil, which makes more sense to me in this situation.
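Inside doSomeStuff the optional closure can then be run with optional chaining, so the whole thing ends up looking something like this sketch:

func doSomeStuff(thenDoThis: (() -> ())? = nil) {
    // ... do the actual stuff here ...

    // Optional chaining: the closure only runs if the caller supplied one.
    thenDoThis?()
}

doSomeStuff()                          // fine, nothing extra happens
doSomeStuff { /* and then do this */ } // the closure runs after the stuff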

Do You Even Swift?

Just some thoughts on how I came to go all in learning Swift at home while still advocating Objective-C as our primary language in the office.

Taylor Swift, flexing massive arm muscles

When Apple announced Swift at WWDC I was super excited at the chance to be in on the ground floor of a new and modern language, one that I expect to become widespread and long lived. I could fast-track to greybeard-dom and be one of the crusty old devs that was there at the beginning1. But of course there were projects to get done and old apps to support. Swift is sold as easy to integrate into existing projects and easy to interoperate with Objective-C, and it is in many respects, but the few tries I gave it brought frustration in the shape of Xcode crashes, debugger bugginess, confusing compiler errors and the additional cognitive load that comes with juggling two languages at once. So Objective-C kept its place, and while I toyed with the idea of writing a class or two in Swift I never bothered; it didn’t seem worth the hassle at the time and I wasn’t missing out on much.

Fast forward a few months and, through no endeavour of my own, I find myself in the middle of a large Swift codebase, helping out on the team that’s writing an app that will be responsible for selling flights and checking in passengers for one of the world’s largest airlines. I’ve been working full time with Swift for about a month now and I’ve been pleasantly surprised at how productive I’ve been with it. There are still a whole lot of Xcode frustrations for sure, but not so many more than any developer who jumped into Storyboards or Auto Layout when they first emerged will be accustomed to, and obviously each release should improve the situation.

It only took about a week before Objective-C became the wrong-looking language, and the couple of side projects I work on at home have already been converted to Swift. I really like the language and I’m excited about it again just as I was when it was first announced, and I expect most code snippets on this blog to be Swift from now on. But the interest in the new language, and my desire to use it personally, still wouldn’t change my mind on the decision I made last summer: the next big project for a client that I choose the language on will not be Swift. The decision making process here is simple2: Swift is young and evolving rapidly, and during this Swift 1.1 project, Swift 1.2 appeared. For science I downloaded the Xcode beta and opened up the project - no surprises for guessing - it doesn’t build anymore.

There is a handy Edit > Convert > To Latest Swift menu option in Xcode, but the number of build errors before and after was roughly the same (interestingly you can run it multiple times and get a few more to go away) and while some of the remaining ones are simple renames and a lot of changing as to as! (why the tool couldn’t find all of these I don’t understand) there are a few that will require a bit more investigation.

Maybe there’s only an hour in fixing up all these build errors, but it would warrant another round of testing and I suspect at least one or two additional bugs would emerge. Not a huge problem for this particular project, which will be actively worked on for the foreseeable future, when each Swift version can be migrated to in turn, but for the kinds of projects I often work on there could be a couple of years between releases. In that time Swift will probably change several times, perhaps in significant ways, and we’d likely be required to use the latest Xcode versions to submit to the App Store for even the most minor bug fix. Swift could be the thing that turns an essential bug fix release from a simple 1-day turnaround into a weeks-long nightmare.

  1. As opposed to joining the Objective-C party the same time as the iOS gold rush kids. 

  2. Which doesn’t necessarily make my decision correct, of course.

Super Basic ORM on top of FMDB and SQLite for iOS

Disclaimer: If you’re not sure whether you should be using SQLite for your iOS project then you probably shouldn’t be; Core Data is worth the learning curve.

When you do have call to use SQLite then the FMDB wrapper makes using it through Objective-C a breeze. I won’t explain how to use FMDB, their API is very straightforward and you’ll find plenty of help elsewhere. A typical experience though is that you’ll execute a query, you get back a lovely FMResultSet object and you extract values from that using your database column names–nice.

What would be slightly nicer is automatically mapping that result set onto a model object. So let’s make that a thing.

Very Basic Example Time

We have a table in our database called People with the following fields:

  • personId
  • firstName
  • lastName
  • address
  • favouriteTellytubby

And it makes sense for us to have a Person class in our app because maybe we’ll want to maintain a table of people and be able view the detail of a person by passing a Person from the table view to the detail view. The Person class will be defined something like this:

@interface Person : NSObject

@property(nonatomic) NSInteger personId;
@property(nonatomic, copy) NSString *firstName;
@property(nonatomic, copy) NSString *lastName;
@property(nonatomic, copy) NSString *address;
@property(nonatomic, copy) NSString *favouriteTellytubby;

@end

So to create some Person objects we could alloc init a bunch of them and set their properties based on what we get back from the database; alternatively we could create a custom initialiser method that takes an FMResultSet and sets them all that way. All of which is perfectly fine until you find yourself repeating it over and over again.

Homer Simpson making OJ the old fashioned way

For simple situations like this though, there is a better way (better as in less repetitive at least).

I’ve a simple class that I use as a base class for all my model objects; it provides an initialiser that takes a result set as a parameter and looks for columns in that result set with the same names as its properties.

@interface MBSimpleDBMappedObject : NSObject

-(instancetype)initWithDatabaseResultSet:(FMResultSet *)resultSet;

@end

// In the implementation file: class_copyPropertyList needs <objc/runtime.h>.
#import <objc/runtime.h>

@implementation MBSimpleDBMappedObject

-(instancetype)initWithDatabaseResultSet:(FMResultSet *)resultSet
{
    self = [super init];
    if(self)
    {
        unsigned int propertyCount = 0;
        objc_property_t * properties = class_copyPropertyList([self class], &propertyCount);
        
        for (unsigned int i = 0; i < propertyCount; ++i)
        {
            objc_property_t property = properties[i];
            NSString *propertyName = [NSString stringWithUTF8String:property_getName(property)];
            
            [self setValue:[resultSet objectForColumnName:propertyName] forKey:propertyName];
        }
        free(properties);
    }
    
    return self;
}

@end

What we’re doing here is quite simple, but it’s enabled by a couple of powerful Objective-C features. Firstly, at runtime we can dynamically retrieve the names of a class’s properties; then we can simply set the values of those properties using key-value coding.

Those are the only 2 things happening here: get a list of the properties, then for each property set its value to the one from the result set with a matching column name.

This means all our model subclasses have to do is declare a bunch of properties, so all there is to those classes is the interface I described before, just subclassing MBSimpleDBMappedObject instead of NSObject like so.

@interface Person : MBSimpleDBMappedObject

@property(nonatomic, readonly) NSInteger personId;
@property(nonatomic, readonly, copy) NSString *firstName;
@property(nonatomic, readonly, copy) NSString *lastName;
@property(nonatomic, readonly, copy) NSString *address;
@property(nonatomic, readonly, copy) NSString *favouriteTellytubby;

@end

I’ve marked the properties read only, because all I’m interested in is a copy of what’s in the database, changing the values of those properties won’t update the database, though I do plan to add that functionality in the future. If this is all you need then you’re done, your Person implementation can be left blank.

A Note About Dates

If you’re familiar with SQLite and FMDB you’ll know they don’t really do dates, but you’ll probably find yourself wanting to keep track of some dates in the database. FMResultSet’s objectForColumnName will gladly give you a number or a string, but it doesn’t do NSDates. Here’s how I deal with that.

Example Time Again

Let’s change our People table a bit to make it a bit more useful, so our list of fields looks like:

  • personId
  • firstName
  • lastName
  • address
  • dateOfBirthTimestamp

and update our Person interface too

@interface Person : MBSimpleDBMappedObject

@property(nonatomic, readonly) NSInteger personId;
@property(nonatomic, readonly, copy) NSString *firstName;
@property(nonatomic, readonly, copy) NSString *lastName;
@property(nonatomic, readonly, copy) NSString *address;
@property(nonatomic, readonly) NSTimeInterval dateOfBirthTimestamp;
@property(nonatomic, readonly, strong) NSDate *dateOfBirth;

@end

With no other changes the dateOfBirthTimestamp property will be set correctly which may be enough, but you’d probably have to make an NSDate with it anytime you wanted to do anything useful with it. We’ve added an NSDate property, but as there is no corresponding column name, it will remain nil. That is until we override the initialiser as follows.

@implementation Person

-(instancetype)initWithDatabaseResultSet:(FMResultSet *)resultSet
{
    self = [super initWithDatabaseResultSet:resultSet];
    if(self)
    {
        _dateOfBirth = [NSDate dateWithTimeIntervalSince1970:self.dateOfBirthTimestamp];
    }
    return self;
}

@end

The base class will still map all the other properties, we just construct the NSDate.

Uploading Xcode Bot Builds to Testflight, with launchd

Continuous integration with Xcode is super easy to set up and does the basics of continuous integration really well. With almost no effort you’ll have nightly builds, test suites doing their thing, email alerts to committers, lovely graphs and even a cool dashboard thing for your big screen. I won’t go through setting that all up here; the Apple docs are excellent and there are plenty of other people who’ve already explained it better than I will.

Where things are less than straightforward is when you want to use the IPA file produced–to send it to your testers via TestFlight, or to your remote teammates, your client or whoever.

The server executes an Xcode scheme, which defines your targets, build configuration, and tests. In the scheme there’s an opportunity to include custom scripts that run at various points, pre and post each of the scheme’s actions, so you can run a script pre-build or post-archive etc.

This post-archive step is the last place we can do some work, so it’s the obvious place to go upload our build to TestFlight, right? Well it would be except the IPA file never exists at this point. The IPA file is generated some time after this. The process is:

  • Archive
  • Post archive scripts
  • ???
  • Generate IPA file

So if you want to upload to TestFlight what can you do? Well, the solution offered by everyone I’ve seen blogging about it is to go make your own IPA using xcrun. That doesn’t sound so bad until you end up with code signing and keychain issues, and it’s all to do something that is about to happen as soon as you’re done anyway.

My solution was to just wait until the IPA file was made. My initial naive attempts were to schedule the upload from the post-archive script using at or simply adding a delay for some amount of time while the IPA file didn’t exist. What I should have realised though is that the Bot will wait as long as I’m waiting and only when my script finishes will it continue and make the IPA file.

launchd to the rescue.

What I’ve ended up with, and which is working nicely for us, is a scheduled job on the build server which will notice any IPA files built by an Xcode bot and upload them. I wasn’t familiar with launchd prior to this and was expecting to use cron, but it turns out this is the modern OS X way of scheduling jobs. There’s a great site showing you how to use launchd but I’ll show you what I have anyway.

What I have:

  1. A plist for launchd
  2. Plists for each project that explain where to send the build
  3. A shell script that looks for IPA files and sends them to TestFlight or FTP using the information from 2.

1. The launchd plist

This is placed in /Library/LaunchDaemons and simply tells launchd that we want to run our script every 600 seconds. You could schedule it to run once a day or any other interval, I left it at 10 minutes so any bots that are run on commit or are started manually will have their builds uploaded right away rather than at the end of the day.

<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.mbehan.upload-builds</string>
    <key>ProgramArguments</key>
    <array>
        <string>/ci_scripts/build-uploader.sh</string>
    </array>
    <key>StartInterval</key>
    <integer>600</integer>
    <key>StandardOutPath</key>
    <string>/tmp/build-uploads.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/build-uploads.log</string>
</dict>
</plist>

2. Per project plist

If we want the build to be uploaded automatically, it needs a plist telling it where to go. We share builds with one of our clients via FTP so there is a Method key for that, and a different set of keys is required if its value is FTP rather than TestFlight. I keep these plists in the same directory as the script.

<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Method</key>
    <string>TestFlight</string>
    <key>ProductName</key>
    <string>Some App.ipa</string>
    <key>APIToken</key>
    <string>GET THIS FROM TESTFLIGHT</string>
    <key>TeamToken</key>
    <string>AND THIS</string>
</dict>
</plist>

3. Checking for IPA files, uploading

We’re using find with the -mtime option here to find recently created files with the name specified in the plist. If we find a file we then either use curl to upload to TestFlight or we send it via FTP depending on the method indicated in the plist.

You can remove the stuff for FTP if you only care about TestFlight, and you might want to add extra detail to the plist such as distribution lists.

I’ve created a gist for the shell script.

This all assumes you’ve set your provisioning profile and code signing identity up correctly for the build configuration used by your Xcode scheme. Make sure the configuration used in the archive step (Release by default) will make a build the people you want to share builds with will be able to install.

Simple Dynamic Image Lighting with CoreImage

With the kind of apps I usually make, I often end up doing a lot of gamey looking things right inside of UIKit. The addition of UIDynamics made one of those jobs, gravity, super easy. I wanted the same kind of simplicity for lights.

Animated figure being dynamically lit by 3 moving coloured lights

Using The Code

It only works on image views for now, but it works well and frame rates are good (much better than the gif lets on) for all but very large images on older devices. You can get all the code on github and using it should be pretty simple.

You just create a lighting controller, add some light fixtures and image views you want to be lit to the controller, and let it know when you need to update the lighting (when we’re moving the lights in the example above). Here’s the interface for the MBLightingController:

@interface MBLightingController : NSObject

@property(nonatomic) BOOL lightsConstantlyUpdating;

-(void)addLightFixture:(id<MBLightFixture>)light;
-(void)addLitView:(MBLitAnimationView *)litView;
-(void)setNeedsLightingUpdate;

@end

Only set lightsConstantlyUpdating if the lighting is always changing (this came about because I was playing around with adding a light to a rope with UIDynamics, which you can see in the project on github.)

So, there are a couple of things there you won’t recognise: the MBLightFixture protocol and MBLitAnimationView.

Anything can be a light, so long as it implements the protocol, which means it needs a position, intensity, range and color. I’ve just been using a UIView subclass but maybe your light will be a CAEmitterLayer or something.

MBLitAnimationView can be used everywhere you’d use a UIImageView, it just adds the ability to be lit, and makes working with animation easier.

Your view controller’s viewDidLoad might include something like this:

//create the lighting controller
self.lightingController = [[MBLightingController alloc] init];
    
//add an image to be lit
MBLitAnimationView *bg = [[MBLitAnimationView alloc] initWithFrame:self.view.bounds];
bg.ambientLightLevel = 0.1; // very dark
[bg setImage:[UIImage imageNamed:@"wall"]];
[self.view addSubview:bg];
[_lightingController addLitView:bg];
    
//add a light
SimpleLightView *lightView = [[SimpleLightView alloc] initWithFrame:CGRectMake(200, 200, 25, 25)];
lightView.intensity = @0.8;
lightView.tintColor = [UIColor whiteColor];
lightView.range = @250.0;
    
[self.view addSubview:lightView];
[_lightingController addLightFixture:lightView];

How It Works

The light effect is achieved using CoreImage filters and everything happens in the applyLights method of MBLitAnimationView.

I experimented with a bunch of different filters trying to get the right effect, and there were several that worked so just try switching out the filters if you want something a little different.

Multiple filters are chained together, first up we need to darken the image using CIColorControls:

CIFilter *darkenFilter = [CIFilter filterWithName:@"CIColorControls"
                                           keysAndValues:
                                 @"inputImage", currentFrameStartImage,
                                 @"inputSaturation", @1.0,
                                 @"inputContrast", @1.0,
                                 @"inputBrightness", @(0-(1-_ambientLightLevel)), nil];

Then, for every light that we have, we create a CIRadialGradient:

CIFilter *gradientFilter = [CIFilter filterWithName:@"CIRadialGradient"
                                              keysAndValues:
                                    @"inputRadius0", [light constantIntensityOverRange] ? [light range] : @0.0,
                                    @"inputRadius1", [light range],
                                    @"inputCenter", [CIVector vectorWithCGPoint:inputPoint0],
                                    @"inputColor0", color0,
                                    @"inputColor1", color1, nil];

Then we composite the gradients with the darkened image using CIAdditionCompositing:

lightFilter = [CIFilter filterWithName:@"CIAdditionCompositing"
                                     keysAndValues:
                           @"inputImage", gradients[i],
                           @"inputBackgroundImage",[lightFilter outputImage],nil];

Finally, we mask the image to the shape of the original image:

CIFilter *maskFilter = [CIFilter filterWithName:@"CISourceInCompositing"
                                      keysAndValues:
                            @"inputImage", [lightFilter outputImage],
                            @"inputBackgroundImage",currentFrameStartImage,nil];

Just set the image view’s image property to a UIImage created from the final filter’s output and we’re done!

CGImageRef cgimg = [coreImageContext createCGImage:[maskFilter outputImage]
                                                  fromRect:[currentFrameStartImage extent]];
        
UIImage *newImage = [UIImage imageWithCGImage:cgimg];
imageView.image = newImage;
        
CGImageRelease(cgimg);

What’s Next?

Playing with CoreImage was fun so I think I’ll revisit the code at some point in the future, I’d like to try it out with SpriteKit’s SKEffectNode where it really makes more sense for using with games. Or I might keep working with UIKit and get it working for any view–shiny / shadowy interfaces might be interesting.

UIImageView Animation, But Less Crashy

Animation with UIImageView is super simple, and for basic animations it is just what you need. Just throw an array of images at your image view and tell it to go, and it will go. For animations of more than a few frames though, its simplicity is also its failing–an array of UIImages is handy to put together, but if you want large images or a reasonable number of frames then that array could take up a serious chunk of memory. If you’ve tried any large animations with UIImageView you’ll know things get crashy very quickly.

There are also a few features, like being able to know which frame is currently being displayed and setting a completion block, that you regularly find yourself wanting when dealing with animations, so I’ve created MBAnimationView to provide those and to overcome the crash-inducing memory problems.

My work was informed by the excellent Mo DeJong and you should check out his PNGAnimatorDemo which I’ve borrowed from for my class.

How It Works

The premise for the memory improvements is the fact that image data is compressed, and loading it into a UIImage decompresses it. So, instead of having an array of UIImage objects (the decompressed image data), we’re going to work with an array of NSData objects (the compressed image data). Of course, in order to ever see the image, it will have to be decompressed at some point, but what we’re going to do is create a UIImage on demand for the frame we want to display next, and let it go away when we’re done displaying it.

So MBAnimationView has a UIImageView; it creates an array of NSData objects and then, on a timer, creates each frame image from the data and sets the image view’s image to it. It’s that simple.
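To make that concrete, here’s a rough Swift sketch of the same idea. It’s not the actual MBAnimationView code (that’s on GitHub), just the gist of keeping compressed data around and decompressing one frame at a time:

import UIKit

final class FrameDataAnimator {

    private let frameData: [Data]
    private let imageView: UIImageView
    private var timer: Timer?
    private var frameIndex = 0

    init?(imageView: UIImageView, frameNames: [String]) {
        // Keep the compressed bytes of each frame, not decoded UIImages.
        self.frameData = frameNames.compactMap { name in
            Bundle.main.url(forResource: name, withExtension: "png").flatMap { try? Data(contentsOf: $0) }
        }
        self.imageView = imageView
        guard !frameData.isEmpty else { return nil }
    }

    func start(fps: Double) {
        timer = Timer.scheduledTimer(withTimeInterval: 1.0 / fps, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            // Decompress only the frame we're about to show; the previous one can be released.
            self.imageView.image = UIImage(data: self.frameData[self.frameIndex])
            self.frameIndex = (self.frameIndex + 1) % self.frameData.count
        }
    }

    func stop() {
        timer?.invalidate()
    }
}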

Comparison

As expected, the crashes seen with the animationImages approach disappeared with MBAnimationView, but to understand why, I tested the following 2 pieces of code for different numbers of frames, recording memory usage, CPU utilisation and load time.

MBAnimationView *av = [[MBAnimationView alloc] initWithFrame:CGRectMake(0, 0, 350, 285)];
    
[av playAnimation: @"animationFrame"
                       withRange : NSMakeRange(0, 80)
                  numberPadding  : 2
                          ofType : @"png"
                             fps : 25
                          repeat : kMBAnimationViewOptionRepeatForever
                      completion : nil];
    
[self.view addSubview:av];
UIImageView *iv = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 350, 285)];
    iv.animationImages = @[[UIImage imageNamed:@"animationFrame00"],
                           [UIImage imageNamed:@"animationFrame01"],
                           
                           ... 

                           [UIImage imageNamed:@"animationFrame79"]];
    
[self.view addSubview:iv];
[iv startAnimating];

Results

Starting off with small numbers of frames it’s not looking too good for our new class, UIImageView is using less memory and significantly less CPU.

10 Frames           Memory Average / Peak    CPU Average / Peak
UIImageView         4.1MB / 4.1MB            0% / 1%
MBAnimationView     4.6MB / 4.6MB            11% / 11%

20 Frames           Memory Average / Peak    CPU Average / Peak
UIImageView         4.4MB / 4.4MB            0% / 1%
MBAnimationView     4.9MB / 4.9MB            11% / 11%

But things start looking up for us as more frames are added. MBAnimationView continues to use the same amount of CPU–memory usage is creeping up, but there are no spikes. UIImageView however is seeing some very large spikes during setup (highlighted in red).

40 Frames           Memory Average / Peak    CPU Average / Peak
UIImageView         4.1MB / 65MB             0% / 8%
MBAnimationView     5.7MB / 5.7MB            11% / 11%

80 Frames           Memory Average / Peak    CPU Average / Peak
UIImageView         4.5MB / 119MB            0% / 72%
MBAnimationView     8.4MB / 8.4MB            11% / 11%

Those red memory numbers are big enough to start crashing in a lot of situations, and remember this is for a single animation.

The Trade Off

There has to be one of course, but it turns out not to be a deal breaker. Decompressing the image data takes time; we’re doing it during the animation rather than up front, but it’s not preventing us playing animations at up to 30 fps and even higher. On the lower-end devices I’ve tested (iPad 2, iPhone 4) there doesn’t seem to be any negative impact; in light of that I’m surprised the default animation mechanism provided by UIImageView doesn’t take the same approach as MBAnimationView.

MBAnimationView on github

Creating a Rope with UIDynamics

I’ve made rope simulations for games with Box2D before but I wanted to see if I could make a rope that could be used easily with UIKit elements, and without having to use Box2D directly. Below is the result, a highly practical user interface, I’m sure you’ll agree! (It’s smoother than the gif makes out.)

A UIButton dangling on the end of a swaying rope

There are 2 distinct problems to consider when creating a rope:

  1. The physics joint that connects two elements together as though they were connected with a rope
  2. Drawing the rope

Box2D has a bunch of different joints for connecting physics bodies and b2RopeJoint is just what you need to solve problem 1. UIDynamics, though, only exposes one joint for joining dynamic bodies: UIAttachmentBehavior. Fortunately the joint it appears to be using under the hood is b2DistanceJoint, which, with the right amount of parameter fiddling, can be made to behave like a b2RopeJoint.

So that’s problem 1 sorted, right? Just draw the rope with the help of some verlet integration and you’re done? Well, I could be done, but I wanted to try something different and a bit more lightweight, something that didn’t involve Wikipedia pages full of equations to understand fully.

More Chain Than Rope

By simply connecting a series of small views end to end with UIAttachmentBehavior you get a chain; with enough links in that chain, and with the right attachment parameters, you can get something that behaves pretty rope-like. You can attach one view to another like so:

UIAttachmentBehavior *chainAttachment = [[UIAttachmentBehavior alloc] initWithItem:view1 attachedToItem:view2];

I just do this in a loop, joining together a whole bunch of views. It ends up looking like this not very ropey looking thing.

Connected boxes make up the rope segments

But there’s no reason why, just because we’re using the views to create the rope-like joint, we have to look at the views. Instead I draw a path connecting their centres

[path moveToPoint:[links[0] center]];
for(int i = 1; i < links.count; i++)
{
   [path addLineToPoint:[links[i] center]];
}

and we end up with what you see up top.

The Code

All the code is on github. It still needs a bit of work but you can get a simple rope up and running with just a couple of lines of code. Import the header and do something like this:

MBRope *rope = [[MBRope alloc] initWithFrame:CGRectMake(350, 180, 5, 200) numSegments:15];
[self.view addSubview:rope];
[rope addRopeToAnimator:animator];

To attach something, like the button in the example, you can get the last view by calling attachmentView on the rope and attach your other view with your own UIAttachmentBehavior. The top of your rope will be fixed to the origin of the rect supplied when you init the rope, but it wouldn’t take much to change it so you can attach your own stuff to both ends.

Storyboards, Multiple Developers and Git.

Storyboards are great. You can get the flow of your app set up in a few minutes without writing a line of code, you can initialise your navigation controllers and tabs ridiculously easily and zoom out a bit and you get a lovely picture of your entire app on a single screen, with lines and boxes and everything.

But storyboards can be cruel if you’re not careful. Git pulls become nervy affairs, a slip when merging by hand can render your storyboard unreadable by Xcode, and not knowing when to stop using them can turn your lovely lines and boxes into a maintenance nightmare.

John McClane, crawling through some ducting, wishing he still used nibs

“Come to the coast, we’ll get together, have a few segues”

We’ve done a bunch of apps of all shapes and sizes using storyboards over the last year or so and we’re working on perfecting our use of them. I’ve been investigating and testing storyboard best practice and this is what I’ve learned so far.

The Precap

(Wiktionary says it’s a word)

  • Everyone on the same build of Xcode
  • Multiple storyboards
  • Use Nibs for custom views
  • One person owns the storyboard setup / decides granularity
  • Think about which storyboards are involved when assigning tasks
  • Merge storyboards often
  • Xcode is your git client

Xcode

We’ve had hassle sharing storyboards across even minor versions of Xcode: storyboards created in one will do crazy stuff in another, or just plain won’t work. Don’t let anyone sneak ahead to the latest developer preview unless they’re doing a separate installation.

Multiple storyboards

But what about the lovely whole app view, all those lovely lines and boxes all perfectly arranged? That was never what storyboards were about, and you’ve still got your whiteboard for that. Divide and conquer is your mantra for everything else you do so it should be for storyboards too. It’s easier to reason about storyboards with a single purpose and devs are less likely to trip each other up if they’re not working on the same storyboards at the same time.

So how do you break it up?

Per user story is a decent approach, but that can be too granular at times. A separate storyboard for login and one for viewing an account makes sense, but maybe you should keep the lost password flow in with your login. If all you’ve got in each storyboard is a single view controller you might as well be using Nibs; the beauty of storyboards is making connections between view controllers.

But you said to keep using Nibs?

Yep, for custom views, table view cells and the like. A view can’t exist outside a view controller in a storyboard so if you don’t need a view controller for it you really shouldn’t be adding it to a storyboard.

So what about single view controllers in Nibs then?

We’ve a project in which we’ve used some Nibs from an existing project in conjunction with a storyboard and it hasn’t been a problem, but if I was making those components again now they’d be in a storyboard. Having a single view controller in a storyboard will make sense at times, and when you end up wanting to add additional screens to, say, your account details, and all you have to do is drag a new view controller in beside the existing one and hook up a segue, you’ll be happy.

Multiple devs

Even with all your concerns perfectly separated, and everyone on the same page, you’re going to end up working on the same storyboard as someone else at the same time. Having to wait for someone to give you their changes before you can do something is no fun, so I wanted to see just how careful you really have to be.

I created a simple Xcode project with a single storyboard and set out with my new git friend to put together a few screens.

Test 1: Adding to the same empty view controller

I started off simply adding a label and having Testy add an empty Image View.

We’re working on the same view controller right away so I expected there to be a conflict to sort out and indeed there was.

The XML is clearly not intended to be parsed by human eyes, but this looks straightforward enough: I can see two separate additions, so accepting one followed by the other should work fine. This looks just like the kind of conflict that comes up on .xcodeproj files when two devs are adding files.

It worked. I had to look at some XML but nothing blew up; for extremely simple changes to the same view controller we don’t have to worry too much.

Test 2: Editing different view controllers

I added a lovely purple view to one view controller and had Testy add a view to a different view controller. We really shouldn’t have any problems here.

And we don’t. It seems editing the same storyboard is fine so long as we keep to different view controllers. But, sometimes we might edit another view controller without meaning to, so I looked at some more scenarios …

Test 3: Rearranging parts of the storyboard that someone else changed

Here Testy has changed one of my purple boxes to green, and I’ve just been fiddling with the layout a bit, swapping the order of the two view controllers on the right.

This auto-merged and left us looking good; it kept my layout.

Test 4: Adding modified views to a Navigation Controller

When you’re inferring screen elements such as the nav bar, adding a navigation controller can affect a bunch of view controllers that someone else might have been working on. Here I’ve added a navigation controller while Testy’s been changing some colours.

To my surprise this auto-merged just fine, and the views Testy was working on got the nav bar. It makes sense if you take a look at the XML: no nav bar is added as a child of the view controllers, just an inferred setting that Xcode knows what to do with.

Test 5: Making changes to a slightly more filled out view controller

Things have gone OK so far, so let’s revisit editing the same view controller, this time making it a bit more realistic.

We both started off with this:

All I did was enclose the label in a scroll view. Testy had a few more bits and pieces to do: he changed that label from an attributed label to a regular label, moved it, changed its text, and changed the background colour of the view for good measure. We know we’re going to be looking at XML here, but that wasn’t a problem before.

And some of the XML here isn’t so bad either.

But it’s clear that as soon as you make more than one simple change you’ve got a problem, and you could easily waste a lot of time dealing with it.

The order of the XML has changed significantly between the versions, and the merge tool doesn’t seem to be too smart at highlighting which parts are the same. For example, I didn’t touch the table view in either revision, but Xcode highlighted it as removed on one side and added back in again in the middle of a bunch of other stuff later on.

It ended up only taking a few minutes to figure this example out and get to a version that made sense and included both sets of changes, but I was hand-editing the XML, and that is dangerous. It’s clear that if you make more than a few changes and keep them to yourself for too long, you could end up in a bad way pretty quickly.

Xcode as git client you say?

This might be more of a personal preference, and if you want to rebase rather than merge it’s not an option (for now at least), but it seems you’re less likely to screw up the storyboard XML if you don’t let anything other than Xcode touch it.

A conclusion, for now

After experimenting a little with this test project I’m happy that we can edit our storyboards simultaneously when we absolutely have to, but that shouldn’t stop us planning things out so it rarely happens. We’ll be sticking to this list (the same one as above):

  • Everyone on the same build of Xcode
  • Multiple storyboards
  • Use Nibs for custom views
  • One person owns the storyboard setup / decides granularity
  • Think about which storyboards are involved when assigning tasks
  • Merge storyboards often

Who has two thumbs and loves storyboards now? John McClane

Drawing Physics with SpriteKit

There are plenty of games out there with this basic mechanic already, but I wanted to see if it could be done easily using SpriteKit. Spoiler alert: it can.

Shapes being drawn and then becoming part of a physics simulation

The Code

It’s on GitHub, so knock yourself out.

I make use of some handy dandy categories on UIBezierPath made by other people; they’re all included in the project.

How it Works

We’re combining UIKit and SpriteKit here, so we’re layering a transparent UIView on top of an SKView.
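The post doesn’t show how that layering is wired up; here’s one way it might be done in the view controller (a sketch only: the SimpleDrawingView class name and its delegate property are assumptions, not the project’s actual code):

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Hypothetical drawing view layered over the SKView (self.view).
    // A clear background lets the SpriteKit content show through.
    SimpleDrawingView *drawingView = [[SimpleDrawingView alloc] initWithFrame:self.view.bounds];
    drawingView.backgroundColor = [UIColor clearColor];
    drawingView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
    drawingView.delegate = self; // we'll receive finished paths via the protocol below
    [self.view addSubview:drawingView];
}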

The SKView presents a single scene, which will contain our shapes and has a bounding static physics body to stop them escaping. The view controller sets up the scene in standard fashion.

- (void)viewWillLayoutSubviews
{
    [super viewWillLayoutSubviews];

    // viewWillLayoutSubviews can be called more than once, so only create and
    // present the scene the first time through
    if(scene == nil)
    {
        scene = [[DropShapeScene alloc] initWithSize:self.view.bounds.size];
        scene.scaleMode = SKSceneScaleModeAspectFill;
        SKView *spriteView = (SKView *) self.view;
        [spriteView presentScene:scene];
    }
}
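The scene subclass itself isn’t listed in the post, but given the description above a minimal sketch of DropShapeScene might look like this (the bounding body uses bodyWithEdgeLoopFromRect:, one of the options discussed further down):

@interface DropShapeScene : SKScene
@end

@implementation DropShapeScene

- (void)didMoveToView:(SKView *)view
{
    // An edge loop around the scene's frame keeps the shapes from escaping.
    // Edge-based bodies are static, so the boundary itself never moves.
    self.physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:self.frame];
}

@end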

We have a very simple UIView subclass that sits on top providing very basic drawing functionality: it handles drawing a single path, and once the drawing ends it passes the path to its delegate and forgets about it. The drawing is done similarly to my previous post; here’s the delegate protocol.

@protocol SimplePathDrawingDelegate <NSObject>
-(void)drawingViewCreatedPath:(UIBezierPath *)path;
@end
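The drawing view itself isn’t listed either, but given that description the hand-off probably looks something like this (a sketch: the currentPath ivar and the delegate property name are guesses):

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Hand the finished path to the delegate, then forget about it
    [self.delegate drawingViewCreatedPath:currentPath];
    currentPath = nil;
    [self setNeedsDisplay];
}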

We’ll let the view controller be the delegate, and that’s where we do the interesting stuff once it gets the drawn path.

-(void)drawingViewCreatedPath:(UIBezierPath *)path
{
    CGRect pathBounds = CGPathGetPathBoundingBox(path.CGPath);
    
    // Render the stroked path into an image and use it as the sprite's texture
    UIImage *image = [path strokeImageWithColor:[UIColor greenColor]];
    SKTexture *shapeTexture = [SKTexture textureWithImage:image];
    SKSpriteNode *shapeSprite = [SKSpriteNode spriteNodeWithTexture:shapeTexture size:pathBounds.size];
    
    // Centre the sprite over the drawn path, flipping y because UIKit's origin is
    // top-left while SpriteKit's is bottom-left
    shapeSprite.position = CGPointMake(pathBounds.origin.x + (pathBounds.size.width/2.0), scene.frame.size.height - pathBounds.origin.y - (pathBounds.size.height/2.0));
    
    shapeSprite.physicsBody = [SKPhysicsBody bodyWithConvexHullFromPath:path];
    shapeSprite.physicsBody.dynamic = YES;
    [scene addChild:shapeSprite];
}

We take the drawn line on a journey from path, to image, to a texture that is applied to a sprite. That part is pretty straightforward; the trickier bit is using the path to create a physics body.

SKPhysicsBody gives us a number of options for creating physics bodies:

+ bodyWithCircleOfRadius:
+ bodyWithRectangleOfSize:
+ bodyWithPolygonFromPath:
+ bodyWithEdgeLoopFromRect:
+ bodyWithEdgeFromPoint:toPoint:
+ bodyWithEdgeLoopFromPath:
+ bodyWithEdgeChainFromPath:

A few of those will take a path and give us a body. Perfect, right? Except on closer inspection only one of the path-based options creates a body that can be dynamic, and that one, bodyWithPolygonFromPath:, comes with a caveat:

A convex polygonal path with counterclockwise winding and no self intersections.

Sadly, no real user is going to enjoy being restricted to drawing convex, counterclockwise polygons with no self intersections.

Additionally, SpriteKit only lets us have bodies with 12 or fewer sides!
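For a shape that does meet those requirements the API is simple enough. Here’s a quick, hand-rolled example (not from the project) that builds a triangular body; the path points are specified relative to the node’s origin:

// A small convex triangle, wound counterclockwise, centred on the node's origin
CGMutablePathRef trianglePath = CGPathCreateMutable();
CGPathMoveToPoint(trianglePath, NULL, -25, -25);
CGPathAddLineToPoint(trianglePath, NULL, 25, -25);
CGPathAddLineToPoint(trianglePath, NULL, 0, 25);
CGPathCloseSubpath(trianglePath);

SKSpriteNode *triangleSprite = [SKSpriteNode spriteNodeWithColor:[UIColor greenColor] size:CGSizeMake(50, 50)];
triangleSprite.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:trianglePath];
triangleSprite.physicsBody.dynamic = YES;
CGPathRelease(trianglePath);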

There are a few approaches we could take to get around these restrictions, such as joining multiple physics bodies together, or dropping down to Box2D directly to avoid the limit on body vertices. Instead, we’ll build a convex hull from the points that make up the path, and make an SKPhysicsBody category to do it for us.

I won’t list the full code here; you can download the project to have a look, but here’s what it does (I use some existing categories on UIBezierPath to help out, and got a convex hull implementation online too; they’re all included in the project). There’s also a rough sketch of the category method after the list below.

  • Get the points from the path
  • Order the points for the convex hull algorithm
  • Get the convex hull
  • While there are too many points in the hull, smooth it using increasing tolerance (removing points that make the smallest angles).
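To give a feel for the shape of that category, here’s a rough sketch. This is not the project’s actual implementation: pointsFromPath and convexHull are hypothetical helpers standing in for the UIBezierPath categories and the hull code mentioned above, the point-count smoothing is skipped, and the y-axis flip between UIKit and SpriteKit coordinates is glossed over.

// Sketch of an SKPhysicsBody (ConvexHull) category method, under the assumptions above
+ (SKPhysicsBody *)bodyWithConvexHullFromPath:(UIBezierPath *)path
{
    NSArray *points = pointsFromPath(path);   // steps 1 & 2: extract and order the points
    NSArray *hull = convexHull(points);       // step 3: compute the hull (counterclockwise)
    // step 4 (smoothing the hull down to 12 or fewer points) is omitted here

    // Physics body paths are relative to the node's centre, so recentre the hull
    // on the path's bounding box. (The real category also has to flip the y axis,
    // since UIKit's origin is top-left and SpriteKit's is bottom-left.)
    CGRect bounds = CGPathGetPathBoundingBox(path.CGPath);
    CGPoint centre = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));

    CGMutablePathRef hullPath = CGPathCreateMutable();
    for (NSUInteger i = 0; i < [hull count]; i++)
    {
        CGPoint p = [hull[i] CGPointValue];
        if (i == 0)
            CGPathMoveToPoint(hullPath, NULL, p.x - centre.x, p.y - centre.y);
        else
            CGPathAddLineToPoint(hullPath, NULL, p.x - centre.x, p.y - centre.y);
    }
    CGPathCloseSubpath(hullPath);

    SKPhysicsBody *body = [SKPhysicsBody bodyWithPolygonFromPath:hullPath];
    CGPathRelease(hullPath);
    return body;
}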

And that’s all there is to it. The results are pretty nice for most shapes; if you wanted to get started on a physics drawing game you wouldn’t need much more than the SKPhysicsBody (ConvexHull) category.

Fun with UIBezierPath and CAShapeLayer

This is a quick prototype for a fun drawing tool - as you drag your finger across the canvas the line grows branches which sprout leaves. The branches are randomly generated within certain parameters and animate on while you draw the main line.

A line is drawn, with branches automatically being added along its path

Yes, the leaves are very realistic looking, thank you.

The Code

It’s all on GitHub, feel free to use and improve!

It’s not about the line drawing

The line drawing is very basic - simply adding points to a UIBezierPath. I keep an array of the curves and draw them all in drawRect:. I don’t care about smooth curves or different textures or performance but I’m sure this will work with more sophisticated drawing code too. Most of the drawing code I’ve shipped has been OpenGL based, so it was nice to see how good the results are when keeping things super simple with UIKit / CoreGraphics.

How it Works

Let’s start with the basic line and layer on the other bits. It starts with a pan gesture recogniser in our UIView subclass.

UIPanGestureRecognizer *pgr = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];

[self addGestureRecognizer:pgr];

Now in the action selector we create new paths when a pan begins, and add to the current path when a pan changes.

-(void)handlePan:(UIPanGestureRecognizer *)gestureRecognizer
{
    if(gestureRecognizer.state == UIGestureRecognizerStateBegan)
    {
        UIBezierPath *newVineLine = [[UIBezierPath alloc] init];
        [newVineLine moveToPoint:[gestureRecognizer locationInView:self]];
        [vineLines addObject:newVineLine];
    }
    else if(gestureRecognizer.state == UIGestureRecognizerStateChanged)
    {
        UIBezierPath *currentLine = [vineLines lastObject];
        [currentLine addLineToPoint:[gestureRecognizer locationInView:self]];
    }

    [self setNeedsDisplay];
}

You can see we’ve a mutable array called vineLines that holds our paths. That means to draw them we can simply iterate over it like so:

- (void)drawRect:(CGRect)rect
{
    for(VineLine *vineLine in vineLines)
    {
        [vineLine stroke];
    }
}

That’s the basic line drawing done; now let’s add some branches. Again they’re UIBezierPaths. Every so often we want to generate a random path (the branch) and add it to the user-drawn path (the vine). There are a couple of options for this: we could let the view or view controller keep track of the branches and when to draw them, but instead let’s encapsulate all that in a VineLine, a subclass of UIBezierPath. (In the pan handler snippet above, just swap out UIBezierPath for our new subclass.)

@interface VineLine : UIBezierPath

@property(nonatomic, retain, readonly)NSMutableArray *branchLines;

@end

Rather than subclassing NSObject and giving it a property for our path, we’re subclassing UIBezierPath itself and overriding addLineToPoint:, adding functionality on top of the existing method to decide when to create a branch and add it to the branchLines array. Note that VineBranch is just another UIBezierPath subclass that can create random paths with leaves on the end. All we’re doing here is checking whether the point we’re adding is far enough away from the last branch (or the beginning of the line), and if it is, creating a new random branch and storing it in the array of branches.

-(void)addLineToPoint:(CGPoint)point
{
    [super addLineToPoint:point];
    
    float distanceFromPrevious;
    
    if([_branchLines count] == 0)
    {
        distanceFromPrevious = hypotf(point.x - firstPoint.x, point.y - firstPoint.y);
    }
    else
    {
        distanceFromPrevious = hypotf(point.x - lastBranchPosition.x, point.y - lastBranchPosition.y);
    }
    
    if(distanceFromPrevious > _minBranchSeperation)
    {
        VineBranch *newBranch = [[VineBranch alloc] initWithRandomPathFromPoint:point maxLength:_maxBranchLength leafSize:_leafSize];
        newBranch.lineWidth = self.lineWidth / 2.0;
        
        [_branchLines addObject:newBranch];
        lastBranchPosition = point;
    }
}

If we modify our drawRect: from before we can now draw the branches and leaves as well as the main line.

- (void)drawRect:(CGRect)rect
{
    [vineColor setStroke];
    
    for(VineLine *vineLine in vineLines)
    {
        [vineLine stroke];

        for(UIBezierPath *branchLine in vineLine.branchLines)
        {
            [branchLine stroke];
        }
    }
}

And we’re done!

Animating The Branches

That’s where CAShapeLayer comes in. CAShapeLayer has a number of animatable properties, and animating strokeEnd is great for drawing a path onto the screen. So we can remove the code that iterates through the list of branches and strokes them; instead, every time a branch is created we create a layer for it and animate the stroke.

-(void)vineLineDidCreateBranch:(VineBranch *)branchPath
{
    CAShapeLayer *branchShape = [CAShapeLayer layer];
    branchShape.path = branchPath.CGPath;
    branchShape.fillColor = [UIColor clearColor].CGColor;
    branchShape.strokeColor = vineColor.CGColor;
    branchShape.lineWidth = branchPath.lineWidth;
    
    [self.layer addSublayer:branchShape];
    
    CABasicAnimation *branchGrowAnimation = [CABasicAnimation animationWithKeyPath:@"strokeEnd"];
    branchGrowAnimation.duration = 1.0;
    branchGrowAnimation.fromValue = [NSNumber numberWithFloat:0.0];
    branchGrowAnimation.toValue = [NSNumber numberWithFloat:1.0];
    [branchShape addAnimation:branchGrowAnimation forKey:@"strokeEnd"];
}

We can make our view the VineLine’s delegate and, in the addLineToPoint: override from above, add a call notifying the delegate of each new branch.
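The post doesn’t show that delegate protocol, but based on the callback above it presumably looks something like this (the protocol name and the branchDelegate property are guesses, not the project’s actual names):

@class VineBranch;

@protocol VineLineDelegate <NSObject>
-(void)vineLineDidCreateBranch:(VineBranch *)branchPath;
@end

// Added to the VineLine interface:
@property(nonatomic, weak) id<VineLineDelegate> branchDelegate;

// And at the end of addLineToPoint:, right after the new branch is stored:
[self.branchDelegate vineLineDidCreateBranch:newBranch];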

Random Paths

Initially I tried to be clever, looking at which way the line was curving and attaching branches that seemed natural, but that wasn’t looking too good. Eventually I just started throwing random numbers at it and things started looking better (this probably should have been obvious to me). So what we’re doing here is picking a random point close to the main line (as defined by _maxLength) and adding a curve to that point; the control points are picked near that end point so we don’t end up with curves that are too crazy. Finally, we add the leaf, which for now is just a circle.

-(id)initWithRandomPathFromPoint:(CGPoint)startPoint maxLength:(float)maxLength leafSize:(float)leafSize
{
    self = [super init];
    if(self)
    {
        [self moveToPoint:startPoint];
        
        // Pick a random end point within ±maxLength of the start, then two control
        // points near that end point to keep the curve from getting too wild
        CGPoint branchEnd = CGPointMake(startPoint.x + arc4random_uniform(maxLength * 2) - maxLength,startPoint.y + arc4random_uniform(maxLength * 2) - maxLength);
        CGPoint branchControl1 = CGPointMake(branchEnd.x + arc4random_uniform(maxLength) - maxLength / 2,branchEnd.y + arc4random_uniform(maxLength) - maxLength / 2);
        CGPoint branchControl2 = CGPointMake(branchEnd.x + arc4random_uniform(maxLength / 2) - maxLength / 4,branchEnd.y + arc4random_uniform(maxLength / 2) - maxLength / 4);
        
        [self addCurveToPoint:branchEnd controlPoint1:branchControl1 controlPoint2:branchControl2];
        
        UIBezierPath* leafPath = [UIBezierPath bezierPathWithOvalInRect: CGRectMake(branchEnd.x - leafSize/2.0, branchEnd.y - leafSize/2.0, leafSize, leafSize)];
        
        [self appendPath:leafPath];
    }
    return self;
}