This is part 2 of what I think will end up as four parts. It might be a bit of a rehash of the first part, but there I skimmed lightly over why it is that I am so fond of make compared to most other build systems, so here I will elaborate with some examples.
Part 3 will be a general post about declarative systems, not directly related to build automation. Part 4 should be about auto-generating the make files (which is part of the motivation for writing about declarative systems first).
The original “insight” of make is that whatever we want executed can be considered a goal where:

1. the goal can be represented as a file on disk;
2. the file’s modification date tells us when the goal was last brought up-to-date;
3. the goal is out-of-date when one of its dependencies is newer than the goal itself;
4. the goal can be brought up-to-date by running a shell command.
This is all there is to it. By linking the goals (via dependencies) we get the aforementioned DAG, and with this simple data structure we can model all our processes as long as the four criteria above are met, which they generally are, at least on unix where “everything is a file” :)
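As a minimal sketch of such a graph (file names hypothetical), consider a two-goal makefile:

```makefile
# hypothetical example: `app` is out-of-date whenever `main.o` is newer
app: main.o
	cc -o app main.o        # shell command that brings the goal up-to-date

# the intermediate goal follows the same pattern
main.o: main.c
	cc -c -o main.o main.c
```

Running make twice in a row does nothing the second time: the file dates show every goal is already up-to-date, so no commands are executed.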
One of the reasons I like to view the process as a directed graph is that it becomes easy to see how we need to “patch” it to add our own actions. Yes, I said patch, because we can actually do that, and quite easily, even if we can’t edit the original make file.
Imagine we are building Lunettes (a new UI for the VLC media player) which depends on VLCKit.
Considering the graph, there must be some goal of Lunettes that depends on VLCKit; in Makefile syntax this could simply be:
```makefile
APP_DST = Lunettes.app/Contents

$(APP_DST)/MacOS/Lunettes: $(APP_DST)/Frameworks/VLCKit.framework
```
This syntax establishes a connection (dependency) between the executable and the framework. Here I made it depend on the framework’s root directory; of course it should really depend on the actual binary inside the framework (but then my box will overflow).
What this means is that each time the framework is updated, the executable is considered out-of-date and as a result, will be relinked (with the updated framework).
The reason I mentioned the above link between the application and its framework is that this is where we want to insert new nodes (goals) in the graph, in case we want to add unit tests to the VLCKit framework.
So the scenario is this: we write a bunch of unit tests for the VLCKit framework and we want these to run every single time the framework is updated, not only when we feel like it. At the same time, since we probably spend most of our time developing the application itself, we do not want the tests to run on every build.
What we do is mind-bogglingly simple: we introduce a file to represent the unit test goal, and we touch this file each time the tests have run successfully:
```makefile
vlckit_test: $(APP_DST)/Frameworks/VLCKit.framework
	if «run test»; then touch '$@'; else false; fi
```
We can now run make vlckit_test to execute the tests, and if they have already run (successfully) since the last build of the framework, make will just tell us that the goal is up-to-date.
To avoid running this manually, we add the following to our make file:
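That addition was presumably the dependency edge that hooks the test goal into the graph; a sketch, reusing the executable goal from earlier:

```makefile
# make the application goal depend on the unit test goal,
# so the tests must pass before the executable is (re)linked
$(APP_DST)/MacOS/Lunettes: vlckit_test
```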
Now our application depends on having successfully run the unit tests for the framework it uses.
This is all done without touching any of the existing build files, we simply extend the build graph with our new actions.
And the result is IMO beautiful in the sense that the unit tests are only run when we actually change the framework, and failed unit tests will cause the entire build to fail.
As a reader exercise, go download the actual build files of the Lunettes / VLCKit project (much of it is in Xcode) and add something similar. What you will end up with is Xcode’s answer to the problem of extensibility: the “custom shell script target”, which will run every single time you re-build your target, regardless of whether there actually is a need for it.
This might be ok if you only have one thing that falls outside what the system was designed to handle, but when you have half a dozen of these…
Another common build action these days is automated build numbers. Say we are going to do nightly builds of Lunettes and want to put the git revision into the Info.plist.
You remember how everything is a file on unix? To my great delight, git conforms quite well to this paradigm, and we can find the current revision in .git/HEAD, although this file contains a reference to the symbolic head, which likely is .git/refs/heads/master.
For simplicity let us just assume we always stay on master (and we don’t create packs for the heads). The file is updated each time we make a commit, bumping its date, so all we need to do is have our Info.plist depend on .git/refs/heads/master and let the action that brings Info.plist up-to-date insert the current revision as the value for the version key.
Again make’s simple axiomatic system makes it a breeze to do this, and to “do it right”, that is, in a way that limits computation to the theoretical minimum, rather than updating Info.plist on every single build or requiring it to be updated manually.
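A sketch of such a rule, assuming a hypothetical Info.plist.in template with a @REVISION@ placeholder (both names are my invention, not part of the actual project):

```makefile
# hypothetical: regenerate Info.plist only when the template or the
# master head changes, substituting the abbreviated commit hash
$(APP_DST)/Info.plist: Info.plist.in .git/refs/heads/master
	sed -e "s/@REVISION@/$$(git rev-parse --short HEAD)/" $< > '$@'
```

Here `$<` is the first prerequisite (the template) and `$@` the goal being built, so the rule re-runs exactly when a commit is made or the template is edited.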
I have used Lunettes as an example in this post, so let me continue and link to the build instructions.
There you see several steps you have to perform in order to get a successful build; additionally, if you look in the frameworks directory of Lunettes, you’ll find that it contains deep copies of frameworks from other projects.
Since every single person who wants to build this has to go through these steps, we should incorporate them in the build process, and that is actually quite simple (had this project been based on make files). For example, we need to clone and build the VLC project, which can be done using:
```makefile
vendor/vlc:
	git clone git://git.videolan.org/vlc.git '$@'
	$(MAKE) -sC '$@'
```
So if there is no vendor/vlc then we do a git clone and call make afterwards. In theory we could also include the make file from this project so that we get fine-grained dependencies, but since this is not our project, we do not have control over its make file and can’t fix any potential clashes, so it’s safer to simply call make recursively on the checked-out project.
We need to set up a link between Lunettes and vendor/vlc so that the checkout will actually be done (without us having to run make vendor/vlc manually), but that is just a single line in our make file.
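That single line would presumably be another dependency edge, reusing the executable goal from earlier (so still a sketch, not the project’s actual make file):

```makefile
# hypothetical: building the application first ensures vendor/vlc
# has been cloned and built
$(APP_DST)/MacOS/Lunettes: vendor/vlc
```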
If it isn’t clear by now, make files are what drive my own build process when I build TextMate. I run the build from within TextMate itself, and the goal I ask to build relaunches TextMate on a successful build.
This isn’t always desired, as I am actually using the application when it happens, so what I have done is rather simple and mimics the unit test injection shown above.
Let me start by quoting from my make file:
```makefile
$(APP_NAME)/run: ask_to_relaunch

ask_to_relaunch: $(APP_PATH)/Contents/MacOS/$(APP_NAME)
	@[[ $$("$$DIALOG" alert …|pl) = *"buttonClicked = 0"* ]]

.PHONY: ask_to_relaunch
```
This introduces a new goal (ask_to_relaunch), which is declared “phony” so it is not backed by a file on disk (and therefore always considered outdated). It depends on the actual application binary, so it will never be updated before the application has been fully built.
I use phony goals like «app»/debug and similar. When I build from within TextMate it is the «app»/run goal that I build, and I have set this to depend on my (phony) ask_to_relaunch goal.
As this goal is always outdated, it will run the (shell) command to bring it up-to-date. The shell command opens a dialog (via the "$DIALOG" alert system) which asks whether or not to relaunch. If the user cancels the dialog, the shell command exits with a non-zero status; make will treat that as having failed to update the ask_to_relaunch goal, which in turn causes the «app»/run goal to never be updated (i.e. never have its own shell commands executed), as one of its dependencies failed.
Simple yet effective.
This has just been a bunch of examples. What I hope to have shown is how simple the basic concept of make is, how easy it is to extend an existing build process, and how flexible make is in what it can actually do for us.
Of the many build systems I have looked at, I don’t see anything which has this simple axiomatic definition and is also truly versatile. A lot of build systems have been created because make files are ugly/complex/arcane/etc., and I agree with that sentiment, but it seems like many of the replacements are either systems hardcoded for specific purposes, which simplify the boilerplate but make them inflexible, or actual programming languages, which makes the build script only marginally better than a custom script. For example some, but not all, of the systems which take the “programming language route” lack the ability to execute tasks in parallel, which, with 16 cores and counting, is a pretty fatal design limitation.