A Mixture of Musings

How Apple lost its Scientists

Or how Jeremy Clarkson helped me understand why Apple’s Macbook Pro was such a disappointment.

As a machine-learning professional I can’t avoid deep-learning, and since 14 of the 16 deep-learning toolkits are NVidia-only, I need a machine with an NVidia GPU to do my job. Apple stopped selling such computers back in 2014. Therefore, when Apple released its new Macbook Pros with ATI cards, I joined the chorus of dismay, writing, among many other things:

What’s astonishing is Apple built a pro computer completely around GPUs, the Mac Pro, but chose an ATI GPU. Did they not talk to any end-users?

Recently, Jeremy Clarkson helped me realise where Apple’s gone wrong.

First, let’s consider Apple’s reasoning. They correctly anticipated the need for scientific computing on the GPU, but 80% of the computers they sell are iPhones and iPads, for which NVidia sells no suitable chipset. Therefore they couldn’t use CUDA, NVidia’s proprietary maths library1.

So instead they decided to promote a cross-platform API: OpenCL. NVidia was already number one, so they asked the number two – AMD/ATI – to be a partner.

ATI GPUs have the further advantage of a much lower power draw than NVidia GPUs, which made them a great choice for graphics cards in consumer laptops.

For a prosumer laptop, one can get a lot more OpenCL power by just adding a faster ATI card. And one can make a minimalist desktop like the iMac by reusing a lot of laptop components.

At this stage you’re using ATI the whole way up, so there are significant efficiencies of scale in just standardising across the line, and putting ATI GPUs in the only remaining computer, your professional computer for scientific users, the Mac Pro.

The only problem is you can no more do scientific computing on the Mac Pro than you can write computer games on a machine unsupported by either the Unreal or Unity game engines.

And this is where Jeremy Clarkson comes in.

In the Censored to Censored episode of the Grand Tour, Hammond, May and Clarkson reviewed three SUVs, the Jaguar F-Pace, the Bentley Bentayga and a Range Rover. It all ended with a race around the track, which Clarkson won by cheating: he simply left the dirt road and went cross country.

Finally, it was the turn of the best car here. However I had no intention of relying on my supreme driving skills.

You see the thing is, the Jaguar and the Bentley were designed as road cars and then given some off-road ability

Whereas the Range Rover was designed as an off-road car, and then given some ability to work on the road

This car senses what sort of terrain it’s driving over and then engages or disengages the differentials accordingly.

You could not come up here in the Bentley or the Jaguar. Look at that, look at it! What a machine you are!

This epitomises the difference between the prosumer and the professional. The Bentley and Jaguar are big enough and burly enough to work better on dirt tracks than a standard car, but if you take them off-road, they’ll get stuck instantly. For people whose jobs require off-road capability, only the Range Rover makes sense, because only it has the odd, peculiar things a road-car would never have: adjustable suspension; front & rear electronic locking differentials; or extremely low gearings.

Great professional hardware – motoring or computing – is created when one starts from needs and works back to a chassis. You will never succeed starting with a chassis and trying to scale up. Form has to follow function, not dictate it.

Yet this is exactly what Apple’s been doing for the last five years. Final Cut X had a great new UI and was an improvement on iMovie, but on release it lacked the peculiar features professional video editors need. Photos.app is better than iPhoto, but it lacks all the editing and curation features Lightroom has. The Macbook Pro is faster than a Macbook, but lacks the powerful NVidia GPU necessary for scientific computing or game development. The iMacs and Mac Pros are faster than Macbook Pros, but feature the same hardware trade-offs, and so are similarly disqualified from many professions.

Apple did once ship great professional tools. The 2010 15" Macbook Pros had an NVidia graphics card and a Unix OS, but Apple gave them the easy ergonomics of a consumer laptop by switching to a low-power integrated GPU whenever possible to save battery, and by providing a macOS shell to make Unix easy. That was a great professional laptop.

In the last five years, Apple has moved away from this philosophy, selling F-Paces instead of Range Rovers. For a while it was safe to do so, since the PC industry was still selling the computing equivalent of Land Rover Defenders: bulky ungainly things that were tolerable at work and a chore at home.

Unfortunately for Apple, Microsoft and Dell are now selling the computing equivalents of Range Rovers in their Surface and XPS ranges, and Apple is in the invidious position where it must sell dongles at a discount in order to lure professionals into purchasing its prosumer PCs. I suspect many won’t: in my case I’ll need a new computer this year, and Apple isn’t selling anything I can use.


  1. If you’re unfamiliar with the way math libraries are structured, here’s an analogy to graphics. At the top end you have math and deep-learning toolkits like Theano and Caffe which are like game engines such as Unreal or Unity. These are built on BLAS and Lapack libraries, which are like OpenGL and GLUT. CPU vendors often release libraries following the BLAS/Lapack API to expose their chips’ features (e.g. the Intel MKL). Once one starts doing maths on the GPU, there is one additional layer: NVidia CUDA or OpenCL, which are analogous to Metal or Vulkan.

Poached Frogs and Professional Mac Users

The old adage says that if you drop a frog in boiling water he’ll jump out, but if you put him in cold water and slowly heat it up, he’ll happily sit there till he’s poached to fatal perfection.

It’s an awful metaphor. Our English forebears had some grim imaginations.1

Anyway, if one were to ask the frog if he was happy with his new situation, he’d probably say no. Why? Well, the frog would notice it was a bit warmer, but it had been getting warm for a while, so he’d discount that. What he would see was that suddenly there were a lot of bubbles around, and – looking for some tangible detail – the frog would decide it was the bubbles that were to blame for his discomfort.

This is the problem of criticism. People know when they don’t like something, but they often have trouble articulating the root cause of their dislike. Instead they latch onto the most obvious, tangible difference. Film Critic Hulk2 has discussed this in the past in the context of movie reviews, where people may focus on a tangible detail, such as the silly emo-Tobey-Maguire scene in Spiderman 3, instead of the broader issue, which in that case was the fact that the movie didn’t find a consistent tone in which such a scene could work.

Something similar is happening with the Macbook Pro and its critics. They’re blinded by bubbles – ports and RAM – and haven’t taken the long view necessary to see the true cause of their unease.

Gently Poached Professionals

Things have been getting slowly, inexorably, and continuously worse for Apple’s professional customers over the last five years.

In 2005 Apple suggested professional photographers should use its new app, Aperture. In 2014 Apple discontinued it. Out of the upgrade cycle, users had to pay full whack for Lightroom and re-train.

Final Cut Pro, another app from Apple, stagnated for several years in the noughties, with few updates and no 64-bit support. So it was a great relief when Apple released Final Cut X in 2011. Professionals’ relief turned to ashes when they realised Apple had dropped all the awkward pernickety little features necessary to get actual work done. Apple promised plugins would address this, but as they had not forewarned developers, plugins were slow to arrive. During this time Final Cut’s users struggled in a way Adobe After Effects’ users didn’t.

Apple has abandoned its scientific computing users. NVidia has massively invested in maths on the GPU – both in software and hardware – while ATI has bleated about open standards and spent a pittance. The result is that of the 16 deep-learning toolkits 14 support NVidia while just two support ATI3. A similar situation exists with general-purpose linear-algebra toolkits. Apple stopped selling computers, of any kind, with NVidia cards in 2014.

This particularly affects me, as I do machine-learning research. There is no machine that Apple sells that I can justify buying for professional use.

What’s astonishing is Apple built a pro computer completely around GPUs, the Mac Pro, but chose an ATI GPU. Did they not talk to any end-users?

Indeed the worst affected are corporate buyers of the Mac Pro, which has seen no update in three years. Not only does it have frequently faulty GPUs and no upgradability; but there’s no hope of any new model in the future, and even if a new model were to arrive, there’s no second-hand market for such old hardware. Like Aperture users, Mac Pro owners have to write off the entire cost and buy something completely new, out of the upgrade cycle4.

This was the background to the Apple event on October 27th, and to the response that followed.

Prosumers and Professionals

Just before Apple’s event, Microsoft had one of its own. They launched two computers – a laptop and a desktop – with touch-screens and styli. The laptop’s screen could be detached to form a tablet; the desktop – which looked like an iMac – could be arranged into an easel. This desktop optionally came with a touch-sensitive dial for fine-grained adjustments. Thanks to their work with Adobe, Microsoft could use Photoshop and Illustrator to demonstrate how well these new computers worked.

These were genuinely novel.5

By contrast Apple announced a slightly faster, slightly smaller laptop. It had largely the same form factor, but with reduced functionality to support its smaller size. It also had a touchbar as a new input-method, and it was very evident that in execution, and even ambition, it was more limited than the dial.

In the thinking that saw both Pixelmator and Photoshop take equal billing in the launch, in Final Cut X’s absent features, in the switch to ATI and the rationale for the 16GB limit on the Macbook Pro, we find the root cause of Apple’s professional malaise.

Apple has conflated professionals – whose needs are awkward and particular – with prosumers who just want a little more of everything. This has coincided with, and may be a result of, a general lack of interest in developing professional software and hardware.

Consider the GPU for instance. ATI GPUs have a far lower power draw than NVidia GPUs. Since casual users value portability, an ATI GPU makes sense. For professionals, only an NVidia GPU is good enough for scientific computing, VR, or game development.

Thus a professional computer should ship with NVidia GPUs irrespective of the adverse effect on battery life and portability. Portability is a secondary concern: professionals work in offices, not coffee shops.

This too is why many professionals would be happy if the most expensive Macbook Pro sacrificed battery-life to raise the RAM ceiling.

Fundamentally, if you try to scale up a machine aimed at casual users, you’ll miss the things professionals need to get work done, since professional needs are often slightly esoteric. Instead one should start with what professionals need, then work down to a chassis.

Modern Apple doesn’t seem to know what professionals need. Microsoft invited several artists to discuss the Surface Studio during its design; film-makers were excluded from the development of Final Cut X.

What’s worse is the laptop-isation extends to the entire line. Having chosen ATI as the best supplier of GPUs for portables, Apple standardised on ATI as a supplier, and now includes mediocre ATI GPUs in desktops and the Mac Pro. These computers consequently fail to meet the needs of gamers, VR developers, or scientific-computing professionals.

Why Bother at All?

As things stand, there is a significant business risk in choosing to purchase professional software or hardware from Apple rather than from Adobe or PC makers. The best possible computer that one can use for Photoshop or Illustrator is a Surface Studio. Apple sells no computer suitable for professionals in machine-learning or scientific computing.

But so what? Macs make up only 12% of Apple’s revenue, and creative professionals and scientists account for a small minority of that 12% in turn.

The reason is Beats Audio.

Apple, you’ll recall, paid $3bn for a maker of mediocre headphones with a second-best streaming service. I couldn’t understand it at first, but in the end I could come up with only one reason: the importance of the humanities to Apple’s brand.

The iPod, and the iTunes store, completely changed Apple. What was once a niche PC maker suddenly became a maker of fashionable electronic life-style accessories. This required that certain sense of cool that a love of music can provide.

I think Apple bought Beats to reaffirm its commitment to music, and through it, fashion and lifestyle.

The public, conspicuous use of Apple hardware by professionals in art, publishing, music and science similarly adds to Apple’s brand. In particular, it’s an affirmation of Apple’s reputation for creating best-of-class hardware, and of that hardware’s potential to engender successful professional careers.

Apple is now at risk of losing these users, and by extension, damaging its brand. By upscaling casual computers for prosumers, and blithely forgetting to cater to the particular needs of its professional customers, Apple risks losing its professionals altogether.

If that were to happen, Apple would return to where it was in the nineties: a purveyor of undeniably fashionable computers for people who only need to pretend to work.


  1. And quite possibly some terrible eating habits.

  2. Ostensibly the Incredible Hulk writing film reviews, though they occasionally break character to suggest they’re a professional screen-writer in Hollywood pretending to be the Incredible Hulk writing film reviews. Then again they may well be a randomer in a basement pretending to be a screen-writer pretending to be the Hulk writing movie reviews. The point is, they write good movie reviews.

  3. Of course plugins and forks exist to provide partial OpenCL support for some of these, but you would be foolish to bet your professional career on such unsupported software.

  4. In a dismal irony, the Mac Pro, championed as the computer that would reaffirm Apple’s commitment to its professionals, has instead become a monument to their neglect. Not only has Apple not updated the computer, they even forgot to update its product webpage which, until the furore this month, had embarrassingly touted how well the Mac Pro ran Aperture.

  5. Apologists snarkily griped that the stylus wasn’t as good as the Pencil on an iPad Pro, but there is no version of Adobe Photoshop or Illustrator for the iPad Pro. In fact, due to Apple’s curation of the iOS store, there are very few people developing pro apps for iOS at all.

How Did Donald Do it?

So America’s racist then

Not quite. I think it’s more likely that Americans decided to vote for the guy who promised to make them richer, over the woman who promised to inspire them.

But the poor voted for Hillary

If you look at absolute numbers, yes. However if you look at the trends, you’ll see that 16% of voters earning less than $30,000 swapped from Democrat to Republican this election.

But Donald’s plans would make them poorer

This is true. It’s also irrelevant in an environment where even the mainstream news prefers to report on the frivolities of the race rather than the detail of the policies.

What matters is narrative. Donald had a simple proposal: he’d block all imports from abroad, raise tariffs if anyone tried to outsource jobs, and so protect jobs at home.

Then he’d throw out all the immigrants working for nothing, creating more jobs in turn for proper Americans.

But it’s all nonsense

It’s simple and it sounds right, and that’s all that matters.

In particular, this is why people with jobs, particularly union workers in places like Michigan, voted for Trump. It also explains why almost 1-in-3 “Hispanics” voted for Trump as well.

Union workers and legal-immigrants weren’t going to vote for the woman who secretly told Goldman Sachs she wanted free trade and unlimited immigration.

Incidentally, this also explains why Bernie, who also favoured trade-restrictions, tended to do better against Trump in national polls than Hillary.

So there’s no room for honesty in politics?

There is, it’s just hard.

The thing is, Hillary never came out and said “this is how I’ll make you richer”. She had no simple stump speech. She focused on how awful Donald and his voters were, and how inspirational and right she was.

Meanwhile Donald said vote for me and I’ll protect your job and make you richer.

This should have been obvious. Bill Clinton always knew it was “the economy stupid”. The example of Mitt Romney is instructive too: he was nearly blown away by Herman Cain’s 9-9-9 plan. It was a terrible plan1, but Herman was the only one who presented Americans with a simple plan to get rich quick. It almost won him the GOP nomination.

But still, who could possibly vote for someone so awful?

Here are a few numbers about America in 2016.

America is quite a diverse place, and its inhabitants are very different to the staff of most mainstream media.

But more than these “deplorables”, there are people who just don’t care about racism. These are the really dangerous ones, as Edmund Burke eloquently pointed out, and they have always been the majority. Inspiration & justice just don’t feed families the way cash does.

And now we’ve got the world’s most awful president

As bad as Trump is, he’s still much more liberal than the rest of the Republican contenders. For example while he clearly doesn’t respect women, he doesn’t appear to actively resent them: he was the only primary candidate to support Planned Parenthood. Equally while he’s egotistical, he doesn’t believe he’s God’s chosen candidate on earth, like Ted Cruz. And as daft as his economic plan is, so too were many of the other Republicans’ plans (consider, for example, Cruz and Rubio, or what Brownback and Jindal did to Kansas and Louisiana).

Even his support for extremists has precedent: in 2000 George W Bush visited Bob Jones University for an endorsement while it still had a ban on interracial relationships on campus, refused to admit gay students and called the Pope an antichrist. It reluctantly dropped the ban shortly after, but replaced overt hostility with covert dog-whistling, and was still visited (and so implicitly endorsed) by John McCain and Mitt Romney subsequently.

In short, Donald Trump comes from the liberal wing of the Republican party, and his views are very much in tune with a good chunk of those of the US electorate.

So I should be reassured?

If anything no: it means the Senate, Congress and the states – all of which are under GOP control – are unlikely to check Donald’s worst instincts.

Gee, thanks

Yeah, I feel great too…

So let me get this right, you’re saying it’s poor people’s fault

Not quite. Donald did win the far (“alt”) right, gamer-gate vote too.

Trump won voters earning more than $100,000; Hillary won those under $50,000 (the median salary in the US is $55,000). Proper analysis of his primary successes and campaign in the run-up to the election showed that Donald Trump consistently won the vote of prosperous white people, not the working or unemployed poor. Generally the more negative their view of women and minorities, the more voters preferred Trump.

So it’s the fault of union workers and rich racists?

Also people who didn’t bother to vote, roughly 50% of American voters.

So it’s the fault of poor people, racists, and layabouts?

It’s Hillary’s fault too.

Hillary?! But everything she planned would have made people richer!

True, but politics isn’t about being right; neither is it about convincing other people you’re right; politics is about convincing other people that the right thing is worth doing.

As good as she is at policy, Hillary is terrible at politics. It was no accident she trailed not only Barack Obama, but John Edwards, in the 2008 primaries.

She absolutely failed to mobilise votes: if you look at the figures, the number of GOP voters has stayed constant the last three elections, all that happened this election is the number of Democrat voters fell. Voters either stayed at home or voted for third parties.

Hillary could easily have rectified this by putting Elizabeth Warren or Bernie Sanders on the ticket beside her: instead she chose an unthreatening, uninspirational nonentity.

What’s more, like Mitt Romney, she was two-faced. She got caught admitting to it in her Goldman Sachs speech, with her discourse on the need for public and private personas. Her speeches to Goldman Sachs didn’t match her speeches on the stump, and were as fatal to her as Romney’s 47-percent speech was to him.

People thought she was two-faced?

They thought a lot worse. During a trip through Nevada, one of the Economist’s reporters found that Trump supporters believed absolutely in things that were absolutely false (Cached version).

In the modern world the gatekeepers of news have been overwhelmed by a huge number of retailers selling news on demand. Consumers have the opportunity to shop around until they find the news that feels right, in short that confirms their biases. The Economist reporter found most Trump voters were finding their preferred “news” on Facebook.

The state of modern journalism is dire. Good reporters who do the research simply don’t generate content fast enough to get ad-clicks: the money is in being first, not correct. In the modern world the audience forgives errors: worse, it forgets them.

Accusations stick in a way exonerations can never clean away.

The result is an electorate that has never been more misinformed.

So what do we learn from all this?

The lesson of Herman Cain, of Mitt Romney, of Brexit and of Donald Trump, is three-fold.

First the majority of voters don’t care about inspiration: they will vote for the person who most plausibly promises to make them richer.

Secondly many, if not most, people believe that opposing globalisation is a surefire way to get rich.

Thirdly, everyone believes politicians are corrupt and duplicitous, and the longer one stays in politics, the worse one is.

Left-wingers therefore need to find a way of convincing voters that they can make people richer, and present it in a positive way2. Bernie’s approach seemed to resonate, and so provides a good template.

They also need to find a way to sell the electorate on globalisation, which is a hard one, as the negatives (your job is going to China) seem more obvious than the positives (but look how cheap my iPhone is!).

Finally, counterintuitively, they need to choose candidates with minimal experience for leadership. People prefer fresh faces.

Hillary should just have said: “if you earn less than $80,000 a year, I’ll make you richer and Donald will make you poorer. So see how much you earn and vote accordingly.” Everything else – racism, anti-semitism, feminism – was meaningless to the majority of voters.

So what’s going to happen

It’s going to be a bad four years. Americans will end up poorer. Russia and China will probably win some geopolitical battles as America deliberately enfeebles itself abroad. Trade will worsen, and so will the US deficit.

In Europe, Trumpism is already well underway. Poland and Hungary are led by increasingly autocratic governments stripping their citizens of rights and oppressing women, gays and minorities. Marine Le Pen will win a substantial vote in France, and Angela Merkel is at risk from the Pegida movement in Germany. Britain is fatally wounded by the Brexit vote, and in the race between political expediency and national prosperity, it appears politics is winning (I’m a UK resident).

The world in four years will be a smaller, poorer, and more vulnerable place.

And then people will see what’s wrong?

In Britain left-wing politics are lost in the wilderness. Meanwhile right-wing parties are well organised. The same is true in the USA.

Even if people see what’s wrong, there may not be any good alternatives.

Um, so, give up then?

For the last 25 years governments have been dominated by career politicians, people with no normal life experience, whose parties and careers have been funded by minorities of people with an excess of free-time and extreme views, both on the right and the left (the latter concentrated in unions).

The only way to change that is for “ordinary” people with ordinary careers to get more involved in politics than ever they would want.

Basically, we all need to join a political party and change it.


  1. By lowering taxes and entitlements, it would have reduced the amount of money in the pockets of the poorer half of America, and increased borrowing, to pay for a tax break for the richer half of America, who already have a greater share of the nation’s wealth than at any time in the last fifty years

  2. People’s reaction to being called a deplorable isn’t to consider their life-choices, it’s to shout go-to-hell at the judgmental asshole who made the accusation. You can only change someone’s mind once you’ve befriended them.

Associated Types and Haskell

Or how to write a type-class containing a function returning a value of another related type-class.

Let’s assume you’re using type-classes extensively to make your business logic independent of your data-representation. Some may argue whether this is a good idea, but that’s not relevant to what follows. This article explains why you’ll encounter a bug, and how to use the TypeFamilies extension to work around it.

Assume you have a type-class that defines a few functions:

type Position = (Int, Int)

class Player p where
  playerPosition :: p -> Position
  playerMoveTo :: Position -> p -> p
  -- etc. and lots more

Now let’s say this is all part of your game state, which you also define via a type-class:

class GameState g where
  getPlayer :: (Player p) => g -> p
  getMonsterPositions :: g -> [Position]

So far, so good. All of this compiles. Next we create the implementations:

data PlayerData = PlayerData { _pos :: Position, ... }
instance Player PlayerData where
  playerPosition = _pos
  playerMoveTo pos player = player { _pos = pos }
  -- etc..

data GameStateData = GameStateData PlayerData [Position]
instance GameState GameStateData where
  getPlayer           (GameStateData p _) = p
  getMonsterPositions (GameStateData _ mPoses) = mPoses

We want to use this to write a wholly generic function as follows:

checkForCollisions :: GameState s => s -> [Position] -> Bool
checkForCollisions s ps =
  let
    p    = getPlayer s
    pPos = playerPosition p
  in
  pPos `elem` ps

The problem is that none of this compiles!

Polymorphism vs Monomorphism

To someone coming from the imperative object-orientated world, this seems mysterious, as one could trivially achieve the same effect in imperative OOP languages using interfaces.

The issue is the difference between polymorphism and monomorphism.

Consider the following Java code

interface MyInterface {
  public int someFunction();
}

public static MyInterface genericFunc(int a) { ... }

public static int callGenericFunc() {
  MyInterface mi = genericFunc(42);
  return mi.someFunction();
}

What we are saying is that genericFunc picks some particular type in the MyInterface family and returns a value of it1, and that callGenericFunc must work with whatever type was picked: the decision lies with the callee, not the caller.

The following Haskell code looks very similar:

class MyTypeClass t where
  someFunction :: t -> Int

genericFunc :: (MyTypeClass t) => Int -> t
genericFunc = ...

callGenericFunc :: Int
callGenericFunc =
  let mt = genericFunc 42 in
  someFunction mt

In the Haskell version, we are saying that genericFunc can return a value of any type in the MyTypeClass family, and that it’s the caller, callGenericFunc, which decides the particular type it wants back.

So while the Java compiler renders this generic code polymorphic, by adapting callGenericFunc to work with any value in the MyInterface family, Haskell makes the code monomorphic by choosing a single specific type in the MyTypeClass family and generating variants of genericFunc and callGenericFunc which work on that type.

There are a few advantages to this. On a machine level, forcing everything to a single concrete type allows for static dispatch and therefore function-inlining, which is a performance optimisation2. This is why you see monomorphism appearing in recent ML-derivatives like Swift and Rust.

The second is that it means the typeclass code you write is incredibly generic, and can work in whichever way the caller requires.

However in order for monomorphism to work, the compiler needs to be able to identify the particular type that callGenericFunc will use.
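To make that concrete, here’s a minimal sketch of how a caller pins the type down, using a hypothetical MyData instance (and assuming some real implementation of genericFunc):

-- A hypothetical concrete type in the MyTypeClass family.
data MyData = MyData Int

instance MyTypeClass MyData where
  someFunction (MyData n) = n

-- The annotation on the intermediate value tells the compiler which
-- member of the MyTypeClass family genericFunc should produce, giving
-- it the single concrete type it needs to monomorphise the code.
callGenericFuncAt :: Int
callGenericFuncAt =
  let mt = genericFunc 42 :: MyData in
  someFunction mt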

Associated Types to the Rescue

If we look at our example again, we can see the problem

checkForCollisions :: GameState s => s -> [Position] -> Bool
checkForCollisions s ps =
  let
    p    = getPlayer s
    pPos = playerPosition p
  in
  pPos `elem` ps

GameState is a generic typeclass, so the compiler will inspect the code that calls checkForCollisions to choose the specific implementation.

Once it’s chosen an implementation in the GameState family, the typechecker looks at checkForCollisions and sees getPlayer returns a value of another generic typeclass Player.

Remember it’s not the implementation of GameState that must determine the type, it’s checkForCollisions, so that’s where the type-checker looks.

Unfortunately, all the code in checkForCollisions is completely generic, so it can’t choose a single concrete type: hence Could not deduce (Player p0).

The solution to this is to allow the implementation of GameState to additionally specify the particular type in the Player family to use.

To do this we use the TypeFamilies extension.

First we alter our type-class to add a type placeholder called PlayerType

{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE FlexibleContexts #-}

class Player (PlayerType s) => GameState s where
  type PlayerType s :: *
  getPlayer :: s -> PlayerType s
  getMonsterPositions :: s -> [Position]

Essentially PlayerType is a variable that contains a type rather than a value. Consequently it’s annotated with a kind (recall that the “type of a type” is called a kind). In this case the single asterisk means that this should be a concrete type.
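If kinds are new to you, GHCi’s :kind command makes them tangible; a quick illustrative transcript, shown here as comments:

-- ghci> :kind Int
-- Int :: *              -- a concrete type has kind *
-- ghci> :kind Maybe
-- Maybe :: * -> *       -- a type constructor still awaiting a type
--
-- So "type PlayerType s :: *" says the associated type must resolve
-- to a concrete type, such as PlayerData.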

Associated types must be tied to (i.e. associated with) the type defined in the type-class, which is why it’s PlayerType s and not just PlayerType.

However, as you can see, we can still constrain the associated type to be in our Player type-class. We need the FlexibleContexts extension to express this constraint.

You can have as many associated types as you want, by the way; I’ve just used one in this example for simplicity.

The next step is to assign a particular concrete type in our implementation:

{-# LANGUAGE TypeFamilies #-}

instance GameState GameStateData where
  type PlayerType GameStateData = PlayerData
  getPlayer           (GameStateData p _) = p
  getMonsterPositions (GameStateData _ mPoses) = mPoses

This then solves our problem. The code that called checkForCollisions has already chosen the particular type in the GameState family, and in this example let’s assume the type is GameStateData.

The compiler next looks at checkForCollisions, but now it knows from the GameStateData implementation that the associated type of Player used for getPlayer is PlayerData. Hence the code type-checks, and the compiler has the information it needs to monomorphise it.

And we’ve managed to do this while keeping checkForCollisions completely generic.
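As a quick check that it all hangs together, here’s a minimal usage sketch (assuming, for brevity, that PlayerData contains only the _pos field shown earlier):

main :: IO ()
main =
  let player = PlayerData { _pos = (1, 2) }
      gs     = GameStateData player [(1, 2), (5, 5)]
  in
  -- The caller fixes the game state to GameStateData, so the compiler
  -- resolves PlayerType GameStateData to PlayerData and monomorphises
  -- checkForCollisions accordingly. Prints True: the player sits on a
  -- monster's position.
  print (checkForCollisions gs (getMonsterPositions gs))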

Final Thoughts

This only really rears its head once you start making extensive use of type-classes. Since defining and altering types is so easy in Haskell, you can argue that there’s no need for typeclasses. In fact, since abstraction bulks out code, and so can make it harder to read, there’s an argument to be made against the use of typeclasses for abstraction.

However there are many Haskellers that use typeclasses as a way to write their monadic code in an “effectful” style (emulating the effects features in pure functional languages such as PureScript) and it’s here that they can run into issues, as I did.

In my case, I had a function like

goto :: (HasLevel r, MonadReader r m, HasPlayer p, MonadState p m)
     => Direction -> m MoveResult

And in this case I’d defined HasLevel to return a value in a Level type-class so the game engine could work equally well with different sorts of Level implementations. As it turned out, in the end, I only had the one implementation, so this was an unnecessary and premature abstraction.

In short, I wouldn’t encourage developers to use this on a regular basis. It is a useful trick to know, however, particularly since it’s begun to appear in other, more mainstream ML-derivatives like Swift and Rust.


1. This is not strictly true: you can have an if-statement in Java code which will return one type in one branch, and another type in another branch. The point being the Java code can only return values from a small subset of types in the MyInterface family whereas the Haskell code can return a value of any of the types in the MyTypeClass family.
2. Polymorphic code looks up a table for every function call, then calls the function. Static dispatch calls the function directly, without a run-time lookup and so is faster. Inlining skips the function call overhead entirely, copying the function body into the calling code, and so is faster still. However as this copying makes the overall size of your code greater, it can overflow the cache, which will make your code run much much slower. As a result, inlining is not an automatic win.

Getting Started with Haskell

I’ve been playing around with Haskell recently as a way of clearing my head from the day job doing Python. It turns out getting a working setup is surprisingly difficult, so I’ve written this guide to help those attempting to learn Haskell get started. It explains:

  1. How to install the Haskell compiler and development tools. The tools include the cabal build & dependency-management app; the ghc-mod IDE support tool; the stylish-haskell code-formatter; and the hi (“Haskell init”) project-scaffolding tool
  2. How to install and configure an IDE and REPL
  3. How to use the hi (Haskell Init) project-scaffolding tool to initialise a new project.
  4. What are the best Haskell learning-resources

There are some tips for Mac users in particular.

One of the problems with Haskell is determining how to install the particular versions of each of these tools that work well together, as they can often conflict. In recent times, the Stackage project has been launched by the people at FP Complete to address this. Similar to a Linux distribution, Stackage maintains a collection of particular Haskell libraries (“packages”) and applications, at specific versions, which are all tested to not only be stable themselves, but stable with respect to each other. However as there is no IDE support for Stack at the moment, I won’t discuss it here.

Setting Up Haskell

The minimum version to target for GHC is 7.10, and for Cabal, it’s 1.22.

Users of Mac OS X will first need to ensure they have the command-line development-tools installed. To do this, install Xcode from the App Store, then on the command-line run the following (note the dashes).

sudo xcode-select --install
sudo xcodebuild -license

Next, Mac users, and also Windows users, should install the Haskell Platform.

For Linux users, of course, the first option is their native package manager; however if that doesn’t have GHC 7.10, then they too should use the Haskell platform.

Once done, you should have ghc, ghci, runghc, and cabal all on your path. GHC is the compiler obviously, ghci is a simple REPL, and runghc compiles and runs the given script.
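As a quick smoke-test of the new toolchain, save the following two-liner as hello.hs and run it with runghc hello.hs:

-- hello.hs: if this prints a greeting, GHC and runghc are working.
main :: IO ()
main = putStrLn "Hello, Haskell!"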

Cabal is a build & dependency-management tool similar to Maven for Java, but it can also install standalone executables. As a first step, you should ensure you have the latest version of Cabal itself installed. Type:

cabal update
cabal install cabal cabal-install

For those who are confused, cabal is the library and cabal-install is the application called – somewhat confusingly – cabal.

When you execute the commands above, Cabal will tell you where applications installed by typing cabal install appname will be placed on your filesystem. On Macs using the Haskell platform, it’s $HOME/Library/Haskell/bin. Make sure to update your PATH environment variable accordingly.

A Mac(Ports) Conundrum

Haskell and MacPorts don’t always play well together. Some third-party Haskell libraries (or “packages”) when being built will opportunistically link to libraries in the MacPorts installation directory, /opt/local. This directory includes libraries duplicating those on the system, notably libiconv. This can then lead to linker errors when the wrong iconv library is picked up by the linker. You can read an in-depth description of the MacPorts library problem here, but the short answer is that if you have linker trouble when building an application, particularly if it involves iconv and HSBase, you may want to try amending the project configuration and rebuilding by typing

cabal configure --extra-lib-dir=/usr/lib
cabal build

This will force a search of /usr/lib before the MacPorts’ directory. For whatever reason, the relocatable GHC package is less affected by this, but it does not currently (February 2016) work on Mac OS X “El Capitan”.

Installing an IDE

As of February 2016, I have found no process which is guaranteed to always provide a new user with an IDE that works reliably, at least not on a Mac.

The IDE Haskell plugin which works with the Atom editor is the most promising at the moment, however while I’ve got it to work with GHC 7.10, it was after several false starts. Ghc-mod, on which it relies, will often crash out silently due to malformed Haskell or Cabal files, in which case you need to relaunch the editor. At other times, ghc-mod will launch some expensive, long-running process that confuses Atom.

The IntelliJ IDEA plugins, both IntelliJ’s own and HaskForce, similarly struggle with GHC 7.10 and ghc-mod.

The EclipseFP plugin used to work very well indeed, but it relies on a tool called buildwrapper which, ironically, will not build with GHC 7.10 & Cabal 1.22.

The Leksah application is a Haskell IDE written in Haskell which will definitely work. However, it looks very poor visually on both Mac OS X and Windows, and it has an unorthodox and noisy layout that takes time to get used to.

Therefore the next best approach after Atom/IDE-Haskell is to use the web-based IDE provided by FP Complete. While guaranteed to work, this obviously requires a constant internet connection and a GitHub-hosted project.

Since Atom, when it works, works very well indeed, it’s worthwhile trying to install and configure it, which is what we describe next.

Atom IDE-Haskell Installation

Install Atom from its website and then use the command-line and cabal to install the Haskell IDE support tools:

cabal install happy
cabal install ghc-mod hlint hoogle stylish-haskell

Happy needs to be installed first and separately as it is an undeclared dependency of haskell-src-exts, which is in turn required by ghc-mod.

Then launch Atom and install the following packages: language-haskell, haskell-ghc-mod, ide-haskell and autocomplete-haskell.

You will likely need to configure each of these individually to enter the full paths to cabal, ghc, ghc-mod etc. When editing the settings for haskell-ghc-mod, enter both the path to the cabal installation directory (e.g. /Users/myusername/Library/Haskell/bin) and the path to ghc (e.g. /usr/local/bin) into the “Additional Path Directories” field, the pair separated by a comma.

Then quit Atom, find a Haskell project (or create one as shown below) and restart Atom.

Add the project directory using the “Add Project Folder” item of the “File” menu, then open its Cabal file to trigger the launch of the Haskell IDE support, complete with its eponymous “Haskell IDE” menu. If you’re lucky, autocompletion, error-detection, linting and building tools will all become available.

Before opening a project with Atom, you may want to consider “test-driving” ghc-mod by launching it from the command-line with a sample file to check, using ghc-mod check /path/to/file.hs. GHC-Mod does occasionally do some very time-consuming work such as building its own cabal, and this tends to confuse IDEs mightily. Moreover, before you launch either Atom or ghc-mod for the first time on a new project, you should ensure that running cabal build works first.

The IHaskell REPL

A good REPL is invaluable when learning a new language, a new library, or just playing around with ideas. IHaskell is the best Haskell REPL I’ve come across.

It’s based on IPython Notebook, a web-frontend that lets you interleave markdown and live code in a virtual notebook. In 2015 this was split into two projects, a “Jupyter” core and a Python plugin. This was to explicitly accommodate other languages, such as Haskell, using the IHaskell plugin.

Linux users can install Jupyter using their package manager, though depending on their distribution it may still be called ipython, and then install ihaskell using

cabal install ihaskell
ihaskell install

Once installed, launch the notebook server by typing ipython notebook. Despite the name, you will have the option of creating either Python or Haskell notebooks.

OS X users can use Homebrew to install Python, then pip to install IPython, and finally cabal to install IHaskell as shown above. Should you try to install Python and IPython using MacPorts, you will immediately encounter the usual MacPorts / system-library linking issues. The easiest, though spectacularly wasteful, solution to this is to install Kronos Haskell, which is a 2GB app bundle with its own copies of Python, Jupyter, GHC, Cabal and the usual Haskell libraries.

For Windows users, the Kronos option is far and away the simplest approach.

Creating your first project

We’ll use the Haskell Init tool. If you haven’t already installed it, do so by opening a console and typing

cabal install hi

To create a project called my-project-name in a directory called my-project-dir type

hi --repository git://github.com/tfausak/haskeleton.git \
   --directory-name my-project-dir \
   --package-name my-project-name \
   --module-name MyProjectName \
   --author "Bryan Feeney" \
   --email bryan@amixtureofmusings.com

The repository flag is not the new project’s repository; rather it gives a path to a template project, in this case the Haskeleton template. This features unit-tests, documentation tests and benchmarks, with all of these explained in the Haskeleton guide. Other project templates, such as web-apps, can be found on the HI templates page.

Module names, which are camel-cased, are used in Haskell code while package names, which are dashed, are used on the Hackage package repository.
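For instance, given the flags above, you’d refer to the project in code and in the Cabal file like so (a sketch of the naming convention only):

-- In src/MyProjectName.hs the camel-cased module name is used:
module MyProjectName where

-- ...whereas my-project-name.cabal declares the dashed package name:
--   name: my-project-name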

Before you start coding, you’ll want to download the skeleton project’s dependencies. Rather than mix these dependent packages in with your global package repository, create a local package sandbox for your project:

cd my-project-dir
cabal sandbox init
cabal update

cabal install --only-dependencies
cabal install --enable-test  --only-dependencies
cabal install --enable-bench --only-dependencies

Cabal can automatically detect a sandbox, so once it’s created you can just use the usual commands. Typing cabal install --only-dependencies installs the project’s dependencies as given in the Cabal file. By default this is for the library and/or executable only. You need additional calls to download and install any dependencies required for the unit-tests and benchmarks.

As a quick check, make sure everything builds:

cabal build

And then you’re safe to open the project in Atom. Obviously you can also run tests using cabal test, and execute the benchmarks using cabal bench, though if you want benchmarks presented as graphs rather than ASCII text, type:

mkdir -p dist/bench
cabal bench --benchmark-options="--output dist/bench/index.html"

Note benchmarks are only available with the Haskeleton project template.

Actually Learning Haskell

There are two free online resources that are popular: Learn You a Haskell for Great Good! and Real World Haskell.

This is a case where the best approach is to actually buy a book, in this case Beginning Haskell: A Project-Based Approach. While obviously more expensive than free, it pays its own way in terms of time saved.

Once you’ve learnt the core language, you will likely find What I wish I knew when learning Haskell to be a useful read. It is a sort of Haskell-by-example, describing all the practical aspects of application development you need to be aware of. Particularly if you’ve learnt Haskell from Learn You a Haskell (LYAH)1, this is the recommended next step.

Similarly the 24 Days of … guides by Oliver Charles offer a nice introduction to the various Haskell packages and extensions that are commonly used.


1. Specifically the eponymous constructors for the State, Writer and Reader monads have all been hidden, and instead replaced with lower-case functions state, writer and reader, affecting the code examples in chapter 13.