The Lost Guide to starting with Ember and D3

D3 is a great library for displaying visual data on a webpage. Getting it integrated with your favourite framework, in my experience at least, takes a bit of time and thought. This post looks at a few of the things I’ve learnt building D3 charts within the EmberJS framework. We’ll start with the basics of installing and then look at a couple of ways to integrate D3 with your Ember components.

At the time of writing Ember is v3.10 and D3 is v5.9; however, the patterns should translate well to most other versions.

Installing D3 and Integrating with Ember

Firstly we’re going to want D3:
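The examples below assume the ember-d3 addon, which installs the D3 packages via npm and exposes them as ES modules to your app (the specific addon is my assumption, though it matches the environment.js configuration described later):

```shell
# Install the ember-d3 addon into an existing Ember app
ember install ember-d3
```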

By default, you’ll have the entire library loaded in at this point and you can start using it like:
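For example, in a component (a sketch, assuming the addon exposes the individual d3-* packages as importable modules):

```javascript
import { select } from 'd3-selection';

// Select an element and set its text, D3-style
select('.chart-title').text('Hello, D3');
```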

But to reduce your build size, I’d recommend whitelisting only those parts of the library that you’re using. Add the following to your environment.js file:
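With the ember-d3 addon this is done with a d3.only array. A sketch (the package list here is illustrative; include only the packages your app imports):

```javascript
// config/environment.js
module.exports = function (environment) {
  let ENV = {
    // ... existing config ...

    // Whitelist of D3 sub-packages to include in the build
    d3: {
      only: ['d3-selection', 'd3-scale', 'd3-shape', 'd3-transition'],
    },
  };

  return ENV;
};
```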


After changing that array, make sure you completely stop and restart Ember. Relying on ember serve rebuilding didn't work for me; the changes were picked up only after a full restart.

One important note: some of those imports themselves depend on other parts of D3, which you also need to add to that array. For example, in the list above d3-shape depends on d3-path, so you'll need to include both. The easiest approach is to list only the packages you import directly, then check for errors when the code attempts to import them.

In this example, I'd need to add d3-path to the whitelist of D3 sub-libraries included in the build.

Integrating D3 with Ember

Now we have D3 available to our Ember application, the next step is integrating it with our components.

The way I think of D3, it has two distinct functions: one is to provide utilities that help with display (for example, calculating SVG paths), and the other is DOM selection and data binding. When you're using a client-side framework such as Ember or React, you typically already have all the data-binding functionality you need, so you have the option of letting the framework provide that.

This gives you two approaches to integrating with your components:

Use D3 for data-binding and DOM manipulation

With this approach you pass the HTML element to D3 and let it manage all of the data binding and DOM modification. This means you need to hook into the component lifecycle and tell D3 to run its updates at the right times.

Let Ember do the data binding and DOM manipulation

This approach uses your client-side framework to update the visualisation in a similar manner to how you update data elsewhere in your application using templates.

We’ll look at how these two approaches can be implemented with an example and discuss some of the pros and cons of the two approaches. For these examples I’ve built about the simplest D3 chart I could, which is a success/failure Donut Chart. You can update the count of Passed or Failed by either changing the numbers in the input boxes or by clicking on the coloured arc in the chart itself to increment that number.

See it running here.

Use D3 for data-binding and DOM manipulation

The approach here is to create a class that manages all the D3 code and then add hooks into the component lifecycle telling it when to create and update.

The advantage of this approach is that it's just standard D3. This means you can take code from online examples or other projects and drop it in much more easily. Because D3 is managing the DOM, transitions will also work as expected. You also end up with something quite portable between frameworks, as you're using a plain old JavaScript class with a simple interface.

The major disadvantage with this approach is that handling events on the chart itself is tricky, as you need a way to communicate back to the component. The pattern I like to use is to add actions to your component as you normally would, and then pass the D3 class a sendAction function (with "this" bound to the component) that it can call with an action name and arguments. See the Ember send method API docs for details on this.
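Stripped of the D3 specifics, the shape of that wiring is very simple (all names here are illustrative, not the repo's actual code):

```javascript
// The chart code receives a plain function and never needs to know
// about Ember itself; it just calls the function when the user interacts.
function createChart(element, sendAction) {
  return {
    // A D3 click handler would end up calling something like this:
    arcClicked(label) {
      sendAction('incrementCount', label);
    },
  };
}
```

In the component you'd construct the chart with something like createChart(this.element, this.send.bind(this)), so that a call to sendAction('incrementCount', 'passed') invokes the component's incrementCount action.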

The component:
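A sketch of what that component might look like as a classic (pre-Octane) Ember component; the DonutChart import path and the data/onIncrement arguments are illustrative, see the repo for the real code:

```javascript
// app/components/donut-chart.js (sketch)
import Component from '@ember/component';
import DonutChart from '../utils/donut-chart'; // hypothetical location

export default Component.extend({
  chart: null,

  didInsertElement() {
    this._super(...arguments);
    // Hand the element over to D3, with a way to call actions back on us
    let chart = new DonutChart(this.element, this.send.bind(this));
    this.set('chart', chart);
    chart.update(this.get('data'));
  },

  didUpdateAttrs() {
    this._super(...arguments);
    // Re-run the D3 update whenever the bound data changes
    this.get('chart').update(this.get('data'));
  },

  actions: {
    incrementCount(label) {
      // Forward the chart interaction to whatever the parent passed in
      this.get('onIncrement')(label);
    },
  },
});
```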

The template:
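The component's own template is essentially empty, because D3 builds the SVG itself. Invoking it from a parent template might look like this (argument names are illustrative):

```handlebars
{{donut-chart data=chartData onIncrement=(action "incrementCount")}}
```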

The DonutChart D3 class would look like this (stripped down as the D3 specifics aren’t really relevant, see the Github repo for the full code):
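A stripped-down sketch of such a class; the D3 details are simplified and the names are illustrative:

```javascript
// app/utils/donut-chart.js (sketch; see the repo for the full code)
import { select } from 'd3-selection';
import { pie, arc } from 'd3-shape';

export default class DonutChart {
  constructor(element, sendAction) {
    this.svg = select(element)
      .append('svg')
      .attr('width', 200)
      .attr('height', 200)
      .append('g')
      .attr('transform', 'translate(100,100)');
    this.sendAction = sendAction;
    this.pie = pie().value((d) => d.count);
    this.arc = arc().innerRadius(40).outerRadius(80);
  }

  update(data) {
    let paths = this.svg.selectAll('path').data(this.pie(data));

    paths
      .enter()
      .append('path')
      .merge(paths)
      .attr('d', this.arc)
      // Report clicks back to the component as an action
      .on('click', (d) => this.sendAction('incrementCount', d.data.label));

    paths.exit().remove();
  }
}
```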

Let Ember do the data binding and DOM manipulation

So the other way to approach this is to create the SVG (or whatever DOM your chart produces) yourself in an Ember template, just like any other component. Then you let Ember do all of the DOM manipulation; for example, if a value changes you recalculate the SVG path yourself using D3's utility tools. See the following example to get the idea.

The component:
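A sketch of the component for this approach, using d3-shape purely as a path-calculating utility (property and action names here are illustrative):

```javascript
// app/components/donut-chart-template.js (sketch)
import Component from '@ember/component';
import { computed } from '@ember/object';
import { pie, arc } from 'd3-shape';

export default Component.extend({
  passed: 0,
  failed: 0,

  // Ember recomputes this whenever passed or failed changes,
  // so the template re-renders the arcs automatically
  arcs: computed('passed', 'failed', function () {
    let arcPath = arc().innerRadius(40).outerRadius(80);
    let segments = pie().value((d) => d.count)([
      { label: 'passed', count: this.get('passed') },
      { label: 'failed', count: this.get('failed') },
    ]);
    return segments.map((segment) => ({
      label: segment.data.label,
      path: arcPath(segment),
    }));
  }),

  actions: {
    increment(label) {
      this.incrementProperty(label);
    },
  },
});
```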

The template:
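And a sketch of the matching template, where Ember renders the SVG directly from the paths the component computed (again, names are illustrative):

```handlebars
<svg width="200" height="200">
  <g transform="translate(100,100)">
    {{#each arcs as |segment|}}
      {{! Clicking an arc increments that segment's count }}
      <path d={{segment.path}} class={{segment.label}}
        onclick={{action "increment" segment.label}}></path>
    {{/each}}
  </g>
</svg>
```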

You can see that the interactions on the chart here feel much more natural; in fact, the whole solution feels more Ember-like and similar to the rest of your application. I also personally prefer writing the SVG into a template rather than having D3 build it. A big downside is that you forgo D3's transitions, which are one of its major features; in a lot of cases this will be a show-stopper. You are also coupling your D3 code pretty tightly to Ember, which is a problem if you plan to reuse the chart across projects not written with Ember.


Both approaches have a niche where they work better. Personally, I like having both available in a project, but in general I'd lean towards the pure D3 approach: transitions are a big feature of D3, and you'd usually want the option of incorporating them. I also like the idea of having a portable JavaScript class. However, for cases where you're not interested in transitions and you have lots of user interaction with the chart, letting Ember manage the DOM and data binding may be the tidier solution.




Banner image credit: Andrew Neel

Ethics and algorithms – Making the ‘right’ automated decisions

We are living in a golden age of data. Humanity is now capturing, storing, and sharing astonishing volumes of data. We have outsourced to technology things that previously required the consideration of our spongey human brains.

We are automating decisions left and right, and wow, have we been quick to adopt these automated decisions! We have become increasingly accustomed to, and reliant on, these decisions replacing our own judgement. I cannot remember the last time I used a physical map in order to navigate to an unfamiliar location. Instead, I enter the address into Google Maps and rely on the recommended route. Google Maps even factors in the latest traffic data to recommend a given route. I get an informed route recommendation in considerably less time than it would take me to even find a paper map.

These automated decisions tend to be fairly accurate too. So accurate, in fact, that we may even disregard our own best judgement. There have been multiple reports around the world of drivers ending up in lakes or oceans because they blindly followed their GPS instructions.

So what is driving these automated decisions (pun, painfully intended)? Enter, the algorithms!

So what is an algorithm?

According to our friends at the Cambridge Dictionary, an algorithm is “a set of mathematical instructions or rules that, especially if given to a computer, will help to calculate an answer to a problem.” A common way to explain it is to liken it to a cooking recipe: by following the steps of a recipe (instructions), I produce an output that solves a problem (food to solve my hunger). However, as we will see later, the quality of that output very much depends on the quality of the ingredients being used.

An algorithm explosion

We have been using algorithms to help automate decisions for a long time. However, up until recently these were for relatively rudimentary problems, such as controlling traffic lights.

Nowadays we are all familiar with algorithms solving much more complex problems – even accurately modelling human behaviour. Algorithms are behind the page results we see from a Google search, the content we see on our Facebook timelines, or the songs Spotify recommends to us. These are complicated problems. Think of how much information I would need in order to recommend new music to a complete stranger. Which genres of music do they like? Which songs have they heard before? What are their thoughts on Norwegian folk metal?

This growing ability to model complex behaviour is due to a number of factors.

Volume of data

In order to make useful predictions, algorithms need to be trained using lots of data. In the same way that I need to know lots of examples of songs you like in order to accurately recommend some new music, so does an algorithm. Obtaining data of the songs people listen to and enjoy was more difficult in the days when this was reliant on radio surveys and album purchases. But now Spotify has a record of every time a user plays a song, whether they give it a thumbs up, or whether they skip it after only a few seconds.

Speed of processing

The volume of data gathered has coincided with better, faster ways of storing and transforming data so it can be provided to algorithms at almost instantaneous speed.

Improved algorithm techniques

Finally, new algorithm techniques have been developed that allow for advanced predictions. For example, artificial neural networks are made up of collections of algorithms that are inspired by the workings of our own biological brains.

This combination of factors has given rise to the field of machine learning – the science of computers automatically learning and adapting from experience without being explicitly programmed. Machine learning is contributing to businesses being able to replace manual human processes with automated, algorithm-powered processes. Algorithms are fast becoming many companies’ Most Valuable Employees, enabling the massive growth seen in companies like Amazon and Uber.

Automated decisions in the public sector

It is not just in the private sector where algorithm-powered decision making is being adopted. The public sector is also achieving efficiency gains through the use of automated decisions.

Stats NZ recently released a report into the use of algorithms in the New Zealand public sector. Examples of algorithm-powered decisions in government agencies include: risk assessments of visitor visa applications, automatic calculation of tax refunds, and risk of re-conviction/re-imprisonment of inmates.

The use of algorithms will only continue to grow as agencies continue to find new ways to improve using the data they have available. This means automated decision making will affect all of us, in almost all aspects of life. Not only in the products and services we purchase, but in the services offered to us – the funding available to our households and communities, the patient priority for medical or surgical procedures, or the security of our country.

So all of these improved decisions will help build a better, fairer New Zealand, right?

Biased data, complex machines

In order for algorithms to produce accurate predictions and recommendations, they require lots of data. In many cases this involves data with specific outcomes identified. For example, scientists recently trained a neural network to diagnose melanoma. The neural network was trained with hundreds of images of skin discolourations. Each image included a corresponding classification – whether the tested discolouration was benign or cancerous. Using this data, the neural network was able to predict cancerous discolourations in new images it was shown, at a level that outperformed many (human) dermatologists.

So, the quality of an algorithm’s output is highly dependent on the data it is “trained” on. Unfortunately, no data set is perfect. There are always instances of missing, or incorrect data. In some cases the data can contain inherent bias that could have a significant impact on the type of decisions being generated.

Humans are naturally biased creatures. These biases have led to a history of discrimination. In New Zealand, and around the world, there are different demographic groups that have experienced generations of discrimination – be that in education, health, employment, or one of the many other areas across our society. Algorithms themselves are not biased, but if the data they are trained on is biased, the automated decisions derived from these algorithms can be.

Amazon recently encountered just such a problem when building an automated recruitment tool. Amazon’s goal was to have a recruitment engine that could screen applicants, recommending the top candidates for the position. No more time spent manually screening applicants. The only problem – the recruitment engine did not favour women. The algorithms had been trained using years of data that reflected male dominance in the tech industry. The recruitment engine had “learned” that male candidates were more favourable. Amazon identified this bias and tried to account for it, but ultimately abandoned the project as there was no way to guarantee that some other form of discrimination would not be introduced.

It is this uncertainty from Amazon that reflects another concern with automated decision making. As algorithms become increasingly complex, it is becoming increasingly difficult to understand the why behind their decisions. These algorithms become black boxes – the specific weighting of factors that result in a given output become hidden to us. Those artificial neural networks may be making accurate predictions, but interpreting the reasoning behind those predictions is increasingly leading to shoulder shrugs from those using them.

As we strive for equality in New Zealand, and globally, we need to be careful that our increased reliance on automated decisions is not reinforcing historic discrimination or injustices. This means being vigilant that the training data we use is not inherently biased, and remaining able to critique the reasons behind a particular prediction. We need to ensure that our automated decisions are “right” and not just “accurate”.

Safeguards and best practices

The New Zealand Privacy Commissioner and Government Chief Data Steward established principles for the safe and effective use of data and analytics by government agencies. These are:

  • Deliver clear public benefit
  • Maintain transparency
  • Understand the limitations
  • Retain human oversight
  • Ensure data is fit for purpose
  • Focus on people

These guidelines are being followed by the agencies identified in Stats NZ’s Algorithm Assessment Report. For example, the Department of Corrections does not use ethnicity as a variable in its predicted risk of re-conviction/re-imprisonment of inmates, in part because of the risk of bias existing in the history of data used to train the algorithms. And while the output of these algorithms helps inform inmate release decisions, the input of qualified (human) professionals is also used.

While these guidelines might help maintain some confidence in government agencies’ use of algorithms, New Zealand is lacking in stronger protections for individuals, especially when it comes to the private sector.

The European Union has gone some way to addressing this as part of its General Data Protection Regulation (GDPR). Individuals are given the right to object to the processing of personal data, including profiling. They also have the right not to be subject to a decision based solely on automated processing.

New Zealand has recently undergone its own review of our Privacy Bill. However, the Justice Select Committee’s recommended changes failed to include provision for transparency of algorithmic decisions – something that had been advocated for by the Privacy Commissioner.


The rise of automated decision making is only going to continue, as technology improves and increasing numbers of organisations are able to improve efficiency via algorithms and machine learning. However, with great power comes great responsibility (or something like that). It is important that those implementing these automated decision systems understand the context that these decisions are being made in. We need to ensure that our use of algorithms is leading to a better and fairer world, rather than reinforcing a legacy of discrimination.

While many organisations do act with good intentions, sometimes a bit of regulation can help sharpen that focus – as was the case with the threat of large fines if European organisations did not comply with GDPR. Hopefully New Zealand can follow the European Union’s lead in this regard, before the technology races too far ahead.

Banner image credit: Franki Chamaki.