
We are living in a golden age of data. Humanity is now capturing, storing, and sharing astonishing volumes of data. We have outsourced to technology things that previously required the consideration of our spongey human brains.

We are automating decisions left and right, and wow, have we been quick to adopt these automated decisions! We have become increasingly accustomed to, and reliant on, these decisions replacing our own judgement. I cannot remember the last time I used a physical map to navigate to an unfamiliar location. Instead, I enter the address into Google Maps and rely on the recommended route, which even factors in the latest traffic data. I get an informed route recommendation in considerably less time than it would take me to even find a paper map.

These automated decisions tend to be fairly accurate too. So accurate, in fact, that we may even disregard our own best judgement. There have been multiple reports around the world of drivers ending up in lakes or oceans because they blindly followed their GPS instructions.

So what is driving these automated decisions (pun painfully intended)? Enter the algorithms!

So what is an algorithm?

According to our friends at the Cambridge Dictionary, an algorithm is “a set of mathematical instructions or rules that, especially if given to a computer, will help to calculate an answer to a problem”. A common way to explain it is to liken it to a cooking recipe: by following the steps of a recipe (instructions), I produce an output to a problem (food to solve my hunger). However, as we will see later, the quality of that output very much depends on the quality of the ingredients being used.
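The recipe analogy can be made concrete with a trivial sketch in Python. The function and data here are purely illustrative: the list of ratings is the ingredients, the numbered steps are the instructions, and the returned number is the dish.

```python
# A toy "recipe" algorithm: given ingredients (inputs), follow fixed
# steps (instructions) to produce an output.
def average_rating(ratings):
    """Three steps: sum the ratings, count them, divide."""
    if not ratings:          # step 0: check we have ingredients at all
        return None
    total = sum(ratings)     # step 1: combine the ingredients
    count = len(ratings)     # step 2: measure how many we used
    return total / count     # step 3: serve the result

print(average_rating([4, 5, 3]))  # → 4.0
```

Note that the recipe is only as good as its ingredients: feed it mistyped or junk ratings and it will dutifully serve up a junk answer.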

An algorithm explosion

We have been using algorithms to help automate decisions for a long time. However, up until recently these were for relatively rudimentary problems, such as controlling traffic lights.

Nowadays we are all familiar with algorithms solving much more complex problems – even accurately modelling human behaviour. Algorithms are behind the page results we see from a Google search, the content we see on our Facebook timelines, or the songs Spotify recommends to us. These are complicated problems. Think of how much information I would need in order to recommend new music to a complete stranger. Which genres of music do they like? Which songs have they heard before? What are their thoughts on Norwegian folk metal?

This growing ability to model complex behaviour is due to a number of factors.

Volume of data

In order to make useful predictions, algorithms need to be trained using lots of data. In the same way that I need to know lots of examples of songs you like in order to accurately recommend some new music, so does an algorithm. Obtaining data of the songs people listen to and enjoy was more difficult in the days when this was reliant on radio surveys and album purchases. But now Spotify has a record of every time a user plays a song, whether they give it a thumbs up, or whether they skip it after only a few seconds.

Speed of processing

The volume of data gathered has coincided with better, faster ways of storing and transforming data so it can be provided to algorithms at almost instantaneous speed.

Improved algorithm techniques

Finally, new algorithm techniques have been developed that allow for advanced predictions. For example, artificial neural networks are made up of collections of algorithms that are inspired by the workings of our own biological brains.
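The basic building block of those networks, the artificial neuron, is surprisingly simple on its own: it multiplies each input by a weight, adds a bias, and squashes the result through an activation function. A minimal sketch, with made-up weights purely for illustration:

```python
import math

# A single artificial neuron: weighted sum of inputs plus a bias,
# passed through a sigmoid activation. In a real network, the
# weights are learned from training data; these are invented.
def neuron(inputs, weights, bias):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))  # sigmoid: squashes to (0, 1)

output = neuron(inputs=[0.5, 0.8], weights=[0.4, -0.6], bias=0.1)
print(round(output, 3))
```

A neural network chains thousands or millions of these together in layers, which is where both the predictive power and the difficulty of interpretation come from.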

This combination of factors has given rise to the field of machine learning – the science of computers automatically learning and adapting from experience without being explicitly programmed. Machine learning is contributing to businesses being able to replace manual human processes with automated, algorithm-powered processes. Algorithms are fast becoming many companies’ Most Valuable Employees, enabling the massive growth seen in companies like Amazon and Uber.

Automated decisions in the public sector

It is not just the private sector where algorithm-powered decision making is being adopted. The public sector is also achieving improved efficiency through the use of automated decisions.

Stats NZ recently released a report into the use of algorithms in the New Zealand public sector. Examples of algorithm-powered decisions in government agencies include: risk assessments of visitor visa applications, automatic calculation of tax refunds, and risk of re-conviction/re-imprisonment of inmates.

The use of algorithms will only continue to grow as agencies continue to find new ways to improve using the data they have available. This means automated decision making will affect all of us, in almost all aspects of life. Not only in the products and services we purchase, but in the services offered to us – the funding available to our households and communities, the patient priority for medical or surgical procedures, or the security of our country.

So all of these improved decisions will help build a better, fairer New Zealand, right?

Biased data, complex machines

In order for algorithms to produce accurate predictions and recommendations, they require lots of data. In many cases this involves data with specific outcomes identified. For example, scientists recently trained a neural network to diagnose melanoma. The neural network was trained with hundreds of images of skin discolourations. Each image included a corresponding classification – whether the tested discolouration was benign or cancerous. Using this data, the neural network was able to predict cancerous discolourations in new images it was shown, at a level that outperformed many (human) dermatologists.
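As a toy sketch of this idea of learning from labelled examples (nothing like the real melanoma model – the features and labels below are invented), a nearest-neighbour classifier "trains" by simply storing labelled examples, then predicts the label of a new case from the closest stored one:

```python
# Toy 1-nearest-neighbour classifier. "Training" is storing labelled
# examples; prediction finds the closest stored example and reuses
# its label. All features and labels here are invented.
training_data = [
    ((2.0, 3.0), "benign"),
    ((2.5, 2.5), "benign"),
    ((8.0, 7.5), "cancerous"),
    ((7.5, 8.0), "cancerous"),
]

def predict(features):
    def distance(example):
        (x, y), _ = example
        return (x - features[0]) ** 2 + (y - features[1]) ** 2
    _, label = min(training_data, key=distance)
    return label

print(predict((2.2, 2.8)))  # closest to the benign examples
print(predict((7.8, 7.7)))  # closest to the cancerous examples
```

The same principle applies to the neural network: whatever patterns exist in the labelled training data, including any errors or biases, are what the model learns to reproduce.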

So, the quality of an algorithm’s output is highly dependent on the data it is “trained” on. Unfortunately, no data set is perfect. There are always instances of missing or incorrect data. In some cases the data can contain inherent bias that could have a significant impact on the type of decisions being generated.

Humans are naturally biased creatures. These biases have led to a history of discrimination. In New Zealand, and around the world, there are different demographic groups that have experienced generations of discrimination – be that in education, health, employment, or one of the many other areas across our society. Algorithms themselves are not biased, but if the data they are trained on is biased, the automated decisions derived from these algorithms can be.

Amazon recently encountered just such a problem when building an automated recruitment tool. Amazon’s goal was to have a recruitment engine that could screen applicants, recommending the top candidates for the position. No more time spent manually screening applicants. The only problem – the recruitment engine did not favour women. The algorithms had been trained using years of data that reflected male dominance in the tech industry. The recruitment engine had “learned” that male candidates were more favourable. Amazon identified this bias and tried to account for it, but ultimately abandoned the project as there was no way to guarantee that some other form of discrimination would not be introduced.
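It is worth seeing just how mechanically this kind of bias arises. The sketch below is nothing like Amazon's actual system – the "model" and the historical data are invented and deliberately skewed – but it shows how a naive scorer trained on an imbalanced history simply reproduces that imbalance:

```python
# Toy illustration of learned bias: score candidates by how often
# their attribute value appears among past hires. The historical
# data is invented and deliberately skewed towards one group.
past_hires = [
    {"gender": "male"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "female"},
]

def hire_rate(attribute, value):
    matches = sum(1 for h in past_hires if h[attribute] == value)
    return matches / len(past_hires)

# The model has "learned" the historical imbalance, not merit.
print(hire_rate("gender", "male"))    # → 0.8
print(hire_rate("gender", "female"))  # → 0.2
```

Nothing in the code is prejudiced; the prejudice lives entirely in the training data, which is exactly why removing it is so hard to guarantee.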

It is this uncertainty from Amazon that reflects another concern with automated decision making. As algorithms become more complex, it is becoming increasingly difficult to understand the why behind their decisions. These algorithms become black boxes – the specific weighting of factors that result in a given output is hidden from us. Those artificial neural networks may be making accurate predictions, but interpreting the reasoning behind those predictions increasingly leads to shoulder shrugs from those using them.

As we strive for equality in New Zealand, and globally, we need to be careful that our increased reliance on automated decisions is not reinforcing historic discrimination or injustices. This means being vigilant that the training data we use is not inherently biased, and remaining able to critique the reasons behind a particular prediction. We need to ensure that our automated decisions are “right” and not just “accurate”.

Safeguards and best practices

The New Zealand Privacy Commissioner and Government Chief Data Steward established principles for the safe and effective use of data and analytics by government agencies. These are:

  • Deliver clear public benefit
  • Maintain transparency
  • Understand the limitations
  • Retain human oversight
  • Ensure data is fit for purpose
  • Focus on people

These guidelines are being followed by the agencies identified in Stats NZ’s Algorithm Assessment Report. For example, the Department of Corrections does not use ethnicity as a variable in its predicted risk of re-conviction/re-imprisonment of inmates, in part because of the risk of bias existing in the history of data used to train the algorithms. And while the output of these algorithms helps inform inmate release decisions, the input of qualified (human) professionals is also used.

While these guidelines might help maintain some confidence in government agencies’ use of algorithms, New Zealand still lacks stronger protections for individuals, especially when it comes to the private sector.

The European Union has gone some way to addressing this as part of its General Data Protection Regulation (GDPR). Individuals are given the right to object to the processing of personal data, including profiling. They also have the right not to be subject to a decision based solely on automated processing.

New Zealand recently undertook its own review of the Privacy Bill. However, the Justice Select Committee’s recommended changes failed to include provision for transparency of algorithmic decisions – something that had been advocated for by the Privacy Commissioner.

Conclusion

The rise of automated decision making is only going to continue, as technology improves and increasing numbers of organisations are able to improve efficiency via algorithms and machine learning. However, with great power comes great responsibility (or something like that). It is important that those implementing these automated decision systems understand the context that these decisions are being made in. We need to ensure that our use of algorithms is leading to a better and fairer world, rather than reinforcing a legacy of discrimination.

While many organisations do act with good intentions, sometimes a bit of regulation can help sharpen that focus – as was the case with the threat of large fines if European organisations did not comply with GDPR. Hopefully New Zealand can follow the European Union’s lead in this regard, before the technology races too far ahead.

Banner image credit: Franki Chamaki.
