Myth-busting the dangers of a ‘big data algocracy’
October 2, 2018
John Davison, Chief Information Officer, talks to Data IQ about the myths surrounding the dangers of a 'big data algocracy'.
Alongside faster, more efficient vehicles, the accelerated pace of progress in the automotive industry has generated more data than ever before. Connected cars, equipped with internet access, can collect data and share information remotely. This can be used in a range of situations, from monitoring tyre pressure to helping to track a lost or stolen car through geo-location capabilities.
Bump in the road
A strong focus has been placed on the negative aspects of data in recent months, with a series of high-profile data breaches affecting organisations such as the NHS. Alongside this, the Cambridge Analytica scandal raised further questions over how big tech firms are able to collect large amounts of personal data and share it with third parties without the customer's consent.
Most recently, the new chair of the Financial Conduct Authority issued stark warnings about the dangers of a new 'algocracy', referring to how new technology could harm consumers of financial services. In his first public speech in London, Charles Randell claimed that a data algocracy exacerbates social exclusion and worsens access to financial services in the way it identifies the most profitable or 'risky' consumers. He also highlighted that technologies such as big data, AI and machine learning are calling into question the adequacy of regulations in protecting customers.
Benefits of big data
However, it is important to consider the wide range of benefits that sensible data analysis offers consumers and businesses alike. In an insurance context, greater use of data can drive down the price of premiums, enabling higher levels of personalisation for customers. It can also reduce the risk of abuse, making it more difficult to submit fraudulent claims.
One of the issues raised by Randell was 'name bias', a practice uncovered in recent months whereby price comparison sites allegedly based premium quotes on each customer's name, with higher quotes given to those with names implying they belong to an ethnic minority group. While this practice is clearly neither ethical nor effective from a business perspective, in reality instances of a name influencing price are more likely to be caused by an insurer being unable to externally verify the customer's details than by the name per se. If an individual's details don't match during the verification process, their premium is likely to increase, reflecting the perceived increase in risk to the insurer.
The General Data Protection Regulation (GDPR) should be at the top of the agenda for anyone processing customer information, but the regulation does not prevent the use of data for pricing. It is essential that businesses comply: gaining consent to collect data, storing it safely and being transparent with customers. We fully support the government's stated intention to protect personal information. As long as businesses keep these considerations in mind, there is no reason why data should not be used to refine how premiums are calculated.
As connected and autonomous technology continues to evolve, it is likely that growing amounts of data will become available. If used responsibly by all parties involved, this is good news for both consumers and insurers, enabling fairer and more accurate pricing.
It is increasingly important to balance what is technologically possible (through the careful application of tech and big data) with what is socially acceptable (through careful governance and regulation). With transparency and GDPR considerations at the heart of the operation, insurers can use increased amounts of data to build mutually beneficial relationships with customers.