Propelled by Moore’s law and humanity’s growing body of knowledge and expertise, artificial intelligence (AI) is the label à la mode for doing something useful with the growing morass of data. Its definition, however, is rather slippery: it usually involves achieving a range of goals (identifying, correlating, reasoning, planning, learning, acting) through a range of techniques (statistics, search, probability, algorithms, optimisation) drawn from a range of fields (computer science, mathematics, psychology, linguistics, politics, philosophy).
AI offers the promise of achieving those goals at speed and at scale. Many prefer the more modest terms “machine learning” or “data science”.
A technology is called intelligent because it builds a detailed data profile of the material it analyses, remembers and acts upon. When that material is people, whether the system is a virtual assistant like Google Now or Cortana, or a piece of software that performs risk assessments on defendants in the legal system, it creates detailed data profiles that the individual typically has neither understanding of, nor control over. Intelligent system data profiling therefore has data ethics implications, as it can have a decisive influence on the individual and the opportunities afforded them in society.
Ethical questions around the use of data should not be left to be determined by lawyers or compliance teams alone. These issues require engagement across a broad range of internal stakeholders, from those involved in designing and implementing digital services to those responsible for customer and business strategy. Tone from the top, engagement at all levels, and education and awareness are critical to ensuring that all internal teams understand the importance of ethical approaches to data, and the implications of getting this wrong.
Beyond internal engagement, governance is another potential sticking point when it comes to assessing the impact of AI. There is no accepted global standard for ethics around AI and data use. Ethics are a set of accepted morals that vary between cultures. Ideally, the AI systems that organisations create will adapt to the cultures in which they’re adopted and used.
The same is not true of data rights. The entire notion of “data and AI ethics” has become of central interest to many individuals, businesses, and governments due to the burgeoning role of data in our lives. The internet, the Internet of Things, and sensors can track an astounding amount of data about a person: from sleep habits, to moment-to-moment location, to every keyboard click ever executed. Moreover, as AI systems make more decisions, AI ethics become increasingly relevant to public policy. If a self-driving car faces a dangerous situation, should it choose the course least risky to the passengers or to a pedestrian, even if the pedestrian is at fault? Data ethics can be tricky, and very little of it is defined by existing law.
Today, more than ever, “customer first” inevitably must also mean “data first”. In many ways, the updated legislation and the current AI revolution are born of the same trend: the need to understand, manage and protect the data now available. AI’s self-learning capabilities and ability to make sense of huge data volumes certainly open up new business opportunities. Not only will these tools enable performance marketing to match people to the items they want, from trainers to electrical goods, more quickly and efficiently, but automating this process will also allow the human element in the process, the marketers, to focus on providing more value to customers, in the same way a proven autopilot frees pilots from the mundane and keeps them fresh to guide and intervene where they can make the biggest difference. Of course, the inevitable flip side is the spectre of over-automation, highlighting the need for strong controls that safeguard consumers, both their personal data and their individual experiences with brands.
As a relatively new topic within emerging technology, the ethics of AI and big data still raises many questions. A happy medium will need to be struck between human and machine. While the laws and regulations guiding AI are still in their infancy, it falls to us to question whether the data we are using is correct and whether we trust its source; the reputation of the source becomes incredibly important in forming that trust. As AI continues to improve and evolve, we will need to keep asking whether what we are experiencing is authentic or shaped by human bias. Eliminating bias from data and AI is what we hope to achieve, and shedding more light on the importance of ethics and ethical practice will, we hope, ensure that systems are free from human bias, that data is fair for the whole population, and that its use is never deemed unethical.
On an individual level, intelligent algorithms analyse our internet habits and personalise the content we see and interact with online. At the same time, macro-social structures are shaped and developed on the basis of advanced intelligent computer analysis of data. For example, the majority of all trades on the global stock market today are performed by algorithms that calculate the potential risks associated with a trade. In 2017, the Guardian journalist Carole Cadwalladr wrote about how, while campaigning for the American presidency, Donald Trump had used a data analysis company, Cambridge Analytica, to help sway American voters with individually targeted messages based on algorithmic psychological analysis of millions of Americans’ Facebook profiles. Considering the significance that advanced data systems have in our everyday life and society today, we should move beyond existential ethical questions about a sci-fi future and address the “here and now” data ethics challenges.
While future AI systems might also be able to gather their own data, most current systems rely on humans to provide the inputs: the data itself and a definition of the desired result, as in reinforcement learning, where a reward signal tells the algorithm what counts as success. When you ask the algorithm to figure out the best way to achieve that result, you have no idea in advance how the machine will solve the problem. You just know it will be more efficient than you are.
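That division of labour, where the human supplies the data and the definition of success while the algorithm discovers the "how", can be sketched with a minimal tabular Q-learning example. The five-state corridor environment and the hyperparameters below are invented purely for illustration; they are not drawn from any particular system.

```python
import random

# Toy "corridor" environment: states 0..4, reward only for reaching state 4.
# The human defines the states, actions and reward; the algorithm learns
# the policy on its own.
N_STATES = 5
ACTIONS = [-1, +1]               # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy: mostly exploit the best known action,
            # occasionally explore a random one.
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            best_next = max(q[(s2, act)] for act in ACTIONS)
            # Standard Q-learning update rule.
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The learned policy: the preferred action in each non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)])
          for s in range(N_STATES - 1)}
```

No step of the loop tells the agent to "walk right"; that behaviour emerges only from the reward definition the human supplied, which is exactly why the choice of that definition carries ethical weight.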
Companies will need to become more transparent, sustainable, and purpose-driven to meet the expectations of post-Millennial generations over the next decade. As an example, experts predict this audience will make environmental considerations a primary factor in consumer decision making as the 10-year deadline established by the UN to avert the worst effects of climate change draws nearer.
Considering the negative impact of these risks, some businesses have decided to do more than just wait for governments to introduce regulations. Major technology companies such as IBM and Salesforce, for example, have put themselves at the forefront with their own internal ethical AI efforts. Acxiom, a US-based SaaS company, went as far as creating the position of chief data ethics officer.
The concept of ethics in any given society is constantly evolving, as behaviours move between ‘acceptable’ and ‘unacceptable’ in public and regulatory consciousness. This can make it challenging for organisations to be sure that they are operating in a way that is ethical, and seen to be ethical. Operationalising data ethics means establishing a framework that can, in the long term, withstand a fluid socio-cultural landscape. It is important to establish regular reviews to refresh data ethics principles, and regularly review the effectiveness of governance and internal controls to ensure that they are driving the desired behaviours (for example, responsible data use, rather than reckless data hoarding). It could also include regularly stress-testing data ethics principles against public sentiment by monitoring current affairs and engaging market researchers.
Despite having ridden the digital revolution for some time now, economically and technologically we are entering a new era. The opportunity that the current landscape of enhanced privacy presents for AI is that of an intelligent engine that can examine data sources and the many ways information is being processed, apply some basic rules, and automate much of the work of tagging data and associating consent. Innovation will therefore not be hampered, but directed and shaped by the backdrop of consumer privacy and robust data ethics. This focus on the consumer and their data privacy will in turn give rise to new applications, technologies and businesses that deliver value to us as people while helping organisations achieve and maintain data compliance.
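The rule-driven tagging engine described above can be sketched in a few lines. The field names, rules and tags below are hypothetical illustrations, not any real product’s schema:

```python
import re
from dataclasses import dataclass, field

@dataclass
class Record:
    data: dict
    tags: set = field(default_factory=set)

# Each rule pairs a predicate on a (field name, value) with a tag to apply.
# These rules are invented examples of PII detection and consent tagging.
RULES = [
    (lambda k, v: re.fullmatch(r"[^@\s]+@[^@\s]+\.\w+", str(v)) is not None,
     "pii:email"),
    (lambda k, v: k in {"dob", "date_of_birth"}, "pii:birthdate"),
    (lambda k, v: k == "marketing_opt_in" and v is True, "consent:marketing"),
]

def tag_record(record: Record) -> Record:
    """Apply the tag of every rule that matches a field of the record."""
    for key, value in record.data.items():
        for predicate, tag in RULES:
            if predicate(key, value):
                record.tags.add(tag)
    return record

r = tag_record(Record({"email": "jane@example.com",
                       "dob": "1990-01-01",
                       "marketing_opt_in": True}))
```

Encoding the rules as data rather than scattering them through application code is what makes the approach auditable: a compliance team can review, version and extend the rule list without touching the processing pipeline.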
AI can help to fuel a brave new data-driven world, but it is the humans in it who must put this into practice in the right way, building the right relationships that last.