AI: Exclusive by Nature

Hamilton Mann, Group Vice President Digital at Thales

Since the beginning of human history, we have always been very good at building products that meet the specific needs of certain people, thus excluding certain others.

We have kept learning to do it, over and over again, as best we could.

This search for the ultimate differentiation, this obsession with designing things that respond to market demand and suit a very particular target audience, this know-how we may have learned in school or on the job: together they shape the image we have of the world, the way we see ourselves in it, and how we behave in it most of the time.

Undoubtedly, this mindset is in tension with inclusiveness and diversity.

The better we are at designing and offering products and services that perfectly suit the specific audience we are purposely targeting, the better we become at discriminating against the audiences we are not targeting, and therefore at casting them aside on purpose.

Built with our mental, moral, and ethical models, with their strengths and their limits, Artificial Intelligence is made in this same mold: it is not inclusive but exclusive by nature.

And paradoxically it is already everywhere.

The global artificial intelligence market was estimated at US$87 billion in 2021 and is expected to reach US$1,597.1 billion by 2030.

Its continued and increased adoption propels it into the heart of many organizations around the world:

  • In more and more hardware and software components,
  • In more and more industrial sectors, such as automotive, health, retail, finance, banking, insurance, telecoms, manufacturing, agriculture, aviation, education, media, and security, to name a few,
  • In more and more functions and professions, such as human resources, marketing, sales, advertising, legal, supply chain, and many more.

We are only at the beginning.

How do we ensure that the biases or segmentation patterns in the data that power artificial intelligence do not lead to operations that treat individuals unfavorably based on characteristics such as their gender, color, religion, disability, sexual or political orientation? This is one of the big questions raised by the development of artificial intelligence.
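
To make this concrete, here is a minimal, purely illustrative sketch, not drawn from the article and using made-up data, of one simple way such bias can be surfaced: comparing the rate of favorable decisions an automated system produces for different groups. The group names, decisions, and the min/max ratio below are illustrative assumptions only, and this ratio is only one rough indicator among many possible fairness checks.

```python
# Illustrative sketch: compare "selection rates" of favorable decisions per group.
# All data here is invented for the example.
from collections import defaultdict

# Each record: (group label, model decision) where 1 = favorable outcome.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [favorable count, total count]
for group, decision in decisions:
    counts[group][0] += decision
    counts[group][1] += 1

rates = {g: fav / total for g, (fav, total) in counts.items()}
print("Selection rates per group:", rates)

# Ratio of the lowest to the highest selection rate; values far below 1.0
# suggest the system favors one group over another.
disparity = min(rates.values()) / max(rates.values())
print(f"Disparity ratio (min/max): {disparity:.2f}")
```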

Artificial Intelligence is not so… artificial.

With the exponential and unbridled development of artificial intelligence, the temptation is, and will increasingly be, to use it to establish new modes of differentiation and unparalleled targeting approaches, in pursuit of more economic growth and more competitiveness.

There is a tension between, on the one hand, the need for organizations and individuals capable of tolerance toward diversity and of understanding inclusivity in order to build more equality in society, and, on the other hand, a global economic system that encourages and exacerbates, more than it restrains, behaviors that lead us into forms of competition where differentiating, and therefore discriminating, is a rule of the game that leads to success.

This tension is being amplified by what artificial intelligence is able to codify in a systematic and systemic way in our digital society: it is one of the greatest challenges of our time.

Artificial Intelligence is already penetrating every pore of society:

  • Personal assistants are now virtual and allow us to perform basic daily tasks,
  • Market analyses are carried out by machines that produce studies, such as competitor comparisons, and generate detailed reports,
  • Usage behaviors, buying processes, and customer preferences are scrutinized by CRMs that integrate more and more intelligence and can make predictions about customer needs,
  • Customer service is also provided by chatbots able to answer the questions most frequently asked by visitors to a website,
  • Etc.

And all this is only an appetizer compared with the possible applications already emerging, part of a future that is fast approaching, with:

  • Autonomous vehicles (bicycles, cars, trains, planes, boats, etc.),
  • Robots assisting surgeons in operating rooms,
  • The creation of content (videos, music, articles, etc.) produced entirely by machines,
  • Public policies whose measures would be prescribed, and whose performance predicted, by the analysis of large volumes of data,
  • Etc.

In view of the societal challenges it poses for the future of humanity, artificial intelligence is far from being as artificial as its name might suggest.

Either we plan to use artificial intelligence to increase our ability to eliminate visible and invisible inequalities to levels never achieved before, or we (un)consciously plan to increase them to the same scale. In the era of artificial intelligence we are entering, there will be fewer and fewer in-betweens.

Artificial Intelligence opens a new era for human learning.

We humans are responsible for what machine learning, essential to all artificial intelligence, learns: how it learns, what it learns, and what it does not learn. How we teach the machine what it should learn is at the heart of 21st-century learning issues. This implies that we must not only continue to learn how to develop our own intelligence, but also understand and learn how the machine learns to develop its own.

Machine and human learning face similar challenges:

  • Supervised vs unsupervised learning (see the sketch after this list),
  • Structured vs unstructured learning,
  • Few-shot learning vs “blink” learning (Malcolm Gladwell),
  • Long/short-term learning vs the “forgetfulness vs retention” tradeoff,
  • Zero-shot learning vs “dream” learning,
  • Visuomotor learning vs multisensory learning (auditory-visual-kinesthetic, AVK).
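
As a purely illustrative aside, the sketch below contrasts the first pairing in the list, supervised versus unsupervised learning, assuming scikit-learn is available; the synthetic dataset and model choices are illustrative assumptions, not examples from the text.

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic, illustrative data: 200 points in two well-separated groups.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised learning: the machine is given the "right answers" (labels y).
classifier = LogisticRegression().fit(X, y)
print("Supervised accuracy on the training data:", classifier.score(X, y))

# Unsupervised learning: no labels; the machine must find structure on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes found without labels:",
      [int((clusters == k).sum()) for k in (0, 1)])
```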

By learning how machines can learn, we are discovering, and will keep discovering, new ways of learning that until now had not been explored or even imagined. These could well revolutionize what we know about our own ways of learning and increase human intelligence.

But make no mistake about it.

Intelligence and knowledge are not synonymous, and increasing our knowledge is a necessary but insufficient condition for increasing our intelligence. Increasing our human intelligence is above all increasing our ability to question, to challenge the status quo, to arouse our curiosity and bring new questions to our minds, for the discovery and rediscovery of what we think we know, and of who we are.

Artificial Intelligence is much less intelligent than we imagine.

Without going so far as to imagine an artificial intelligence capable of imitating human feeling, there is something that inevitably distinguishes artificial intelligence from human intelligence, and it resides in the understanding and apprehension of context.

A context is made up of many parameters: some ostensibly visible to the naked eye, others more discreet, finer, more subtle, made up of weak signals and details, all of which count in characterizing that context. Given the perpetual evolution specific to any context, it will take time before an artificial intelligence is able to appreciate the complexity of a situation-specific context.

Artificial Intelligence cannot do everything.

Building the artificial intelligence we need for the good of society necessarily requires a vision: one that allows us to understand which tasks are and will be best performed by machine intelligence, which are and will be best performed by human intelligence, and which must and will continue to be performed by humans, no matter what.

The answers that our societies establish to build the framework according to which artificial intelligence is intelligent for humans will shape the future of all humanity: not only in terms of the many innovations and new forms of competitive advantage that will change the laws of markets as we know them today, but, much more importantly, on the sociological level, and therefore for the world that we will leave to future generations.

Most of the time, when we think of “Machine Learning”, our mental model leads us to think of it as a strictly one-way approach in which we teach the machine and give it, in different fields, as many means as possible to learn on its own.

Artificial Intelligence is causing a profound change in this link between humans and machines, a link that will become increasingly critical and exciting to explore because it is in fact already more bidirectional than ever. So the question is posed to us: what can we learn from the intelligence of the machine to improve ourselves, and what we do, as human beings?

We will have to learn to think differently about many things in order to make the machine do things that would be humanly difficult, or even impossible, for us to achieve in the same way. And we will also be able to seize new opportunities to learn and to train ourselves in many things whose expertise can today be acquired only at the cost of long effort and many years, and whose performance can be achieved only by human execution.

Artificial Intelligence interferes in the making of many decisions

If artificial intelligence, and the recommendations it produces, opens up unsuspected opportunities to increase not only our own intelligence but also the nature of the relationships and emotional attachments we might develop with the machine in the future, it also raises delicate questions of environmental and social responsibility: at what point does the decision support provided by artificial intelligence act with such a degree of influence that it ends up deciding silently in place of the human?

This complicated question is on our doorstep.

The answer, which depends in particular on the degree of vulnerability that society recognizes in each of us at a given moment, at a particular point in our lives, in particular circumstances of existence, can take on as many nuances as there are people.

This is why applications, devices, and any technological equipment powered by some form of artificial intelligence will have to state explicitly and readably which parameters the algorithm does and does not take into account, and which potential implications could represent a danger to oneself or to others, in order to promote responsible use of these machines' artificial intelligence, to prevent the risk of inappropriate use, or even to prohibit such use.

Artificial intelligence forces us to take up the great challenge of making it explicitly explainable to everyone and for everyone: explainable about the causes of the results it offers to guide decisions that will increasingly impact our lives and society as a whole. This is paradoxical, since we humans do not know how to explain the why of many of our own decisions in a way that most people would understand, and in a way that is correct and fair.
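
As one hedged illustration of what “explainable” can mean in practice, the sketch below uses permutation feature importance, a common, though by no means the only, and not a causal, way to surface which inputs a model relies on; the synthetic data and model choice, assuming scikit-learn is available, are assumptions made purely for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic, illustrative data: 300 samples, 4 features, binary outcome.
X, y = make_classification(n_samples=300, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much the score drops:
# the larger the drop, the more the model relied on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance {score:.3f}")
```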

Artificial Intelligence will profoundly change the value of work

Some fear that Artificial Intelligence will come to replace humans. 

While the science-fiction image of an artificial intelligence supplanting humanity, Terminator-style, is precisely that, fiction, there is a paradigm that must be included in what the digital society is hatching within it:

Artificial intelligence can be better than humans at performing certain tasks, and yet is not and will not be better than humans at performing all tasks.

With the developments in artificial intelligence, we are experiencing, and will continue to experience, a transformation from the knowledge economy towards the trust economy, motivated on the one hand by the need for more predictability, more precision, and more efficiency, and on the other hand by the need for more fairness, more transparency, and more sustainability.

For the future of “knowledge workers”, digital technology, and in particular artificial intelligence, will bring about five types of change, each of which will disrupt society on a greater or lesser scale, with more or less impact, according to the predominant nature of work and its relative value on each continent of the planet:

  • Starting with what can be a primary source of anxiety, largely fueled by the image of artificial intelligence disseminated by pop culture, there are first of all the professions that will disappear. This is nothing new: in other times, during other industrial revolutions, this phenomenon already existed.
  • Then there are the jobs that will be augmented by artificial intelligence. Again, this is nothing new. By analogy, in other times, as a result of previous industrial revolutions, this phenomenon also already existed.
  • Then there are those jobs that will evolve to become tech-jobs.
  • And there are those jobs that are quite difficult to imagine now, because their usefulness is intrinsic to needs of our societies about which we still know little or nothing.
  • But we must not be naive: the development of Artificial Intelligence is already creating, and will continue to create, precarious jobs, crutch jobs that compensate for the lack of intelligence of artificial intelligence. These are, for example, the shadow workers who label tons of data in a frenzy of particularly repetitive tasks to help artificial intelligence learn, and who ensure, for instance, that certain despicable and unbearable content, content that breaks the law, is blocked from access via the platforms we use, with the impact that viewing such content can have in the long run on the mental health of these “workers”.

Which of these five types of changes brought by artificial intelligence will have the greatest impact on the evolution of work in our societies? Hard to predict. However, even if this is not the only moving force in the kinetics of the transformations that characterize our century, it will obviously be up to us to decide.

In any case, Artificial Intelligence has no other ethics than ours.

Artificial intelligence has no ethics: it is simply a question of ours, and ours only.

Our ethical principles are ultimately, for artificial intelligence, an integral part of its functional requirements, which consequently digitally codify the biases of which we are the intellectual owners.

Artificial Intelligence somehow inherits the ethical genes of its creator.

Making the invisible codes of our societies visible is probably one of the most transformative advances that artificial intelligence will allow humanity to achieve.

Such a level of transparency on the unsaid and the unwritten, thus brought to light, will help to achieve greater equality, and will profoundly redefine the citizen demand for justice in our societies.

It is also an opportunity to make any artificial intelligence that will interact with ours, and that will coexist with us, as much as possible the product of collective intelligence, and as far as possible the receptacle of the wealth that synergies drawn from human diversity, in all its forms of intelligence, can produce.

The augmentation of our intelligence by that of the machine will always be confronted with the existential question of the human cause we give this intelligence the mission of serving.

This is why we must make “artificial intelligence” an intelligence inspired by the quintessence of what is best in our humanity, excluding all the dark parts of human nature. This is probably the most dizzying question, but also the most decisive one for the future of humanity.

It is an ethical question to which only our humanity has the power and the responsibility to provide an answer, constantly renewed, to build the future in which we wish to live.


Hamilton Mann is the Group VP of Digital Marketing and Digital Transformation at Thales. 

He leads the digital transformation of marketing globally for 68 countries to drive enhanced customer engagement and experience and the excellence of Thales’ integrated Marketing campaigns and digital Sales practices.  

He is known for having implemented the worldwide Thales Digital Seller initiative, a “ContentFlix” platform aimed at equipping Thales’ Sales & Marketing enablement practices across Thales’ 7 Global Business Units, 35 Business Lines, and 68 countries, strengthening the collective willingness to break down internal silos and leverage the full potential of Thales’ business synergies for faster sales, upsell, and cross-sell.

Hamilton Mann is the President of the Digital Transformation Club of INSEAD Alumni Association France (IAAF), mentor at the MIT Priscilla King Gray (PKG) Center, contributor at Harvard Business Review France and was voted among the Top 10 Global Thought Leaders and Influencers on Digital Disruption by Thinkers 360 in 2021 and 2022.