
Mike Walsh: The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You

The Algorithmic Leader

You don’t have to be working for a technology company for algorithms to matter. Every company today is an algorithmic company, whether it knows it or not.

Algorithms are not purely abstractions. They are a bridge between computation and real-world challenges. They present powerful opportunities for those who know how to work with them. They allow us to take our knowledge, experience and insights about the world and build them into platforms that can then act autonomously on our behalf.

How can someone trained in the analogue era truly rise to become an algorithmic leader?

A tale of two leaders

The aim of this book is to explore the personal qualities, cognitive frameworks and strategic approaches exhibited by a small but growing group of leaders who seem to thrive in this new environment. Ultimately, it is about finding your own response to the algorithmic age.

An algorithmic leader is someone who has successfully adapted their decision making, management style and creative output to the complexities of the machine age.

The leader in the rhizome

In the 1970s, two French philosophers, Gilles Deleuze and Félix Guattari, challenged the prevailing notion of knowledge as a tree-like hierarchy. Deleuze and Guattari found that model of describing the world inadequate to explain the multiplicity of human society and culture. In their view, there was a more appropriate metaphor from the natural world: the rhizome. A rhizome is the tangled mass of roots of plants like bamboo, lotus or ginger. The rhizome is a complex network used not only for reproduction, but also for storing nutrients and energy for all new plants that are propagated from it.

The rhizome is also a useful way of thinking about leadership in an algorithmic age. Like a rhizome, algorithmic leaders have to thrive without clearly defined hierarchies or structures. You need to be a connector, not a controller.

Being an algorithmic leader means more than just being able to share a few rehearsed anecdotes about artificial intelligence and big data. It means learning to tamp down your own ego, willingly tearing down the corporate structures that support your status, letting go of the idea that you need to make all the decisions, letting your teams self-organize and self-manage, not worrying about being seen to be right all the time, being receptive to more open forms of partnerships and work arrangements, and embracing a new, uncertain future.

Most of us who are currently in leadership positions started out as analogue leaders. We need to make a conscious decision to adapt and evolve and to recognize that the availability of data and algorithms should change our viewpoint.

The end of all jobs?

While algorithms might not necessarily replace the need for human beings, they do increase the responsibility placed on us.

An algorithm cannot be a stand-in for true leadership. We still need real-life humans who can interpret what the machines are telling us, who can decide whether those conclusions are appropriate and ethical and who know how to best orchestrate the capabilities of machines that are smarter than us.

While we may end up making fewer decisions in the future, leaders will need to spend more time designing, refining and validating the algorithms that will make those decisions instead.

For leaders, the real question is not how smart machines can be, but rather: What does “smart” now mean when it comes to humans? Surviving the algorithmic age doesn’t require you to be smarter than machines. You just need to know what it takes to be smart.

About the book

This book is based on 10 principles that I’ve organized into three stages of a journey of transformation, starting with your own mindset, then extending to the people with whom you work and finally expanding to the world around you:

  • Change Your Mind
  • Change Your Work
  • Change the World

The 10 principles are:

  • Work backward from the future
  • Aim for 10x, not 10%
  • Think computationally
  • Embrace uncertainty
  • Make culture your operating system
  • Don’t work, design work
  • Automate and elevate
  • If the answer is X, ask Y
  • When in doubt, ask a human
  • Solve for purpose, not just profit

Work Backward from the Future

Algorithms have inputs and outputs. Every step generates a result, which we can then feed as an input to the next step. When you scale up this idea, algorithms allow us to address very complex, real-world challenges.
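
As a hedged illustration of this idea of chaining steps, here is a minimal sketch in which each step's output becomes the next step's input. The task, names and data are invented for the example:

```python
# A minimal sketch of the idea that each algorithmic step's output
# becomes the next step's input. The task and data are illustrative.

def clean(records):
    """Step 1: discard incomplete records."""
    return [r for r in records if r.get("spend") is not None]

def score(records):
    """Step 2: turn each record into a simple score."""
    return [{"name": r["name"], "score": r["spend"] / 100} for r in records]

def rank(scored):
    """Step 3: order by score, highest first."""
    return sorted(scored, key=lambda r: r["score"], reverse=True)

def pipeline(records):
    # Each step's result is fed directly into the next step.
    return rank(score(clean(records)))

customers = [
    {"name": "Ana", "spend": 250},
    {"name": "Ben", "spend": None},   # incomplete, dropped in step 1
    {"name": "Cruz", "spend": 900},
]
print(pipeline(customers)[0]["name"])  # prints "Cruz"
```

Scaling this up — more steps, more data, learned rather than hand-written rules — is how such pipelines come to address complex, real-world challenges.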

While algorithms have been around for thousands of years, the real reason we now live in an algorithmic age is that there have been dramatic advances in deep learning. Systems based on machine learning algorithms adapt themselves as they work. Essentially, machines can now write their own instructions. It is only now, because of machine learning, that we face the new reality of computers that can be smarter than us.

In his book Life 3.0, physicist and cosmologist Max Tegmark considers intelligence to be the ability to accomplish complex goals.

The founder of Satalia is Daniel Hulme. In Hulme’s view, if a system is not adapting itself, learning from its mistakes and improving its model, it is not AI; it is just a form of automation.

This ability to adapt, learn and achieve proficiency within narrow domains is why machines are becoming smarter than us in specific areas.

According to DeepMind CEO Demis Hassabis, building an algorithm that can learn without human knowledge allows that algorithm to be applied more easily to multiple real-world problems.

But here is the important part of the story: while machines will get dramatically better at extracting insights from data, spotting patterns, and even making decisions on our behalf, only humans will have the unique ability to imagine innovative ways to use machine intelligence to create experiences, transform organizations, and reinvent the world.

Researcher Alexandra Samuel uncovered three distinct digital parenting styles.

  • The first group of parents she called digital enablers.
  • Digital limiters, by contrast, are parents who will use the off switch.
  • The final group are digital mentors, who take an active role in guiding their kids in the digital world.

Ultimately, these differences in parenting will, according to Samuel, lead to three separate groups in the upcoming generations: digital orphans, digital exiles and digital heirs.

We are just entering a new age of algorithmic experiences that will be fueled by exponential advances in machine intelligence. AI has the potential to transform the way we interact with the world, but it will be your job as an algorithmic leader to imagine what that future might look like and figure out how we get there. A useful way to start designing algorithmic experiences is by thinking about the relationships between intentions, interactions and identity.

Intentions are the often unarticulated needs or desires of a user or customer, which can be deduced from their behavior. Interactions are the method or manner by which you use a platform, product or service. Identity is the cognitive or emotional impact of the experience and the degree to which it has become integrated into a participant’s sense of self.

The true measure of success for an algorithmic experience is that you stop noticing the algorithm altogether.

In an age of algorithmic experiences, anticipating what someone may want without their having to ask will be the new normal.

The next big shift in interface design is the move toward more natural interactions. Our bodies are becoming interfaces.

The sheer scale of the digital world in China, the amount of data that its local platforms can collect, and their ability to use this data to train ever-improving algorithms mean that what happens in China will determine the fate of algorithmic systems everywhere.

When algorithms become deeply embedded in our daily lives, they have the potential to greatly influence how we behave. If we reach that point, we will no longer be able to easily discern how much of our memory, experiences, tastes or even our own identity is native to us and how much is merely the technological extensions of ourselves.

Technology companies have long been experimenting with choice architecture on not only their users but also their own employees.

In the long term, the real driver of business value will be the ways that algorithms and AI can create compelling customer experiences.

Masayoshi Son, CEO of SoftBank, is an example of an algorithmic leader who starts with a strong vision of the future and works backward from there.

Use the Wheel of Algorithmic Experience — intentions, interactions and identity — to imagine how algorithms might respond to what your customers want, as well as how they behave and see themselves.

Aim for 10x, Not 10%

If you are simply automating your existing processes, adding a chatbot to your website or updating your mobile app, then in all probability you are not thinking big enough about your future opportunities. Too often, digital transformation is just digital incrementalism.

The only thing worse than a lack of growth is the opportunity cost of not growing fast enough. When you have gained enough scale, you can approach the structure of your organization, the design of your platforms and the dynamics of your industry in a completely different way. Reid Hoffman refined his network growth concept into something he calls Blitzscaling. His idea was that startups are in a race to reach the point in their life cycle when the most value can be created. Take too long, and either your competitors overtake you or you drain yourself of the resources you need to survive. Basically, you need to go straight from “startup” to “scale-up”.

When a venture capitalist says they need 10x returns on their investment, it sounds greedy until you understand that investing in startups is inherently risky and many don’t work out as expected.

If you can structure your organization around learning models built on data loops, you will create a reinforcing cycle.

A great idea is better than a good idea

Great ideas can be the foundation of a great business, but they can also hold you back if you let them. You need to be flexible enough to allow new ideas to replace the old. Sometimes that means nothing more than a change in structure. But sometimes, it means a change in people.

What makes Nadella’s approach both powerful and pragmatic is how he distinguishes between capabilities and business units.

Technology companies, by contrast, are driven by capability rather than structure. To chase new 10x opportunities, you can’t rely on your traditional silos and functional departments. Even if you know where the future of your business might be, getting there requires a more agile approach — in other words, allowing your teams and leaders to quickly adjust their plans, projects, responsibilities and even job titles without adhering to rigid organizational structures and approval processes.

Unlock the value of your knowledge

Geoffrey Hinton, Alex Krizhevsky and Ilya Sutskever did not use neural networks as part of their winning strategy in the 2012 ImageNet challenge by chance. Hinton was persuaded to use the technique by a young Stanford college professor and computer prodigy named Andrew Ng.

Although fast, conventional computer chips can only manage a few computational tasks at any one time, and neural networks require the capability to run thousands of calculations at the same time. Fortunately, chips that could meet that need did exist. They were designed for video games, and the best ones were made by NVIDIA.

At Baidu, Ng often designed and launched applications with the specific goal of acquiring particular datasets that targeted an aspect of user behavior or a geographic region.

There are no shortcuts to becoming an algorithmic organization. Given the value of your data, the first step is to centralize your information in one place — virtually.

Pooling data from disparate systems is not enough. Algorithmic leaders also need to think carefully about data availability, data acquisition, data labeling and data governance.

The plan is to build a virtual copy of every engine Rolls-Royce makes, combining data insights from throughout the business with design and manufacturing data, resulting in a perfect digital twin of the underlying physical asset.

A company with no people

Data and algorithms offer traditional companies a chance to reinvent themselves. The more provocative question, however, is whether we still need the company itself in the twenty-first century.

Companies as a concept have been around for a long time. But could you design a company that had no people in it and that existed as nothing more than lines of code?

Ronald Coase wrote a short, but highly influential paper called “The Nature of the Firm” (1937), in which he argued that companies exist to lower the transaction costs incurred should you need to use the market every time you need to get something done.

Blockchain is a profound idea that will likely change the structure of companies in the twenty-first century. Smart contracts are agreements written in code.
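
To make the idea of a smart contract concrete, here is a toy sketch. Real smart contracts run on a blockchain and are typically written in languages like Solidity; this Python fragment, with invented names and amounts, only illustrates the core idea that the agreement's terms execute themselves:

```python
# A toy illustration of a self-executing agreement. Real smart
# contracts run on a blockchain (e.g. written in Solidity); this
# only sketches the idea that the terms are enforced by code.

class EscrowContract:
    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.deposited = 0
        self.delivered = False
        self.settled = False

    def deposit(self, amount):
        self.deposited += amount
        self._maybe_settle()

    def confirm_delivery(self):
        self.delivered = True
        self._maybe_settle()

    def _maybe_settle(self):
        # The "agreement" executes itself once its conditions hold:
        # payment is released only when funds are in AND goods delivered.
        if self.delivered and self.deposited >= self.price:
            self.settled = True

contract = EscrowContract("alice", "bob", price=100)
contract.deposit(100)
contract.confirm_delivery()
print(contract.settled)  # True: terms met, payment released automatically
```

No manager, lawyer or clerk intervenes between the conditions being met and the outcome occurring — which is why such code can replace some of the coordination work companies exist to do.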

The future of your company may be no company at all.

Think Computationally

Computational thinking is an approach to solving problems and making decisions that allows you to leverage data and technology to augment your capabilities. Although the concept was popularized by Jeannette Wing, former head of computer science at Carnegie Mellon, it is really just a form of “first principles thinking”, a technique that has been around since the time of Aristotle.

Computational thinking is a structured approach to problem solving.

It is not only lawyers who use analogies to persuade.

Algorithmic leaders take a different approach to evaluating problems and making decisions. Analogies are not enough, and they can be misleading if you don’t have the data to support the purported similarities.

Reasoning by analogy alone not only is dangerous when it comes to strategy, but also can create confusion when it comes to culture and leadership.

Musk used first principles thinking. Aristotle defined a first principle as “the first basis from which a thing is known”. First principles thinking is therefore the art of breaking a problem down to the fundamental parts that you know are true and building up from there.

Musk kept using first principles thinking. When he was advised that it was impossible to cost-effectively use batteries to store energy for homes and cars, he once again broke the problem down into smaller parts.

Like reasoning from first principles, computational thinking involves taking a problem and breaking it down into a series of smaller, more manageable problems (decomposition). These problems can then be considered in the context of how similar problems might have been tackled in the past (pattern recognition). Next, you can identify simple steps or rules to solve each of the smaller problems (algorithms), before considering what the bigger picture might be (abstraction). You can express these principles as a series of steps, applicable to any problem:

  • Break a problem into parts or steps
  • Recognize and find patterns or trends
  • Develop instructions to solve a problem or steps for a task
  • Generalize patterns and trends into rules, principles or insights
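
As an invented worked example (the task, data and threshold are not from the book), the four steps above might look like this for the concrete problem of spotting unusual daily sales figures:

```python
# An invented worked example of the four steps: finding unusually
# large daily sales figures. The data and thresholds are illustrative.

daily_sales = [120, 130, 125, 128, 400, 122, 131]

# 1. Decomposition: split the problem into "find what is normal"
#    and "flag what deviates from it".
def typical_value(values):
    return sum(values) / len(values)

# 2. Pattern recognition: similar problems are often solved by
#    comparing each point to a typical value.
# 3. Algorithm: a simple rule applied to each smaller problem.
def find_outliers(values, factor=2.0):
    baseline = typical_value(values)
    return [v for v in values if v > factor * baseline]

# 4. Abstraction: generalize the rule into a reusable principle --
#    it works for sales, response times, or any numeric series.
print(find_outliers(daily_sales))  # [400]
```

The same four-step decomposition applies whether you end up writing the code yourself or handing the "crunching" to a data team.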

One of the main advantages of computational thinking is that it offers the ability to separate the strategy (how to approach a problem) from the execution (crunching the data).

Trust the algorithm

A good AI team requires more than just a collection of AI experts; it requires a practical diversity of skills, knowledge and perspectives.

While benchmarking algorithmic performance against the performance of experts in your company is one way of building trust, another approach is simply to compromise and provide a little bit of control, even if it leads to suboptimal results.

In 2016, DeepMind worked with Google to develop an AI-powered recommendation system to improve the energy efficiency of Google’s data centers, which ended up reducing the amount of energy Google used for cooling by up to 40 per cent. While this was an impressive result, the DeepMind algorithm was not at that stage in direct control of the cooling system. That took two more years.

Being able to use a domain-specific language is not quite the same as being able to program in a language like Python or JavaScript, as it requires an understanding of both coding and business.

A small group of people should be able to run a complex business without having hundreds of programmers on staff. That only becomes possible if your programmers have built the right infrastructure and high-level languages that then allow business people to execute their ideas, design processes and build powerful applications.

In Narang’s view, the most highly compensated and coveted people in the future won’t necessarily be the most skilled programmers or the smartest MBA graduates. They will be the people who can live at the intersection of technology and business, who can devise and drive the domain-specific languages that will allow them to shape and reshape their business model.

A key barrier to computational thinking in your organization is algorithm aversion, or human mistrust of the recommendations made by an AI system.

In the future, the most effective computational thinkers will be those who can directly express their ideas and execute their strategies in domain-specific programming languages.

Embrace Uncertainty

You can, however, embrace uncertainty by adjusting your views as new information becomes available.

In order to do that, you need to learn something about Thomas Bayes, an English clergyman and mathematician who proposed a theorem in 1763 that would forever change the way we think about making decisions in ambiguous conditions.

Bayes figured out that even when it comes to uncertain outcomes, we can update our knowledge by incorporating new, relevant information as it becomes available. Many years later, the French mathematician Pierre-Simon Laplace developed Bayes’s idea into a powerful theory, which we now know as Bayes’ theorem.

Bayes is relevant to modern leaders because it can help them develop an approach to uncertainty that is less deterministic and more probabilistic.

Deterministic models produce a single solution that describes the outcome of an experiment given appropriate inputs; in other words, for every possible input, there is a single output. A probabilistic model distributes probability over all possible solutions and provides some indication of how likely each one is to occur.
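
A minimal Bayesian update makes this concrete. The scenario and numbers below are invented for illustration — estimating the probability that a sales lead converts, revised as new evidence arrives:

```python
# A minimal Bayesian update, with invented numbers: estimating the
# probability that a sales lead converts, revised as evidence arrives.

def bayes_update(prior, likelihood, false_alarm_rate):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(not H)]"""
    evidence = likelihood * prior + false_alarm_rate * (1 - prior)
    return likelihood * prior / evidence

# Prior belief: 10% of leads convert.
p = 0.10
# New evidence: the lead requested a demo. Assume (illustratively) that
# 60% of converting leads do this, but only 20% of the rest do.
p = bayes_update(p, likelihood=0.60, false_alarm_rate=0.20)
print(round(p, 3))  # 0.25 -- the belief rises, but stays uncertain
```

Each new piece of evidence repeats the same update, which is what "being less wrong with time" looks like in practice.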

Our instinct for determinism may well have been an evolutionary innovation. To survive, we had to make snap judgments about the world and our response to it.

Rather than trying to be right, gamblers try to be less wrong with time.

By understanding the data around which leads go on to become great customers, a sales leader can then work closely with their marketing colleagues to figure out new sources of potential customer prospects.

Developing a probabilistic mindset allows you to be better prepared for the uncertainties and complexities of the algorithmic age.

Management for modern times

Perhaps the simplest way to improve a meeting is to keep it brief.

At Zappos, there are no job titles, only roles, and people in the organization can hold multiple roles at any given time.

The more you can understand about the meeting mechanics that drive good outcomes in your own culture, the more consistent results you will achieve.

A decision was said to reflect one of four styles: authoritative (the leader has full responsibility); consultative (the leader makes a decision after weighing group input); voting; or consensus.

Bad meetings are a symptom, rather than the cause of ineffective organizations.

If you have adopted a culture of transparency, where data and facts drive decisions, projects are coordinated by algorithms, and work is done by small, empowered teams, the primary function of meetings becomes problem solving and creative development, rather than compliance and control.

For Bezos, there are two categories of decisions. Type 1 decisions are the mission-critical, high-impact choices that influence higher-level strategy and can determine your future; Type 2 decisions are the lower-stakes choices that can be reversed if need be.

Despite its digital DNA, Rakuten used to be a very sales-driven organization. Over the last few years, Rakuten has transformed itself from a sales-obsessed company to a data-focused one. The company’s vision, Kitagawa explained, was to become a membership company to compete with Amazon and Facebook.

“Shikumika is to systemize”, explained Kitagawa. “In practice, that means to learn ideas from one part of the business, turn them into a system and apply them elsewhere”.[1]

An effective algorithmic brain trust is the perfect example of twenty-first century shikumika.

Decision making in the algorithmic age is a moving target. The boundaries of what can and should be automated will shift constantly as AI improves and more data becomes available.

Make Culture Your Operating System

Technology may have changed the hardware of your business, but culture is your true operating system.

One of the most influential documents on how to manage people in an algorithmic era came from Netflix. Shared millions of times on SlideShare, the 124-page document called Netflix Culture: Freedom & Responsibility was written by Patty McCord, the former head of talent at the company, who spent fourteen years in the job.

However, rather than creating a complex new model for managing people (like Holacracy), they did the opposite. They kept removing policies, processes and procedures so that people could get stuff done, guided by their own judgment.

Netflix discovered that by embedding a core set of behaviors in its people, and then giving them the freedom to practice them, its teams would be naturally motivated, proactive and ultimately successful. Principles rather than processes are what matter.

Amazon’s 14 Leadership Principles work because they are a codification of useful behaviors that are already practiced daily at the company.

Daniel Hulme started his company with a depressing thought: he calculated that he had only 700 months left to live. In ancient philosophical writings he discovered a universal principle that still holds true: the ultimate meaning of life is to maximize happiness and minimize suffering. In short, the meaning of life is to maximize good. He created Satalia, a company that started out as a conduit for academic algorithms to be applied more broadly in the world.

The company uses machine learning to understand how people are connected across the organization and pinpoint who has the right expertise to be making certain decisions. Hulme believes that the key to success in a decentralized organization is for leaders to act as humble gardeners rather than prison guards.

Ali Parsa, the founder of Babylon Health, is an algorithmic leader we will meet later in this book.

In mid-2016, IBM started calling people back to the office. As Jeff Smith, IBM’s CIO, put it:

“Leaders have to be with the squads and the squads have to be in a location”.[2] Remote work had been a feature of life at IBM for a long time. But at a time when the company required fresh ideas and more disruptive innovation, its leaders were hoping that bringing people back together might deliver the productivity gains it needed.

NBBJ is one of the world’s leading design firms, used by technology companies like Google and Amazon in the US, and Alipay and Tencent in China. NBBJ is increasingly using algorithmic, computational design frameworks to help its clients reinvent their workspaces. This approach, known as parametric design, uses algorithms and computer models to simulate how a building’s occupants will use a space.

As a leader, Jim Barksdale, the former CEO of Netscape, is a contradictory mix of Southern charm and ruthless business acumen. One of my favorite quotes is attributed to him: “If we have the data, let’s look at the data. If all we have are opinions, let’s just go with mine”.[3]

Changing behavior in an organization is not easy unless you can have a fact-based conversation about it.

Waber subsequently co-founded, with Daniel Olguin, Taemie Kim, Tuomas Jaanu and MIT Professor Alex Pentland, Humanyze, a behavioral analytics company that uses wearable sensors to transform company culture and operating models.

Clever team design is a good way to accelerate cultural change. Aldo Denti’s pod teams at Johnson & Johnson are an example of how team structures can support innovation, agile management and rapid development when breakthrough growth is required.

Don’t Work, Design Work

The job of an algorithmic leader is not to work. Their real job is to design work.

Unfortunately, a big part of “getting work done” for the last fifty years has translated as standardizing activities and outcomes to establish benchmarks against which to measure people.

Welch’s Vitality Curve, also known as a stack or forced ranking (or more colloquially, the rank and yank method), is a management practice that requires an entire company to be sorted into three groups.

Part of being an algorithmic leader means being able to constantly step back from the task, activity or mission at hand and ask yourself: Is this the smartest way of doing this?

Digital transformation requires you to not only automate your processes but also reimagine what you do. In some ways, this is an extension of the logic in Michael Hammer and James Champy’s 1993 book Reengineering the Corporation. In that groundbreaking book, the authors argued that companies need to step back from their processes and focus on the actual objectives that they want to achieve. Leaders could then study the workflows and figure out the tasks that are required to achieve those objectives.

Digital transformation begins with the customer and asks leaders to consider, given the data, algorithms and digital platforms at their disposal, how they might fundamentally reimagine the entire customer experience.

Analogue leaders look for a profitable way to run their business, to make a reasonable return on people and assets. Algorithmic leaders try to design a model that allows them to deliver their service on a truly global scale.

In order to create a “free library” equivalent for healthcare, Babylon needed to find a way to leverage the expertise of human knowledge without hiring lots of humans. Parsa and his team began exploring ways of triaging patients and diagnosing conditions using algorithms and AI, while still relying on human doctors for more complex conditions and sensitive discussions.

What makes Ali Parsa an effective algorithmic leader is not just his company’s use of AI, but also his ability to constantly reframe his objective — the global delivery of affordable healthcare — through the lens of technologies and practices that scale up.

Ganesh Padmanabhan is a VP at CognitiveScale, an AI startup in the field of augmented intelligence. Augmented intelligence is the attempt to mimic human cognitive functions with a feedback loop built into the system.

Rather than simply automating obvious processes, algorithmic leaders should attempt to identify, record and replicate the best behavioral patterns across their organization. In a data-driven organization, you can constantly iterate and test, and by doing so, gradually build a picture of what the ideal state of your organization or process should look like.

By designing and managing the digital version of its physical product, Rolls-Royce was able to transform itself from a manufacturer that competed with other vendors on price to an algorithmic partner to the airlines that was integral to their operating efficiently. The Rolls-Royce Trent engine is an early example of a digital twin.

A digital twin is a digital model of a physical object or process that allows you to optimize its performance.
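
As a hedged sketch of the idea, a digital twin is essentially a software model kept in sync with telemetry from a physical asset, which you then query instead of the asset itself. All names, formulas and thresholds below are invented for illustration:

```python
# A hedged sketch of a digital twin: a software model kept in sync
# with sensor readings from a physical asset, used to anticipate
# maintenance. All names and thresholds are invented for illustration.

class EngineTwin:
    WEAR_LIMIT = 100.0  # illustrative maintenance threshold

    def __init__(self, engine_id):
        self.engine_id = engine_id
        self.hours = 0.0
        self.wear = 0.0

    def ingest(self, hours, temp_c):
        """Update the virtual model from real-world telemetry."""
        self.hours += hours
        # Toy wear model: hotter running wears the engine faster.
        self.wear += hours * (1.0 + max(0.0, temp_c - 600) / 100)

    def needs_service(self):
        """Ask the model, not the physical engine."""
        return self.wear >= self.WEAR_LIMIT

twin = EngineTwin("engine-001")
twin.ingest(hours=40, temp_c=620)   # wear += 40 * 1.2 = 48
twin.ingest(hours=45, temp_c=650)   # wear += 45 * 1.5 = 67.5
print(twin.needs_service())  # True: schedule maintenance proactively
```

Because the twin can be simulated forward cheaply, maintenance and design decisions can be tested against the model long before they touch the physical asset.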

As the economist W. Brian Arthur argues, as company processes and products become more digital and modular, leaders will be able to access a library of existing virtual structures that they can use like LEGO pieces to build entirely new organizational models.

The real job of an algorithmic leader is not to work but to design work. Look for the scaled-up solution. A great example of using algorithms to design work is building a digital twin.

Automate and Elevate

James Bessen, an economist and lecturer at the Boston University School of Law, studies the relationship between automation and employment.

Given that both capital and human labor are finite resources, doing more with less should translate into lower prices. And, as prices fall and more people can afford to buy more things, the market will expand such that companies will need to hire more people to meet the new demand.

The impact of automation is rarely as simple as machines replacing humans. Generally, it is humans with the ability to leverage technology replacing other humans, argues Bessen.

Finding the new job inside the old one requires leaders to look beyond the scope of the original activity or process to figure out where value can really be created.

Andrew Ng, also a pioneer in online education and co-founder of Coursera, believes that our challenge is to find a way to teach people to do non-routine, non-repetitive work. To date, our education system has not been good at doing that either at scale or fast enough to keep pace with rapid industry change.

That leaves a lot of the responsibility for education in the hands of employers. However, training is not enough, unless it helps employees migrate to a new way of working and thinking.

Automation is not only an opportunity to elevate your teams; it is also an invitation to profoundly reimagine what you do.

Goldman decided to use that available real estate to house small groups of internal technology startups created to leverage data and machine learning. An interesting example of one of those startups is Marcus, a retail bank named after the founder of Goldman Sachs. Marcus was initially created to help consumers consolidate their credit card balances. In its first eighteen months of operation, it issued $3 billion in new consumer loans.

Nike, for example, did not merely automate the die cutters and hydraulic presses that it traditionally uses to make shoes. It partnered with Flex, a technology company that makes consumer electronic goods like Fitbits. Flex, an outsider to the shoe industry, brought a fresh approach. It introduced two ideas previously considered to be impossible: the automated gluing of materials and the use of lasers to cut materials. Rather than being created from a sewing pattern, the Nike shoe was now produced on demand via a digital file.

In the next few years, robotic process automation software will take over much of the administrative and clerical work normally done by people.

Speed is of the essence. As platforms get better at identifying when a problem exists, human leaders have to be more proactive at responding to avert a crisis or seize an opportunity.

Humans, it turns out, can excel at managing exceptions and understanding the context of a problem, especially when there is a scarcity of data, significant ambiguity or numerous contradictions in what is known or provided.

As we start automating more of the repetitive parts of daily work, the most valuable use of your time will be managing exceptions and finding nonlinear solutions to complex problems.

If the Answer Is X, Ask Y

It’s challenging to navigate ethics in the digital age. As a leader in the twenty-first century, you will face difficult choices, and so will your organization.

In 2013, researchers Michal Kosinski, David Stillwell and Thore Graepel published an academic paper in the Proceedings of the National Academy of Sciences that became a useful case study for examining ethics in an algorithmic age and set the stage for what would become a watershed moment for digital privacy. Titled “Private Traits and Attributes Are Predictable from Digital Records of Human Behavior”, the paper demonstrated that Facebook “Likes” (which were publicly open by default at that time) could be used to automatically and accurately predict a range of highly sensitive personal attributes, including sexual orientation and gender, ethnicity, religious and political views, personality traits, use of addictive substances, parental separation status and age.
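The paper's core idea, predicting a sensitive attribute from a binary vector of Likes, can be sketched as a toy logistic regression. The pages, labels and training data below are entirely hypothetical, and this is a stand-in rather than the authors' actual pipeline (which reduced the dimensionality of the Like matrix before regression):

```python
import math

# Hypothetical universe of Facebook pages a user might "Like".
LIKES = ["page_a", "page_b", "page_c", "page_d"]

def vectorize(user_likes):
    """Represent a user as a binary vector over the known pages."""
    return [1.0 if p in user_likes else 0.0 for p in LIKES]

def predict(w, b, x):
    """Logistic regression: probability the attribute is present."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.5):
    """Plain stochastic gradient descent on the logistic loss."""
    w, b = [0.0] * len(LIKES), 0.0
    for _ in range(epochs):
        for likes, label in data:
            x = vectorize(likes)
            err = predict(w, b, x) - label
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Made-up users: label 1 means the (hypothetical) attribute is present.
data = [({"page_a", "page_b"}, 1), ({"page_c"}, 0),
        ({"page_a"}, 1), ({"page_c", "page_d"}, 0)]
w, b = train(data)
print(predict(w, b, vectorize({"page_a", "page_b"})) > 0.5)  # True
```

Even this crude version shows why public Like data was so sensitive: with enough labeled examples, a simple linear model can recover attributes users never disclosed.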

Aleksandr Kogan, one of Kosinski’s colleagues at Cambridge University, saw opportunity. In early 2014, Cambridge Analytica, a British political consulting firm, signed a deal with Kogan for a private venture that would capitalize on the work of Kosinski and his team.

In late 2013, I found myself in Oslo sharing a taxi with Jim Messina, who had been the White House deputy chief of staff for operations under President Barack Obama from 2009 to 2011 and had served as the campaign manager for the 2012 reelection campaign.

The real game changer for Obama's campaign in 2012, however, was not Facebook but the data from set-top boxes that had recently become available. Now the campaign could correlate voter preferences with TV-viewing habits. Messina estimated that access to this data saved the campaign over $40 million in buying efficiency and uncovered new patterns, such as segments they could target without buying prime-time slots. In effect, the data showed them exactly which shows to buy by matching voting patterns against specific viewership segments.

Algorithms and ethics

While it is still early days for the great ethical AI debates that may define the next decade, one principle is apparent: You can’t serve two masters. In the end, you either build a culture based on following the law or you focus on empowering users. The choice might seem to be an easy one, but it is more complex in practice.

When it comes to ethics, algorithmic leaders also need to be vigilant to the possibility of bias in the systems that they design and manage.

Machines can suffer from bias in the same way that people can, although machine biases tend to be of a different nature: they arise from design, data and automation.

Caroline Sinders is a data ethnographer, a relatively new type of job for a world in which the culture of datasets is becoming as important as the design of interfaces.

We are already seeing a growing number of cases where algorithms are struggling to adjust to the diversity of the world.

Algorithms can reinforce and amplify existing prejudices.

One of the biggest sources of anxiety about AI is not that it will turn against us but that we simply cannot understand how it works.

Some organizations and industries are investing in the capability to audit and explain machine learning systems. The Defense Advanced Research Projects Agency (DARPA) is currently funding a program called Explainable AI whose goal is to interpret the deep learning that powers drones and intelligence-mining operations.

The rationale behind algorithmic regulation is accountability.

The challenge of leaders

The challenge for leaders is to identify the kinds of problems in their organization that are suitable for the application of algorithmic solutions: in practice, that generally means seeking AI-generated solutions to problems that are not controversial, or politically or socially sensitive.

Algorithms, like any system, are not perfect. AI is a tool that reflects our priorities as organizations and governments.

Leaders will be challenged by shareholders, customers and regulators on what they optimize for.

As an algorithmic leader, you can ask the right questions and make the right ethical choices, but still encounter another kind of algorithmic risk: abstraction.

As organizations become more like algorithmic machines, there is a risk that their leaders will lose the ability to understand the end-to-end system.

But it is never a good idea to see only the trees and forget what the forest looks like.

The ability to question the design and data of AI systems, to challenge their assumptions, and to bring deep knowledge and domain expertise to the discussion of their future are all expressions of the most important question that algorithmic leaders need to master: Why?

When in Doubt, Ask a Human

To understand why technology can have such a strong dehumanizing effect, you have to go back to the early days of mass production, when business processes, as well as manufacturing processes, started to be industrialized.

Standardization and simplification were major drivers of business design in the twentieth century.

The service equation was actually a trade-off. Either serve a few customers with a lot of choices, or serve many by offering just a few choices. When the Internet and digital commerce arrived, suddenly the calculus changed. The trade-off disappeared.

Retailers in a digital age don’t need stores, but digital retailers are building them anyway. Today’s algorithmic store is not designed to simply sell things; it also serves as a platform to create relationships with customers.

While technology has generally supported the automation of business processes and the standardization of products and services, in the algorithmic age, leaders will be called upon to do the opposite: to create rich, immersive, personalized and ultimately human experiences for their customers.

Computers lack common sense. We can train machine learning algorithms to spot patterns and detect signals, but to date we haven’t been able to give them the ability to reason from context.

Symbolic AI had been, up until the 2012 ImageNet competition, the dominant approach for researchers attempting to create intelligent systems. Symbolic AI is very different from adaptive machine learning: it works on logic, rules and structured input provided by human beings.
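The contrast can be made concrete with a minimal symbolic system: hand-written if-then rules applied by forward chaining, with nothing learned from data. The rules and facts here are invented purely for illustration:

```python
# Each rule is a (set of conditions, conclusion) pair written by a human,
# in contrast to machine learning, where the mapping is learned from data.
RULES = [
    ({"has_feathers", "lays_eggs"}, "bird"),
    ({"bird", "cannot_fly"}, "flightless_bird"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are all satisfied,
    adding their conclusions, until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_feathers", "lays_eggs", "cannot_fly"}, RULES)
print("flightless_bird" in derived)  # True
```

The strength of this approach is transparency, since every conclusion can be traced back to an explicit rule; its weakness is that someone has to anticipate and encode every rule in advance.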

Humans have a habit of thinking and acting in ways outside of expected norms, which can challenge algorithmic expectations.

Ramya Joseph’s father taught her from an early age that math was not about learning formulas and theorems but solving practical problems. With a background in computer science and finance, Joseph was a natural fit for a newly tech-focused Wall Street and soon found herself in a relatively new area called algorithmic trading.

However, when it comes to financial decisions, the sheer volume of factors that can affect your financial outcome is mind-boggling, even for brilliant people. The reason Joseph's father still struggled to make financial decisions was that financial institutions are often set up to sell products rather than solve problems. That got Joseph thinking. How might you scale financial advice and offer it in a way that was not only fiduciary, with a no-strings-attached policy, but also affordable?

And that was the beginning of Pefin, an AI financial adviser built on a feed-forward neural network. Joseph's platform is like having a highly paid adviser on retainer, except that the insights are being delivered by an algorithm, at scale, to millions of people in a highly personalized way.
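A feed-forward network simply pushes its inputs through successive layers of weighted sums and nonlinearities, with no loops back. A minimal sketch of the architecture follows; the layer sizes and weights are made up and say nothing about Pefin's actual model:

```python
def relu(v):
    """Common nonlinearity: negative values are clipped to zero."""
    return [max(0.0, x) for x in v]

def dense(weights, bias, x):
    """One fully connected layer: each output is a weighted sum of inputs."""
    return [b + sum(w * xi for w, xi in zip(row, x))
            for row, b in zip(weights, bias)]

def forward(x):
    # Layer sizes: 3 inputs -> 2 hidden units -> 1 output.
    h = relu(dense([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1], x))
    (out,) = dense([[1.0, -1.0]], [0.0], h)
    return out

print(forward([1.0, 0.5, 2.0]))  # 0.6
```

In a real system the weights are learned from training data rather than written by hand; what makes the architecture "feed-forward" is only that information flows in one direction, from inputs to output.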

Companies that build AI systems for consumer use are starting to rely on a new discipline called human-centered machine learning. This form of design-thinking blends the hard work of finding out what people need (ethnography, contextual inquiries, interviews, observation, surveys, reading customer support tickets, logs analysis) with an iterative approach to software engineering and interface design.

Thinking like a designer also means anticipating and responding to user behavior as it changes over time.

Communicating data well requires balancing an understanding of sophisticated tools with the ability to translate complex findings in a way that focuses on the most important issues and insights.

Algorithms have the potential to automate almost every aspect of sales, from identifying prospects to creating proposals, setting rates, handling contracts and sending out reminders for renewals.

Selling is a uniquely human capability and difficult to fully automate.

Machines may be able to identify the optimal structure of a deal, but if you want your investors, partners and customers to really believe in it, you will need something more. You will need to ask a human.

Solve for Purpose, Not Just Profit

Many of my colleagues and friends are advocates of paying everyone a universal basic income (UBI). A UBI would mean a radical change to the design of our economy.

While we might like to believe that we work to live, rather than live to work, the more we understand about our biochemical engineering and the reward systems of our brain, the more apparent it is that we have to work in order for our lives to have meaning and purpose.

In my view, the real risk of living in an algorithmic society is not that we will have too much time on our hands and nothing to do, but rather that the nature of work will suddenly atomize to the extent that we lose perspective on why we are doing it.

It is not only about your team members needing a rationale for their work; it is equally about you needing to find the right rationale for your company’s transformation.

The primary driver of your digital transformation should be purpose rather than profits, but that doesn’t mean that you need to become a charity or connect your company mission to something that is going to save the world.

Although Gourley was enthusiastic about the prospect of humans and machines working together effectively, he was also cautious about a future in which a class-based divide could open up between the masses who effectively had some kind of algorithm as their boss (think of Uber drivers), a privileged professional class who had the skills and capabilities to design and train algorithmic systems, and a tiny, almost aristocratic class of the ultra-wealthy, who actually owned the algorithmic platforms.

The longer-term solution to algorithmic inequality will not lie in just taxation and regulation, but rather in our ability to provide an adequate education system for the twenty-first century.

Algorithms might allow you to manage more people at scale, but that doesn’t mean they will make you a better manager.

We have been here before. About a hundred years ago, the world experienced the Scientific Management revolution, or more popularly, Taylorism. US industrial engineer Frederick Winslow Taylor had a lot of ideas about how companies might integrate machine and worker for maximum efficiency, and he wrote them all down in his 1911 book, The Principles of Scientific Management.

Just like Taylorism, overreliance on algorithmic management may end up creating unease in the workplace and broader social unrest.

In the future, we won’t work for companies. We’ll work for platforms.

At Satalia, Daniel Hulme and his team are attempting to use machine learning to assign problems and decisions to people with the right capabilities to solve and make them.

AI and algorithms offer a wealth of opportunities to design more flexible, fulfilling ways to work. Just be sure that you would be prepared to use the same talent platform you are expecting other people to use.

Transformation cannot be bought. It is easy to go through the motions of digital transformation: you can hire a team of expensive consultants to deliver a fancy strategy presentation for your board, offer free coding lessons for your employees, upgrade to the latest enterprise technology stack, and even buy a few promising AI startups and integrate them into your business. But in the end, the likelihood of your organization becoming a successful, twenty-first-century organization depends on the culture you create through your actions and the way you empower the people around you.

The complexity of the algorithmic age defies simple solutions or solitary heroes. Only when we work together, empowered by new ways of thinking, with smart machines to guide us, and a renewed sense of our value, can we truly transform our organizations, our industries and the world itself.

In the future, we will be either working for or on the algorithm.

In my view, algorithmic leaders:

  • Focus on their future customers, not their existing ones
  • Design their operating model for multipliers, not margins
  • Analyze problems from first principles, not by analogy
  • Seek to be less wrong with time, rather than always being right
  • Humanize and complexify, rather than standardize and simplify
  • Are guided by user empowerment, rather than mere regulatory compliance
  • Ask whether they have the right approach, rather than whether they are getting results
  • Manage by principles, rather than processes
  • Believe that they should automate and elevate, rather than automate and decimate
  • Transform for purpose, not just profit

