
Martin Ford: Architects of Intelligence; The truth about AI from the people building it

This book is a collection of interviews that the author conducted with some of the main players in the AI field. They can be grouped into different areas. Geoffrey Hinton, Yoshua Bengio and Yann LeCun are pioneers of deep learning. Andrew Ng, Fei-Fei Li, Jeff Dean and Demis Hassabis have done work on advancing neural networks. Barbara Grosz and David Ferrucci focus on natural language processing. Gary Marcus and Josh Tenenbaum – human cognition. Oren Etzioni, Stuart Russell, Ray Kurzweil and Daphne Koller – AI generalists. Judea Pearl works on probabilistic approaches and causality in AI and machine learning. Rodney Brooks, Daniela Rus and Cynthia Breazeal – robotics. Bryan Johnson – founder of Kernel – enhancing human cognition with technology. James Manyika – McKinsey, research leader. Nick Bostrom – the AI alignment problem.

The conversations were wide-ranging and the author gave everyone the opportunity to explain what they are doing, but there were three areas he wanted to explore with all of them:

  • The potential impact of AI and robotics on the job market and the economy.
  • The path towards human-level AI (AGI – artificial general intelligence).
  • The risks associated with progress in AI.

Here are the main ideas from the interviews:

Yoshua Bengio

Professor at the University of Montreal and one of the pioneers of deep learning. He was instrumental in advancing neural networks, in particular unsupervised learning, where neural networks can learn without relying on vast amounts of labeled data. He believes that one of the main challenges is still how to give machines the ability to understand causal relationships in data. He believes that common sense in computing will emerge as part of the learning process.

Deep learning is an approach to machine learning. While machine learning tries to put knowledge into computers by allowing them to learn from examples, deep learning does so in a way that is inspired by the brain. What deep learning researchers do resembles evolution: they put in prior knowledge in the form of the architecture and the training procedure.

What is happening now is that, with the building blocks coming from deep learning, researchers are trying to solve the same problems as classical AI.

Backpropagation is one of the techniques that really made deep learning possible: the idea that you can send error information back through the layers and adjust each layer based on the final outcome. It is an old idea that people like LeCun, Hinton and David Rumelhart reinvented, but until 2006 there was little success in training deeper networks with it. After that, training took off and deep networks gained additional capabilities.
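
To make the idea concrete, here is a minimal sketch of backpropagation for a tiny two-layer network in Python/NumPy – an illustration of the principle only, not the original Rumelhart-Hinton formulation, and the data is synthetic:

    # Minimal backpropagation sketch for a two-layer network (toy data).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                          # 100 examples, 3 features
    y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # synthetic target

    W1 = rng.normal(scale=0.1, size=(3, 8))   # input -> hidden weights
    W2 = rng.normal(scale=0.1, size=(8, 1))   # hidden -> output weights
    lr = 0.5

    for step in range(1000):
        # Forward pass
        h = np.tanh(X @ W1)                   # hidden activations
        out = 1 / (1 + np.exp(-(h @ W2)))     # sigmoid output
        # Backward pass: send the error back through the layers
        d_out = (out - y) / len(X)            # error signal at the output
        grad_W2 = h.T @ d_out
        d_h = (d_out @ W2.T) * (1 - h**2)     # chain rule through tanh
        grad_W1 = X.T @ d_h
        # Adjust every weight at once based on the final outcome
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1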

Around 2010, big companies started to invest in neural networks, mainly for speech recognition. Speech recognition took off in 2010, computer vision around 2012.

He stayed in academia and created Mila, the Montreal Institute for Learning Algorithms. Mila was a frontrunner in creating the AI ecosystem in Montreal. With the addition of the Vector Institute in Toronto and AMII in Edmonton, Canada's strategy to really push AI forward was well underway.

The development of AI will continue; there is solid ground already in place, and there are still vast amounts of data that are not used yet (in healthcare, for example). What will slow development down are social factors. Society can't change infinitely fast, even if the technology is moving forward.

On the relationship between human and machine: he would rather have an imperfect human being as a judge than a machine that doesn't understand what it's doing. He also believes that people should understand the new challenges, since we will have to make collective choices about what kind of future we want.

Stuart J. Russell

Professor of Computer Science at Berkeley. His definition of artificial intelligence is that an entity is intelligent to the extent that it does the right thing, meaning that its actions can be expected to achieve its objectives. The definition applies to both humans and machines. When we talk about knowledge representation, we enter the field of how we know things. We study how knowledge can be stored internally and then processed by reasoning algorithms, such as automated logical deduction and probabilistic inference algorithms.

When looking at machines and what they can achieve, we need to be careful how we classify AI and how we classify deep learning. AlphaGo and AlphaZero are really hybrids of classical search-based AI and a deep learning algorithm that evaluates each game position the classical AI system searches through. Self-driving cars use the same mechanism.
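
A toy sketch of that hybrid, with a trivial stand-in "game" (a counter that players push up or down) and a dummy evaluate() in place of the trained value network an AlphaZero-style system would use:

    # Classical lookahead search whose leaf positions are scored by a
    # "learned" evaluator; all names and the game itself are illustrative.
    def legal_moves(position):
        return [+1, -1] if abs(position) < 3 else []

    def apply_move(position, move):
        return position + move

    def evaluate(position):
        # A real system would return a neural network's value estimate here.
        return float(position)

    def search(position, depth, maximizing=True):
        """Minimax lookahead; leaf positions scored by the learned model."""
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)
        scores = [search(apply_move(position, m), depth - 1, not maximizing)
                  for m in moves]
        return max(scores) if maximizing else min(scores)

    print(search(0, depth=4))   # lookahead value from the starting position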

A lot of things popular today were already available years ago; we just weren't using them properly. We are now applying modern engineering to older breakthroughs, collecting large datasets and processing them across large networks on the latest hardware.

The first self-driving car operating on public roads ran 30 years ago, in Ernst Dickmanns' demos in Germany. But even today the challenge of a true self-driving-car breakthrough is to build an AI system that people are willing to trust with their lives. It is hard to say when AI technology that could be self-maintaining and learning will be available, but one thing is clear: the rule-based approach used in the early days of Google's self-driving cars is not the way forward. A self-driving car must deal with unexpected circumstances on the road, and it cannot do that based on rules. It should use some form of lookahead-based decision-making.

Once AGI gets past kindergarten reading level, it will shoot beyond anything that any human being has ever done, and it will have a much bigger knowledge base than any human ever had. To get closer to AGI we will need to work on several areas, such as a clear approach to how natural language can be understood to produce knowledge structures upon which reasoning processes can operate. The ability to understand language and then to operate on the results of that understanding is one important breakthrough for AGI that still needs to happen. Another is the ability to operate over long timescales.

One attempt to advance the field comes from the group around Russell, which invented a language called BLOG (Bayesian Logic). It is a probabilistic modeling language: you write down, in the form of a BLOG model, what you know, then you combine that knowledge with data and run inference, which in turn makes predictions. A real-world example of such a system is the monitoring system for the nuclear test-ban treaty.
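
BLOG has its own syntax; the following Python sketch only illustrates the same pattern – state prior knowledge as a generative model, condition on data, run inference – on an invented seismic-event example loosely echoing the test-ban use case (all numbers are made up):

    # Generative model + inference, the pattern a BLOG model expresses.
    import random

    def prior_sample():
        kind = "explosion" if random.random() < 0.01 else "earthquake"
        mean = 4.5 if kind == "explosion" else 3.0
        magnitude = random.gauss(mean, 1.0)    # noisy sensor reading
        return kind, magnitude

    def infer(observed, tolerance=0.1, n=200_000):
        """Crude rejection sampling: keep samples matching the observation."""
        kept = [k for k, m in (prior_sample() for _ in range(n))
                if abs(m - observed) < tolerance]
        return sum(k == "explosion" for k in kept) / len(kept)

    print(infer(5.2))   # posterior probability the event was an explosion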

Russell sees two possible future scenarios for the human economy. The first is that machines bring automation and productivity improvements that create wealth to subsidize the economic viability of everybody else. A lot of people will not do any economically productive activity. They will be provided with a subsidized universal income, but this will not be an interesting life; people will lose motivation and the development of the human race will not look bright. The second is that machines take over some production of goods and basic services, but this enables people to pursue activities that improve the quality of life for themselves and others. The future can have a perfectly functioning economy where people who are experts in living life well, and in helping other people, can provide those kinds of services. We want to avoid a situation where there are the super-rich who own the means of production – the robots and the AI systems – then their servants, and then the rest of the world doing nothing. That is the worst outcome from an economic point of view.

Regarding the threats of AI, the problem is straightforward: our intelligence is what gives us our ability to control the world, and so intelligence represents power over the world. If something has a greater degree of intelligence, then it has more power. His vision is that AI must always be designed to try to help us achieve our objectives, but that an AI system should not be assumed to know what those objectives are. This uncertainty is actually the margin of safety that we require. And we need to build AI in a way that will always be controlled by humans.

Geoffrey Hinton

Godfather of deep learning and driving force behind technologies like backpropagation, Boltzmann machines and capsule networks. He is active at Google, at the University of Toronto and as Chief Scientific Advisor of the Vector Institute for Artificial Intelligence.

Backpropagation was originally created by David Rumelhart; Hinton and Williams worked with him on formulating it properly (Hinton's main contribution was to show how it can be used for learning distributed representations). Backpropagation is an algorithm that adjusts the weights of the connections between the layers of a neural network, not by a trial-and-error approach but by computing the adjustment for all the weights at once from the output error. So basically, you don't measure the effect of each weight separately, you compute it for all the weights simultaneously.

Hinton's contribution was showing how backpropagation can learn distributed representations: it can learn new features of objects by combining other features. Bengio was doing that with language, taking a few words from a text and predicting the next word. LeCun was doing it in computer vision. The fact that backpropagation would learn distributed representations that captured the meaning and the syntax of words was a big breakthrough.
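
A toy version of that next-word setup, where each word gets a small learned vector (a distributed representation) that is used to predict the following word – a didactic sketch, far simpler than Bengio's actual neural language model:

    # Tiny next-word predictor with learned word embeddings (toy corpus).
    import numpy as np

    corpus = "the cat sat on the mat the dog sat on the rug".split()
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}
    V, D = len(vocab), 4

    rng = np.random.default_rng(0)
    E = rng.normal(scale=0.1, size=(V, D))    # one distributed vector per word
    W = rng.normal(scale=0.1, size=(D, V))    # maps a vector to next-word scores

    for _ in range(300):
        for prev, nxt in zip(corpus, corpus[1:]):
            e = E[idx[prev]]
            logits = e @ W
            p = np.exp(logits - logits.max())
            p /= p.sum()
            p[idx[nxt]] -= 1.0                # gradient of cross-entropy wrt logits
            grad_W = np.outer(e, p)
            grad_e = W @ p
            W -= 0.1 * grad_W                 # backpropagate into the weights...
            E[idx[prev]] = e - 0.1 * grad_e   # ...and into the representation itself

    print(vocab[np.argmax(E[idx["sat"]] @ W)])   # most likely word after "sat"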

But backpropagation at that time was a little oversold. They thought it was going to be amazing, but actually it was just pretty good. In the early '90s, other machine learning methods turned out to work better on small datasets: the SVM (support vector machine) did better at recognizing handwritten digits than backpropagation.

Deep learning for computer vision reached its inflection point with the ImageNet competition in 2012; for speech recognition the inflection point was 2009. Before that, AI was mainly done with symbol strings and a rule-based approach. People working on AI at that time weren't very interested in learning: logic-based people were interested in symbolic reasoning, whereas the neural network people were interested in learning, perception and motor control.

Hinton believes that the strongest feature of deep networks is distributed representation. In human memory, you don't store each memory in an individual neuron; you adjust the strengths of connections between neurons across the whole brain to store each memory. That is basically a distributed representation. If each concept is represented by activity in a whole bunch of neurons, and each neuron is involved in the representation of many different concepts, then you can talk about distributed representation. Learning in this model is done by adjusting the strengths of the connections between neurons.

Opponents of the neural network approach received a strong boost from Marvin Minsky and Seymour Papert's book Perceptrons. On the other side were people like John von Neumann and Alan Turing, who thought that big networks of simulated neurons were a good way to study intelligence. But the dominant approach was symbol processing inspired by logic.

Hinton still believes that learning how the brain does reasoning is the way forward. One of the main challenges in getting to AGI is solving the implementation of unsupervised learning. A second is that we should think of communities of AI systems, not only individual ones, since they can be so much more efficient in communities.

Threats of technology are really a social question. Technology will improve productivity; it is what people do with it that will determine whether it is used for good or bad. It is usually the leading nations that behave very well.

With his capsules project, Hinton is trying to create an environment where people start to challenge the same basic assumptions. He believes that people who do a master's degree and then go straight to industry aren't going to come up with radically new ideas. He says Canada is a good place for deep learning, with the Canadian government investing millions of dollars a year in the field.

Nick Bostrom

He is one of the most important authors on superintelligence and the risks associated with it. He is the founding director of the Future of Humanity Institute at the University of Oxford.

Bostrom is afraid of the possibility that the technology itself could go wrong: that the objectives this powerful system is trying to optimize for are different from our human values (the problem of alignment). The classic example is a paperclip system turning the whole world into paperclips. In comparison to technology, humans are a mess: we don't have one particular goal from which all the other objectives we pursue are sub-goals.

At the Future of Humanity Institute they work on governance, policy, ethics and the social implications of AI, and on the technical control problem of alignment.

Governments should be involved in regulating AI, but not yet, since the nature of the problem first needs to be clarified and better understood. At this moment it is important to channel existing concern and interest in constructive directions. The risks associated with narrow AI are significant, but not existential.

Challenges in the development of AI are connected with unsupervised learning and the use of unlabeled data. The leaders in AGI development, according to Bostrom, are DeepMind and Google Brain, but Facebook, Baidu and Microsoft all have strong research groups too.

Yann LeCun

VP & Chief AI Scientist at Facebook, recognized as the developer of convolutional neural networks – a machine learning architecture inspired by the brain's visual cortex. He started thinking about neural networks when he stumbled on a philosophy book on the debate between Jean Piaget and Noam Chomsky – an interesting debate between the concepts of nature and nurture and the emergence of language and intelligence.

The motivation for the convolutional neural network (CNN) was building a neural network appropriate for recognizing images. A CNN is a particular way of connecting the neurons to each other so that the processing that takes place is appropriate for things like images. The basic principle is that the neurons are organized in multiple layers, and each neuron in the first layer is connected to a small patch of pixels in the input image. Each neuron computes a weighted sum of its inputs; the weights are the quantities modified by learning. The second layer is a non-linearity, turning each unit on or off depending on whether its weighted sum is above or below a threshold. The third layer is pooling. The convolutional net is basically a stack of layers of this type – convolution, non-linearity, pooling. You stack multiple layers of those, and by the time you get to the top, you have neurons that are supposed to detect individual objects: one for each category you want to recognize, which turns on when you put an image of that category into the input.
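
One such convolution / non-linearity / pooling stage can be written out directly; this NumPy sketch keeps the loops explicit for clarity and uses a made-up filter and random input:

    # One stage of a convnet: convolution, non-linearity, pooling.
    import numpy as np

    def conv2d(image, kernel):
        """Each output unit is a weighted sum over a small patch of the input."""
        H, W = image.shape
        k = kernel.shape[0]
        out = np.zeros((H - k + 1, W - k + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i+k, j:j+k] * kernel)
        return out

    def relu(x):                       # the non-linearity layer
        return np.maximum(x, 0)

    def max_pool(x, size=2):           # pooling: keep the strongest response
        H, W = x.shape
        return x[:H - H % size, :W - W % size] \
            .reshape(H // size, size, W // size, size).max(axis=(1, 3))

    image = np.random.default_rng(0).normal(size=(8, 8))
    kernel = np.array([[1., 0., -1.]] * 3)    # a simple edge-like filter
    feature_map = max_pool(relu(conv2d(image, kernel)))
    print(feature_map.shape)           # a real convnet stacks many such stages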

Almost all applications of deep learning today use supervised learning. The other categories are reinforcement learning and self-supervised learning. Reinforcement learning is learning by trial and error, getting rewards when you succeed.
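
A minimal illustration of that trial-and-error idea – tabular Q-learning on an invented five-state corridor, where random trials plus a reward at the goal are enough to learn which action is best in each state:

    # Trial-and-error learning (tabular Q-learning) on a toy corridor:
    # states 0..4, actions left/right, reward only at the right end.
    import random

    N, GOAL = 5, 4
    Q = [[0.0, 0.0] for _ in range(N)]     # Q[state][action]; 0 = left, 1 = right

    for _ in range(500):                   # episodes of pure trial and error
        s = 0
        while s != GOAL:
            a = random.randint(0, 1)       # explore at random
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == GOAL else 0.0 # reward when you succeed
            # Learn from the outcome of each trial
            Q[s][a] += 0.1 * (r + 0.9 * max(Q[s2]) - Q[s][a])
            s = s2

    print([round(max(q), 2) for q in Q])   # learned values rise toward the goal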

There are many different opinions on the best techniques for deep learning. One is that we should have structures such as logic and reasoning; that may be beneficial in the short term. At Facebook they are trying to find a way for machines to learn from observation of different data sources, to build an idea of how the world works. Maybe this will lead to some kind of common-sense model of the world, which machines could use as a predictive model instead of going through a trial-and-error process.

The areas Facebook is strong in are computer vision, natural language processing, translation, summarization, text categorization and dialog systems.

LeCun believes that until we figure out how to do unsupervised/self-supervised/predictive learning, we're not going to make significant progress towards AGI. But AGI, if done properly, could become a general-purpose technology that transforms many industry sectors. For the technology to really start influencing the economy, we need more people who can work with it. LeCun sees this development of AI more as an amplification of human intelligence, in the way that mechanical machines have been an amplification of physical strength.

Technology will change the landscape. LeCun is not an economist, but he thinks the challenge of inequality should be addressed: there will be questions of economic distribution, and there will be concentrations of power. Regarding the bias issue, he thinks it will be much easier to solve with machines than it is with people.

What he is afraid of is that funding will stop if a convincing breakthrough is not achieved quickly. But since deep learning has become so central to the business models of some of the biggest companies, there is some optimism that this will not happen.

Fei-Fei Li

Chief Scientist at Google Cloud and professor at Stanford. Co-founder of AI4ALL. She works in computer vision and cognitive neuroscience. She wanted to understand intelligence, and that is how her intellectual interest in AI and neuroscience began.

Focusing on visual intelligence – since vision probably drove the development of the brain itself – brought her to computer vision. But moving past object recognition with machine learning in computer vision was difficult. She wanted to do a crazy project: take all the pictures on the internet, organize them into concepts that mattered to humans, and label those images. The result is ImageNet: 15 million images organized into 22,000 labels. The winner of the 2012 ImageNet competition created an algorithm that combined ImageNet, GPU computing power and convolutional neural networks.

Children for the most part don't get labeled data – they just figure things out. Can machines do that? The field is starting to explore inverse reinforcement learning algorithms and neuroprogramming algorithms.

The cloud, with its computing capacity, is the most appropriate platform for AI. Google created a product called AutoML that makes machine learning accessible to less technical people. Another project they are working on at Google is the Visual Genome project.

There are three core components of human-centered AI: advancing AI itself (interdisciplinary research across neuroscience and cognitive science); technology and applications (collaborative technologies like robotics, NLP, human-centric design); and the recognition that computer science alone cannot address all the AI opportunities and issues (bring in economists, historians, artists, policymakers).

AI as a technology has so much potential to enhance and augment labor, in addition to just replacing it.

Another challenge for the development of AI is the lack of diversity in the workforce. AI is a science, in her opinion, and government should support it.

Demis Hassabis

Co-founder and CEO of DeepMind. He was interested in chess and games, which got him into programming and writing AI for games. DeepMind was an AGI company from the beginning; their mission statement is solving intelligence. They needed a lot of smart people doing a large amount of upfront research, but the problem in 2010 was the scarcity of such people and of anyone willing to finance it. The pillars of DeepMind were hypotheses like taking inspiration from neuroscience, building learning systems, using benchmarking and simulations for rapid development and testing of AI, and using a lot of computing power. The Google acquisition brought computing power and the ability to use AGI for delivering solutions to worldwide problems. Working with other AI teams inside Google is also beneficial, but DeepMind is really focused on AGI and works on a long-term roadmap.

Their focus is combining deep learning with reinforcement learning. Hassabis believes that reinforcement learning will become as big as deep learning in the next few years. From the neuroscience perspective, we know that the brain uses a form of reinforcement learning as one of its learning mechanisms; it is called temporal difference learning, and we know the dopamine system implements it. Your dopamine neurons track the prediction errors your brain is making, and you then strengthen your synapses according to those reward signals. It seems reinforcement learning is sufficient once you scale it up enough.
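
The temporal difference update itself is one line; this toy sketch (an invented three-state episode with a made-up reward) shows the prediction-error signal Hassabis is referring to:

    # Temporal difference learning in its simplest (TD(0)) form: a state's
    # value estimate is nudged by the prediction error, the quantity the
    # dopamine neurons appear to track. Toy episode, made-up reward.
    states = ["start", "middle", "end"]
    V = {s: 0.0 for s in states}
    alpha, gamma = 0.1, 0.9

    for _ in range(200):
        episode = [("start", 0.0), ("middle", 0.0), ("end", 1.0)]
        for (s, _), (s2, r) in zip(episode, episode[1:]):
            prediction_error = r + gamma * V[s2] - V[s]   # dopamine-like signal
            V[s] += alpha * prediction_error              # strengthen accordingly

    print(V)   # values propagate backward from the rewarded state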

At DeepMind they are interested in a systems-level understanding of the brain: the algorithms it implements, the capabilities it has, the functions it has, and the representations it uses. Another exciting example of the potential use of neuroscience concepts is the concept of grid cells from Edvard Moser and May-Britt Moser. These grid cells are not just a function of the wiring of the brain; they may be the most optimal way of representing space in a computational sense.

Hassabis believes that comparing the brain to an algorithmic construct could be a way to understand many mysteries of the mind.

In more everyday use cases, DeepMind's technology is used for optimizing energy in Google's data centers; they work on WaveNet, the very human-like text-to-speech system that is now in the Google Assistant; and they use AI in recommendation systems in Google Play and even in the battery-saving mechanism in Android phones.

The next improvements will be in the areas of concept learning and transfer learning. AI will be hugely transformative, and we need to make sure those benefits are shared with everyone.

Andrew Ng

Professor at Stanford, CEO of Landing AI and General Partner at AI Fund. Co-founder of the Google Brain project and Coursera. He was also chief scientist at Baidu.

To talk about AGI, we need the ability to do unsupervised learning and to learn from unlabeled data. AI today is really valuable for online advertising, speech recognition and self-driving cars.

At Google and Baidu he transformed large web search engines. The data assets that the large search engines have definitely create a highly defensible barrier around the web search business. At Google they also saw the ability of GPUs to scale up deep learning algorithms earlier than almost everyone else. He started Coursera with Daphne Koller, to scale online teaching to millions of people around the world.

With AI Fund, they don't just look for winners; they try to create them. Building a strong AI team often needs a portfolio of different skills, ranging from tech to business strategy, product, marketing and business development. Their role is building full-stack teams that are able to build concrete business verticals.

Ng thinks there needs to be a reset of expectations about AGI. The fundamentals of the economics support continued investment in deep learning. AI is a broad category, though, and Ng thinks that when people discuss AI, they have in mind the specific toolset of backpropagation, supervised learning and neural networks. AI is not magic; it can't do everything.

At Landing AI, they use hybrid systems to build solutions for industrial partners. When your datasets are small, deep learning by itself isn't always the best tool. Part of the skill of being an AI person is knowing when to use a hybrid and how to put everything together. That's how they deliver lots of short-term useful applications.

One of the most exciting things yet to be invented will be other algorithms that are much better than backpropagation.

There are hundreds of different things that deep learning doesn’t do, and causality is one of them. There are other things, such as doing explainability well enough; we need to sort out how to defend against adversarial attacks; we need to get a lot better at learning from small datasets rather than big datasets; we need to get much better at transfer and multitasking learning; we need to figure out how to use unlabeled data better.

The threat of power consolidation is that certain corporations could become more powerful than governments. Countries with more thoughtful regulation will advance faster in embracing the possibilities of AI. Even today some governments use the internet better than others; Singapore has an integrated healthcare system where every patient has a unique ID.

Ng doesn't support universal basic income; he is more inclined towards a conditional basic income. He believes that we are moving away from a world where you have one career in your lifetime. On bias, his position is also that it is easier to fight it at the machine level than at the human level.

Rana El Kaliouby

Co-founder and CEO of Affectiva, which builds AI systems that sense and understand human emotions. Young Global Leader 2017.

She realized, watching people interact with Microsoft's Clippy, that there was an opportunity, because we had an emotional intelligence gap with our technology. At the Cambridge Autism Research Centre she got data to train algorithms to read different emotions – not only sad/happy, but also emotions like confusion, interest, anxiety or boredom.

When she worked at the Media Lab at MIT, she cooperated with industry a lot. Companies that saw their work thought about using it in advertising. Procter & Gamble used it to check whether people liked a smell. Toyota wanted to use it to monitor driver state. Bank of America wanted to optimize the banking experience.

Their vision is to humanize technology.

When they started, they worked only with the face. Of the signals they use, 55% are facial expressions and gestures, 38% come from voice, and only 7% from text and the actual choice of words someone uses. They are also careful to take cultural specifics into consideration.
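
As a purely hypothetical illustration of how such channels might be combined, here is a weighted late fusion using the percentages quoted above as weights – this is not Affectiva's actual method, and in a real system the per-channel scores would come from separately trained models:

    # Hypothetical late fusion of face, voice and text channels.
    def fuse(face_score, voice_score, text_score):
        """Combine per-channel emotion scores (each in 0..1) into one estimate."""
        return 0.55 * face_score + 0.38 * voice_score + 0.07 * text_score

    print(fuse(face_score=0.9, voice_score=0.6, text_score=0.2))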

As mentioned, in the auto industry they address driver state and occupant experience. There is also big potential in healthcare.

They agreed to only take on situations where people explicitly consent and opt in to share that data, and where they also get some value in return for sharing it.

They started with dynamic Bayesian networks, but a couple of years ago they moved their science infrastructure to deep learning. Today machines can already be programmed to use emotional cues in their actions. Affectiva has developed an emotion-sensing platform. They also cooperate with companies building nurse avatars for our phones. She also sees cooperation between machines and humans in fields such as teaching and truck driving. They cooperate with the company HireVue, which uses their technology in the hiring process.

She is a big advocate for regulation. Technology is always neutral; it is how we use it that matters.

Ray Kurzweil

Director of Engineering at Google. Author of the book The Singularity Is Nearer.

He started in AI in 1962. At that time there were two camps. The first was the symbolic school, with Marvin Minsky regarded as its leader. The second was the connectionists; Frank Rosenblatt was the person who first popularized a neural net, called the perceptron. But after Minsky and Papert wrote Perceptrons in 1969, all funding for connectionism was killed. They showed that there was a problem with going to many layers. This was solved by Hinton and a group of mathematicians; their solution was to recalibrate the information after each level.

To get around supervised learning and labeled data, you need to work on simulating the world you are operating in; then you can create your own training data. Humans can learn from much less data because we engage in transfer learning. His brain model is not one big neural net but rather many small modules, each of which can recognize a pattern. In his book How to Create a Mind, he describes the neocortex as basically 300 million such modules, each of which can recognize a sequential pattern and accept a certain amount of variability. We can learn from a small amount of data because we can generalize information from one domain to another.

Understanding language at a human level is the ultimate goal. If AI could do that, it could read all documents and books and learn everything else. Humans use a hierarchical approach. The brain is not doing deep learning in each module; the modules do something equivalent to a Markov process, but it is better to use deep learning in AI. With a rule-based system you reach limits, as Doug Lenat's Cyc project showed. At Google they use deep learning to create vectors that represent the patterns in each module, and then they have a hierarchy that goes beyond the deep learning paradigm.

Computers can’t do multi-chain reasoning very well at this moment. This is where chatbots routinely fail.

One of his theses is that we are going to merge with the intelligent technology we are creating. The first bridge is what we can do now; bridge two is the perfecting of biotechnology and reprogramming the software of life; bridge three is medical nanorobots that perfect the immune system. With successes in AI, advances will come quicker and quicker, and the cost of running these technologies will come down.

Kurzweil believes that we should be more optimistic about the future, but he understands that humans have an evolutionary preference for bad news. The issue he sees with alignment is that even we humans don't have aligned goals with each other. But new technologies – like 3D printing, robotic factories and robotic agriculture – will bring economic prosperity. He believes in basic income. He believes we have a wrong assumption that a job is the road to happiness; for him it is purpose and meaning, and he sees people still competing to contribute and get gratification.

The new economy will not be a zero-sum game. You can see that in Google releasing the TensorFlow deep learning framework as open source.

Daniela Rus

Director of MIT CSAIL. She leads research in robotics, mobile computing and data science. In her lab they do a lot of work on the mathematical foundations of how machines operate, and she is very interested in understanding and advancing both the science of autonomy and the science of intelligence.

Many of the things we take for granted today have their roots in research developed at CSAIL: the password, RSA encryption, the computer time-sharing systems that inspired Unix, the optical mouse, object-oriented programming, speech systems, mobile robots with computer vision, the free software movement. Lately they are strong in cloud computing and in democratizing education through Massive Open Online Courses (MOOCs).

Today's solutions in robotics are good for certain Level 4 autonomy situations (the last level before full automation in the Society of Automotive Engineers classification). Some issues lie with sensors: some sensors used in autonomous driving do not cope well with bad weather. Robots today are much more capable in navigation than in manipulation, largely thanks to the introduction of the LIDAR sensor (laser scanner). In manipulation there is some advancement with the use of soft robot hands. She is very bullish about future progress in grasping and manipulation, and she believes in soft robots.

She thinks that today most people who say AI actually mean machine learning, and more specifically deep learning within machine learning. Today we see progress at the intersection of neuroscience, cognitive science and computer science.

Today we operate with a sequential model of learning and work: most people spend some chunk of their lives studying, and at some point they say, “OK, we’re done studying, now we’re going to start working.” We should consider a more parallel approach to learning and working, where we are open to acquiring new skills and applying them as a lifelong learning process. One example of retraining is the company BitSource, launched a couple of years back in Kentucky, which retrains coal miners into data miners.

James Manyika

Chairman and Director of the McKinsey Global Institute. He is also a fellow at DeepMind.

We have a problem today where everyone wants to call everything AI. When they started doing research, they wanted to see the economic and business impacts of new technologies. Deep learning techniques are helping solve a lot of challenges in the AI field and will drive progress in narrow AI. On the other hand, we need to think about problems like transfer learning. Some newer techniques, like reinforcement learning or simulated learning, can help with that; this kind of thing was done with AlphaZero. The group around Jeff Dean at Google and their AutoML is a great attempt at using AI itself. Eric Horvitz and his group are trying to address the issue of labeled data by working on in-stream supervision, and there are techniques like GANs (generative adversarial networks).

Data is still important; data availability is an advantage for countries like China in their development of AI.

Steve Wozniak suggested that we use the “coffee test” instead of the Turing test: if a machine can enter a house and figure out how to make coffee, then we can talk about AGI.

On the challenges of AI: bias can be handled better by machines. He is excited about the work Silvia Chiappa at DeepMind is doing, using counterfactual fairness and causal-model approaches to tackle fairness and bias. On the explainability of AI, there are new techniques like LIME (local interpretable model-agnostic explanations) or GAMs (generalized additive models). Another challenge is the detection problem: knowing when AI is deployed.

In the development of AI, we will see a massive concentration of resources where there is big computing power and access to a lot of data.

Regarding economics, new technologies will bring improvements in productivity. In the last 10 years we've had the lowest capital-intensity period in about 70 years, and capital investment and capital intensity are something we need for productivity growth. You also need demand, not only growth of value-added output. Bob Solow formulated the Solow paradox: you could see computers everywhere except in the productivity numbers. That paradox was resolved in the late '90s, when there was enough demand to drive productivity growth, but more importantly when very large sectors of the economy – retail, wholesale and others – finally adopted the technologies of the day: client-server architecture, ERP systems. If you look at the current wave of digital technologies – cloud computing, e-commerce, electronic payments – you can see them everywhere, yet productivity growth has been very sluggish for several years now. But if you actually measure systematically how digitized the economy is today, the surprising answer is: not so much, in terms of assets, processes, and how people work with technologies. The most digitized sectors – on a relative basis – are the tech sector itself, media and maybe financial services, and those are relatively small in the grand scheme of things, measured as a share of GDP or of employment. We need economic growth. Over the last 50 years in the G20, growth was 3.5%, coming from two sources: 1.7% from the expansion of the labor supply and 1.8% from productivity gains.

When we look at potential job losses because of new technologies, we need to address five questions: technical feasibility (can it be done?), the cost of developing and deploying the technologies, labor-market demand dynamics, benefits including and beyond labor substitution, and social norms.

We are constantly inventing jobs that didn't exist before, and that will partially offset the loss of jobs. You can see this in job statistics, where the “other” job category is growing the fastest. But a great challenge lies in wages, because of occupational-mix shifts, so we need to put more focus on reskilling and on-the-job training, especially since globalization puts additional stress on the traditional workforce. As was written in President Johnson's report on Technology, Automation and Economic Progress: “The basic fact is that technology eliminates jobs, not work.”[1]

Gary Marcus

Founder of Geometric Intelligence, a machine learning company acquired by Uber. Author of The Future of the Brain.

Our memories are nowhere near theoretically optimal in terms of capacity or the stability of what is stored. This is historical: we needed the gist of things more than exact detail. Computers have location-addressable memory, where every single location in memory is assigned a particular stable function. Google is a hybrid: it has location-addressable memory underneath and cue-addressable memory – which is what we have – on top. Computers have indexes, internal addressing systems, to know where individual memories are stored. We don't have that.
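
The contrast is easy to sketch: a Python dict is location/key-addressable, while cue-addressable recall retrieves whatever stored item best matches a partial cue (the word-overlap measure here is a crude illustrative stand-in):

    # Location-addressable vs. cue-addressable memory (toy data).
    memories = {
        "paris_trip": "ate croissants near the Seine in spring",
        "exam_day":   "nervous rain outside and the smell of coffee",
        "first_job":  "tiny office loud keyboard kind mentor",
    }

    # Location-addressable: you must know the exact address/key.
    print(memories["exam_day"])

    # Cue-addressable: a partial cue retrieves the best-matching memory.
    def recall(cue):
        words = set(cue.split())
        return max(memories.values(), key=lambda m: len(words & set(m.split())))

    print(recall("rain and coffee"))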

He sees himself not as a native speaker of AI, but as somebody coming from the cognitive sciences who brings a fresh perspective.

Neural networks are good at tasks that involve similarity. They are very data-driven and don't induce high-level abstractions. In biology, systems start with a lot of inherent structure, whether the heart, the kidney or the brain. A head containing a fully developed human brain would be too big to pass through the birth canal, so the genome is a rich draft of how the brain should operate, with a lot of learning on top of that; and the draft also covers the learning mechanisms themselves. In AI the approach is to build things without prior knowledge, and for him that is not the right approach.

Maybe we should build innateness into AI systems, and think from a functional and mechanical perspective about how to do it. Instead of learning from pixels alone, maybe we should include other approaches, like symbol manipulation and the ability to represent abstract variables. Deep learning is a useful tool for pattern classification, but it is not very good at abstract inference. Deep learning at the moment focuses on bottom-up information.

There is a lot of knowledge humans have about the world that can be codified symbolically, either through math or sentences in language.

To get to AGI, we need not only bottom-up information but also top-down; we need to bring together symbol manipulation and deep learning. He is excited about Project Mosaic at the Allen Institute, which is about how you acquire knowledge and put it in a computable form.

Regarding job changes: at some point we will need to change the structure of our society. It is easy to come up with new jobs, but it is harder to come up with new industries that employ a lot of people (the startup economy).

Barbara J. Grosz

Professor at Harvard. She has made ground-breaking contributions in artificial intelligence that laid the foundational principles of dialogue processing, which are important for the personal assistant industry.

She is known for the effort to model conversation: the idea that conversation can be computed, and that there is structure within a conversation that can be represented mathematically. To build a dialogue system that can handle the dialogues people actually engage in, you need real data of real people having real dialogues, and that is much harder to get than Twitter data.

With the Turing test, a system either succeeds or fails, and there is no guide for how to incrementally improve its reasoning. For a science to develop, you need to be able to make steps along the way.

One of her works on multi-agent systems was developing the first computational model of collaboration: dividing a task into subtasks and sharing or delegating them within the group.

AI systems are best when designed with people in mind. The set of questions that AI raises requires a combination of thinking: analytical, mathematical, about people and behavior, and engineering. She doesn't believe that AGI is the proper direction to go.

She believes there is room for legislation, policy and regulation.

Judea Pearl

He is known for his work on probabilistic (Bayesian) techniques and causality. He wrote The Book of Why.

Science is not just a collection of facts, but a continuous human struggle with the uncertainties of nature. He started at UCLA working on pattern recognition, image encoding and decision theory.

In the '80s, the environment around Bayesian networks was divided between the scruffies and the neaties. The scruffies just wanted to build a system that works, without caring about guarantees or whether their methods comply with any theory. The neaties wanted to understand why it worked and to have performance guarantees of some kind. He advocated doing things properly. He was inspired by the work of David Rumelhart and tried to recreate his architecture in probability theory; he couldn't do it until he realized that if the structure connecting the modules is a tree, then you do get the convergence property. The architecture was easy to program. Using Bayes' rule is an old idea; doing it efficiently was the hard part. You get evidence and use Bayes' rule to update the system, improving its performance and its parameters. That is the Bayesian scheme of updating knowledge using evidence; it is probabilistic, not causal, knowledge, so it has limitations.
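
The updating scheme itself is compact; here is a toy version with invented numbers:

    # Bayes' rule as an update step: prior beliefs about a hidden cause are
    # revised as each piece of evidence arrives. Probabilities are made up.
    def bayes_update(prior, likelihoods):
        """prior: {hypothesis: P(h)}; likelihoods: {hypothesis: P(evidence | h)}."""
        unnorm = {h: prior[h] * likelihoods[h] for h in prior}
        total = sum(unnorm.values())
        return {h: p / total for h, p in unnorm.items()}

    belief = {"disease": 0.01, "healthy": 0.99}
    belief = bayes_update(belief, {"disease": 0.9, "healthy": 0.1})  # positive test
    print(belief)   # evidence shifts the belief, but the prior still matters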

In today's world every cellphone runs a Bayesian network and belief propagation – the name given to the message-passing scheme.

Causation was part of the intuition that gave rise to Bayesian networks, even though the formal definition of a Bayesian network is purely probabilistic. You can do everything a Bayesian network does in purely probabilistic terminology. However, in practice, people noticed that if you structure the network in the causal direction, things are much easier. Features of causality like modularity, reconfigurability and transferability were what AI researchers were looking for.

Statistics allows you to do induction, deduction, abduction and model updating, but it speaks a different language: the language of averages, hypothesis testing, summarizing data and visualizing it from different perspectives. All of this is the language of data, and it is different from the language of cause and effect. To get those languages to interact, he worked on a technical language of diagrams describing causation – causal diagrams with arrows and nodes, already created by Sewall Wright in 1920. From such a diagram you can get things that statisticians could not get from regression, association or correlation. He came up with the causal diagram as a means of encoding scientific knowledge and of guiding machines in the task of figuring out cause-effect relationships in the various sciences. This is explained in his books Causality and The Book of Why.

Causal modeling is not at the forefront of today's machine learning, which is dominated by a data-centric approach. But that approach is limited; he calls it curve fitting – fitting a function to a cloud of points. It has clear theoretical limitations: you cannot do counterfactuals, and you cannot reason about actions you've never seen before. There are three levels of comprehension: seeing, intervening and imagining. We need the highest level, imagining, in order to have the capability to build new models of the world.
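
The difference between seeing and intervening can be shown with a toy structural causal model, where a confounder Z drives both X and Y: conditioning on X=1 (seeing) gives a different answer than setting X=1 (intervening). All probabilities here are invented:

    # Seeing vs. intervening in a toy structural causal model: Z -> X, Z -> Y, X -> Y.
    import random

    def sample(do_x=None):
        z = random.random() < 0.5                  # confounder
        x = do_x if do_x is not None else (random.random() < (0.8 if z else 0.2))
        y = random.random() < (0.2 + 0.3 * x + 0.4 * z)
        return x, y

    N = 100_000
    seeing = [y for x, y in (sample() for _ in range(N)) if x]
    doing = [y for _, y in (sample(do_x=True) for _ in range(N))]
    print(sum(seeing) / len(seeing))   # P(Y=1 | X=1): inflated by the confounder
    print(sum(doing) / len(doing))     # P(Y=1 | do(X=1)): the causal effect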

Children learn causal structure by playful manipulation, and so do scientists.

That is why current machine learning's concentration on deep learning and its non-transparent structures is such a hang-up. Neural networks and reinforcement learning will be essential components when properly utilized within causal modeling.

Jeffrey Dean

Head of Google AI and lead of Google Brain. He played an important role in developing some of Google's AI infrastructure, like TensorFlow. His areas of interest include large-scale distributed systems, performance monitoring, compression techniques, information retrieval, the application of machine learning to search and related problems, microprocessor architecture and compiler optimization.

DeepMind is more focused on AGI, but they are all working together on building really intelligent, flexible AI systems. At Google they used the large computational power they have to improve AI. They created the TensorFlow software, designed around three objectives: to be really flexible; to be able to scale and tackle problems where lots of data is available; and to move from research idea to production-serving system.
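
At the user-facing level, that flexibility looks like this minimal TensorFlow/Keras sketch (toy data; the distributed-systems machinery Dean describes sits underneath this API):

    # Minimal TensorFlow/Keras example on synthetic data, illustration only.
    import numpy as np
    import tensorflow as tf

    X = np.random.rand(256, 4).astype("float32")
    y = (X.sum(axis=1) > 2).astype("float32")       # toy binary target

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=5, verbose=0)            # same code scales from laptop to cluster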

In parallel they also improved computational power with the Tensor Processing Unit (TPU). They also have a suite of AutoML products. These are all important, since there are no more than 10,000 to 20,000 organizations in the world with in-house ML experts.

He believes in the human ability to learn and retrain. Regarding regulation, he hopes it will be done by people with expertise in the field.

Daphne Koller

CEO and founder of Insitro (a biotech startup using ML to research and develop new drugs) and professor at Stanford. One of the founders of Coursera.

In pharma R&D, a new approach is needed. The pre-tax cost of developing a new drug is estimated at 2.5 billion dollars, and the ROI of pharma R&D is projected to hit zero by 2020, since many of the drugs that have an effect on large populations have already been discovered. At Insitro they are building a culture in which scientists, engineers and data scientists work closely together to define problems, design experiments, analyze data and derive insights that will lead to new therapeutics.

The Stanford MOOCs in 2011 showed that people are looking for digital access to knowledge, and that led to Coursera. At first there were a lot of question marks, but then online courses really grew. When they started Coursera, the technology limited innovation in pedagogy; it was mostly taking what was already present in standard teaching and modularizing it. As more data is gathered and learning becomes more sophisticated, you will certainly see more personalization.

Humans are really good at learning from small data. This has to do with our transferable skills and ways of learning.

The whole deep learning framework has done an amazing job of addressing one of the key bottlenecks in machine learning: having to engineer a feature space that captures enough about the domain to get very high performance, especially in contexts where you don't have strong intuitions for the domain. Prior to deep learning, applying machine learning meant spending months or even years tweaking the representation of the underlying data to achieve higher performance. Now, with enough data, ML can find those patterns for itself. A lot of human insight is still needed in constructing these models, especially in figuring out the architecture of the model that captures the fundamental aspects of a domain.

Regarding self-driving cars, it is more a question of social evolution than a technical one. Regulation can be a problem: it is a bad idea for governments to regulate something they don't understand. We need to let the technology progress and then think about mechanisms to channel it towards good rather than bad.

Education is important for a higher quality of living in the future. She thinks that OpenAI is doing great work in helping everybody get access to open-source AI tools.

David Ferrucci

He built and led the IBM Watson team. In 2015 he founded his own company, Elemental Cognition, an AI research venture that is trying to achieve real language understanding. He has been interested in coordinated intelligence from a mathematical, algorithmic and philosophical perspective.

Watson was about language processing, text and multimedia analytics and automatic question answering. David sees AI in terms of perception (recognizing things), control (doing things) and knowing (building, developing and understanding the conceptual models that provide the foundation of communication and the development of theories and ideas).

Thinking about what knowing and understanding mean is a really interesting part of AI. People refine and compound understanding through reading and dialogue, and at Elemental Cognition they want AI to do that. For people to understand each other, it is not enough to just say things: language is not itself the information. Language is a vehicle through which people communicate the models in their heads. Those models are independently developed and refined, and people align them in order to communicate.

The development of the AI field will go down two paths. The perception side and the control side will continue to get better in leaps and bounds, but we will also learn how to develop the understanding side. Today a lot of investment goes into pure statistical ML, since it is short-term and hot, but the goal of AI should be intelligence that is anchored in logic, language and reason.

IBM has used Watson as a brand to enter the AI business. They can approach the market broadly through business intelligence, data analytics and optimization, and they can deliver targeted value, for example in healthcare applications.

With the development of any new technology there is existential risk, and it can change how we think about ourselves and what we consider unique about being human.

Rodney Brooks

Chairman of Rethink Robotics. Co-founder of iRobot Corporation.

iRobot was established in 1990. They had a run of 14 failed business models. In 2002 they started making robots for the military (used in the caves of Afghanistan) and launched the Roomba. In the latest version of the software there is an option where you can show the robot what you want it to do, and it writes a program.

There are different versions of techno-religion. There are the life-extension companies being started by billionaires in Silicon Valley, and then there are the upload-yourself-to-a-computer people like Ray Kurzweil.

Our cities got transformed by cars when they first came along, and we’re going to need a transformation of our cities for self-driving technology.

In robotics you need to make progress in parallel areas like mechanics, materials, sensors and control algorithms. The IKEA test: give a robot an IKEA kit with instructions and see if it is capable of putting the furniture together. Some potential developments in robotics and the usage of robots are elderly care, 3D printing and agriculture.

Regarding the impact of robotics and AI on the economy, he believes it comes down more to digitalization. Regulation is questionable, since it is naïve to legislate against a technology without taking into account the good things you can do with it.

Cynthia Breazeal

Director of the Personal Robotics Group at the MIT Media Lab. Founder of Jibo. She designed Kismet, the world's first social robot, and is a pioneer of social robotics and human-robot interaction.

When we talk about huge societal challenges, it is about a new kind of intelligent machine that can collaboratively engage you over an extended, longitudinal relationship and personalize, grow and change with you. That is what a social robot is about.

Kismet modeled nonverbal, emotive communication at the infant stage, because if a baby cannot form an emotional bond with its caregiver, the baby can't survive. A huge part of human communication is nonverbal, and a lot of our social judgments of trustworthiness and affiliation are heavily influenced by nonverbal interaction. Building social and emotional intelligence into machines is very, very hard.

For a long time, AI and human collaboration was not seen as a problem, but now it is addressed differently.

We need to start working on making AI far more democratized and inclusive, so that we have a future where AI can truly benefit everyone, not just a few. One way to do it is through education: in an increasingly AI-powered society, we need an AI-literate society. Regarding regulation, it is hard to balance ensuring that human values and civil rights are supported with supporting the innovation that opens up opportunities.

Joshua Tenenbaum

Professor at MIT. He studies learning and reasoning in humans and machines.

If we want to sketch a high-level roadmap to building some form of AGI, we should divide it into three stages corresponding to stages of human cognitive development. The first stage is comparable to the first year and a half of a child's life: building all the intelligence we have before we are really linguistic creatures, mainly a common-sense understanding of the physical world and of other people's actions. The second stage, from one and a half to three years, is used to build language. The third, from three years up: after we have built language, we use it to build and learn everything else.

Boston Dynamics, founded by Marc Raibert, is working on making robots better and bringing their movement closer to the locomotion of animals and humans.

His approach is different from DeepMind's, which builds everything from scratch. Humans are born with some structures in the brain, and Tenenbaum is more inclined to that kind of approach. Roger Shepard at Stanford had a lot of influence on his work, working on how we get from specific experience to general truths. From the '90s to the 2000s a lot of success was achieved in the field of Bayesian statistics and Bayesian inference. Modeling individual aspects of cognition with Bayesian models – certain aspects of perception, causal reasoning, how people judge similarity, how people learn the meanings of words, and how people make certain kinds of plans and decisions or understand other people's decisions – was at the forefront of his work at that time.

Elizabeth Spelke is very important if you are looking at AI and its similarity with humans. Spelke and others have shown that in many ways our brains are born already prepared to understand the world in terms of physical objects and in terms of what we call intentional agents. So we should build some kind of child AI system and then scale it.

Deep networks are good at pattern recognition, but reducing all intelligence to pattern recognition is not good. There are all these activities of modeling the world – explaining, understanding, imagining, planning and building new models – and deep networks don't really address them.

The three waves in the field of AI – the symbolic era, the probabilistic and causal era, and the neural networks era – are three of our best ideas on how to think about intelligence computationally, and he has been interested in how all of them can come together. Up until today, the best hybrids go by the name of probabilistic programming. The real power of probabilistic programming is not only in trading numbers for numbers, but in expressing abstract knowledge in symbolic forms – like math, a programming language or logic. With colleagues he built a language called Church, named after Alonzo Church.
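
Church itself is Scheme-like; this Python sketch only conveys the flavor of probabilistic programming – a program that makes random choices defines the model, and inference runs it "backward" by conditioning on an observation:

    # Probabilistic-programming flavor: a generative program plus conditioning
    # via rejection sampling. Model and numbers are invented for illustration.
    import random

    def flip(p=0.5):
        return random.random() < p

    def model():
        rained = flip(0.2)                    # latent symbolic cause
        sprinkler = flip(0.4)
        grass_wet = rained or (sprinkler and flip(0.9))
        return rained, grass_wet

    # Condition on the observation "grass is wet" by keeping matching runs.
    samples = [r for r, wet in (model() for _ in range(100_000)) if wet]
    print(sum(samples) / len(samples))        # P(rained | grass is wet)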

We should be careful about the use of technology: risks connected with privacy or human rights are very real. Moreover, even though every new technology brings some disruption, the disruption now happens within a single generation, and that puts additional stress on the workforce. Another important risk is AI driving up computing usage and thereby additional stress on the climate.

We have the opportunity both to understand more about what it means to be intelligent in a human way, and to learn how to build technology that can make us smarter, individually and collectively.

Oren Etzioni

CEO of the Allen Institute for Artificial Intelligence. One of the interesting projects they are doing is Project Mosaic, a 125-million-dollar effort to build common sense into an artificial system. In the project they use modern AI techniques like crowdsourcing, natural language processing, machine learning and machine vision to acquire knowledge in a different way.

They will set benchmarks to assess the common-sense abilities of any program. They will also try to pass standard tests – for science the project is called Aristo, for math Euclid. The challenge they face is when the machine needs to apply a concept in a particular situation that requires language understanding and common-sense reasoning.

In 2003 Paul Allen started the Allen Institute for Brain Science. That institute looks at the physical structure of the brain, while at AI2 they adopt classical AI methodology for building software.

Regarding developments in AI, there are exciting things like AlphaZero, where performance is improved without hand-labeled examples. There is also work on robotics and natural language processing. Transfer learning is an important field too, including zero-shot and one-shot learning.

If we really want AGI, the ability to use knowledge in a different domain is a core capability we need to develop. Other important stepping stones are the ability to handle different tasks, self-replication and data efficiency.

Bryan Johnson

Founder of Kernel, OS Fund and Braintree. Kernel is building brain-machine interfaces with the intention of giving humans the option to radically enhance their cognition.

He started Kernel with an identified problem: to build better tools to read and write our neural code, to address disease and malfunction, to illuminate the mechanisms of intelligence, and to extend human cognition.

The idea behind OS Fund is that most people in the world who manage or have money do not have scientific expertise, and therefore typically invest in things they are more comfortable with, such as transportation or finance. But they can successfully invest in science-based entrepreneurs who are building world-changing technology.

His fundamental belief is that we humans need to radically up-level ourselves as a species, and he believes that in 15 years neural interfaces will be as common as smartphones are today. Other organizations doing similar things to Kernel are Musk's Neuralink, Facebook and DARPA. If we were more humble and used AI to help us improve as a species, that would be the proper way forward.


[1] In the book, on page 300
