June 1, 2017

#2 - AI: The Rise of the Machines

How close are we to reaching the point of Artificial General Intelligence?

Researchers have spent decades trying to build machines that have the ability to think and reason like humans. But how close are we to reaching the point of Artificial General Intelligence?

CREDITS

Moonshot is hosted by Kristofor Lawson (@kristoforlawson) and Andrew Moon (@moonytweets).

Our theme music is by Breakmaster Cylinder.

And our cover artwork is by Andrew Millist.

Transcript

PROF. MARY-ANNE WILLIAMS: People tend to think robots of the future will be like C-3PO and R2-D2. Wrong. They will be the Jedi Knights. And they will be able to control the lighting just by thinking about it, because they can connect to the internet.

KRIS: That’s Professor Mary-Anne Williams. You’ll hear more from her a bit later. But if you’ve ever watched a good sci-fi film, you’ve no doubt seen a world filled with robots and machines able to think for themselves.

ANDREW: But if you look at the robots in real life - we still have a long way to go before they start thinking - and feeling for themselves... Right? We still have a bit of time?

KRIS: I’m Kristofor Lawson,

ANDREW: And I’m Andrew Moon.

KRIS: And this week on Moonshot - we’re exploring Artificial Intelligence - and more specifically, artificial GENERAL intelligence. The idea that machines might one day be able to think like humans.

ANDREW: Or might rise up and take control. Yes, we've got cheesy movie quotes, but we've also got some incredibly smart perspectives for you ahead on what you should be paying attention to, and what's standing in the way of AI completely taking over.

ANDREW: You’re probably hearing more and more about artificial intelligence - but the concept itself is nothing new, and dates back to the birth of computers.

ANDREW: In 1950 - Alan Turing - the English scientist widely known as the father of computer science - published a paper proposing a test - or ‘imitation game’ - in which a judge would try to determine whether contestants were humans or machines.

KRIS: Turing’s ‘imitation game’ later became known as the Turing Test - and it remains an often-used measure of intelligent systems.

KRIS: But it was John McCarthy - a researcher at Dartmouth College - who first coined the term ‘Artificial Intelligence’ in 1955, while preparing the proposal for what became the famous Dartmouth conference on the subject.

KRIS: AI became an idea that captivated researchers for decades - and McCarthy went on to become one of the most influential voices in Artificial Intelligence research. ‘The holy grail of AI research is coming up with a “general” intelligence system - one that’s not limited to one discipline and can think and reason like a human.’

KRIS: That was from a company called Good AI, who are actively pursuing a general intelligence system. However, despite the decades of work that have gone into developing general intelligence - much of the major progress in AI has happened over the past 10 years. Computing power has increased dramatically, data storage is much cheaper, and researchers have figured out how to better process the complex calculations that help these systems operate.

LIZA DALY: In particular, a piece of hardware called the GPU - which was designed to help computers render graphics - has turned out to be really critical for doing the kind of deep mathematics that machine learning requires, and doing it at scale. So there’s a hardware piece that has definitely accelerated things.
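
To make that concrete: the “deep mathematics” Liza mentions is mostly enormous matrix multiplications, which a GPU can run in parallel. Here’s a minimal sketch of the difference, assuming PyTorch is installed - our choice of library, not one named in the episode:

```python
# A minimal sketch of why GPUs matter for machine learning, assuming
# PyTorch is available. Sizes and timings are illustrative only.
import time
import torch

n = 4096
a = torch.randn(n, n)
b = torch.randn(n, n)

# The same matrix multiplication, first on the CPU...
t0 = time.perf_counter()
torch.matmul(a, b)
cpu_s = time.perf_counter() - t0

# ...then on the GPU, if one is present.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()  # wait for the transfer to finish
    t0 = time.perf_counter()
    torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()  # wait for the kernel to finish
    print(f"CPU: {cpu_s:.3f}s  GPU: {time.perf_counter() - t0:.3f}s")
else:
    print(f"CPU: {cpu_s:.3f}s (no GPU found)")
```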

ANDREW: That’s Liza Daly. She’s a software engineer who’s been working with the web for two decades, and most recently decided to teach herself machine learning - one of the major subsets of AI.

ANDREW: In the podcast description we’ll also provide a link to an excellent Medium post she wrote explaining the AI basics. But back over to her…

LIZA DALY: Most machine learning systems need tremendous numbers of examples from which to learn. Datasets numbering in the millions, if not billions, and one thing that’s really helped to make that available is the existence of the internet. So if you want to teach a computer how to read English sentences, we certainly have an explosion of English every single day in a written… in the common vernacular that computers can be trained on. And this stuff is all free. So even if we’d had the hardware and software capabilities that we do today prior to the internet, I think researchers would have struggled to get as many examples. You know, the infinite number of pictures that are on Instagram every day. That’s all raw material for training these systems, and it’s definitely accelerated the process.
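
As a toy version of what “learning from examples” means - six made-up sentences standing in for the millions Liza describes, with scikit-learn as an assumed library:

```python
# A toy illustration of learning from labelled examples, assuming
# scikit-learn is installed. Real systems train on millions of sentences.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "what a wonderful day", "this film was brilliant", "I loved it",
    "what a terrible day", "this film was awful", "I hated it",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative

# The model is never given rules for English; it only sees examples.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(sentences, labels)

print(model.predict(["what a brilliant film"]))  # likely [1]
```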

ANDREW: But despite an awful lot of progress - we’re still a long way from reaching the point of ‘Artificial General Intelligence’. While there’s a lot that today’s supercomputers can do in terms of crunching numbers - much of the current technology falls well short of replicating the complex thought patterns, and the ability to reason, unique to humans.

PROF. MARY-ANNE WILLIAMS: It turns out that expert knowledge is not difficult to encode in a computer system or a robot; very, very specific task-based information like playing chess. We’ve solved that problem. So the really big challenge - back in the fifties and even today - is how do we build a computer system or a robot that has the common sense of a young child?

KRIS: That’s Professor Mary-Anne Williams - you heard her at the top of the show. She’s the director of the Magic Lab at the University of Technology Sydney.

PROF. MARY-ANNE WILLIAMS: The Magic Lab is a research lab that focuses on disruptive innovation, and we’ve been working with artificial general intelligence for more than 20 years.

KRIS: But despite the name - there’s nothing ‘magic’ about artificial intelligence. Building an AI system requires data - and an awful lot of it. All of these systems need to be programmed and then trained to have any hope of replicating human thought patterns and the complexity of human life.

PROF. MARY-ANNE WILLIAMS: Well the difficulty is dealing with complexity, yes, uncertainty and the real world. And this is what young children learn very early. They bump into things, they fall over. And machine learning techniques today require that a robot fall over thousands of times before it actually learns anything valuable as to how not to fall over…. So even today we don’t know enough about the way our own brain works, even young brains.
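
Her “fall over thousands of times” point is trial-and-error learning in a nutshell. A minimal sketch of the idea, using a made-up one-state environment - the action names and fall probabilities are purely illustrative:

```python
# A minimal sketch of trial-and-error learning. The environment is a toy:
# a few candidate actions, each with an unknown chance of falling over.
import random

fall_prob = {"lean_back": 0.9, "stiff_legs": 0.6, "bend_knees": 0.1}  # hidden from the learner
value = {a: 0.0 for a in fall_prob}   # learned estimate of each action's success
counts = {a: 0 for a in fall_prob}

for trial in range(5000):
    # Mostly exploit the best-known action, sometimes explore a random one.
    if random.random() < 0.1:
        action = random.choice(list(fall_prob))
    else:
        action = max(value, key=value.get)
    reward = 0.0 if random.random() < fall_prob[action] else 1.0  # 1 = stayed upright
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # running average

print(max(value, key=value.get))  # after thousands of falls: "bend_knees"
```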

ANDREW: And although we don’t know enough about the brain itself to fully replicate it yet - there are a few areas where current AI systems are catching up to, and leaping ahead of, us mere mortals.

KRIS: AI can already process and analyse huge amounts of data through machine learning and complex neural networks. Google’s AI system AlphaGo recently defeated the world Go champion - and the computer vision systems you might see in a self-driving car already far exceed the capabilities of human sight, thanks to an array of more capable sensors that our bodies just don’t have - like infrared.

KRIS: And one startup that’s making use of all of this new technology - and the vast amounts of data available - is an Australian company called Black AI, which has developed a system called Ethr that can analyse people moving through a space.

KEATON OKKONEN (BLACK AI): It’s a computer vision system to detect, track and recognise people across massive city spaces. So the way that works is we take these 3D vision sensors and we put them on the street – so either on street signs, lamp posts, even on the walls – and they map out an environment in 3D.
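
The “track” part of that pipeline can be sketched with the textbook nearest-neighbour idea: match each new detection to the closest existing track. Black AI hasn’t published its actual method, so treat this only as an illustration:

```python
# A hedged sketch of the "track" step: greedily matching fresh detections
# to existing tracks by nearest position. Illustration only.
import math

def match_detections(tracks, detections, max_dist=1.0):
    """tracks: {track_id: (x, y)}, detections: list of (x, y) positions."""
    assignments = {}
    unmatched = list(detections)
    for tid, pos in tracks.items():
        if not unmatched:
            break
        # Pick the closest unclaimed detection for this track.
        best = min(unmatched, key=lambda d: math.dist(pos, d))
        if math.dist(pos, best) <= max_dist:
            assignments[tid] = best
            unmatched.remove(best)
    return assignments, unmatched  # unmatched detections may become new tracks

tracks = {1: (0.0, 0.0), 2: (5.0, 5.0)}
print(match_detections(tracks, [(0.2, 0.1), (5.1, 4.8), (9.0, 9.0)]))
```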

KRIS: That’s Keaton Okkonen - one of the co-founders of Black AI - and he says their system spawned out of a desire to build the types of AI systems you may have seen in Iron Man.

KEATON OKKONEN (BLACK AI): Tony Stark has this personal assistant that’s an AI called J.A.R.V.I.S. and it’s sort of the ultimate home companion. It provides the perfect personalised interaction. It knows what you’re doing. It can help you with anything that you might need. We originally started off trying to build this and replicate that, and we weren’t really happy with building a system that would just be your assistant on a desktop on your operating system or phone; we wanted to break out of the computer. So a part of that was we needed to build a system to understand your behaviour inside a physical space so that it could actually interpret that and be able to help you. [cut to] - And then we decided well let’s, instead of building smart homes where a system can understand your behaviour in a house, let’s take that to the absolute extreme and stretch it across entire city spaces.

KRIS: How do you see this technology being used?

KEATON OKKONEN (BLACK AI): So if you look at how things work online, we know more about how people interact with our websites than we do about how they interact with our city spaces, which is kind of broken. So we see ourselves bridging that so that we can collect information on how people move through our city from one side to the other, how people interact with public services like transport, and how people interact between themselves and those structures generally. All that information is currently just not being collected and it’s a hugely missed opportunity for optimisation, generally.

ANDREW: So that’s all well and good, but does this not feel a little Big Brother-esque? Gathering all this data from public spaces? Tracking how we move?

KRIS: That’s what I thought too - so I put that question to Keaton, because surely I wasn’t the only person asking it.

KEATON OKKONEN (BLACK AI): We’re building a system to track and recognise people across public space and understand their behaviour, so yeah, the obvious response is, “Hey, I don’t want to be tracked, and is that even legal? Can you do this, that and the other?” I guess our answer is that at the moment we’re trying to anonymise every point of data. So rather than knowing that James is walking through a city street, we know that there’s a white male who’s 182 cm tall, who walks at a certain pace, and might be Caucasian, age 22, yada, yada. So we collect a lot of features that might be able to eventually identify James, but right now it’s not crossing that final bridge.
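
A record like the one Keaton describes might look something like this - the field names are our own illustration, not Black AI’s actual schema:

```python
# A sketch of an anonymised observation: descriptive features, but no
# direct identifier. Field names are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class PersonObservation:
    height_cm: int           # e.g. 182
    walking_pace_m_s: float  # estimated from position over time
    estimated_age: int       # e.g. 22
    apparent_sex: str
    # Note what is absent: no name, face image, or device ID. Enough
    # features combined could still re-identify someone - the "final
    # bridge" Keaton says they haven't crossed.

obs = PersonObservation(182, 1.4, 22, "male")
print(obs)
```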

KRIS: So it has the perception of potentially being a bit 1984, Big Brother sort of thing – which is obviously something that you’re going to have to deal with. How do you see that playing out long term?

KEATON OKKONEN (BLACK AI): Definitely. I think there’s a couple of things to note; the first being just around how we feel about privacy as a society. There’s quite a lot of shifts that are happening I guess socially. You can sort of see that happening online with people giving up their privacy for convenience every day. When you look at Facebook, Google services, all of those are built on data so that they can collect more information about who you are, build personalised services, and you sort of get to enjoy that experience, which is great. That’s definitely shifting into the physical domain and I think it’s happening more and more.

KRIS: It turns out this question of privacy is actually a recurring theme in the discussion around Artificial Intelligence. And when you start discussing the idea of a general intelligence system, it raises a bunch of other questions - about morality. (Sound from Siraj Raval YouTube video).

KRIS: That was Siraj Raval - one of the speakers at The Next Web conference in Amsterdam - I went along to see what it was all about.

KRIS: It’s basically a huge technology conference, held at an old gasworks building - where everyone with an idea or interest in tech goes to talk about everything from virtual reality - to self-driving cars - and of course AI.

KRIS: I tried to listen to as many discussions around AI as I could - and a lot of them centred on the intersection of data collection and privacy. All of these systems need a huge amount of data to work - and where there’s data, there’s potential for that data to be misused.

KRIS: The other problem is that if you don’t train the system correctly - using good data - there’s potential for the system to think it’s solving a problem while actually creating a new one: a moral problem, where the AI system might decide the best way to complete its task is something harmful - the worst case being that it decides to terminate you.

KEVIN ROSE (TRUE VENTURES): I think that everyone is assuming that at some point when you have enough interconnected nodes that there will be some type of emergent consciousness that happens, and we just don’t know if that’s going to happen.... I certainly know one thing is for sure and that is that using these technologies in a very directed fashion to help us identify new potential cancer drugs, or… There's a thousand different use cases like that where it's just like a no-brainer. And I'm willing to trade privacy for an improved quality of life.

KRIS: That’s Kevin Rose - Kevin is currently an investor with a company called True Ventures - and he’s also started a number of companies himself.

KEVIN ROSE (TRUE VENTURES): I think that anytime you have machines analysing our data, especially our private data at scale, there are issues around it. I don’t think that the machines are going to rise up any time soon and try and take things over. It will be really interesting to see how they perceive the world... The only reason we perceive our world the way that we do is because of the hardware that we have built into our bodies and our senses. They don’t have eyes and they don’t have the thousand different sensors that we do that are very sophisticated. And so it’ll be interesting to see if they can grasp – if this does happen – what it's like for us to live in this tangible world, where they're in more of a zeros-and-ones kind of digital world, if that makes sense at all.

ANDREW: With the huge amount of data being gathered by AI systems - there are a number of other questions being raised. What could happen if that data got hacked, or if the AI misinterpreted the data in a non-lethal but still impactful way?

ANDREW: And more importantly - what role should governments have in keeping the AI systems under control?

PROF. MARY-ANNE WILLIAMS: This is critical for mankind because we don’t want to be deploying technologies that we do not understand.

PROF. MARY-ANNE WILLIAMS: It’s also important for nations, nations like Australia, to explore and to lead because we wouldn’t want to be importing this kind of technology from another nation state. Of course security is huge. Ethics is big, but just security is enormous.

KRIS: Do you think our governments are equipped to deal with this technology as it changes the marketplace?

PROF. MARY-ANNE WILLIAMS: Absolutely not. But I don’t think it’s necessarily up to government. This is really something for society and business as well. If we’re waiting for the government to solve this problem then we are really in trouble.

LIZA DALY: There’s a legitimate concern that’s already been borne out about these systems amplifying existing human bias…. they’re only as good as the data that you feed them.

ANDREW: That’s Liza Daly again.

LIZA DALY: And they are exceptional computation machines so they will find correlations in data that we as humans are not able to identify, because there are just too many parameters, or the data is too immense, and maybe in areas we haven’t even thought to try to derive correlations. If it’s in the data, the computer will pick it up. And what that could mean is that if you design a system to decide whether someone is a high risk for some behaviour that we as humans know might be correlated with race or socioeconomic status – not caused by, but just correlated with – the computer does not understand that distinction and will sort of happily give our bias back to us by saying, “Well, most insurance policies with good rates are given to Caucasians. That’s what you should keep doing,” because that’s the historical precedent that’s represented in the data. That’s a real concern. And people are certainly aware of it, but there’s a… It’s very difficult with these machine learning models to really understand what conclusions they have drawn, so they’re difficult to audit. It’s very hard for us to look at it and actually say, “Is this algorithm racist?” We don’t really know how it got to its answer.
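
Liza’s point is easy to demonstrate with a small synthetic experiment: the model is never shown the sensitive attribute, only a proxy correlated with it, yet it learns to use the proxy anyway. All of the data below is fabricated for the demo, with scikit-learn as an assumed library:

```python
# A synthetic demo of proxy bias: the sensitive attribute is never given
# to the model, but a correlated proxy (postcode) carries it in anyway.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)
rows, labels = [], []
for _ in range(2000):
    group = random.randint(0, 1)  # sensitive attribute (hidden from the model)
    postcode = group if random.random() < 0.9 else 1 - group  # 90% correlated proxy
    income = random.gauss(50 + 10 * group, 10)
    approved = 1 if random.random() < (0.3 + 0.5 * group) else 0  # biased history
    rows.append([postcode, income])
    labels.append(approved)

model = LogisticRegression(max_iter=1000).fit(rows, labels)
# The postcode coefficient comes out strongly positive: the model has
# learned to use the proxy, reproducing the historical bias.
print(model.coef_)
```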

ANDREW: Just like in high school, you should always show your working for how you got to an answer.

ELMO KEEP: And there’s so much evidence of this already happening for people of colour particularly who say, “I’m discriminated against,” when they try and book Airbnb.

ANDREW: This is Elmo Keep. She’s an Aussie writer, currently based in Mexico, chasing weird and wonderful stories at the intersection of humans, technology and society.

ELMO KEEP: Or Facebook having so much socioeconomic data on people that when you go and apply for a lease, if your friend group is too poor you’re going to start getting knocked back for financing. Things like that.

ELMO KEEP: And that’s where the AI that we should be worried about is actually happening. And it’s happening faster than it could be legislated against because it’s happening all in the private sector. It’s happening in all these proprietary companies. So I feel like, okay, please be less worried about robots and be more worried about the fact that these companies are exploiting our data and they’re totally unaccountable to anyone at the moment.

KRIS: So you might be worried about the huge amount of data these AI systems collect, and what happens if someone wanted to exploit that - but Professor Mary-Anne Williams says having an increasing amount of data might also provide some form of security.

PROF. MARY-ANNE WILLIAMS: Now if you consider the way your brain works, you cannot recreate entire experiences. You sort of have stories. This is why stories are so important to us; they’re a summary of what actually happened. Currently in your body a lot is going on. Think of all the billions of cells. So it’s very hard even for the human brain to keep tabs on everything that’s going on... So while data and collecting lots of it is the problem, it’s probably also the solution.

ANDREW: No matter which side of the artificial intelligence fence you sit on - it seems that much of the startup world - and especially the big players like Google - are actively pursuing increasingly advanced AI systems. Google’s CEO Sundar Pichai recently announced that the whole company is pivoting to be ‘AI first’.

SUNDAR PICHAI (from Google IO) - “Mobile made us reimagine every product we were working on, we had to take into account that the user interaction model had fundamentally changed, with multi-touch, location, identity, payments, and so on. Similarly in an AI first world we are rethinking all our products and are applying AI and machine learning to solve user problems."

KRIS: You’ve also got Amazon - and IBM - giving public access to their systems, so that you too can build your own AI-driven apps. And for investors like Kevin Rose, and startups like Black AI - that’s actually a super exciting prospect.

KEVIN ROSE (TRUE VENTURES): I think that you're going to have small pockets of really world-class entrepreneurs, computer scientists that come out of university, that can potentially create new innovations in AI, in which case those companies would probably be worth backing. But for the majority of folks out there, I think that as an entrepreneur you're best off not trying to develop the AI expertise in-house but really just go with one of the big three or big four, however many it ends up being... So am I excited about it? 100%. Is it an area I'll be investing in? 100%. It's early days and fun to track this stuff.

KEATON OKKONEN (BLACK AI): I’d say AI is not magic - that’s something really important to realise. It’s just… It’s a tool that everybody has access to, and anybody can learn about how machine learning algorithms function or how the chat bots that you interact with every day work. And I think that that education piece and understanding piece is going to be really important so that we’re not scared of the future that we’re going to ultimately face in the next five years.