Krook spent a year talking to experts in the field of AI and machine learning, who have first-hand experience of developing, analysing and managing this new technology. It’s a lot to take in: everything from AI companions to the quandaries of driverless cars, autonomous warfare (!), and the use of AI to create human super-intelligence. Of course, there’s also talk of the Robot Apocalypse.
On the eve of the film’s premiere on SBS VICELAND and On Demand, I spoke to Krook via video.
“The first thing that comes out of people’s mouths when I say, ‘I made a documentary on AI’, is, ‘Are we screwed?’”
What did you know beforehand, going into it? What was your understanding of AI?
I think we thought we knew everything. At least we had a handle on where the technology was at, where it was headed. And within about a month of doing our initial research, we realised we didn’t know anything. It’s a very complicated subject and it’s easy to misunderstand the capabilities of the technology if you believe what you read. If your only information on AI is what you read in the news, it can be a bit skewed.
When we started doing research on the project, even amongst the experts in the field, they have very varying opinions. You’ve got people like Elon Musk who say there’s going to be a million Level 5 fully autonomous taxis able to pick you up from any place in town and take you to another place, within a year. He said this a year ago, and currently there are zero on the road. We’re at about Level 2 and a half.
And then you have other people that say, these things are decades and decades and decades off. When it comes to general intelligence or AI that is as smart as a human being, you have some people saying we may never get there and others saying this will happen in the next decade or two. So, to say the least, it was a challenging prospect to cut through the noise and give people a nuanced and pragmatic tool set to understand these technologies and understand where they’re at right now.
Especially, because we all see that footage of the robot opening the door handle and everyone instantly cries, “It’s Black Mirror!” and “Rise of the Machines!”
How did you find your experts?
Well, as I mentioned, we wanted to find people that were going to approach this in a more pragmatic way. People that had measured views. There’s a lot of wild statements out there. We wanted people that said things with a grain of salt and humility because no one really knows where the technology’s headed. And on top of that, we wanted people that were also entertaining. We were mindful that this is a science documentary based on a pretty hard-to-understand tech. So, we wanted to find people that were easily relatable. People that had fun anecdotes, not just your stuffy scientists. People like Tim Urban, who’s got a great tech blog. Pablos Holman, the guy with the cool specs, he’s an ex-hacker. People like that, just because we didn’t want to make a cold movie. And also, we wanted to find stories with the technology that were very human in nature. So, people like Eugenia Kuyda with her Replika chatbot and stories like that, that people could easily connect with because they could see how it could affect their lives in a direct way.
It’s interesting to note some of your experts’ job titles, especially ethicists. Are you seeing enough consideration of the ethics in the sector? Are there a lot of those kinds of jobs? Do we need more of them?
I think people are starting to take it more seriously. I think, as we mention in the film, the technology isn’t as advanced as we think right now. We overestimate these technologies in the short-term, but in the long-term we underestimate them. So, I think you see companies like, we brought up Facebook in the film. It’s a very, very, I guess, we’ll call it ‘stupid AI’. It’s not very advanced, but their algorithms are driving what their one billion-plus, two billion-plus users are seeing every day. And that’s where a lot of people get their news. So, I think you’re seeing that quite un-advanced versions of AI are having quite a big impact. And you’re starting to see companies catch onto the fact that these technologies, even though they’re very nascent right now, and they’ve got a ways to go, that they’re having these huge effects.
We wanted to make the film a conversation starter. There’s nothing you can do in 90 minutes that’s going to solve and answer all these questions. So, we were just trying to get the conversation started with each of the seven or eight subjects we explored, like with cars. There is no answer to ‘How safe is safe enough?’ when we put these self-driving cars on the road. Is a self-driving car being as safe as a human good enough to put on the road, or does it need to be five times as safe, does it need to be 20 times as safe? When would you feel comfortable putting your kid in the car? And you’re starting to see these larger companies bring people in from other social sciences to measure the human costs rather than it just being a technology problem. There’s a lot of ethical things that come up. So, maybe Philosophy is a good major for young people coming up.
It’s easy to use the phrase an ‘ethical minefield’, but then you have an actual minefield coming into this as well, with the conversation around automated warfare. I have to say, your expert is very persuasive! It’s a lot to grapple with. What was your awareness of that debate?
It wasn’t the first thing that came on our radar, but we read an article that politicians were debating the use of these technologies in warfare. And at first, and still, I thought, well, that’s terrifying. And then we found it was really – I don’t know, I guess unnerving is the right word – to be on the floor of the UN watching delegates from each country discuss, should we or should we not be using autonomous or AI technology in warfare. And the scary thing was that when I listened to the military perspective, they actually made a few cogent points. How can we make warfare safer? Does an 18-year-old person on the battlefield – can they afford to shoot second now? Is war inevitable, and are there ways to limit collateral damage, civilian casualties? A lot of the wars that are fought now are smaller proxy wars. They have a lot of boots on the ground, and many of the casualties are civilians.
So, if there’s a way to make these operations safer, should they be looked into? It’s quite a slippery slope. So, what we tried to do with that technology and all the other technologies we looked at, was show both sides of the argument and let people decide for themselves. Yeah, that’s a tricky one. And I still grapple with that right now. And I think the point we tried to bring up is, there were people debating on how do we engage in warfare? How do we kill one another safely when maybe the bigger conversation should be, should we be finding ways not to enter into these conflicts, rather than giving ourselves another tool in the toolkit to wage war. Freaky stuff.
And to a related point that is so relevant to the conversations happening right now around bias in law enforcement: bias in machine learning.
Yeah, definitely. I think it comes down to, a lot of times, what’s in the data sets? These AI systems are trained on huge amounts of data, and you’ll find bias when, say, there’s facial recognition. If your entire facial recognition data set is Caucasians, it’s going to have trouble identifying people of other races. And being misidentified by facial recognition is not a good thing when it comes to law enforcement and other things like this. So, we’re finding, even through the course of making the film, this technology moves so fast, but we’ve seen a lot being done to address the problem of bias in data sets since we started. And they’re finding that more diversity within these data sets actually has helped reduce bias in a lot of these algorithms, which is a positive sign. But at the end of the day, I think we’re still at the point where we don’t want to give these algorithms too much control.
I think there needs to be humans in the loop that understand ethics, and not everything in life boils down to zeros and ones, and Xs and Os. So, I think it’s good to have humans in the loop and also society in the loop. Not just the people designing these technologies, but society as a whole should be hip to what’s going on. Because if not, you’re going to wake up in 20 years and be living in a very different world, I think.
Even in having this conversation with you, there is so much to speak about. In making your film, how did you condense it down? I’d love to know how the storyteller in you was able to set that out, to make a coherent film.
Not going to lie. It wasn’t easy. It’s a huge subject. It’s incredibly complicated, like I said. A lot of long winding two- or three-hour conversations with the rest of the team. Because there’s so many different applications of the technology, we wanted to find ones that would connect with people, be entertaining. There were some drier stories. There were some more interesting ones. We also didn’t want to find stories that were too hyperbolic and too misleading. But ultimately I think what binds all of those stories together, each one looks at the technologies and shows an ethical dilemma, how it could make our lives better or worse. And that’s the stress test that each subject had to go through. Can this technology be used to better humanity or make humanity worse? Let’s not try to persuade people either way, but just lay the facts down. And I think that’s the through line that connects them all together.
Because the first thing that comes out of people’s mouths when I say, “Oh, I made a documentary on AI,” it’s, “Are we screwed?” That seems to be where everyone’s mindset’s at. So, with each one of these stories, it was, well, let’s take a look at seven, eight different technologies and see if they make you more optimistic about the future or more pessimistic.
And where are you at with it, just personally?
Well, I think it’s a tool. So, I would say, I am as optimistic as I can be about the technology as I am with human beings. And that seems to change month by month. People have seen a lot of scary things happening around the world right now. I, for one, am feeling pretty optimistic about humanity. I think the racial inequality, all the protests surrounding that, have been actually inspirational for me. I find that’s a good sign and that makes me more optimistic about humanity. Democracy can be ugly, but I think that’s an instance where I’d be much more worried if people weren’t out in the streets demanding change.
So, when I see things like that, it makes me more optimistic about humanity and that then in turn makes me more optimistic that we can design and use these tools in correct ways. Because at the end of the day, it is a tool and it’s about how we use it. So, check back with me in two months and I might have a down day in humanity and might think we’re all doomed.
You open the film with personal stories, such as the app that replicates the conversations with a lost loved one. It’s hard to judge anyone who is grieving, of course, but have you used one of the replicant apps?
I have. That was actually a Black Mirror episode, more or less, in its own right. I did. So, when we started researching that story, I downloaded Replika and I can see the use in it. It definitely is a bit clunky at times, but at other times you can find yourself being a little tricked by it. Because at the end of the day, it is parlour tricks. But they’ve done a very good job at creating them, because they’re half scripted, half un-scripted. They did a good job, but when it came to the character we talked [to] that was a user of Replika, I thought all these people are going to be very lonely. How sad do you need to be to use one of these things? But we found out it was a very well-adjusted person. They were a single mom raising a kid with some sort of seizure epilepsy. And we found that some people don’t always have people to talk to and open up to, or complain to. And so, having a computer there to do it to, we actually found was quite refreshing, because you’re not going to be judged.
Sometimes people are going to be more open if they know somebody is not looking right back at them. We didn’t cover it in the film, but they’re using it for PTSD, for soldiers that come back from war. These are people that are trained not to show a lot of emotion. So, a lot of that’s repressed. And when they’re doing screenings with mental health professionals after getting home from the battlefield, they’re not always very willing to open up. But they found that if they can do initial screenings through an AI-driven system like Replika or something like that, people are much more willing to talk about things that are a bit more vulnerable.
And so I think you’ll see more and more of those kind of systems working themselves into other fields like psychology and other places where people just might not feel comfortable sharing their deepest fears.
What would you like to be a takeaway for people watching the film?
Yeah. I think if people can take away anything from the film, it’s that they need to stay educated. That’s the thing, you’ve always got to be learning. I think the world moves much faster now than it moved 20, 30, even 10 years ago. Change is ever accelerating, and if you’re not learning, you’re going to get left behind. I think that’s a very important thing, especially for the younger generations coming up. The world they’re going to live in, in 50, 60, 70 years’ time, is going to look much different. And it’s not just about AI, it’s just always be learning, always be paying attention.
I think that would be my biggest takeaway because if you don’t understand these technologies, it’s going to be harder and harder to remain relevant and competitive in the job market and other things like that because a lot of jobs will be destroyed. A lot of jobs will be augmented and a lot of new jobs will be created. If you’re not thinking about these things, I think you could be in a world of hurt.
Machine premieres on SBS VICELAND and On Demand on Sunday 28 June at 8.30pm.