Cassie Kozyrkov - Chief Decision Scientist At Google

Decisions are what create our lives and dictate our futures. All our decisions have compounding effects that shape the world. With that in mind, it is vital for us to make intelligent decisions when it matters. Chief Decision Scientist at Google and a leader in the field of Decision Intelligence, Cassie Kozyrkov unravels the process of decision-making and the role that data science and the concepts that underlie it play. Cassie explains the importance of humans - the decision-makers - in the safe, reliable, effective, and responsible production of artificial intelligence. She emphasizes how we can use our innate analytical decision-making to better ourselves and society for the future.

---

Cassie Kozyrkov - Chief Decision Scientist At Google

The Humanity Of AI

Cassie, welcome to the show. I'm honored that you could join us.

Thank you so much. It's a delight to be here.

You created the field of decision intelligence and you're also the Chief Decision Scientist at Google, but what I found most impressive is that your mission is to democratize decision intelligence for all of us. That's why you're here: we are all data analysts in a way, and you're going to help my readers and me hone and sharpen our skills, even if we're not involved with high-tech data science. To give a big tease to my audience, we're also going to be dealing with the super Sci-Fi word of AI, which is another thing that you're an expert in, and I look forward to that, but we're going to save it for later. Why this mission to help all of us become good analysts?

I like to call myself a recovering statistician. I find data really beautiful, as do many of my kind. What I realized is that while data are beautiful, it is decisions that are important. It's through our decision-making that we affect the world around us. What we should all be focusing on is turning information into better action, whether we do that with data in its electronic form or simply with the information that we take in through our senses. If we all get a little bit better at turning that into better decisions, then we'll all make the world a much better place. Call it purely selfish, but I would love to see a better world and I would love for that to start with all of us getting a little bit better, moving a little bit towards the best decision-makers that we could be.

I want to bring up something that you just said, because right now we are at a time when so much data and information is thrown our way that it is obviously an even more difficult task. I remember the famous quote by T.S. Eliot, where he says, "Where is the knowledge we've lost in information, and where is the wisdom we've lost in knowledge?" Part of your goal is to find that out for us, because now we're inundated with information but not necessarily knowledge, and we have to turn that knowledge into wisdom.

One of the things that one starts getting in the habit of as one gets familiar with the practice of decision intelligence is moving from being a passive consumer of information and data to being an active seeker of the information that you need. It's a little bit about flipping your order of approach. Here's an example. Say you were interested in figuring out whether you wanted to stay at a particular hotel, and I told you that this hotel has a 4.2 out of 5-star rating, a 4.2 out of 5 average review.

In telling you that, I have crushed your ability to use that data, that 4.2, for making your decision. Because if in your heart of hearts you really want to stay at this hotel, you're going to say, "4.2 out of 5? What an amazing score." On the other hand, if you didn't want to stay at this hotel, then the way you'll respond is, "What, am I some kind of animal, to stay at a 4.2 out of 5 stars hotel?" What has happened is that the information is no longer able to drive your decision.

You've allowed the way that you ask your question to be framed by what that information was. If you already have a decision that you want to take, you're just going to let confirmation bias drive it. You're just going to let what you already wanted to do be what pushes you towards your outcome, and you're just going to use that data, that information, as an excuse. The way that you would want to approach the decision is to start with the default action.

In the absence of any information at all, what are you going to do? Are you going to stay there or are you not? Being honest with yourself about how you would take that decision if you've got no further information, is an amazingly powerful starting point. From there you move to what is perhaps the career-making question for data scientists and all of us should get in the habit of asking ourselves this question as well. This question is, "What would it take to change my mind? What has to be true about the universe?" Let's say that you do want to stay at this hotel, the question you're asking yourself is, "What information would convince me not to do that? What has to be true for me to change my mind?” You would then ask yourself, "Am I framing it in terms of a score that is below some number and what number is that?"

Is that below 3.9? Is that below 4.5? It will vary from decision-maker to decision-maker, but you're then going to go out and seek that information after having figured out how you want to frame your decision. Maybe for you, it's not the stars at all. Maybe you want to know something about bedbugs or whether they have free Wi-Fi or a whole host of other things. Maybe even a blend of factors. By really understanding what you need, you're then going to tailor your approach to seeking information, and you're going to be more immune to people throwing all kinds of things at you: "This hotel has three bouncy castles and six swimming pools." Maybe that has nothing to do with anything for you. Six pools, that's not what you're looking for. You won't get swayed by this outside, irrelevant information.
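To make that framing concrete, here is a small illustrative sketch in Python. It is not from the conversation itself; the factor names and thresholds are hypothetical. The point is simply that the default action and the mind-changing criteria are written down before any information is sought.

```python
# Illustrative sketch only: the factor names and thresholds here are hypothetical.

# Step 1: the default action, chosen before seeing any information.
default_action = "book the hotel"

# Step 2: what would have to be true to change my mind, written down in advance.
# Maybe it's not the stars at all, but a blend of factors.
def should_switch_away(avg_rating: float, bedbug_reports: int, has_free_wifi: bool) -> bool:
    return avg_rating < 4.0 or bedbug_reports > 0 or not has_free_wifi

# Step 3: only now go out and seek exactly that information.
observed = {"avg_rating": 4.2, "bedbug_reports": 0, "has_free_wifi": True}

action = "look elsewhere" if should_switch_away(**observed) else default_action
print(action)  # the pre-committed framing decides, not confirmation bias
```

Because the criteria were committed to first, the 4.2 can no longer be reinterpreted to fit a decision that was already made.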

You mentioned the word bias, and the one thing you want us to be totally aware of is that, without exception, we human beings are biased. We can't help it. In fact, in a certain sense, it's a feature that literally has allowed us to evolve. We never would have gotten off the plains of the savanna if we were not able to be biased. Yet you want us to curtail those biases so that we are able to make a proper decision, and I find that to be so difficult in itself.

One needs to realize that the word bias is both complicated and technical. Different disciplines use it in different ways, so we need to be clear on what we mean by bias. If we're talking about bias in the algorithmic or AI sense, what we mean is that there's something wrong with the inputs. The inputs are skewed in some way. What you mean in your use of the word bias here is cognitive bias, and that is that our brains have evolved to take certain shortcuts through our decision-making. If we're being attacked by a lion on the savanna, we don't have time to sit down and perfectly think through the best route to take to run away from the lion. We have got to act quickly and we've got to take certain shortcuts. Also, the brain would require so many more calories from us if we optimized everything.

Decision Intelligence: As one gets familiar with the practice of decision intelligence, they start moving from a passive consumer of information and data to an active seeker of information that’s needed.

We need those shortcuts, but those shortcuts don't serve us well in every situation, and we didn't evolve them in the environment that we find ourselves in now. We may find ourselves taking our decisions in ways that are completely unsuitable in our modern setting. First comes being aware of how we make our decisions, why we do what we do, and then thinking through whether we do want to use our intuition or heuristics, whether we want to let our biases drive us. Sometimes they're good. Sometimes they're not.

Bias as I'm using it now is more of a technical term. There's also a use of "bias" in the sense of unfairness, and that's not the way I'm using the word. Treating people unfairly, being nasty to our fellow humans, is never a good thing. I also want to make it clear that when I say things like "Data don't let you make objective decisions because there's always some human subjectivity in there" or "Humans are fundamentally biased because of our cognitive limitations and because of the types of information that we use," I'm never saying that this is any excuse to treat anyone else badly. That's a totally different conversation. I'm not talking about that bias. I'm talking about shortcuts that might have been helpful and healthy on the savanna that aren't helpful and healthy in modern life.

You said you're a recovering statistician. I saw the video where you said that, and you talked about the thing that must be used, which is assumptions, to make assumptions. We've always been raised to believe that assumptions are the worst things you can make, and now, all of a sudden, everything rests on assumptions. How do we turn that thinking around so that assumptions become beneficial for us?

Here's the thing with assumptions. When you're asking any question... let's go back to this hotel example. Think about all the shortcuts that we've taken to boil the question down to what rating out of five we would accept for the hotel. We are assuming just so many things. We are assuming so many things about what a hotel is. We are assuming so many things about the experience that you don't need to explicitly build into your metrics. We are assuming that there's a bed in the room. There are so many shortcuts that we're going to take. Also think about our structures for language... let's think about defining what a bed is. What makes a bed? On what kind of fine-grained level? What's in a bed frame? What subatomic particles are in it? We're not discussing any of those things.

What we're doing is assuming them away. There's a joke that physicists like to make about themselves, and that is that in order to make progress in physics, they have to narrow their attention to what they're interested in at the time. They're going to assume away all of the rough details. The joke is that the physicist will say, "Assume a perfectly spherical cow." Cows are not perfectly spherical, but for the purpose of discussion, let's assume that the cow is a sphere and move along. What that will do is make the calculations easier. It will require us to deal with less of the ugly real-world detail and it will let us focus on what seems important. What we also do is ignore a lot of reality by making assumptions.

That's a good thing. I always tell my fellow former statisticians that the more fine-grained you go with your assumptions, the longer your calculation is going to take. You don't want to make perfect assumptions because the value of the decision and the calculation are not infinite. If you're going to keep checking every tiny, next level of detail on every little assumption that you make, you will find yourself sitting in a cave for 5,000 years trying to do this data analysis. Getting more and more fine-grained. It never finishes and you don't even live that long. To get things done, we make assumptions. That's why they're useful.

By reading all of your work and watching all of your videos, I even became aware that there's a difference between a statistician and an analyst. An analyst is basically telling you the story according to the data or the information. I couldn't help but think of Mark Twain's line, "Lies, damned lies, and statistics," which you paraphrased in a different way as, "Lies, damned lies, and analytics."

One of the things I've learned from you is that analytics has to be based on what you can physically see. It has to be in the now. It can't be something from the past or the future. It's what you can tangibly see now. But how do we get away from the notion that statistics and even analytics can steer us astray? You give a great example of it with that front-page headline about COVID in New York. Everyone who read that one line would have had a whole different image in their mind of what was going on than the reality.

About that COVID headline: I wrote that blog post, and I was a little bit cheeky in making the title sound like I took issue with the article. I didn't at all. What I took issue with was how people would read it and what they would take away from it. This brings us back to assumptions. Every reader is going to bring some assumptions to it. As they read, they will fit that information onto their assumptions without reading deeply enough to have some of those assumptions challenged. All those small shortcuts that we take can lead us astray if we're not aware of them. Now to your specific question about analytics versus statistics. I like to think of data science as the discipline of making data useful.

It's an umbrella term that holds three sub-disciplines: statistics, analytics, and machine learning/artificial intelligence (AI). The way that you separate these disciplines is based on how you use them to make decisions. I think of it as none, few, and many, for analytics, statistics, and machine learning respectively. If you don't know what decisions you want to make before you approach the data, what you are doing is analytics. You are going and having a look at the data that you have. What this will do for you is inspire you to ask good questions, but the one golden rule is don't take any of it seriously.

Statistics is where you're making a few important decisions under uncertainty. That is about the quest for good answers, not for good questions. There you already know what the data need to do to get you to go one way or another. Statistics plus analytics together is how you find good questions and get good answers to them, but you need to use the two together carefully. Machine learning, that's about automation. That's about making many decisions.

The statistics and analytics combo is a powerful one, but what people misunderstand is that analytics is specifically about having a look, taking in the data, the information from your environment, and being inspired to ask questions, not to take what you see too seriously. Data doesn't have to be in an electronic form. It's like this: if I never look outside my window, then I might never notice that there's a whole bunch of emergency vehicles on my street. I might never notice that.

Decision Intelligence: By understanding what you need, you can tailor your approach by seeking information.

I might never ask any questions about it. But as I walk up to my window and I see all that, all I can conclude is, "Interesting. Six ambulances on my street," and now I can start wondering, "I wonder what happened." I'm not able to answer any of these questions carefully and rigorously from that glance, or to think well about what sorts of decisions I should make. I can start framing those decision questions, and I can move on to statistics from there.

If I start getting confused, jumping to conclusions, and overreaching based on what I'm seeing, I can lead myself and everyone else astray. I have to be very careful and remember that when I just casually take a look and let the information find me, I don't know what decisions I'm making in advance. I'm not seeking information carefully and actively. That is like our hotel example. What I've seen outside my window is 4.2. Is that a big number? Is that a little number? Who knows? There wasn't any decision framed.

You even say that rather than saying, "We conclude," a good way to express an analysis would be, "We are inspired to wonder." I want to dig deep into that inspiring to wonder, because one of the things you say about statistics is that it's about changing your mind when you're uncertain. That, in my opinion, includes the ability to wonder, because wonder is dealing with uncertainty. I have a personal mission here since I live my life in uncertainty; I want to dig deep into the importance of inspiring wonder. The ability to wonder is one of the highest levels that a human can attain. That doesn't necessarily mean a question. It could also mean just having thoughts that you wonder about.

I can see it in the way that you're framing these questions that you've got this deep commitment to wanting to use information in the most awesome way that you can. What we are looking at is turning information into better action. The way that an analyst contributes is they are about going and seeking information, looking at it and recognizing what they're seeing. Taking information in through their senses, whether that's digital information or whether they're just interacting with their universe.

There are certain principles that an expert analyst is just great at, but every person can also apply. We're all analysts already because we're all absorbing information through our senses. Principles like not taking the information that you absorb too seriously, and trying to encounter as much of it as possible without getting stuck on it. The open-mindedness of coming up with many different explanations for what you're seeing. A great analyst is a person for whom you hold up one of those Rorschach blots, one of those inkblots where you drip some ink on a page, fold the page over, and it makes some shape.

If you ask a good analyst what they're seeing there, they might say, "I see a bat and I see maybe two goats." A regular person might stop at the bat. I see a bat. That's what I see and that's my thing. But a great analyst will just keep going: "That could also be a pumpkin. That could also be a butterfly. I also see an angel." They're looking at exactly the same data. The data is that ink thing, but they are seeing many different things that it could mean.

They just keep going. They've got this incredible open mind, and that's a principle that all of us could bring to it. We see some inflammatory tweet, and the non-analytic mindset goes, "I'm going to take this exactly at face value." Whichever way it aligns with my personal biases, I'm just going to take that and remember it as strengthening my belief. If you take the analytics mindset, you start saying, "It could mean this. It could also mean some other thing. It could also mean twenty other things, and anyway, how were they able to make the statement that they made? Let me find some alternative sources on this. Let me just force myself to explore all the impressions that I could get out of this." I think that would help us humans in navigating the information that's thrown at us. Remember that Rorschach blot; remember to see as many explanations from that same piece of data as possible. Also, as a good analyst, find as many other Rorschach blots as you can to really try to get the fullest picture.

To me, even as you're talking, it seems almost joyous to be free to absorb. Because we're always in such a state of fear of making mistakes, we're not allowing ourselves the pleasure of pure absorption, which is joy in its highest form.

This is where, one might say, I might even have been abused a little by my statistics programs, because a statistician comes into that part of the information project where you have to be careful and rigorous and go so slowly, because the biggest sin in statistics is coming to the wrong conclusion. You want to make sure that when you come to a conclusion, it's the right conclusion. You're so careful. You go so slowly, and it's very rewarding work when the decision that you're helping to inform is one that is worth that careful tiptoeing with bated breath. You get so used to going slowly and freaking out whenever anything looks sloppy, and then you move over and you see analytics, and it's just so free. Because what we're doing over there is saying, "Rigor is not the game here," not like it is in statistics, because we're not coming to any conclusions beyond our data. We're not doing any of that stuff. We're not coming to conclusions.

All we're doing is we're describing our reality. All I know is what is here, not going anywhere beyond it. How do I become the best analyst that I can be? What I do is increase the speed with which I explore my reality. I'm doing the opposite of what a statistician does. I am putting on that jet pack and zooming through. Having a look and enjoying what's around me, just taking in as much as I can through my senses. Am I doing it well? No, but I don't care. Because later a statistician will follow up. Maybe if I'm the statistician, if I'm the data scientist, I'm going to do both, but it's a completely different mindset.

Decision Intelligence: We ignore a lot of reality by making assumptions.

Another thing confuses both statisticians and machine learning engineers about expert analysts. People who understand what quality looks like will praise this person for being an amazing analyst, and then we look at how they write their code, and it's the sloppiest code ever; at how they document their work and how they even take notes, and it's all over the place. Actually, they're doing it right for what their discipline is all about, and that is quickly exploring your reality. The equivalent is exploring the world. A statistician will have a specific question about a specific thing that is specifically in Paris, and they will go to Paris carefully, and they will write 100 pages of documentation that is carefully about this one thing that's in the Louvre.

An analyst says, "I don't even know that it's the Louvre that I want to be looking at. How quickly can I zoom through all of the world's museums?" If I'm stopping at each one and writing 100 pages of documentation for each item in each museum, I'm not going to get through the whole world in my lifetime. I mustn't approach it that way. I have to free myself to run around. I go into this museum, I quickly find out it's not what I needed, I turn around and go to the next one as quickly as I can. That's exploration. It's a lot of fun, but don't jump to any conclusions. You need to totally switch your approach and mindset if you want to believe that you can make a rigorous conclusion. Data science is joining the three together.

This reminds me of very early on when I had my show on PBS. I had the great science fiction writer Ray Bradbury on my show. I can't forget the line he said, and it parallels what you said. He said, "Sometimes you have to jump off the cliff and then build the wings on your way down." That's a bit what this sounds like. I love that, and you seem to say the same thing. Don't be afraid to start, just keep starting, just keep moving, and you'll sort it out as you go. Here's a perfect transition for us to get into AI. Because what happens with AI and science fiction is that it becomes a little tricky. We were talking about data before, just our own absorption of it. Now, these machines take in data practically at the speed of light.

One of the things that you shatter right away is that AI is a Sci-Fi term. It's machine learning that we're talking about. And what machine learning is, and this blew my mind, is label-making. AI is label-making. When we get into that, we'll explore some of the ways that we can expand our notions of AI and also take away some fear. That's what you do so well. Let's talk about that first thing: it's a label-making machine. Is that all it is?

Even "machine learning": are the machines running around learning? Is there a nursery for machines where they're being shown their ABCs? That's pretty Sci-Fi as well. I like to joke about what we would have called it if statisticians had named machine learning, because we statisticians like a thing to be named so it says exactly what it does on the tin. We like the most boring names for things. If we had named it, we would have named it Thing Labeling with Examples.

What do I mean by this? What do I mean by the labeling? This is a way that we can put inputs into a system and get outputs or labels out. That's what computer programmers are already doing. What does the programmer do? They write some code and what happens is, some input comes into that code and then the code gives out an output. That's what programmers have been doing for all those decades that they've been doing it.

It could be any system. It could be something that controls the temperature in your home. The programmer might write that if the temperature rises above 75 degrees, then turn on the cooling. Some input comes in from a sensor and some output that is a decision or action that the machine takes is now to turn on the AC. Where does that recipe or that code come from in the middle? It comes from the human. They have to sit and communicate with the universe and they say, "Here's how we're going to react to the inputs." If it's more than 75, now here is what we do. If it's 60, now here is what we do. If it's 200, ...oh dear.

It's the programmer who has to reason about how that recipe in the middle takes the inputs and turns them into outputs, labels, actions, decisions, and what that has to look like. What machine learning does is turn that around a little bit. Machine learning says, "Instead of the human having to craft that recipe themselves, cook it up in their own brain, why don't we let some data speak? Why don't we look at some patterns in data and automatically turn those patterns into recipes?" The human doesn't have to author it themselves. Maybe you have this big old dataset that showed you when users of your system would turn on the cooling themselves, and maybe it wasn't at 75 degrees.

Now you don't have to apply the arrogance of the programmer to decide on behalf of all the users that 75 is where we should start turning on the cooling system. If we were automating it, we can now look at the data and say, perhaps under these circumstances, with this combination of patterns in what the user is doing, we turn on the cooling. The difference is that the people, the programmers, are using data and information as well. They're using their lived experience and stitching the information they took in through their senses into a recipe. We can also take the data itself, which we collect and curate carefully, thinking about what would be the right data for designing the system, and then pull patterns out of that data and propose a recipe based on those patterns.
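To make the contrast concrete, here is a minimal, hypothetical sketch in Python. It is not Cassie's or Google's code; the logged data and the crude threshold-picking rule are invented purely to show the shift from a hand-written recipe to one proposed by patterns in data.

```python
# Illustrative sketch only: the data and the "learning" rule here are made up.

# Hand-written recipe: the programmer hard-codes the threshold themselves.
def should_cool_handwritten(temperature_f: float) -> bool:
    return temperature_f > 75  # the programmer's guess on behalf of every user

# Learned recipe: let logged user behavior propose the threshold instead.
# Each record: (room temperature, did the user turn the cooling on?)
observations = [(68, False), (71, False), (72, False), (74, True), (76, True), (79, True)]

# A deliberately crude "pattern-finding" step: split the difference between the
# warmest reading where cooling stayed off and the coolest one where it went on.
# Real machine learning would fit this from far more data and far more inputs.
warmest_off = max(temp for temp, cooled in observations if not cooled)
coolest_on = min(temp for temp, cooled in observations if cooled)
learned_threshold = (warmest_off + coolest_on) / 2  # 73.0 here, not 75

def should_cool_learned(temperature_f: float) -> bool:
    return temperature_f > learned_threshold

print(should_cool_handwritten(74), should_cool_learned(74))  # False True
```

The recipe in the middle is no longer authored line by line by a human; it is proposed by the patterns in the collected data.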

That is what machine learning is all about. We're not finished right there, because what we're also going to need in this process is analytics and statistics. With machine learning, you think you've made a recipe that works. Does it actually work? That's a very good and important question before you release and launch the system. The "does it work" part is handled by statistics at the end. There should be some performance bar, and you should carefully test that it does work. What about analytics? What sorts of data inputs are even worth trying out for building a recipe? Someone's got to look at the available data inputs and suggest them.

Is it that users fiddle with the dial themselves? That seems to be an easy one, but what if there were some other interesting indicators that might suggest that now is the time to start cooling? Maybe you had sensors that figured out that the users are moving away from the AC. They might not even realize that they're stepping away from it because they're cold. Maybe that could be a sign that the distance from the AC unit is worth using in the recipe to decide when to adjust that unit's temperature.

Decision Intelligence: Statistics plus analytics together is how you find good questions and get good answers to them.

You say recipe, and you give us another example about finding the lamp. You rub the lamp and the genie comes out. You say it's important, and I think it plays right off of this, because what are you using as your information? You say, "It's not the genie or the machine that's dangerous." That's the fear that everyone has with AI. And I'm quoting you: "It's the unskilled wisher." We have to wish responsibly. That goes back to the thing you said earlier in our conversation, that we as humans must take into account the golden rule. We must care about our fellow humans. We must understand what virtues we possess, our benevolence, all of those things, if we're going to wish responsibly.

When it comes to machine learning and AI, what we're really doing is automating the generation of these recipes. The technical term would be models: things that take data inputs and create data outputs or decisions or actions. We are automating decision-making there. A programmer also does that same thing by crafting the recipe or model themselves carefully, using their imagination. What happens is that when the programmer has to do that, maybe the code is 10,000 lines or 100,000 lines, and they have to write every single line of code themselves. Maybe it's not one person who writes the whole thing, but at least human hands would have written every little part of it. Maybe you can think of it as Lego blocks, and then collections of Lego blocks that they have to put together to turn into this model, but at least that's all been written by humans.

Let's say a hundred thousand lines of code written by humans. Whereas when it comes to generating these recipes automatically, there's still a whole lot of code that needs to be written, but that's because the tools are ugly. You're fighting with tools that I'm sure in the future are going to get much easier, and then everyone will be able to create these machine learning recipes. All that code boils down to only two lines: which data do we use for finding patterns, and what does success look like? How are we going to score this system? What score is our minimum for releasing it? How well does it need to do? It's those two simple things, which data and what does it mean for it to work, that the human decision-maker who is in charge of creating the system comes up with. That is the form that the wish takes. That's the extent of the careful, intelligent human interaction with the design of this recipe. What am I asking for, and which data?
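Here is a minimal sketch of that idea, assuming scikit-learn and an invented toy dataset (none of this comes from the conversation itself). The two things the human really owns are the choice of data and the definition of success, with a statistical check against a minimum bar before launch; everything else is plumbing.

```python
# Illustrative sketch only, assuming scikit-learn; the dataset is invented.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Human choice 1: which data to find patterns in.
# Toy inputs: [room temperature, minutes since the user last adjusted anything].
X = [[68, 5], [70, 40], [72, 10], [73, 25], [75, 3], [77, 30], [79, 2], [84, 1]]
y = [0, 0, 0, 0, 1, 1, 1, 1]  # did the user want the cooling on?

# Human choice 2: what success looks like, and the minimum bar for launching.
MINIMUM_ACCURACY = 0.8

# Everything below is plumbing: the pattern-finding itself is automated.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The statistics step at the end: test carefully against the bar before release.
score = accuracy_score(y_test, model.predict(X_test))
print(f"test accuracy = {score:.2f}; launch? {score >= MINIMUM_ACCURACY}")
```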

You say that is the uniqueness of the human. That's what we bring to this table. It's the inspiration. There's something else I wanted to share with you and get your take on. I remember this line from Westworld. Talk about taking AI to its Sci-Fi extreme. I'll never forget it: one of the artificial intelligence gurus, who we find out is an AI machine, one of those characters you think is almost gaining some form of consciousness because they have such sentient behavior.

They said, "What is the difference between an AI and a human?" The "AI man" who responded said, "The one difference is the human cannot be replaced." I found that to be beautiful. I found that to run through almost everything you bring out. The human cannot be replaced. If it's great art that the machine is making, it's the human that inspired it to make that art. You even use the example of carrying a bucket. Buckets are better water carriers than human hands, but it's a human who made the bucket. I love the way you see AI, its future, and the human quality of it all.

What it really all boils down to is the idea that it's the human who decides what is important. To go back to the start of our conversation, the way we do that is by blending some subjective things, taste, judgment, individual qualities, with what we've collected from the environment, with our lived experience, with some data on which we might have done some analytics. We take those things together and we say, "Based on what I know, here's what I would say is important." Maybe you will see exactly the same things as I did, and you will say, "For me, something else is important." That's okay.

In fact, we can clearly see that we have done that because we're both extremely successful in two different kinds of jobs. We have at least found some things in life where we have different views of what is the most important way to spend our time. There is no one right answer. There's no perfect answer to the question of what is important. That's fundamentally subjective and individual. We might also answer questions like that on a social level, but different societies at different points in time will believe that different things are important.

Figuring out what's important, that is something human. We might use data to inform that, but in the end, our judgment cannot be automated. It's not objective, it's deeply subjective. The role that we take when we interact with any systems in any technology is we have to first say, as humans, "This is what's important. This is what I want this thing to do," and then we build the solution that does it.

Cassie, you add one more thing, and I want to use your exact words because I think this takes it to the next level. You say, "It's not only that, but it is through our action that we affect reality." It's not just what we program. It's not just what we think. It is how we act and how we behave that affects reality. You never let that leave your mind. In almost every lecture, every class you teach, I always see you concerned with what actions we are going to take.

Why worry about what's important if we don't do anything based on it? The way that we think about what's worth doing and what's valuable is how we prioritize our actions, our efforts, and our ability to interact with our world. With all of these tools, starting from the bucket that holds water, we have to first decide that it's important to bring the water to a particular place. Then we lift the bucket and we might move it with our hands. That is us moving the water, taking the action to move the water in a better way than just cupping it in our hands, but it's still us extending ourselves with the bucket.

Decision Intelligence: Figuring out what is important is something human. We might use data to inform that, but in the end, our judgment cannot be automated.

If we build a big, complicated technological system based on AI that moves a whole lot more water from one spot to another in a much more efficient way than we could by cupping our hands or by using a bucket, that is still a person having decided it is important to do this and acting on it. That individual or group of individuals said, "This number of tons of water needs to be moved from one place to another," and then moved it. They have changed their world. It is they who did it, not the pipes, not the bucket, not the AI system, but the people who chose to take that action. All of our technology extends us. It allows us to scale our actions.

Besides speaking about the importance of framing everything from the perspective of our actions and our decisions, I also like to remind people that, as we extend ourselves and scale ourselves with technology, as we enlarge ourselves, it becomes easier and easier to step on the people around us. We have an extreme responsibility to enlarge ourselves responsibly in a way that doesn't do damage. We have to learn to drive these larger and larger systems that we built because what they do is they take one individual or a small group of individuals and they extend, they inflate those people's ability to have an impact on their world. As we scale ourselves, we also have to put a lot more effort into doing that responsibly.

Cassie, I promised you that I would let you go at a certain time because I know what a busy woman you are with all those jobs that you do. I know you have to go. I want to thank you so much, but I do want to end with your words. When you say these words, I get goosebumps: "Make sense of your universe and do something useful with it." You have done that. You've shared that with me and my audience, and I am so grateful that you took the time to be here with us. Thank you so much, Cassie.

Thank you so much for having me, Barry. It was an honor and a pleasure to be here.

About Cassie Kozyrkov

I'm a data scientist and leader at Google with a mission to democratize Decision Intelligence and safe, reliable AI. I bring a unique combination of deep technical expertise, world-class public-speaking skills, analytics management experience, and the ability to lead organizational change. I've provided guidance on more than 100 projects and designed Google's analytics program, personally training over 20,000 Googlers in statistics, decision-making, and machine learning.
