
A Conversation on the (Current) Power of AI



Award-winning journalist and speaker Adam Davidson has a unique talent for explaining the complicated interplay between business, technology, and economics. What better person to ask about one of today’s biggest technological developments: artificial intelligence? We spoke to Davidson about the impact AI is having, and can have, on various industries, organizations, and stakeholders.

PRHSB: You recently took a deep dive into AI for a Freakonomics Radio podcast series. Going into your research for it, were you team “AI will kill us all!” or team “AI will save the world!”?

Adam Davidson: Like most of us, I have found myself a bit overwhelmed by all the coverage of AI. It seemed so extreme, as though there were no middle ground: AI is either all wonderful or all terrible. That can’t really be true; it has to be more complex. I assumed a lot of folks felt the way I did: We want to know how to think about AI without first having to make a big decision about which side of some massive argument we stand on.

Sure enough, I quickly learned that AI is fascinating, maddening, thrilling, scary, wonderful. It can also be boring, frustrating, sometimes useful, sometimes a waste of time. But it cannot be ignored. AI is here. It is going to change things. It’s crucial for most people to understand how and why.

PRHSB: Let’s break down how the technology can impact organizations and individuals across sectors. First off, college students. What does AI mean for them?

Adam Davidson: I say only half-jokingly that the first bit of advice [I give] to college students is to not listen to people, like me, who are over 50. You live in a different world than we did. You will see a near-total transformation of the way companies work, the way individuals work within them. You will have to carve a new path.

I think that AI is going to be really good for a specific set of young people—people who have passion, curiosity, a bit of self-direction.

For older generations, it took years—sometimes more than a decade or two—to become proficient enough in the basic skills of a profession to be able to really make your own mark. But with AI, many of those basic skills—like writing, coding, analyzing—are, essentially, free. You have access to them the very first day you think you might want to have access to them. So you are in a position to make your mark much earlier. In your 20s, you will be able to differentiate yourself not just based on your development of skill-based competency… [but also] on your ideas, the problems you want to solve, the ways you put unlikely things together.

This will bring new risks. We actually have no idea what a world looks like when many major activities no longer require a training/apprenticeship period. We have no models for it. Young people will need to largely figure this one out on their own. There will be a handful of older folks who at least understand that this is a new thing happening (I flatter myself that I’m one of ’em), but many bosses and older colleagues and parents will simply see this new way of working as lazy, inept, bad. So, in addition to having to carve out a new kind of career, young people will have to be increasingly discerning about what advice and guidance they take from older folks.

My strong feeling: Young people will figure this out. They always do.

PRHSB: How about an entrepreneur working on starting a business? Can AI offer a competitive advantage?

Adam Davidson: The big challenge is that AI offers advantage to everybody. It is now the floor, the assumption. So, I don’t think AI per se offers advantage; AI can supercharge other advantages. If you have a fresh solution to a real problem, AI will allow you to go far more quickly from initial idea to an actual product or other solution.

In that sense, I think the fundamentals of entrepreneurship are even more important than ever. It is essential to come up with new ways to solve real problems; to zero in on exactly who most suffers from those problems. And to deeply understand your customers and continuously adjust your solutions to make sure they are maximally impacting the folks you target.

This is great news for some: people who love rapid response and are energized by constant innovation. And it’s terrible news for folks who prefer a slow, steady, easily predicted path.

PRHSB: How about the CEO and senior leadership of a company? What does AI mean for them (and their employees)?

Adam Davidson: I think senior leaders face (at least) two types of challenges with AI:

1. A strategic challenge. How do they build an organization that can continue to thrive with the near-constant disruption of advancing AI technology? How do they embed the kind of nimble responsiveness that is required?

2. A leadership challenge. How do you run an organization in which every single employee feels increasingly threatened by disruption?

These two are in clear conflict. To succeed strategically, business leaders need to be open to the fact that nearly every position in their company could be replaced or dramatically changed because of AI. No job—including CEO!—is safe.

Yet to succeed, an organization needs some structural integrity; it needs staff who feel invested in its future. I do think this will lead to a rethinking and reshaping of the relationship between employer and employee. Probably the closest model I’ve seen is the book The Alliance by Reid Hoffman, Ben Casnocha, and Chris Yeh. It lays out a different sort of arrangement, in which employees engage in short-term (think: two-year) stints with an employer. It’s clear what they will do and what they will learn in that period. At the end, both parties understand that the relationship will continue only if it is mutually beneficial.

As with everything, this new type of organization will be great news for some and horrible for others. It’s great for the passionate, curious, nimble, novelty-seeking. It’s terrible for folks who want a long-term, predictable, steady career.

PRHSB: Responsible business leaders introducing AI-powered solutions into their companies also need to consider the ethics surrounding this new technology. How can we regulate AI, and who should regulate AI?

Adam Davidson: Most of the big, impactful decisions are made at the very beginning, when an AI model is being trained. It is crucial to remember that AI does nothing without human beings. It is designed by humans. It is trained on data that humans have selected. It is optimized by humans.

Once a model is fully operational, like, say, GPT-4, it is far too late to build in safety. While I love the idea of thoughtful regulation in principle, I have not heard a good argument for how, precisely, AI should or could be regulated. I am skeptical that government laws will be nimble enough to respond to rapidly changing technology. I’m skeptical that AI companies will be aggressive about self-regulation. I think, as with much of corporate responsibility, the pressure needs to come from customers and from employees. Everyone, or at least a large number of people, needs to let their voice be heard. They need to make it painful to ignore ethical lapses.

PRHSB: After your research on AI, did your opinion on it change?

Adam Davidson: I think the side I’m on is this one: AI’s role in our society is up to us. AI is not (yet!) an independent force that decides what it does, how it’s trained, how it’s used. It is a human invention, designed by people, used by people. We people have every ability to make sure it minimally damages our society and way of life. We have to all take on that responsibility. It’s up to us.

Contact us about booking Adam Davidson for your next event.