Executive Interview: Sol Rashidi, Executive VP, Chief Data Officer, Sony Music Entertainment
Advice: Figure out What Is and Isn’t AI, Be Patient, Institutionalize What You Have Built to Gain Competitive Advantage
Sol Rashidi, a thought leader in the data, robotics, AI and IT space, doesn’t love the term “artificial intelligence”, preferring instead augmented or automated intelligence. She’s quick to point out the data scientists, engineers and human creativity behind the “artificial” solutions. But she has also been working with AI long enough to see a future past the hype. There’s no silver bullet, she warns, but the advantages are real and the number of companies solving real problems is growing.
Rashidi is currently executive VP and chief data officer, Sony Music Entertainment. She has been issued seven patents related to data requirements, data governance and IT management. Her past positions have included chief data and cognitive officer for Royal Caribbean, partner of data, analytics and AI at Ernst & Young, and member of the IBM team that first brought Watson to market. She has a bachelor’s degree in chemistry from the University of California, Berkeley and an MBA in Strategy and Leadership from Pepperdine University. She played on the water polo and rugby teams at Berkeley, and on the Women’s National Rugby Team for several years.
Rashidi spoke with AI Trends editor John Desmond about how company leadership is reacting to AI, the role of intellectual property in the space, ethical pitfalls, and GDPR. Their conversation has been edited for length and clarity.
AI Trends: For companies interested in pursuing AI to gain some advantage in their business or to keep up with their competition, what are the trends you see in data science and data analytics?
Sol Rashidi: I try to put my finger on the pulse of what I think the trends are, and in almost every company or organization I’ve worked with, where I’ve driven either the AI agenda or the cognitive services agenda, the answers have been completely different. I wish there was a silver bullet; I wish there were two or three things. I think what it comes down to is that everyone is looking for that competitive advantage to make sure they survive in this ever-changing world of ours. They don’t necessarily know what the answers are, but they’re willing to explore, and they’re willing to invest.
But I think what distinguishes one company from another is the internal culture, and whether it’s set up for and supports innovation. There is a difference between driving innovative agendas and truly being able to absorb the things that need to change: identifying the culture, mindset and organizational readiness elements that have to shift, and ensuring that whatever AI capabilities are introduced truly get operationalized and institutionalized. That’s something all companies are having an issue with right now as it relates to AI. While there’s a need for AI—100%—the answer is different for each company. It’s still tenuous whether or not companies are ready to absorb this. A lot of this is forward-thinking for a lot of industries.
Can you describe some of the range of maturity you see among companies with regard to AI and data governance?
The fintech industry is doing a really good job because it has always been forward-thinking. These companies may have an innovation center and a design lab, and they also have the internal structure so that when something comes out of a lab, they do a better job of institutionalizing it.
Pharmaceutical companies do a really good job as well. Other industries doing well are travel, hospitality, consumer products, and retail. It’s spilling over to media and entertainment as well. Certain industries are definitely leading the pack in how quickly they can operationalize the capabilities that they bring along.
Are business executives getting a handle on how to exploit AI to help their businesses?
They are getting better. The challenge that executives have—myself included—is there’s so much to research and study and understand, and there are so many sources of information that you just can’t quite put your finger on what AI really means, and how it’s changing the world. We view it as artificial intelligence, but time and time again I have said: “There’s nothing artificial about this; it’s still fingers to keyboard.” We’re still training, we’re still modeling, we’re still fine-tuning. And it’s just our brains, it’s our creativity, it’s our engineers, it’s our developers, it’s our data scientists that are really running these machines that we’re referring to when we say, “Oh, it’s an artificial solution.” I think at best there’s assisted intelligence. There’s augmented intelligence, automated intelligence. I just don’t think there’s anything artificial about it.
And, unfortunately, because all the sources are providing executives with this information, they’re almost creating this aura that if you do AI, your problems will be solved. That’s the furthest thing from the truth. It’s not a silver bullet. I think there is a disadvantage in that if, for example, you’re an executive in marketing, sales, or research and technology, your plate is so full trying to manage and maintain the strategy and the vision of the company that you just don’t have the bandwidth and capacity to dive in deep and understand what AI really means. So, by default, you have to depend on your sources.
But our sources are not doing anyone any justice because they’re painting this rosy picture and, unfortunately, it’s just creating a little bit of an unrealistic expectation of what AI does. For those of us who are actually in the space who have delivered and deployed [solutions], sometimes we have to do some level-setting and say, “Well, technically these concepts are true. In reality, this is what happens.”
How is the role of a data scientist different from what a software analyst does?
We often refer to analysts as individuals who are mining through the data to understand patterns of what’s taking shape and then producing operational reports: here were our sales, here was our revenue, here’s our forecast based on historical data. That’s what analysts have really done to date. Back in the day, you needed to know, at a minimum, SQL. If you were a statistician, you would know R; it depended on your language. But now we’ve built a lot of great tools that shortcut that process. You don’t need to know SQL; it’s a lot of drag-and-drop functionality. From my perspective, an analyst’s job is really to report on what has happened.
A data scientist, however, is a completely different animal. They know scripting, coding and data engineering, and their job is to predict something before it has even happened. Their job is to put completely unrelated pieces of information together to find out whether those things relate. Their job, essentially, is to find a way to react to something that’s never been reacted to before. In my view, data scientists deal with the unknown; analysts deal with the known.
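As a rough illustration of that split (not something Rashidi describes), the sketch below contrasts an analyst-style summary of what already happened with a data-scientist-style forecast of something that has not happened yet. The `sales` table and its columns are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical sales data; the column names are made up for illustration.
sales = pd.DataFrame({
    "region": ["east", "east", "west", "west"],
    "price": [9.99, 12.49, 9.99, 14.99],
    "units_sold": [120, 90, 150, 60],
})

# Analyst-style question: what happened? (reporting on known, historical data)
revenue_by_region = (
    sales.assign(revenue=sales["price"] * sales["units_sold"])
         .groupby("region")["revenue"]
         .sum()
)
print(revenue_by_region)

# Data-scientist-style question: what might happen? (predicting an unknown)
model = LinearRegression().fit(sales[["price"]], sales["units_sold"])
forecast = model.predict(pd.DataFrame({"price": [11.49]}))  # a price never observed
print(forecast)
```

The point of the contrast is the direction of the question: the first query only aggregates recorded history, while the second fits a model so it can answer for an input that never appeared in the data.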
I think that is a hard concept for people to grasp, especially within our industry. “Data scientist” is the buzzword title of the year, and analysts who are not data scientists are saying, “I’m a data scientist.” Organizations are naming a head of data science because they feel that if they don’t, they’re not ahead of the curve. But people really have to know the difference between the two, because it discredits those who really put in the time and effort to understand the language, the coding and the engineering needed to do this work.
How important is the intellectual property being developed around AI? Are businesses willing to share their experiences around AI and business strategy?
Not yet. Not because they don’t want to share; it’s that they haven’t figured it out just yet themselves. You have your main players in the AI space, who can have anywhere from 2,000 to 3,000 engineers whose sole job is to develop IP. But for those outside the Amazons and Googles of the world, the entire open-source ecosystem, whether that’s TensorFlow or SageMaker or whatever toolkit is available out there, has democratized AI. But how corporations and organizations are harnessing what’s available, and how they’re operationalizing and institutionalizing it, is TBD. It’s still a work in progress.
If they have figured it out, it makes no sense to share it because they lose competitive advantage. And if they haven’t figured it out, they still don’t want to share that because they don’t want to let anyone know that they are still figuring it out. So I don’t think companies are there yet. They have very good reasons to stay hush-hush about what they’ve done or what’s in flight right now.
We have seen a few groups that have been quite boisterous in the marketplace, saying, “Look at all the amazing things we’re doing!” But when you look under the hood, you might not categorize that as AI or really innovation because a dozen other companies have done that before. They may not have marketed it. I think it depends on your PR strategy, quite frankly.
So the companies with an aggressive PR strategy might be willing to get the story out more?
Absolutely. Because you want to attract top talent. You want to create that demeanor, that brand in the marketplace, that leading-edge reputation. It benefits the investors. It benefits the employees. There is a ton of benefit to it. But to what degree you are actually doing it is a different story. Sometimes in the marketplace, perception does become reality. So one strategy a company can have is to be absolutely bullish in the marketplace about the innovative things it’s doing.
On the topic of data privacy, how does GDPR—the General Data Protection Regulation of the European Union—and new privacy laws and legislation impact companies’ ability to incorporate AI into the business strategy? Is it difficult to get the required data?
Very much so. Ten years ago it was a non-issue. We thought we had the volume then to do any of the deep learning techniques, but that wasn’t volume. Today’s information and the amount of data ingestion we can do—that is volume. However, because of GDPR, the type of data we can access, collect and analyze has become highly scrutinized.
In this day and age, where a ton of data is available to us, sometimes it’s not actually the data that matters. I always go back to consumers. And, by the way, I wish GDPR wasn’t a part of my job or something I had to deal with. My friendly expansion of the GDPR acronym is “Gosh Darn Pain in my Rear,” because I have to know the legislation and the laws to understand what we can and cannot do.
Ten years ago, we were dealing with cookies and website recommendation engines. Five years ago, it was a lot easier to collect consumer data. Our only issue was being able to understand the behaviors; that’s what everyone was after. Understanding consumer behavior better gives a business better targeting, upsell and cross-sell opportunities. But now, with all the privacy terms and conditions, consumers have to opt in. And in Germany and Austria, consumers have to double opt in. If they don’t, you can’t use their information.
Today in order to analyze data, you need to answer: One, can we actually collect the data? Two, how do we have to store it to be compliant? Three, what words and verbiage do we have to use to ensure that when consent is given, it’s the right consent so that we can use it? Then you can get to the analysis part. And there is a fifth step. You can only use the analysis for certain means and measures. You can’t just do it for everything. So what sounds simple in nature is not anymore. It’s really, really difficult.
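As a rough, hypothetical sketch of those gates rather than an actual compliance implementation, the guard below only allows analysis when the data was lawfully collected, stored in an allowed region, and covered by consent for the specific purpose at hand; the record fields, region check and purpose names are assumptions, not anything GDPR or Sony Music prescribes.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """Hypothetical consent record; fields are assumptions for illustration."""
    lawfully_collected: bool      # step 1: could the data be collected at all?
    storage_region: str           # step 2: is it stored somewhere compliant?
    consented_purposes: set       # step 3: what did the consent wording cover?

def may_analyze(record: ConsentRecord, purpose: str) -> bool:
    # steps 4-5: analysis is allowed only for the purposes the consumer agreed to
    return (
        record.lawfully_collected
        and record.storage_region in {"EU", "EEA"}
        and purpose in record.consented_purposes
    )

record = ConsentRecord(True, "EU", {"personalized_recommendations"})
print(may_analyze(record, "personalized_recommendations"))  # True
print(may_analyze(record, "ad_targeting"))                   # False: purpose not consented
```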
Are you worried at all about AI getting out of control in any area such as maybe ethics?
Yes. Everyone has a different moral compass and not everyone is built the same. Not all companies are built the same. Not all countries are built the same. And just like in human nature, there are good people and there are bad people. There are good-natured people and not so good-natured people; that exists everywhere. Because the technology has become democratized and has become available, there will be ethical dilemmas. I always say, just because something’s cool doesn’t mean we should do it. You’ve got to have the balance of cool, but not creepy. As soon as something becomes creepy and you feel it—like it’s a gut reaction—don’t do it. It’s not right. Everyone is totally different on where they draw that line in the sand of what’s creepy versus what’s cool.
Here is an example. When chatbots first came out, people were essentially asking questions and getting a response. For companies, chatbots saved the time and resources needed to answer basic questions. However, not everyone uses chatbots for that reason. Companies have the option of listening in to a chat, whether it’s one Facebook Messenger chat to another Facebook Messenger chat, one Skype chat to another Skype chat, or an interaction with a chatbot on a website. Those are words. In those words are letters, and all of that is tracked. So, cool would be, “Let’s offer a platform that gives people a fast, easy and cost-effective means to communicate.” Creepy is, “Let’s listen in on a conversation and find out who’s talking about what, so that we can do something about it.” Depending on where your moral compass lies, I do think you can misuse the stuff that is available out there.
Any advice to readers for overcoming obstacles in working with AI?
First, be patient and don’t expect that everything you build is going to be a success. Everyone has a different maturity level or understanding of what AI really is. Patience is key. You can’t expect everyone to get it. Second, you may think you’ve built the most amazing thing, and you probably have, but that doesn’t mean it’s going to go anywhere. Expect that half your projects, in my opinion, will not become institutionalized. If you can get 50% out there, you’re doing a really good job. And third, sometimes you have to level-set expectations. Everyone has a completely different answer and a totally different definition of what AI is and what machine learning is. We need a lot of education about AI as well as its application.
A good mentor of mine said, “You know, in this space you need a backbone and not a wishbone.” I thought that was really funny.
Good one. How well do you think the software and services industry is supporting AI right now? Are you getting what you need from the vendor community?
I see lots of pluses and minuses. The minuses are creating unrealistic expectations and building the hype around artificial intelligence. A line of code around predictive modeling doesn’t mean your solution is AI, at least not in my book. I think it takes many, many more components than that. And I think the industry has been diluted as a result. Now everyone has an AI solution and I would really challenge that. I do think vendors, in order to capitalize on the hype that’s occurred today, will add a piece to the code and rebrand something as AI. I would really question whether it really is.
On the other hand, the services firms and certain software companies have done a really good job of investing in the area and saying, “Listen, we [understand it] as well as we can right now. As close as the market and our understanding will allow us, we’re there.” They may help us be forward-leaning on some concepts we want to explore; they may have learned some lessons. I do think there are some phenomenal software and services companies that have done it more times than others, and they’re really helping companies propel forward.
What do you see the future holding in this area of AI, data science and business?
There is a ton of room to grow and a lot to be learned. We have only scratched the surface. All the hype will die down; everything goes through its cycle. This is going to have a two- or three-year cycle just like everything else, and then we’ll be on to the next thing. Companies will be figuring out what is and isn’t AI and how to institutionalize what they’ve built to actually gain a competitive advantage. We’re not done yet. This is not a mature space whatsoever.
Regarding the AI workforce, are you able to find the people you need in the market? Do you have any advice for young people who are interested in getting into AI? What should they study and what kind of work should they try to do?
It’s very difficult to find the AI talent, no doubt about it. The geeks are the new rock stars of today’s age. Applied mathematics, computer science and systems engineering are having their moment to shine. Every college, whether it’s a community college, a university, Ivy League or not, has courses and majors specifically focused on data analytics and data science. If that’s a space you want to get into, that’s the pure foundation. Pure, pure foundation. If you want to be a business executive, you need to take economics; you need to know how numbers work. This is the same way. If you want to get into the AI and cognitive space, you need to understand how the models work, what the toolkits are, and what’s used for what. How do you detect something that’s an anomaly versus something that’s the norm? If you combine all that with some aptitude and muscle, you’re good to go.
Learn more at Sol Rashidi.
from AI Trends https://ift.tt/2SXpH5k