The Art and Science of Spotify’s Localization

With Jennifer Vela Valido, Spotify’s Localization Quality Program Manager


Below is an automated transcript of this episode

Antoine Rey (Host) 00:17
Hi everyone, my name is Antoine Rey and I’ll be your host today for this Global Ambitions podcast episode. My guest today is Jennifer Vela Valido, who’s a localization quality program manager at Spotify. Jennifer, welcome to the program.

Jennifer Vela Valido (Guest) 00:32
Thank you. Thank you for inviting me.

Antoine Rey 00:35
So today we’re going to be talking with you, Jennifer, about how you adapted and evolved a quality program in an enterprise localization situation, and we’re going to be looking at it both from a process and from a tech stack perspective. So the first question for me would be to understand how it was at Spotify “before Jennifer”. Because you joined them two years ago, so maybe you can explain a little bit what you inherited when you arrived, why you were hired, and your vision for what you’re doing for them.

Jennifer Vela Valido 01:09
Yes, so two years ago, Spotify was going through very, very rapid growth. They had been adding several languages and, with the really high number of languages they had at that moment (I think we were at 60, 64 languages when I joined), all these new languages, new flavors, we realized that we needed to find a way to have a quality program that was as scalable as possible, one that would allow us to manage all of these different languages. Quality and tone of voice have always been very important for Spotify. We are very proud of having a unique tone of voice. We really want people to feel that Spotify is speaking their language, their local language, so it’s even more important for us to make sure that we are getting that tone of voice right in every single language.

02:01
So “before Jennifer”, we had a setup in which everything was outsourced: both production and quality evaluation. So we had two vendors doing these two tasks and they would communicate with each other, but because everything was completely outsourced, there was no strategy; it was very difficult for Spotify to make sure that both vendors were following the direction that we wanted to follow, because there wasn’t anybody in between. So that was the reason why it was decided to have a localization quality program manager, which is me, and that’s the person who communicates with both vendors to explain our expectations in terms of quality and to be the middle woman, in this case, between our stakeholders, our local market reviewers, our business and our vendors, to make sure that everybody understands what the others need and expect.

Antoine Rey 02:52
So who defines the tone of voice or the style that you adopt? Who do you work with? Is it your marketing team, or how does that work?

Jennifer Vela Valido 02:59
So this is very interesting because the style guides were created several years ago. Right? At that point we didn’t have this fluid communication with the markets. So at the very beginning, the style guides were created by the quality vendors, so by language specialists, with their knowledge, what they knew at that point, and that was a great exercise to really get a very good style guide. But then the next step was to make sure that these style guides also followed our markets and our marketing teams.

That’s actually the other thing that is a challenge, because now we enter into different opinions and different preferences. So here communication is key and alignment is key, because in many, many cases it’s not about what is right or what is wrong, but more about what our local marketers want to have in that particular market, how they envision the tone of voice in that particular market. Do they want it to be more bold, more inclusive, funnier? So that’s the part where, precisely, we need very close alignment and collaboration between language specialists on one side and our internal teams and stakeholders on the other.

Antoine Rey 04:05
So is there a kind of guideline from the mother company, somehow, and then each local country might follow the guideline but also have a local flavor, adapting what the tone of voice should be in that country? It might not be acceptable to have the same tone of voice in the US as in Korea, for instance, right?

Jennifer Vela Valido 04:27
And it shouldn’t be, because that’s the thing. That’s where we, as the localization experts, have these conversations with the different local markets to discuss whether it makes sense to have the same tone of voice in different markets. How do we want to adapt it? And I think it’s the beauty of each of us having different areas of expertise. We do have user researchers in our teams, we do have the local markets, we have the language specialists, so we are able to have these holistic conversations on what the language strategy is for this country, for this market, taking into consideration the experience, the tone of voice, the brand, the goals of the business.

Antoine Rey 05:03
So on the team you manage, you’ve got people really working that aspect with the local marketers and linking back to the central organization.

Jennifer Vela Valido 05:13
Exactly, so the quality team, which is one of the vendors that we use, includes the reviewers, the language specialists and the program managers, and this team is the one that, with my help, is communicating with the production team, because, of course, everything that is learned by the quality team needs to be shared with the production team, with the translators, but also with the local market reviewers, with the marketing teams, etc.

Antoine Rey 05:39
You’re going to have a lot of envious people listening to this podcast saying, how do you get a budget for these people internally?

Jennifer Vela Valido 05:46
I think the good thing here is that we do a lot of collaboration. So the budget, of course, is our localization budget. That’s what we’re using for everything done by our vendor, and then the rest of the internal team do other tasks, right? So they do marketing, they do local market review, but they are dedicating part of their time to these alignment goals and to these conversations. So it’s a case of: we’re all in the same boat, we all want exactly the same thing, and we all dedicate part of our time to discuss these matters.

Antoine Rey 06:17
Okay, well, that’s great, but it also shows the drive of Spotify to have high quality and a tone that is specific to them in every country, one that is recognizable, I guess, and that’s important to them. Therefore, they make the investment, which is fantastic in this case. So, beyond style guides and tone of voice, did you establish other metrics? What did you change? Are there any specific metrics that you’re tracking that are important to Spotify in general and that could be useful for other listeners to understand?

Jennifer Vela Valido 06:47
Yes, and actually that was one of the most interesting challenges that I had when I started, because we didn’t have metrics. We did have, of course, scorecards, so we had visibility over the quality of each language because, with our vendor, we were receiving quality scores per language, so we could know the objective quality in terms of the number of issues, the number of mistakes that we had per language. But we do know that, in quality management, that’s just one metric. If we only use that metric, we are falling behind what quality means as a whole. Right? So as I entered the program, what I did was develop a quality management dashboard with more metrics. So we have the objective quality.

07:26
That’s very important to understand what quality our vendor is giving us. But then to that we are adding other metrics that are related to the perception of our users. How are the users enjoying our apps? What do they say about our language? So we do have mechanisms to ask users about their experience in a particular language and we’re collecting that feedback. We’re also collecting feedback from our own native speakers inside Spotify, again the marketing teams and local market reviewers. We are asking them to rate the quality: what is their perception of the quality that they’re receiving? So we are comparing objective quality with perceived quality from the stakeholders and from the users, and all of that is giving us a much better idea of what is working, what is not working and why.
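The objective-versus-perceived comparison described here could be sketched as follows. All language codes, scores and thresholds below are hypothetical, for illustration only; a real dashboard would pull these values from the vendor’s LQA reports and in-app surveys.

```python
# Hypothetical sketch: flag languages where the vendor's objective LQA score
# and the users' perceived-quality rating disagree. All numbers are invented.

objective_quality = {"de": 95.0, "ja": 88.0, "pt-BR": 97.0}   # vendor LQA scores (0-100)
perceived_quality = {"de": 4.6, "ja": 4.5, "pt-BR": 3.2}      # average survey rating (1-5)

def gap_report(objective, perceived, threshold=1.0):
    """Flag languages where objective and perceived quality diverge.

    Normalizes the 0-100 objective score onto the 1-5 survey scale and
    returns (language, gap) pairs whose absolute gap meets the threshold.
    """
    flagged = []
    for lang in objective:
        obj_norm = objective[lang] / 20.0          # map 0-100 onto the 1-5 scale
        gap = round(obj_norm - perceived[lang], 2)
        if abs(gap) >= threshold:
            flagged.append((lang, gap))
    return flagged

# pt-BR scores high on the vendor scorecard but low with users: worth investigating.
print(gap_report(objective_quality, perceived_quality))
```

A language flagged here is exactly the case Jennifer describes: the scorecard alone would say everything is fine, while user perception says otherwise.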

Antoine Rey 08:13
And, for the users, you’re running surveys, or how does that work?

Jennifer Vela Valido 08:16
It’s different ways. We do have a kind of survey in the app, but also from time to time we conduct audits. We do these audits with UX researchers, when we have a particular use case: we want to check something in a particular market, we want to get more specific data, we want to see what the localized UX experience is like. And for that we get much more information from users, yes.

Antoine Rey 08:44
Okay, and that’s interesting, I’d say, because in the end, when trying to sell that to executive teams, it’s not always easy to say to them, “oh, because we have great quality, we’re going to get more users or retention”. But if you can show both retention and acquisition of users and correlate that with the style and the tone of voice that is specific to Spotify, then you’re on to a winner and you’re creating value as opposed to being a cost.

Jennifer Vela Valido 09:14
Exactly, and even though, as you rightly mentioned, retention, acquisition and revenue are not direct metrics for localization, they are very related. So that’s a really good point. We do also look at retention and acquisition and, in our particular case, what we’re always looking at is our monthly active users, our MAUs, to see precisely in which markets we’re growing more and in which markets maybe we’re not growing so much, and then to understand where we should focus our efforts. So, yes, that’s a metric that also influences our quality management strategy. We want to focus our attention on those markets where we see that we have more users, but also on the ones where we want to grow more. That also means that we want to be more aware of how the quality is in those markets.

Antoine Rey 09:59
And from a tech stack perspective, if we flip to that side of the equation, what are you using? And, of course, the question that everybody has on their lips is: what about the inclusion of AI in your process?

Jennifer Vela Valido 10:12
That’s a very good question. I think everybody’s experimenting with AI. I think that, particularly for quality evaluation and quality management, there is huge potential, and a lot of companies are doing a lot of experiments on how we can incorporate it. We know that we have quality estimation. We know that a couple of TMSs are incorporating automated quality evaluation using AI. So I think we are now at a phase in which we are experimenting.

10:38
We are just trying to understand how accurate the results we get with automated evaluations or quality estimation are, and how much we can trust them, because, at least in the first experiments that I’ve been doing, AI has this tendency to produce results that might not be accurate or reliable, and it’s difficult to explain why. What I mean is that in some cases, for quality evaluation or quality estimation, the score that the AI gives is not correct, and we don’t know why, and that’s the problem with AI. It’s very difficult to fine-tune the logic that is not working when you don’t know why it’s giving you that result. So it’s very exciting, but at the moment, for our use case, AI quality evaluation and estimation is not better than human evaluation. It’s cheaper and faster, but it’s not better, and we’re trying to understand how we can use it, maybe for a particular task, not for the whole process, but maybe there are some particular tasks where it allows us to do more in less time while keeping our level of accuracy.

Antoine Rey 11:45
Okay, so it’s honing in on what those tasks might be and then prompting around it, I guess. And then, without naming any names, are you using the TMS’s own quality evaluation tool, or are you also looking at external specialized quality evaluation tools, or both?

Jennifer Vela Valido 12:04
So for the moment, we are with a TMS that has a feature, an LQA dashboard feature, that follows MQM.

12:12
So for us, yes, we do believe in using MQM as a framework for quality evaluation. We are seeing how much we can do with that, and we are also waiting to see what our own provider, our own team, is providing, how they are incorporating AI into the tool. So, yes, we’re trying to see what is going to happen and when, and how fast we can try that too. But we have also been looking at others. We’re just looking everywhere, I think, like everybody is, to try to understand which solution is going to be best for us. But I think, generally, quality management is one of the processes that is more manual, and one of the reasons why we always struggle to really do quality evaluation and assessment is because it takes a lot of time and a lot of money. So definitely it’s a field where any kind of automation is going to be very beneficial.
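The MQM-style scoring mentioned here can be sketched in a few lines: errors are logged against a typology with severity weights, and a score is derived from the weighted penalty per word. The severity weights and the sample error log below are hypothetical; real MQM programs calibrate both per content type.

```python
# Hypothetical sketch of MQM-style scoring: sum severity-weighted error
# penalties and normalize by word count. Weights and errors are invented.

SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 25}

def mqm_score(errors, word_count):
    """Return a 0-100 quality score from a list of (category, severity) errors.

    penalty = sum of severity weights; score = 100 * (1 - penalty/words),
    clamped at 0 so a pathological sample cannot go negative.
    """
    penalty = sum(SEVERITY_WEIGHTS[severity] for _category, severity in errors)
    return max(0.0, 100.0 * (1 - penalty / word_count))

# A 500-word sample with one major accuracy error and two minor style errors.
errors = [("accuracy", "major"), ("style", "minor"), ("style", "minor")]
print(round(mqm_score(errors, word_count=500), 1))
```

A per-language scorecard of the kind described earlier in the episode is then just this score computed over each language’s evaluated samples.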


Antoine Rey 13:03
Well, hopefully, by the time this podcast is published, we’ll have a lot more answers, because we’re going to be spending the next three days at LocWorld in Dublin, and Jennifer, you’ll be there, so hopefully we’ll talk more about this: quality programs and all the tech stack and AI that we can put in there. Okay, well, thanks very much for joining our show and we’ll see you this week.

Jennifer Vela Valido 13:25
Thank you. 


Jennifer Vela Valido

Localization Quality Program Manager

https://www.linkedin.com/in/jennifervela/
