Machine Learning at Dell

With Georg Kirchner, Globalization Technology Manager at Dell


Below is a full transcript of this episode.

Antoine Rey
I’m Antoine Rey and I will be your host today for this Global Ambition podcast episode. And my guest today is Georg Kirchner, who is the MT Program Manager at Dell. And the topic we’re going to be talking about is machine learning. Georg, welcome to the program. 

Georg Kirchner
Thanks, Antoine. Thanks for having me. 

Antoine Rey
Do you want to briefly tell us about your role with Dell at the moment? 

Georg Kirchner
Sure. So I’ve been with Dell since 2013; I came to Dell through the acquisition of EMC. In the last couple of years I’ve really focused on machine translation. Previously, at EMC, I was managing technology, setting up a translation management system, and really got into the weeds around integrating the supply chain and setting up a framework by which we were able to capture interesting KPIs around machine translation. And before that, I spent 17 years on the production side: I started off as a staff translator, was a project manager and then a PMO. So I came to this role of technology manager with pretty well-rounded experience from the business side. And I find that looking after machine translation now, as one manifestation of machine learning at Dell, is really a logical progression in my career.

Antoine Rey
Interesting progression towards, like you said, our topic today, which is machine learning. So let me dive directly into some of the questions we want to address today for our audience. I’m interested to find out what data you use at Dell for your machine learning program and for what purpose?

Georg Kirchner
Let me focus on machine translation, because this is really where things become most real. As you know, machine translation is a good proxy for machine learning, because the principles of machine learning are so widely applicable across the various use cases. That said, what do we use in terms of data? We do have translations, and we produce a lot of translations; they’re captured in the form of TMX files. We have an extensive glossary as well, captured in its own format. The data that we would want to have to back up a program is post-edit distance, which captures the changes the translator makes to correct the machine translation output. We’re even interested in the raw MT output. And to understand post-editing patterns, we want to look at the segment-level audit trail, so we understand which kinds of changes the linguists make along the way. Lastly, I would say, given that the quality of the output is so much dependent on the quality of the input, we’re also interested in getting source-language quality scores.

Antoine Rey
And what do you use the data for? What purpose? 

Georg Kirchner
Well, let’s start with the TMX files. We use those to customize our models, so we don’t use the generic models for all the use cases. We do want to adjust the output to adhere to our terminology and to our style. So model customization is one use. The terminology we use to control the translation of branded items. Just think of a company name: Dell Technologies tends to get transliterated when you go into Japanese. We want to preserve the rendering of a company name, even in Japanese or other locales.
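
Georg doesn’t spell out the mechanics here, but one common way to keep branded items intact through an MT engine is to mask them before translation and restore them afterwards. A minimal sketch in Python; the term list and function names are illustrative, not Dell’s actual setup:

```python
# Hypothetical do-not-translate list; the real terminology data is not public.
PROTECTED_TERMS = ["Dell Technologies", "PowerEdge", "Latitude"]

def mask_terms(source: str) -> tuple[str, dict[str, str]]:
    """Replace protected terms with placeholders the MT engine passes through."""
    mapping = {}
    for i, term in enumerate(PROTECTED_TERMS):
        if term in source:
            placeholder = f"__TERM{i}__"
            source = source.replace(term, placeholder)
            mapping[placeholder] = term
    return source, mapping

def unmask_terms(translation: str, mapping: dict[str, str]) -> str:
    """Restore the original branded terms after machine translation."""
    for placeholder, term in mapping.items():
        translation = translation.replace(placeholder, term)
    return translation

masked, mapping = mask_terms("Dell Technologies ships PowerEdge servers.")
# masked == "__TERM0__ ships __TERM1__ servers."
# restored = unmask_terms(mt_output, mapping) after the MT call
```

Many commercial MT APIs expose glossary or do-not-translate features that achieve the same effect server-side; the placeholder trick just makes the principle visible.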

The other data point is post-edit distance, which basically is a way of measuring post-editor effort in characters changed. So the post-edit distance is a proxy for MT output quality, and we want to capture that. We also want to have it to triage feedback. You can imagine, if you own an MT model and you ask someone to give you a discount based on the output from that model, you kind of have to defend the quality of that model. And so you often will get feedback that goes along the lines of: the output of that model is really poor. Now, what those translators will do, because they only have so much time, is mostly comment on what is not working; they will not tell you how great the MT model is in all the other situations. So in order to really triage feedback from the linguists, you need to have context.

And so having post-edit distance at scale provides a context where you can say: oh, this is really a systemic issue, or this is a one-off, it may be caused by the source. So we’re talking about quality data here.
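
To make “effort in characters changed” concrete: post-edit distance is typically computed as a character-level Levenshtein distance between the raw MT output and the post-edited segment, normalized by segment length. A minimal sketch of that standard metric; Dell’s platform may well use a different exact formula:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of character insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def post_edit_distance(raw_mt: str, post_edited: str) -> float:
    """Character edit distance normalized by length; 0.0 means left untouched."""
    if not raw_mt and not post_edited:
        return 0.0
    return levenshtein(raw_mt, post_edited) / max(len(raw_mt), len(post_edited))

print(post_edit_distance("The servor is ready.", "The server is ready."))  # 0.05
```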

Antoine Rey
We’ve experienced ourselves that machine learning is only really going to learn if you input quality data, as opposed to a vast amount of poor-quality information. So what are the challenges you face in getting quality data?

Georg Kirchner
Well, you mentioned the source quality somewhere along the line. So this is where it really starts. Not all of our authors are professional authors. You know, we have a pretty large area where translatables are being created by what you could call citizen authors. OK. 

Antoine Rey
Right. 

Georg Kirchner
And you need to translate that, and the quality of that source may not be great. Now, what does the translator do? Do they always have the time to turn around and say, hey, could you improve on the source? No. Instead, what they will do is over-translate to compensate for the deficiencies in the source. Then you have good translations of a poor source, which makes basically poor parallel data. So all of a sudden you end up with training data that’s not great for training a machine learning model: the source and the target do not match.
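
Georg doesn’t say how such mismatched pairs get filtered, but a common first-pass heuristic in MT data cleaning is a length-ratio check, since over-translated targets tend to be much longer than their source. A rough sketch with invented example pairs:

```python
def plausible_pair(source: str, target: str,
                   min_ratio: float = 0.5, max_ratio: float = 2.0) -> bool:
    """Crude filter: drop pairs whose target length diverges too far from the source."""
    if not source.strip() or not target.strip():
        return False
    ratio = len(target) / len(source)
    return min_ratio <= ratio <= max_ratio

pairs = [
    ("Reboot the server.", "Redémarrez le serveur."),              # kept
    ("Err 42", "Une erreur inattendue s'est produite ; veuillez "
               "contacter le support technique."),                 # dropped: over-translated
]
clean = [(s, t) for s, t in pairs if plausible_pair(s, t)]
```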

Also, we are a big organization, which shouldn’t come as a surprise. We have used translation management systems for a long time, and you can imagine, with many, many different programs, we are faced with a legacy data structure which is pretty complex at this point. We have translations in many repositories, which are translation memories. So we need to think about how we can consolidate the data so we can get to it more easily. I mean, think of translation memories structured by product versus by quality tier. We have many products, but how many quality tiers do we have? Maybe three or four. So going from hundreds of TMs to, let’s say, four would really help an MT admin like me to process that data.
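
The consolidation he describes, going from per-product translation memories to a handful of quality tiers, could look roughly like this; the TM names and tier labels are invented for illustration:

```python
from collections import defaultdict

# Hypothetical mapping from per-product TMs to a few quality tiers.
TM_TIER = {
    "tm_poweredge_docs": "premium",
    "tm_latitude_docs": "premium",
    "tm_kb_articles": "standard",
    "tm_community_forum": "bulk",
}

def consolidate(tms: dict[str, list[tuple[str, str]]]) -> dict[str, list[tuple[str, str]]]:
    """Merge hundreds of product-structured TMs into a few tier-structured ones."""
    tiers = defaultdict(list)
    for name, segments in tms.items():
        tiers[TM_TIER.get(name, "bulk")].extend(segments)  # unknown TMs default to bulk
    return dict(tiers)
```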

Antoine Rey
So would you discard some of the data based on a number of criteria, whether that data is too old, for instance, or on the quality or the type of content that you’re getting? Would you exclude it from the model?

Georg Kirchner
Yeah, absolutely. As you can imagine, terminology changes all the time. With Dell and EMC coming together, Dell buying EMC, our terminologists have to make choices around which term we go forward with. So there are changes within the linguistic data around terminology, and you would want to cut out the outdated data. That’s in fact what we are doing when we train models. The simplest way of pruning the data is going by timestamp.
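
Pruning by timestamp is straightforward when the data sits in TMX, since translation units carry creationdate/changedate attributes in the form YYYYMMDDThhmmssZ. A minimal sketch, assuming a namespace-free TMX file and an arbitrary cutoff date:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

CUTOFF = datetime(2019, 1, 1, tzinfo=timezone.utc)  # hypothetical cutoff

def prune_tmx(path: str, out_path: str) -> None:
    """Drop translation units last changed before the cutoff date."""
    tree = ET.parse(path)
    body = tree.getroot().find("body")
    for tu in list(body):
        stamp = tu.get("changedate") or tu.get("creationdate")
        if stamp is None:
            continue  # keep undated units; a judgment call
        changed = datetime.strptime(stamp, "%Y%m%dT%H%M%SZ").replace(tzinfo=timezone.utc)
        if changed < CUTOFF:
            body.remove(tu)
    tree.write(out_path, encoding="utf-8", xml_declaration=True)
```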

Antoine Rey
And do you only use data that is internal to Dell, or do you also sometimes buy external data?

Georg Kirchner
So this is one of the things that are beneficial about working in a large organization: it produces tons of data, so we do not have to buy data. We produce sufficient data continuously to update models continuously. That’s the good news. And as long as you’ve got quality data, you’re probably better off with a lower amount of high-quality data than with a vast amount of poor-quality data. Now, the interesting thing is, all that data originates with the translator, right? And the translator is certainly not staff at Dell.

A human translator is, how should I say, sitting in what I call a multi-tier, multi-pronged supply chain. So they’re removed from us through various layers. And what we’re still struggling a little bit with is rolling up this data to us through that complex supply chain. This is, again, where legacy technology is a bit of a hindrance, but it’s just a matter of time before we switch to new technology where those kinds of data flows are more easily facilitated.

Antoine Rey
So there are some challenges, as we discussed there. But I presume you also have a certain number of successes, practical applications where you’ve had experience with the data. Can you share some of those with us?

Georg Kirchner
Well, one is the customization of models, so that the output adheres more closely to Dell terminology and Dell style; protecting the branded items is pretty important as well. Back in the days of EMC, we did have a platform where we were able to capture data such as post-edit distance at scale. And this is where we then benefited from the insights: we were able to draw trend lines. So when we were moving from one technology to another, let’s say from statistical MT to neural MT, first generic and then customized, we were able to monitor as we went how this affected productivity, which is basically the number of words post-edited per hour, and also how the post-edit distance evolved. So if you manage to set up an environment like this, it is pretty powerful.
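
The two metrics he names are easy to pin down: productivity is words post-edited per hour, and average post-edit distance is tracked per engine generation to draw the trend lines. A toy calculation over invented session records; the numbers are illustrative only:

```python
from statistics import mean

# (engine generation, words post-edited, hours, post-edit distance) -- invented data
sessions = [
    ("statistical", 450, 1.0, 0.32),
    ("neural-generic", 610, 1.0, 0.21),
    ("neural-custom", 780, 1.0, 0.14),
]

for engine in ("statistical", "neural-generic", "neural-custom"):
    rows = [s for s in sessions if s[0] == engine]
    words_per_hour = sum(r[1] for r in rows) / sum(r[2] for r in rows)
    print(f"{engine:15s} {words_per_hour:6.0f} words/h  "
          f"avg PED {mean(r[3] for r in rows):.2f}")
```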

Antoine Rey
And can you get to a place where you potentially remove the need for post-editing and go to an MT-only workflow, for instance?

Georg Kirchner
You know, we are seeing this already, where sometimes we have end-users who are very driven by quarterly budget constraints. Just talking about a program around the knowledge base: they are on a machine translation plus post-editing plan, and then mid-quarter they may run out of money. Then they have the choice of not providing any translations at all or providing raw MT. And at this point, this team is comfortable enough to switch to raw MT sort of midstream.

Antoine Rey
Right. 

Georg Kirchner
Which is kind of forcing the conversation. I think at this point their challenge is to capture the customer reception of the raw MT, to see if the customer is willing to go along with it. But rather than being driven by budget constraints, you should at some stage be in a place where your stock engine is performing well enough that customization is no longer necessary, or post-editing is no longer necessary. Now, that’s the holy grail: stock engines becoming so good that we don’t even have to customize them anymore.

Antoine Rey
Right. 

Georg Kirchner
And that’s interesting for two reasons, I would say. One is just the effort of training engines. But what I find more intriguing is that the stock engine may at last bring us to a point where terminology gets normalized across the various companies. Why is it necessary that Dell and HP and NetApp refer to, let’s say, a database differently? Only because we have three different providers. From the end-user perspective, that’s really an impediment to on-ramping and cognition. So if the stock engine becomes the lingua franca at last, I’d be all for it. From the user perspective, that’s really the Holy Grail.

Antoine Rey
The Holy Grail. OK, well, we’re coming up to the end of our 15-minute session. I’m sure our listeners will be very interested to hear some of the ins and outs you’ve talked about and how machine learning works at Dell. So thanks very much for joining our program, and I’m sure we’ll be talking to you again very soon.

Georg Kirchner
Well, thanks again for having me. It was great fun, thanks. 


