Long before Meg Mitchell founded the Ethical AI team at Google in 2017, she loved Boggle, the classic game where players come up with words from random letters in three minutes or less. Looking back at her childhood Boggle-playing days, Meg sees the game as her early inspiration to study computational linguistics. “I always loved identifying patterns, solving puzzles, language games, and creating new things,” Meg says. “And Boggle had it all. It was a puzzle, and it was creative.”
The creative puzzles she tackles today as a Senior Research Scientist at Google involve developing tools and techniques to help artificial intelligence (AI) evolve ethically over time, reflecting Google’s AI Principles. We caught up with Meg to talk about what took her from playing Boggle to working at Google.
How do you describe your job at a dinner party to people who don’t work in tech?
When I used to work in language generation, my partner would say, “she makes robots talk.” Now that I work on AI Ethics as well, he says “she makes robots talk and helps them avoid inheriting human biases.” Everyone gets it when he says that! But I say “I work in AI Ethics.” I’ve found that gets people curious, and they generally want to know what that means. I say: “When people create an AI system, it might not work well for everyone, meaning, it might limit what they can do in the world. What I do is develop frameworks for measuring how well an AI system offers equitable experiences to different people, so that the AI doesn’t affect different people disproportionately. This helps us avoid creating products that consistently work well for some people and poorly for others.”
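(For illustration only, here is a minimal sketch of the kind of disaggregated evaluation that idea points to: computing the same metric separately for each group of users so performance gaps are visible rather than averaged away. This is not Meg’s actual framework; the group names, data and numbers below are invented.)

```python
# Hypothetical sketch: evaluate one model's accuracy separately per user group,
# so gaps between groups become visible instead of being hidden in one overall score.
from collections import defaultdict

def disaggregated_accuracy(examples):
    """examples: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in examples:
        total[group] += 1
        if truth == prediction:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Toy, made-up data: the model does noticeably worse for "group_b".
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]
per_group = disaggregated_accuracy(results)
overall = sum(1 for _, t, p in results if t == p) / len(results)
print(per_group)   # {'group_a': 0.75, 'group_b': 0.25}
print(overall)     # 0.5 -- the single overall number hides the gap
```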
What’s an example that illustrates your work?
My team has developed what we call Model Cards, a way to help anyone, even non-technical people like journalists or designers, as well as everyday people, understand how specific machine learning, or ML, models work. Here’s the technical definition of an ML model: a mathematical model that makes predictions by using algorithms that learn statistical relationships among examples. And the technical definition of a Model Card: a framework for documenting a model’s performance and intended usage.
Here’s a less technical explanation of Model Cards: You know the nutritional labels on food packaging that talk about calories, vitamin content, serving size, and ingredients? Model Cards are like these, but for ML models. They show, in a structured and easy-to-read way, what the ML model does, how well it works, its limitations, and more.
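(For illustration, here is a rough sketch of the kind of structured information a Model Card captures, written as a plain Python dictionary. The field names loosely echo the sections Meg’s team describes, such as model details, intended use, metrics and limitations, but the model, numbers and exact schema below are hypothetical, not Google’s official format.)

```python
# Hypothetical sketch of a Model Card as structured data; all values are invented.
example_model_card = {
    "model_details": {
        "name": "toy-smile-detector",        # invented model name
        "version": "0.1",
        "type": "image classification",
    },
    "intended_use": "Demo of on-device smile detection; not for surveillance "
                    "or any decision affecting access to services.",
    "metrics": {
        # Performance reported separately per group, so gaps are visible.
        "accuracy_overall": 0.91,
        "accuracy_by_group": {"group_a": 0.94, "group_b": 0.83},
    },
    "limitations": [
        "Trained on a small, non-representative photo set.",
        "Accuracy drops in low-light images.",
    ],
}

# A nutrition-label-style summary, readable by non-technical audiences.
for section, contents in example_model_card.items():
    print(f"{section}: {contents}")
```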
Recently, two cross-industry organizations, Partnership on AI and OpenAI, decided to apply our work on Model Cards to their frameworks and systems, respectively.
You started out studying linguistics. How did you know this field was for you?
Growing up, I was equally good at math and reading and writing, but I generally thought of myself as being good with language. Of course, this was a gender norm at the time. But I also taught myself to code and started programming for fun when I was 13. When I was a junior in high school, I liked doing creative things, and I really wanted to take a ceramics class in my free period. At the same time, I was in a calculus class, and my teacher literally got on her knee to encourage me to take advanced math instead. By the time I got to college, I was balancing both language and math, and my senior thesis at Reed College was on computational linguistics, and more specifically, on the generation of referring expressions. In non-technical terms, it’s simply about making appropriate references to people, places or things. My Ph.D. is in language generation, too—specifically vision-to-language generation, which is about translating visual things, like photos, into language, like captions or stories.
Eventually, I had an “aha moment” when I knew I wanted to pursue this field, and it’s thanks to my dog, Wendell. Wendell was a Great Dane. When I walked Wendell, tons of people would stop and say, “That’s not a dog, that’s a horse!” Once in a while, they’d say, “You should put a saddle on him!” They said the exact same phrases. After six years of hearing people say the same thing when they saw Wendell, I thought the consistency was so fascinating from a psycholinguistics point of view. I literally saw every day that people have stored prototypes in their minds. I realized through Wendell that although language is creative and expressive, we say predictable things—and there are clear patterns. And sometimes, these predictable things we say are inaccurate and perpetuate stereotypes.
Looking back, I see I was very naturally interested in ethics in AI, in terms of fairness and inclusion, before it was “a thing.”
What’s your favorite part of your job?
Programming! I’m happiest when I’m coding. It’s how I de-stress. My colleagues ask me “how long has it been since you coded?” the way some people ask each other “how long has it been since you’ve had coffee?” or “how long has it been since you had a vacation?” If I haven’t coded in more than two weeks, I’m not my happiest self.
What’s the most challenging part of your job?
When we’re thinking of the end-to-end development of AI systems, there are challenges to making them more ethical, even if it seems like that’s obviously the right thing to do. Unintended bias creeps in. Unintentional outcomes occur. One way to avoid these is to represent many points of view and experiences, to catch gaps where and when an AI system isn’t performing as well for some people as for others. Who is at the table making decisions influences how a system is designed. This is why issues of diversity, equity and inclusion are a core part of my AI research, and why I encourage hiring AI talent that represents many dimensions of diversity.
What’s one habit that makes you and your team successful?
I message with the people I work with often. Everyone is remote, but it doesn’t feel like it. We share a lot of crazy, celebratory GIFs and happy emoji. Which makes sense, given my appreciation for fairness and language: GIFs and emoji are something that everyone can understand quickly and easily!
by Reena Jana via The Keyword