Friday 30 April 2021

This new emoji has been years in the making

When Jennifer Daniel, Google’s creative director for emoji, first joined the Unicode Technical Committee, she wondered, what’s the deal with the handshake emoji? Why isn’t there skin tone support? “There was a desire to make it happen, and it was possible to make it happen, but the group appeared to be stuck on how to make it happen,” Jennifer says.

Image shows a texting keyboard with various hand emojis with the Black skin tone, except the handshake emoji, which is yellow only.

So in 2019, she submitted the paperwork for Unicode to consider the addition of the multi-skin toned handshake. The proposal detailed how to create 25 possible combinations of different skin tones shaking hands. But encoding it all would be time-consuming; creating a new emoji can take up to two years, Jennifer explains. And while a regular, one-tone handshake emoji already existed, this particular addition would require making two new emoji hands (a rightwards hand and a leftwards hand, each in every skin tone shade) in order to, as Jennifer explains, “make the ‘old’ handshake new again.”

Every Unicode character has to be encoded; it’s like a language, with a set of rules shared between your keyboard, your computer and the recipient’s device, so that what you see on your screen looks the way it’s supposed to. Underneath it all is binary: the ones and zeros behind everything you see on the internet.

Every letter you are reading on this screen is assigned a code point. The letter A? It’s Unicode code point U+0041, Jennifer says. When you send a word with the letter “A” to someone else, this code is what ensures they will see it. “So when we want to send a 🤦, which maps to U+1F926, that code point must be understood on the other end regardless of what device the recipient is using,” she says.
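As a concrete illustration, here is a minimal sketch of the idea in Python (just one convenient way to peek at code points; the mechanism itself is language-independent), showing the code points above and the UTF-8 bytes that actually travel between devices:

```python
# Every character is assigned a Unicode code point.
assert ord("A") == 0x41              # the letter A is U+0041
assert ord("\U0001F926") == 0x1F926  # 🤦 (face palm) is U+1F926

# An encoding such as UTF-8 turns those code points into bytes
# for storage and transmission; the recipient decodes them back.
assert "A".encode("utf-8") == b"\x41"
assert "\U0001F926".encode("utf-8") == b"\xf0\x9f\xa4\xa6"

# The round trip is what guarantees both ends see the same emoji.
assert b"\xf0\x9f\xa4\xa6".decode("utf-8") == "\U0001F926"
```

Because both devices agree on the code point and the encoding, the 🤦 you send is the 🤦 they receive.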

This means when one emoji can come in different forms — like with gender or skin tone options — the coding gets more complex. “If emoji are letters, think of it this way: How many accent marks can you add to a letter? Adding more detail, like skin tone, gender or other customization options like color, to emoji gets more complicated.” Adding skin tone to the handshake emoji meant someone had to propose a solution that operated within the strict limitations of how characters are encoded.

That someone was Jennifer. “I build on the shoulders of giants,” she quickly explains. “The subcommittee is made up of volunteers, all of whom are generous with their expertise and time.” First, Jennifer looked at existing emoji to see if there were any that could be combined to generate all 25 skin tone combinations. “When it appeared that none would be suitable — for instance, 🤜 🤛 are great but also a very different greeting — we had to identify new additions. That’s when we landed on adding a leftwards hand and a rightwards hand.” Once these two designs and proposals were approved and code points assigned, the team could then propose a multi-skin toned handshake that built on the newly created code for each hand.
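Mechanically, each mixed-tone handshake is a zero-width-joiner (ZWJ) sequence built from those two new hands plus skin tone modifiers. The following Python sketch assumes the Emoji 14.0 code points (U+1FAF1 rightwards hand, U+1FAF2 leftwards hand) and the five Fitzpatrick skin tone modifiers; the ordering shown, rightwards hand first, follows the published RGI sequences, but treat it as illustrative:

```python
from itertools import product

# Fitzpatrick skin tone modifiers, U+1F3FB through U+1F3FF (5 tones).
MODIFIERS = [chr(cp) for cp in range(0x1F3FB, 0x1F400)]

RIGHTWARDS_HAND = "\U0001FAF1"  # 🫱 (new in Emoji 14.0)
LEFTWARDS_HAND = "\U0001FAF2"   # 🫲 (new in Emoji 14.0)
ZWJ = "\u200D"                  # zero-width joiner glues parts into one glyph

def handshake(right_tone: str, left_tone: str) -> str:
    """Compose one mixed-tone handshake as a ZWJ sequence."""
    return RIGHTWARDS_HAND + right_tone + ZWJ + LEFTWARDS_HAND + left_tone

# Five tones per hand yields the 25 combinations from the proposal.
combos = [handshake(a, b) for a, b in product(MODIFIERS, repeat=2)]
assert len(combos) == 25
```

Whether a platform draws such a sequence as a single handshake glyph or as two separate hands depends on its font supporting that sequence; either way, the underlying code points round-trip intact.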

Image showing the handshake emoji in various skin tones and skin tone combinations.

Aside from the actual coding, COVID-19 added new hurdles. Jennifer had proposed the emoji in November 2019 with the expectation it would land on devices in 2021, but because of COVID-19, all Unicode deployments were delayed six months. 

Fortunately, the multi-skin toned handshake emoji should appear in the next release, Emoji 14.0, meaning you should see it appear in 2022. For Jennifer, it’s exciting to see it finally come to fruition. “These kinds of explorations are really important because the Unicode Consortium and Google really care about bringing inclusion into the Unicode Standard,” she says. “It’s easy to identify ‘quick solutions’ but I try to stop and ask what does equitable representation really look like, and when is it just performative?”  

“Every time we add a new emoji, there’s a risk it could exclude people without our consciously knowing it,” Jennifer explains. “The best we can do is ensure emoji continue to be as broad, flexible and fluid as possible. Just like language. Just like you. 🦋”


by Molly via The Keyword

A dietitian’s website and blog stir up more business

Like any savvy entrepreneur, Marisa Moore first launched a website to promote her brand and attract more business. “It was back in 2008,” recalls Marisa, an Atlanta-based registered dietitian and nutritionist. “I was making regular appearances on CNN and different media outlets. And I wanted a way for people to be able to easily find me. So, I put up marisamoore.com and started sharing nutrition tips.”


Since then, Marisa’s website has grown to become a key ingredient in her recipe for success — leading to more consulting work, media appearances and most recently, a cookbook-writing opportunity. She uses her blog “to share (mostly) vegetarian recipes, credible nutrition information and a peek into my Southern roots and travels!” Popular posts promote Marisa’s brand of healthy and delicious nutrition, such as recipes for Mediterranean chickpea pasta salad and peanut butter banana breakfast cookies.


Marisa’s webpage with pictures of salad and pizza.

Marisa’s blog serves up healthy veggie recipes with a Southern flair

Here are a few tasty highlights from our recent conversation with Marisa. 

Tell us about how you went from having zero online presence to becoming a content creator. 

It was a natural transition from sharing things with people one-to-one or in groups or in classes offline, to sharing that information online. Eventually, people started taking pictures of food with their phones. And I got wrapped up in that, [as well as] writing. So those two things came together and led me to putting up nutrition tips and recipes, to progressively getting better with my photography. Because I was appearing on CNN, I was also used to doing video. So eventually, I started to translate that into doing my own videos and putting them on my blog.

How is your community involved?

I've learned so much from other bloggers. We support each other because we're in this alternate universe where our families don't really understand everything we do and if something goes viral, we're the only ones who really care.


Marisa Moore leans on a counter smiling and holding a coffee mug.

Marisa brings healthy, nutritious cooking into fans’ kitchens via the web

I'm also part of the registered dietitian community, and I have my [consumer] audience as well. So I have several different communities that I move about in, and it's all online and all fantastic.

How do you stay in touch with your audience?

Of course, social media — Instagram, Facebook and others. There are several groups that cater specifically to bloggers, and it's a great way to meet people. People can DM me and ask questions. Also, the comments on my blog provide important information from my community. I know the kinds of questions they have and the things they're looking for. I also use a newsletter to stay in touch. And now we're doing lots of virtual events such as live cooking demos.


Marisa Moore’s Instagram page with food photos.

Marisa stirs up brand awareness on Instagram and other social media platforms

How do you monetize your blog or website?

I didn't add any ads to my blog until 2019. Then, I became part of an ad network, and that's worked out really well.  I also have partnerships and ambassadorships where I have some sponsored content and represent clients as a media spokesperson. I do a lot of public speaking and writing for other platforms. I also do a lot of consulting work with restaurants and food companies, separate from my blog. It’s really important to diversify your income if you’re self-employed. 

In my work as a registered dietitian, I speak to different groups as well. As a business owner, it’s also important to have a blog, because that's often how people find me. I've gotten some of the best consulting gigs just because someone Googled "registered dietitian in Atlanta,"  found me and hired me for a job. 

Exactly. It’s like the face you have on the web.

It's the one place that you own. And I think that's what's so important, because all of our social media could disappear tomorrow, which would be tragic … but it's really important that we own a piece of the web, and marisamoore.com is my little piece.



by Adriana Diamond, Google Web Creators via The Keyword

New resources on the gender gap in computer science

When it comes to computer science, we still have a lot of work to do to address gaps in education. That’s evident in our latest report with Gallup, Current Perspectives and Continuing Challenges in Computer Science Education in US K-12 Schools. This report is the most recent in our multi-year series of Diversity in K-12 CS Education reports with Gallup, an effort to share new research with advocates, administrators, nonprofit partners and the tech industry as we continue addressing gaps in computer science education.


While the 2020 Gallup reports shed light on many gaps related to race, gender and community size, we wanted to increase awareness of the gender gap, specifically, since the gender gap for girls and young women is still as stark as it was when we first released the report back in 2015.


Seventy-three percent of boys told researchers they were confident about learning computer science, compared with 60% of girls. (You can see more details in the full report.) Behind these statistics are real students who are missing out on critical skills, knowledge and opportunities. When girls miss out on opportunities to learn computer science, the tech industry misses out on their perspectives and potential innovations.


To help bring attention to the challenges, beliefs and stereotypes with which girls grapple, we partnered with London-based designer Sahara Jones to highlight the voices of the young girls behind these statistics:

Three poster images side by side pair statistics from a Google-commissioned research study with quotes from K-12 young women (and, on the third, their parents). Quotes next to the smaller statistic appear in small font; quotes next to the larger statistic appear in much larger font.

First poster: 9% of girls think learning computer science is important; 91% do not. Next to the 9% stat: “It's exciting.” “Building stuff is fun.” “I'm good at it.” “It's rewarding.” “It could be a career, I love it.” Under the 91% stat: “I've never considered it.” “It's too hard.” “I'm the only girl in the class.” “It's geeky.” “It's what the boys do.” “I don't belong.” “My school doesn't teach it.”

Second poster: 12% of girls are likely to pursue a career in computer science; 88% are not. Next to the 12% stat: “I met some amazing computer scientists.” “I feel inspired.” “I'm good at it.” Next to the 88% stat: “I feel judged.” “I don't know what a computer scientist does.” “It's a boys' career.” “None of my friends want to either.” “I don't know any female engineers.”

Third poster: 29% of parents of girls are eager for them to pursue a computer science career; 71% are not. Next to the 29% stat: “It's a good opportunity.” “She really enjoys it.” “Tech is the future.” “She is so talented.” Next to the 71% stat: “It's a man's job.” “I won't be able to help her.” “Can she do it?” “It might be too hard.” “I want her to do a more traditional job.”

We’re making these graphics available for advocates, nonprofits and policymakers to use in presentations, publications or on social media. Our goal is to help increase awareness about this important topic and ultimately engage advocates in their own work to close the gender gap in computer science education. 


Also, for the first time, we’re making the detailed Gallup data in the report available to all [download here]. Our aim is to provide as much useful information as possible for educators, researchers, journalists and policymakers who care about equity and computer science education. We look forward to seeing how this data is used by the community to advocate for important policies and dedicate resources towards this work. We know there’s a long way to go but we hope that making data from our latest Gallup report freely available will aid in efforts to address equity gaps and make computer science truly open and welcoming to all.


At Google, we are committed to trying to close equity gaps in computer science, whether it’s due to race or ethnicity, gender or other limiting barriers. One of our initiatives is CS First, Google's introductory computer science curriculum targeted at underrepresented primary school students all around the world, including girls. Another is Code Next, which trains the next generation of Black and Latino tech leaders — many of whom are young women  — with a free high-school computer science curriculum, mentorship and community events. 


We’re grateful to educators for motivating girls to believe in themselves and encouraging them to explore how computer science can support them, no matter what career paths they take. We’re also proud to be part of a group of technology companies, governments and nonprofits in this fight for change. 


by Carina Box via The Keyword

A Matter of Impact: April updates from Google.org

Last week we celebrated Earth Day — the second one that’s taken place during the pandemic. It’s becoming increasingly clear that these two challenges are interconnected. We know, for example, that climate change affects the same determinants of health that worsen the effects of COVID-19. And, as reports have noted, we can’t afford to relax when it comes to the uneven progress we’re making toward a greener future.


At Google, we’re taking stock of where we’ve been and how we can continue building a more sustainable future. We’ve been deeply committed to sustainability ever since our founding two decades ago: we were the first major company to become carbon neutral and the first to match our electricity use with 100 percent renewable energy. 


While we lead with our own actions, we can only fully realize the potential of a green and sustainable world through strong partnerships with businesses, governments, and nonprofits. At Google.org, we’re particularly excited about the potential for technology-based solutions from nonprofits and social innovators. Time and again we hear from social entrepreneurs who have game-changing ideas but need a little boost to bring them to life. 


Through programs like our AI for Social Good Initiative and our most recent Google.org Impact Challenge on Climate, we are helping find, fund, and build these ideas. Already they’re having significant impact on critical issues from air quality to emissions analysis. In this month’s digest, you can read more about some of these ideas and the mark they’re making on the world. 


In case you missed it 

Earlier this month, Google shared our latest series of commitments to support vaccine equity efforts across the globe. As part of this, Google.org is supporting Gavi, The Vaccine Alliance, in their latest fundraising push with initial funding to help fully vaccinate 250,000 people in low- and middle-income countries; technical assistance to improve their vaccine delivery systems and accelerate global distribution; and Ad Grants to amplify fundraising efforts. We’ve since kicked off an internal giving campaign to increase our impact, bringing the total vaccinations funded to 880,000 to date, which includes matching funds from Gavi. And in the U.S., we’ve provided $2.5 million in overall grants to Partners in Health, Stop the Spread and Team Rubicon, who are working directly with 500 community-based organizations to boost vaccine confidence and increase access to vaccines in Black, Latino and rural communities.


Gavin McCormick, Executive Director of WattTime


Hear from one of our grantees: WattTime  

Gavin McCormick is the Executive Director of WattTime, a nonprofit that offers technology solutions that make it easy for anyone to achieve emissions reductions. WattTime is an AI Impact Challenge grantee and received both funding and a cohort of Google.org Fellows to help support their work, particularly a project that helps individuals and corporations understand how to use energy when it’s most sustainable and allows regulators to understand the state of global emissions. 


“Data insights powered by AI help drive innovative solutions — from streaming services’ content suggestions to navigation on maps. But they’re still not often applied to some of the biggest challenges of our time like the climate crisis. My organization harnesses AI to empower people and companies alike to choose cleaner energy and slash emissions. Like enabling smart devices such as thermostats and electric vehicles to use electricity when power is clean and avoid using electricity when it’s dirty. Now with support from Google.org, we’re working with members of Climate TRACE — a global coalition we co-founded in 2019 of nonprofits, tech companies and climate leaders — to apply satellite imagery and other remote sensing technology to estimate nearly all types of human-caused greenhouse gas emissions in close to real time. We can’t solve the climate crisis if we don’t have an up-to-date understanding of where the emissions are coming from.” 

Alok Talekar, a Google.org Fellow with WattTime


A few words with a Google.org Fellow: Alok Talekar

Alok Talekar is a software engineer at Google who participated in a Google.org Fellowship with WattTime. 


“I am a software engineer at Google and work on AI for social good with a focus on the agricultural sector in India. The Climate TRACE Google.org Fellowship with WattTime gave me the opportunity to change my career trajectory and work on climate crisis solutions full time. The mission that Gavin McCormick and team are pursuing is ambitious, and technology can help make it a reality. Over the course of the Fellowship, the team was able to use machine learning to process satellite imagery data of power plants around the world and determine when a particular plant was operational based on the imagery provided. I then helped the team to model and validate the bounds of accuracy of this approach in order to predict the cumulative annual emissions of a given power plant. I was proud to be able to contribute to the project in its early days and to be part of the core team that helped build this massive coalition for monitoring global emissions.”



by Jacquelline Fuller via The Keyword

Thursday 29 April 2021

Android Enterprise security delivers for flexible work

As many companies integrate return-to-office plans with existing work-from-home strategies, a key component is building a device management and security strategy centered on remote access. In this era of hybrid work, mobility is the critical link for workers who need to connect to company resources from anywhere.

A recent Forrester report highlights why IT administrators should use on-device security and enterprise management features to build a powerful and adaptive security strategy, noting how remote access is now paramount for business continuity. Organizations can enable the multilayered protections and management features in Android Enterprise to help their teams thrive in this hybrid world, giving teams powerful built-in security without layers of complexity.


Security built in as a foundation

In its research, Forrester found that 78% of IT admins surveyed are planning to increase their use of on-device security in the next year. When it comes to anti-malware defense, securely configuring devices and managing mobile applications, Android offers enterprise-grade security solutions that meet the needs of today’s organizations. 

Forrester recommends that operating system platform security be the key foundation to a device security strategy. With Android Enterprise, organizations benefit from on-device protection that is built to help secure data, protect employee privacy and equip IT admins with a rich set of management features. The report calls out how Android makes use of the anti-malware protections in Google Play Protect to provide an ongoing defense against potentially harmful apps.  In doing so, an IT security team can rely heavily on such built-in features to achieve the security posture that businesses of all sizes require to defend against complex attacks. 

Our recently updated Android Enterprise Security Paper provides a comprehensive review of the hardware and software security features available in Android which can be trusted for accessing critical and sensitive information.


Security admins need, privacy employees require

Android provides a depth of security features that are built to provide automatic defenses against many layers of threats. Google Play Protect uses machine learning to adapt to changing security threats, providing organizations a built-in solution at no cost.

The Android work profile gives organizations flexibility to securely enroll personal devices and provide greater privacy on corporate-owned smartphones and tablets. In its report, Forrester notes Android comes with strong data isolation and protection features with the Android work profile. By separating personal and work apps on devices with distinct encryption keys for each profile, Android gives admins a built-in solution to provide employees with secure access that aligns to their work styles without sharing any access to data from personal apps on devices with IT.

Managed Google Play lets admins specify which public or internal apps can be installed in the work profile. The granular levels of security available to admins from Android Enterprise APIs and the built-in security through services like Google Play Protect serve as a strong foundation for mounting a robust threat defense. In addition, the SafetyNet Attestation API integrates with partner Enterprise Mobility Management (EMM) solutions to verify that devices have not been compromised. This now includes hardware-backed evaluations as an indicator of a stronger device integrity evaluation. 

No matter where your teams are working, you can have confidence in the platform and management security features found in Android Enterprise. Learn more about building an on-device strategy from the Forrester report, and go in-depth on integrating features with our security paper.


by Mike Burr via The Keyword

Trash to treasure: How Google thinks about deconstruction

For Lauren Sparandara, stepping onto a construction site transports her to the scrappy dollhouses of her childhood.

"I would scavenge styrofoam from the household trash and use it to build these elaborate cityscapes for my dolls," she laughs. "I see a similar opportunity when I look at buildings that are about to be demolished: What could we make with those?"

At Google, Lauren looks for ways to reuse materials in Google's design and construction process — like salvaging perfectly good doors and hardware, cabinets, furniture, and lockers from existing buildings to reuse them in Google’s spaces or donate to local organizations in need. 

I sat down with Lauren to talk about what she envisions for future Google construction projects, and how it relates to the circular economy.

First things first: What is deconstruction?

Typically, heavy machinery demolishes existing structures on a construction site, which means usable materials are often sent to the landfill.

The alternative is deconstruction, where a building is systematically dismantled from the outside in. To the greatest extent possible, building components — like interior doors or wood components — are kept intact and salvaged for reuse, creating a more circular system. Deconstruction also increases the recyclability of materials that can’t be reused.

“Existing buildings should be viewed as resources rather than something to be disposed of.”
Lauren Sparandara, Bay Area Sustainability Partner

Why does deconstruction interest you?

Existing buildings should be viewed as resources rather than something to be disposed of. Construction and demolition activities account for nearly two-thirds of all waste generated annually in the U.S. 

While traditional demolition is certainly time- and cost-efficient, there's a huge missed opportunity when salvageable materials are landfilled. Deconstruction can shrink the environmental impact of construction and expand green job opportunities, both within the construction industry and in the salvaged and refurbished materials market.


Can you give us an example of deconstruction put into practice at Google?

We've salvaged materials from small-scale interior refreshes since 2012 and have diverted over 1,000 tons of materials from landfills in the Bay Area in the process — that's roughly the weight of five Boeing 747s. When designing new office spaces, we look for opportunities to repurpose existing buildings. Our Spruce Goose office in the Los Angeles area is a converted airplane hangar, and our Fulton Market office in Chicago was a cold-storage warehouse. In Munich, we’ve started converting the Arnulfpost — a 1930s modernist-style postal distribution facility — into an inspiring workplace with public spaces for the community.

In addition to all of that, we want to spread awareness and advance research on circularity in buildings. In 2019, we partnered with the Ellen MacArthur Foundation, Building Product Ecosystems, and Ackerstein Sustainability to publish a whitepaper on commercial deconstruction and reuse, with the hopes of driving the wider building industry toward more circular practices.

Where do you see this work going in the future?

There’s the potential to think big about what we can do with our existing building stock, and reframe our thinking to view existing buildings as amazing resources rather than waste. Unfortunately, most deconstruction examples are historic residential properties, so we’re asking: “How can we create circular material flows from a suburban office building built in the 1980s? How do we prevent any usable materials from going to the landfill?”

We're starting to answer these questions as we work on new development projects. At the Caribbean office development in Sunnyvale, California, we salvaged 35 tons of material to donate to California charities and nonprofits. And at the Charleston East development project in Mountain View, California, we’re incorporating over 30 types of salvaged materials.

A fork lift loads stacks of wood doors onto the back of a truck to get ready for donation.


Circularity is simple in concept but can be complex in practice — especially in industries that have long operated on a "take-make-waste" model. What challenges do you face?

First and foremost: existing office parks were not designed for deconstruction. Most of today's existing commercial buildings were built between 1960 and 2000, an era that relied on adhesives and composite materials, which make these structures challenging to dismantle. Furthermore, buildings can contain hazardous materials that shouldn’t be reintroduced into new construction.

In our white paper, we identified three additional barriers to deconstruction: regulatory hurdles, a limited deconstruction workforce, and an under-developed reuse marketplace. Luckily, there’s progress already being made in these spaces. 


Given these challenges, what are you doing to build circularity into Google's future workplaces?

We need to approach all elements of design with circular economy practices in mind. Our goal is to create workplaces that are resilient to change and don’t need to be demolished every twenty years. This requires thoughtful design — from adaptive reuse of existing buildings and avoiding building new structures in the first place to using healthy materials and small details like designing joints that can be mechanically dismantled. 


Back to your childhood dollhouses, what was a deconstruction or reuse example of your own that makes you proud?

My family recently remodeled our home, which happens to be the home I grew up in. Whenever possible, we have attempted to donate or reuse materials. We’ve found ways to reuse wood to replace our backyard fence, and we've donated our older appliances. My 5-year-old son even decided to repurpose old packaging material to make his last Halloween costume. I guess as they say, “the apple doesn’t fall far from the tree”!

Lauren Sparandara’s son, Jack, in his Halloween costume made of old packaging materials.



by Mike Werner via The Keyword

Finding the intersection of social justice and tech

Welcome to the latest installment of our series, “My Path to Google.” These are real stories from Googlers, interns, and alumni highlighting how they got to Google, what their roles are like and even some tips on how to prepare for interviews.


Today’s post is all about Xiomara Contreras (seen above with her mother), a product marketing manager in our San Francisco office. Xiomara’s passion for social impact is deeply rooted in her work, both in her core role of supporting small businesses and in building community for underrepresented groups both in and out of Google.


How would you describe your role at Google?


I’m a product marketing manager working on Google My Business. Specifically, my team is dedicated to supporting small-business owners. Google My Business is a free tool that allows users to promote their Business Profile on Google Search and Maps, allowing them to respond to reviews, post photos of products or special offers and add or edit their business details so they can connect with customers.


My role focuses on core product marketing, meaning I work with product managers and engineers to determine who our users are, what they need and how to align our product with those needs. As a product marketing manager, I show the value of our product to small business owners. Additionally, I recently contributed to the creation and launch of the Black-owned business attribute to support Black-owned businesses.


What made you decide to apply to Google?


When I initially started thinking about a career, I thought I would be in the nonprofit sector because most of my previous experience was in that space. Also, I studied Communication Studies and Latina/o Studies at Northwestern and I wasn’t aware of the breadth of opportunities available to “non-technical” students in tech. 


Then I learned about Google’s BOLD Internship Program through Management Leadership for Tomorrow (MLT), an organization that prepares and connects university students from underrepresented backgrounds to internships and full-time careers. Through the support and encouragement of the organization, I applied to the internship. Once I was an intern at Google, I was able to see how my passion for social justice issues, education and youth mentorship intersects with tech, and I knew I wanted to work at Google full time.

Three people sitting around a large “G” sculpture.

Xiomara and fellow Googlers/MLT alums, Janice and Olivia, representing Google at the Management Leadership for Tomorrow 15th Anniversary Celebration in 2019.

Can you expand more on that intersection?


Google has exposed me to different mentorship programs both inside and outside of the company. I volunteered for TutorMate and Spark, and I currently volunteer for iMentor, a three-year commitment to empower first-generation students from low-income communities to graduate high school, succeed in college and achieve their ambitions. I only learned about these opportunities through other Googlers. 


I’m also involved in increasing racial equity at Google through our Black and Latinx Marketers (BALM) employee resource group. This group is designed to help make Google a place where people like me can see themselves, be successful and feel fulfilled. Last year I was the Global Community Lead, organizing events like a dialogue series with external speakers to discuss issues impacting our community and fun activities like learning how to make café de olla in a workshop led by a small business owner.

What inspires you to log in every day?


First, just knowing that my core work is very impactful for small-business owners. My grandma is a small-business owner, and I use Google My Business for her business. I see how the product helps her stand out online and connect with new customers. So believing in the mission of Google and the mission of my own team keeps me invested in the company. 


Second, on a personal level, being the daughter of Mexican immigrants and the first person in my family to go to college motivates me every day to continue to grow here, because my family sacrificed a lot for me to get where I am. This way, I am able to support them too.

Three people wearing Google shirts at an indoor event.

Xiomara (middle) with Googlers, Lucy and Huyen, at an event in the Google NYC office in 2019.

What resources did you use to prepare for your interviews?


Keeping up with Think with Google and The Keyword was extremely helpful as it gave me a deeper perspective on Google’s top priorities and new products. In particular, I read the small business section in The Keyword because I was passionate about Google’s initiatives for underrepresented business owners. It also helped to browse through other companies’ blogs and social channels to learn about their programs for small business owners. 


Because I wasn’t a marketing student, I also brushed up on my Google Ads skills as well as marketing 101 basics. 


Any tips you’d like to share with aspiring Googlers?


Your resume is your first impression. To make sure it's at its best, I encourage you to show it to a lot of people, even those outside the company or outside marketing (or whatever area you're interested in), and ask them for feedback.


Also, don't erase the other parts of you. When I review current students' resumes, they often show only the things related to marketing and remove everything else. But things like student organizations, campus jobs, volunteer work and life experience all highlight how you are different, and they often demonstrate leadership and problem-solving experience well beyond, for example, a marketing internship.


by Ivan De La Torre via The Keyword

Five ways we’re making Google the safer way to search

The web is home to a lot of great things. But it is also a place where bad actors can try to take advantage of you or access your personal information. That's why we're always working to keep you safe while you search, and also to give you the tools to take control of your Search experience.  


Here are five ways we're making Google the safer way to search: 


1. Fighting spam

The last thing you want to worry about when you’re looking for cake recipes or researching a work project is landing on a malicious website where your identity might get stolen. It’s our job to help protect you from that, and it’s one we take very seriously. 


In 2020, we detected 40 billion pages of spam every day — including sites that have been hacked or deceptively created to steal your personal information — and blocked them from appearing in your results. Beyond traditional webspam, we’ve expanded our effort to protect you against other types of abuse like scams and fraud. Since 2018, we’ve been able to protect hundreds of millions of searches a year from ending up on scammy sites that try to deceive you with keyword stuffing, logos of brands they're imitating or a scam phone number they want you to call. 


We’re also providing web creators with resources to understand potential website vulnerabilities and better protect their sites, as well as tools to see if their sites have been hacked. This work is helping the entire web stay safer, and making it easier for you to land on safe sites with great experiences. To learn more about our work to fight spam on Search, read our 2020 Webspam Report.


2. Encrypting searches 

We also safeguard you from more than spam. We use encryption to prevent hackers and unwanted third parties from seeing what you are looking up or accessing your information. All searches made on google.com or in the Google app are protected by encrypting the connection between your device and Google, keeping your information safer.  


3. Helping you learn more about your results before you click 

Another way we protect users is by giving you the tools and context to learn more about your Search results. Let’s say you’re searching for something and find a result from a source you aren’t familiar with. By clicking on the three dots next to your result, you can see website descriptions, when Google first indexed the site, and whether or not a site’s connection is secure. This added context enables you to make a more informed decision about the source before clicking the blue link.


4. Browsing safely 

Sometimes in the excitement of trying to learn more about a topic, you end up clicking on a link to a dangerous site without even realizing it. But with Google Safe Browsing, we've got you covered. This feature currently protects over four billion devices and, when enabled in Chrome, displays warning messages letting you know that the site you are trying to enter might be unsafe, protecting you and your personal information from potential malware and phishing scams.


5. Protecting you from bad ads

Providing you access to high-quality and reliable information on Search also extends to the ads you see while searching for products, services and content. To ensure those ads aren’t scams or being misused, we are constantly developing and enforcing policies that put users first. In 2020, we blocked or removed approximately 3.1 billion ads for violating our policies and restricted an additional 6.4 billion ads, across all of our platforms including Search. 


All of these tools were created with you in mind, so you can click on that carrot cake recipe knowing that we are working hard to help keep you safe online.



by JK Kearns, Search, via The Keyword

New ways to save, commute and manage money with Google Pay

Last year, we launched a reimagined Google Pay to be a safe, simple and helpful way to pay and manage your finances. The app is full of ways to pay friends and businesses, save with offers and rewards and stay on top of your money. Today, we are announcing three ways to help you save money on groceries, pay for transit fares in more cities and better understand your spending.

Save at the grocery store

Small expenses add up, but finding ways to save on everyday items like groceries is one way to keep your budget in check. However, it can be a cumbersome task. Taking the time to look through coupons, finding the right offer, remembering to bring them with you or tracking down that promo code you saw online (where was it again?) can be tedious. 

We teamed up with Safeway to make it easy to find weekly grocery deals from the Google Pay app. You can find deals on thousands of items across more than 500 Safeway stores nationwide. You can also discover similar deals at Target stores nationwide.

Weekly deals on grocery items at Target and Safeway displayed in the Google Pay app.

Find weekly deals on groceries at Safeway and Target stores in the Google Pay app.

To find the latest grocery deals, search for Safeway or Target in the Google Pay app and tap "View Weekly Deals." If you’ve turned on location in Google Pay, soon the app will notify you of the weekly deals at Safeway and Target stores when you’re nearby.

Pay for transit with Google Pay in more cities

Google Pay already supports buying and using mobile transit tickets in more than 80 cities and towns across the United States. Starting soon, we are adding Chicago and the San Francisco Bay Area to the list. In order to bring mobile ticketing to more people, we continue to expand not only to large cities, but also to dozens of smaller towns across the country through our integration with Token Transit.

Soon, Google Pay users on Android will be able to access transit tickets from the app’s home screen. Tap the “Ride transit” shortcut and you will be able to purchase or add a transit card, top up the balance and pay for your fare. Once you purchase a transit card, there’s no need to unlock your phone. Just hold it to the reader and go. In cities without readers, commuters can simply show their visual tickets on their mobile devices. 

Google Pay app home screen which shows the new “Ride transit” shortcut.

Soon, you can access transit tickets through a new “Ride transit” shortcut.

See your monthly spending in just a few taps

The new Google Pay app was designed to help you stay on top of your money by providing a full view of your finances. Navigate over to the “Insights” tab for a view of your account balances and helpful insights on your spending, like upcoming bill reminders, weekly spend summaries or alerts when large transactions are made. 

As interest in social activities ramps back up, we are making it easy to keep a close eye on your spending. We recently added a fast way to see your spending by category or business. For example, if you search for “food,” you will see the amount you have spent on food this month as well as a list of all your transactions related to food. You can get even more specific, for example searching for “burgers” or for a specific business like “Burger King.” You don’t have to worry about the tedious task of categorizing or totaling your expenses; Google Pay does that for you. 

A list of all “food” related transactions from the month in the Google Pay app, and a list of all transactions from “Burger King” in the Google Pay app.

Quickly see your monthly spending by category or business.

With features like saving on groceries, paying for transit and keeping an eye on your spending patterns all in one spot, Google Pay keeps helping you manage your everyday money tasks.


by Josh Woodward via The Keyword

Bringing digital skills to previously incarcerated jobseekers

When I was in federal prison, I witnessed firsthand how incarceration affects people's lives — even long after they're released. After my own release in 2015, I created The Ladies of Hope Ministries (The LOHM), which helps previously incarcerated women transition back into society through education, entrepreneurship, spiritual empowerment and advocacy. 


In the U.S., more than 600,000 people make the transition from prisons to the community each year. While many are ready to start working, they often face systemic barriers to entering the workforce. The unemployment rate for people impacted by incarceration is five times the national average. Because of systemic racism in the justice system, this disproportionately impacts the Black community, which also experiences higher unemployment rates than any other racial group. Additionally, 82% of middle-skill jobs in the U.S. require digital proficiency, but many incarcerated individuals lack digital literacy after being removed from technology in prison. The research is clear: Ensuring people have jobs is key to helping them stay out of prison and contributes to our country’s economic health.


Everyone should have access to economic opportunity. That’s why my nonprofit, along with the Center for Employment Opportunities, Defy Ventures, Fortune Society and The Last Mile, is partnering with Google on the Grow with Google Career Readiness for Reentry program. This program will train more than 10,000 people who have been impacted by incarceration on digital skills they can use to get a job or start businesses. This initiative builds upon Google’s existing criminal justice work — which includes more than $40 million in Google.org grants to organizations advancing reform in the U.S. justice system over the last six years — and is part of Google’s racial equity commitment to help Black job seekers grow their digital skills. 


The Grow with Google Career Readiness for Reentry program provides free training on digital fundamentals — like how to search and apply for jobs online, how to make a resume using web-based tools and how to send professional emails — as well as more advanced topics, like entrepreneurship and using spreadsheets to make a budget for your business. Several partners will also provide job placement support or help place learners into paid apprenticeships and entrepreneur-in-residence programs.


Partnering organizations like mine have worked with Google to develop the curriculum, designed as an easy-to-use guide to help community organizations deliver digital skills training to people returning from incarceration. Any nonprofit organization offering training to the reentry population can also join the Grow with Google Partner Program and access resources, workshop materials and hands-on help, completely free of cost. 


We can’t change the past, but we can build toward a better tomorrow. The ability to secure a job or start a business can pave the way for a brighter future, and I’m thrilled to work with Google to give others like me the opportunity for a fresh start.


by Topeka Sam via The Keyword

Wednesday 28 April 2021

When artists and machine intelligence come together

Throughout history, from photography to video to hypertext, artists have pushed the expressive limits of new technologies, and artificial intelligence is no exception. At I/O 2019, Google Research and Google Arts & Culture launched the Artists + Machine Intelligence Grants, providing a range of support and technical mentorship to six artists from around the globe following an open call for proposals. The inaugural grant program sought to expand the field of artists working with Machine Learning (ML) and, through supporting pioneering artists, creatively push at the boundaries of generative ML and natural language processing. 


Today, we are publishing the outcomes of the grants. The projects draw from many disciplines, including rap and hip hop, screenwriting, early cinema, phonetics, Spanish language poetry, and Indian pre-modern sound. What they all have in common is an ability to challenge our assumptions about AI’s creative potential.


a graffiti-style visualization of the artwork

Learn more about the Hip Hop Poetry Bot

Hip Hop Poetry Bot by Alex Fefegha  

Can AI rap? Alex explores speech generation trained on rap and hip hop lyrics by Black artists. For the moment it exists as a proof of concept, as building the experiment in full requires a large, public dataset of rap and hip hop lyrics on which an algorithm can be trained, and such a public archive doesn’t currently exist.  The project is therefore launching with an invitation from Alex to rap and hip hop artists to become creative collaborators and contribute their lyrics to create a new, public dataset of lyrics by Black artists. 

A woman, partly smiling, in an industrial-style room

Read more about Neural Swamp

Neural Swamp by Martine Syms 

Martine uses video and performance to examine representations of blackness across generations, geographies, mediums, and traditions. For this residency, Martine developed Neural Swamp, a play staged across five screens, starring five entities who talk and sing alongside and over each other. Two of the five voices are trained on Martine’s voice and generated using machine learning speech models. The project will premiere at The Philadelphia Museum of Art and Fondazione Sandretto Re Rebaudengo in Fall 2021.

A dashboard with toggles for changing the letters in a sentence

The Nonsense Laboratory by Allison Parrish  

Allison invites you to adjust, poke at, mangle, curate and compress words with a series of playful tools in her Nonsense Laboratory. Powered by a bespoke code library and machine learning model developed by Allison Parrish, you can mix and respell words, sequence mouth movements to create new words, rewrite a text so that the words feel different in your mouth, or go on a journey through a field of nonsense.

A collage of images, in the style of old cinema film

Let Me Dream Again by Anna Ridler 

Anna uses machine learning to try to recreate lost films from fragments of early Hollywood and European cinema that still exist. The outcome? An endlessly evolving, algorithmically generated film and soundtrack. The film will continually play, never repeating itself, over a period of one month. 

A woman in a desert holding a staff

Read more about Knots of Code

Knots of Code by Paola Torres Núñez del Prado

Paola studies the history of quipus, a pre-Columbian notation system that is based on the tying of knots in ropes, as part of a new research project, Knots of Code. The project’s first work is a Spanish language poetry-album from Paola and AIELSON, an artificial intelligence system that composes and recites poetry inspired by quipus and emulating the voice of the late Peruvian poet J.E. Eielson. 

An empty stage with bells hanging on wires

Read more about Dhvāni

Dhvāni by Budhaditya Chattopadhyay 

Budhaditya brings a lifelong interest in the materiality, phenomenology, political-cultural associations, and the sociability of sound to Dhvāni, a responsive sound installation, comprising 51 temple bells and conducted with the help of machine learning. An early iteration of Dhvāni was installed at EXPERIMENTA Arts & Sciences Biennale 2020 in Grenoble, France.  

Explore the artworks at g.co/artistsmeetai or on the free Google Arts & Culture app for iOS and Android.



by Freya Murray, Google Arts & Culture, via The Keyword

Google Translate: One billion installs, one billion stories

When my wife and I were flying home from a trip to France a few years ago, our seatmate had just spent a few months exploring the French countryside and staying in small inns. When he learned that I worked on Google Translate, he got excited. He told me Translate’s conversation mode helped him chat with his hosts about family, politics, culture and more. Thanks to Translate, he said, he connected more deeply with people around him while in France.


The passenger I met isn't alone. Google Translate on Android hit one billion installs from the Google Play Store this March, and each one represents a story of people being able to better connect with one another. By understanding 109 languages (and counting!), Translate enables conversation and communication between millions of people that would otherwise have been impossible. And Translate itself has gone through countless changes on the path to one billion installs. Here’s how it has evolved so far.
A screenshot of one of the earliest versions of the Google Translate app for Android.

One of the earliest versions of the Google Translate app for Android.

January 2010: App launches

We released our Android app in January 2010, just over a year after the first commercial Android device was launched. As people started using the new Translate app over the next few years, we added a number of features to improve their experience, including early versions of conversation mode, offline translation and translating handwritten or printed text.

January 2014: 100+ million

Our Android app crossed 100 million installs exactly four years after we first launched it. In 2014, Google acquired QuestVisual, the maker of Word Lens. Together with the Word Lens team, we set out to introduce an advanced visual translation experience in our app. Within eight months, the team delivered the ability to instantly translate text using a phone camera, just as the app reached 200 million installs.

An animation showing a person using Google Lens on a smartphone, taking a picture of a sign in Russian that is translated to “Access to City.”

November 2015: 300+ million

As it approached 300 million installs, Translate improved in two major ways. First, revamping Translate's conversation mode enabled two people to converse with each other despite speaking different languages, helping people in their everyday lives, as featured in the video From Syria to Canada.

A phone showing a flood-warning sign being translated between English and Spanish.

Second, Google Translate's rollout of Neural Machine Translation, well underway when the app reached 500 million installs, greatly improved the fluency of our translations across text, speech and visual translation. As the installs continued to grow, we compressed those advanced models down to a size that can run on a phone. Offline translations made these high-quality translations available to anyone even when there is no network or connectivity is poor.

June 2019: 750+ million

At 750 million installs, four years after Word Lens integrated into Translate, we launched a major revamp of the instant camera translation experience. This upgrade allowed us to visually translate 88 languages into more than 100 languages.
A phone showing a real-time streaming translation of English text to Spanish.

February 2020: 850+ million

Transcribe, our long-form speech translation feature, launched when we reached 850 million installs. We partnered with the Pixel Buds team to offer streaming speech translations on top of our Transcribe feature, for more natural conversations between people speaking different languages. During this time, we improved the accuracy and increased the number of supported languages for offline translation.


March 2021: 1 billion — and beyond

Aside from these features, our engineering team has spent countless hours on bringing our users a simple-to-use experience on a stable app, keeping up with platform needs and rigorously testing changes before they launch. As we celebrate this milestone and all our users whose experiences make the work meaningful, we also celebrate our engineers who build with care, our designers who fret over every pixel and our product team who bring focus.

Our mission is to enable everyone, everywhere to understand the world and express themselves across languages. Looking beyond one billion installs, we’re looking forward to continually improving translation quality and user experiences, supporting more languages and helping everyone communicate, every day.


by Jeff Pitman, Google Translate, via The Keyword

AI assists doctors in interpreting skin conditions

Globally, skin conditions affect about 2 billion people. Diagnosing and treating these skin conditions is a complex process that involves specialized training. Due to a shortage of dermatologists and long wait times to see one, most patients first seek care from non-specialists.

Typically, a clinician examines the affected areas and the patient's medical history before arriving at a list of potential diagnoses, sometimes known as a “differential diagnosis”. They then use this information to decide on the next step, such as a test, observation or treatment.

To see if artificial intelligence (AI) could improve the process, we conducted a randomized retrospective study that was published today in JAMA Network Open. The study examined whether a research tool we developed could help non-specialist clinicians — such as primary care physicians and nurse practitioners — more accurately interpret skin conditions. The tool uses Google’s deep learning system (which you can learn more about in Nature Medicine) to interpret de-identified images and medical history and provide a list of matching skin conditions.
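As a rough illustration of the ranked-list idea, here is a minimal sketch of how a classifier's raw scores could be turned into a short, differential-style list. The condition names, scores and the `softmax`/`differential` helpers are hypothetical, for illustration only, and are not details of the study's actual tool.

```python
import math

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def differential(conditions, scores, top_k=3):
    """Rank conditions by model probability and keep the top k."""
    ranked = sorted(zip(conditions, softmax(scores)),
                    key=lambda cp: cp[1], reverse=True)
    return ranked[:top_k]

# Illustrative conditions and raw scores, not outputs of the real tool.
conditions = ["eczema", "psoriasis", "tinea", "contact dermatitis"]
scores = [2.1, 0.4, 1.3, -0.5]
for name, p in differential(conditions, scores):
    print(f"{name}: {p:.2f}")
```

The point of returning a short ranked list rather than a single answer mirrors how clinicians reason: a differential keeps plausible alternatives visible for the next step.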

In the study, 40 non-specialist clinicians interpreted de-identified images of patients’ skin conditions from a telemedicine dermatology service, identified the condition, and made recommendations such as biopsy or referral to a dermatologist. Each clinician examined over 1,000 cases — clinicians used the AI-powered tool for half of the cases and didn’t have access to the assistive AI tool in the other half.



Main takeaways of study: AI-assisted clinicians were better able to interpret skin conditions and more often arrive at the same diagnosis as dermatologists. 

Clinicians with AI assistance were significantly more likely to arrive at the same diagnosis as dermatologists, compared to clinicians reviewing cases without AI assistance. The chances of identifying the correct top condition improved by more than 20% on a relative basis, though the degree of improvement varied by the individual.

We believe AI must be designed to improve care for everyone. In the study, clinicians' performance was consistently higher with AI assistance across a broad range of skin types — from pale skin that does not tan to brown skin that rarely burns. In addition to improving diagnostic ability, the AI assistance helped clinicians in the study feel more confident about their assessment and reassuringly did not increase their likelihood to recommend biopsies or referrals to dermatologists as the next appropriate step.

These research study results are promising and show that AI-based assistive tools could help non-specialist clinicians assess skin conditions. AI has shown great potential to improve health care outcomes; the next challenge is to demonstrate how AI can be applied in the real world. At Google Health, we’re committed to working with clinicians, patients and others to harness advances in research and ultimately bring about better and more accessible care. 


by Ayush Jain via The Keyword

Introducing Android Earthquake Alerts outside the U.S.

In a natural disaster or emergency, every second counts. For example, when it comes to earthquakes, studies show that more than 50% of injuries can be prevented if users receive an early warning, and have the critical seconds needed to get to safety. That's why last year, we launched the Android Earthquake Alerts System, which uses sensors in Android smartphones to detect earthquakes around the world. The free system provides near-instant information to Google Search about local seismic events when you search “Earthquake near me.”


Today we’re announcing an expansion of the Android Earthquake Alerts System that uses both its detection and alerting capabilities, bringing these alerts to Android users in countries that don’t have early warning systems. We’re introducing the Android Earthquake Alerts System in Greece and New Zealand, where Android users will receive automatic early warning alerts when there is an earthquake in their area. Users who do not wish to receive these alerts can turn them off in device settings.


We launched alerting in August 2020, in partnership with the United States Geological Survey (USGS) and powered by ShakeAlert®, which made alerts available for Android users in California. This feature recently expanded to users in Oregon and will be rolling out in Washington this May. 


Early warning alerts in New Zealand and Greece work by using the accelerometers built into most Android smartphones to detect seismic waves that indicate an earthquake might be happening. If the phone detects shaking that it thinks may be an earthquake, it sends a signal to our earthquake detection server, along with a coarse location of where the shaking occurred. The server then takes this information from many phones to figure out if an earthquake is happening, where it is and what its magnitude is.
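The server-side aggregation step described above can be sketched roughly like this. The grid size, report threshold and function names are assumptions made for illustration; this is not Google's actual detection logic.

```python
from collections import defaultdict

GRID = 0.5        # degrees; buckets coarse locations into large cells (assumption)
MIN_REPORTS = 5   # require several independent phones before flagging (assumption)

def cell(lat, lon):
    """Snap a coarse location to a grid cell."""
    return (round(lat / GRID), round(lon / GRID))

def detect(reports):
    """reports: list of (lat, lon) shake signals sent by phones.
    Returns the grid cells with enough corroborating reports to
    suggest an earthquake, filtering out isolated false positives."""
    counts = defaultdict(int)
    for lat, lon in reports:
        counts[cell(lat, lon)] += 1
    return [c for c, n in counts.items() if n >= MIN_REPORTS]
```

A single phone dropped on the floor produces one report and is ignored; many phones shaking in the same area at the same time cross the threshold, which is why corroboration across devices matters.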


New Zealand and Greece will be the first countries to take advantage of both the detection and alert capabilities of the Android Earthquake Alerts System. Through this system, we hope to provide people with the advance notice they need to stay safe.



by Boone Spooner, Android, via The Keyword

Loud and clear: AI is improving Assistant conversations

To get things done with the Google Assistant, it needs to understand you – it has to both recognize the words you’re saying, and also know what you mean. It should adapt to your way of talking, not require you to say exactly the right words in the right order.

Understanding spoken language is difficult because it’s so contextual, and varies so much from person to person. And names can bring up other language hiccups — for instance, some names that are spelled the same are pronounced differently. It’s this kind of complexity that makes perfectly understanding the way we speak so difficult. This is something we’re working on with Assistant, and we have a few new improvements to share.


Teach Google to recognize unique names 

Names matter, and it’s frustrating when you’re trying to send a text or make a call and Google Assistant mispronounces or simply doesn’t recognize a contact. We want Assistant to accurately recognize and pronounce people’s names as often as possible, especially those that are less common.

Starting over the next few days, you can teach Google Assistant to enunciate and recognize names of your contacts the way you pronounce them. Assistant will listen to your pronunciation and remember it, without keeping a recording of your voice. This means Assistant will be able to better understand you when you say those names, and also be able to pronounce them correctly. The feature will be available in English and we hope to expand to more languages soon.


A good conversation is all about context

Assistant’s timers are a popular tool, and plenty of us set more than one of them at the same time. Maybe you’ve got a 10-minute timer for dinner going at the same time as another to remind the kids to start their homework in 20 minutes. You might fumble and stop mid-sentence to correct how long the timer should be set for, or maybe you don’t use the exact same phrase to cancel it as you did to create it. Like in any conversation, context matters and Assistant needs to be flexible enough to understand what you're referring to when you ask for help.

To help with these kinds of conversational complexities, we fully rebuilt Assistant's NLU models so it can now more accurately understand context while also improving its "reference resolution" — meaning it knows exactly what you’re trying to do with a command.  This upgrade uses machine learning technology powered by state-of-the-art BERT, a technology we invented in 2018 and first brought to Search that makes it possible to process words in relation to all the other words in a sentence, rather than one-by-one in order. Because of these improvements, Assistant can now respond nearly 100 percent accurately to alarms and timer tasks. And over time, we’ll bring this capability to other use cases, so Assistant can learn to better understand you.
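As a loose sketch of the underlying idea, and not the Assistant's actual model, a toy self-attention step shows how every word's new representation draws on all the words in the sentence at once, rather than one by one in order. The vectors and helper names here are invented for illustration.

```python
import math

def softmax(xs):
    """Normalize scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """X: list of word vectors, one per word in the sentence.
    Each output vector is a similarity-weighted mix of ALL input
    vectors, so every word is processed in relation to every other."""
    d = len(X[0])
    out = []
    for q in X:
        # Score this word against every word in the sentence, itself included.
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in X]
        w = softmax(scores)
        # Blend the whole sentence according to those weights.
        out.append([sum(wi * k[j] for wi, k in zip(w, X)) for j in range(d)])
    return out

# Three "words" as 2-dimensional embeddings.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = self_attention(X)
```

Because each output is a weighted average over the entire sentence, a word like "it" can pick up meaning from whichever earlier word the weights favor; that whole-sentence view is what the paragraph above means by processing words "in relation to all the other words."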


These updates are now available for alarms and timers on Google smart speakers in English in the U.S. and expanding to phones and smart displays soon.


More natural conversations

We also applied BERT to further improve the quality of your conversations. Google Assistant uses your previous interactions and understands what’s currently being displayed on your smartphone or smart display to respond to any follow-up questions, letting you have a more natural, back-and-forth conversation.

Have a more natural, back and forth conversation with Google

If you’re having a conversation with your Assistant about Miami and you want more information, it will know that when you say “show me the nicest beaches” you mean beaches in Miami. Assistant can also understand questions that are referring to what you’re looking at on your smartphone or tablet screen, like [who built the first one] or queries that look incomplete like [when] or [from its construction]. 

There's a lot of work to be done, and we look forward to continuing to advance our conversational AI capabilities as we move toward more natural, fluid voice interactions that truly make every day a little easier.


by Yury Pinsky via The Keyword