With New Microsoft AI, The Web Hopes For Clippy, Not Tay

Image: CliqueApi Composite
Tammi L. Coles

Please, Sweet Lord, Let Microsoft’s Latest Foray Into AI Be A Raging Success

Those of us who are old enough to remember Clippy are still trying desperately to forget. It was like the big, shiny pimple that erupted in the middle of your forehead the morning of high school prom. Wrong time. Wrong place. God, I hate you.

Despite the sunshiny nickname, the officially named “Clippit” was the digital help assistant that first shipped with Microsoft Office 97. More nag than helpmate, the animated paperclip became notorious in the tech world for All Things Not Considered in software design. After “Clippy must die!” became a rallying cry for Microsoft Office users and employees alike, Microsoft disabled the feature by default in 2001.

So you can imagine just how hesitant we old-timers are now that Microsoft has announced that Word and PowerPoint will use artificial intelligence to automatically write photo descriptions.

… hesitant but also hopeful that it’s not Tay.

Watch Me Whip, Watch Me Tay

Clippy aside, our reasons for not trusting Microsoft in the AI department are not without merit. It was just this past March that the company thought it could roll out a too-cool-for-school chatbot called Tay on Twitter. In the end, though, it was more “unleash” than “roll out.”

You remember Tay, don’t you? Tay, the “AI with zero chill” who loved Hitler, hated feminists, and, perhaps ahead of its time, believed in Donald Trump as savior?

Yeah, that Tay.

Microsoft’s pre-launch press declared that Tay got smarter the more users engaged with her. Uh huh. Tay was all of 16 hours old when Daddy Microsoft had to yank her ‘net privileges for all her Twittersphere misbehavior.

Deep Diving in the ‘Net Pit

Microsoft needs to get this one right.

The new AI application is being promoted as part of a disability rights initiative, whereby Microsoft Office will automagically provide captions where document authors thoughtlessly forget to. That simple feature will be a boon to people using screen readers.

But look at how it’s going to do it. The AI will employ “deep learning” to suss out what’s in the photos, then generate a plain-language description that a screen reader can speak aloud.
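For the curious, here is roughly what that looks like from a developer’s chair. The minimal sketch below leans on Microsoft’s own Cognitive Services Computer Vision API, which offers exactly this kind of captioning as a web service. To be clear about what’s assumed: the region in the endpoint URL, the subscription key, and the image file are all placeholders, and nothing here claims to be the actual code inside Office.

# A minimal sketch of machine-generated alt text, using the Cognitive
# Services Computer Vision "Description" feature. The key, region, and
# image path are hypothetical placeholders.
import requests

API_KEY = "your-subscription-key"  # placeholder; not a real key
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"

def describe_image(image_path: str) -> str:
    """Send an image to the captioning API and return its best-guess caption."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = requests.post(
        ENDPOINT,
        params={"visualFeatures": "Description"},
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    response.raise_for_status()
    captions = response.json()["description"]["captions"]
    # The service returns candidate captions with confidence scores;
    # an alt-text feature would hand the top one to the screen reader.
    best = max(captions, key=lambda c: c["confidence"])
    return best["text"]

if __name__ == "__main__":
    print(describe_image("prom_photo.jpg"))  # hypothetical image file

Whether Office calls this exact service is Microsoft’s business. The shape of the problem is the same either way: image in, sentence out, and the sentence is only as good as the data the model learned from. Which brings us back to the worry.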

As Tay certainly showed, an AI drawing on popular data, such as the content produced by Facebook and Twitter users, cannot assume that its source data is all good. Nobody is interested in censoring the tech tools, but there are many stages between bizarro censorship rules and belatedly responding once your tools have been hijacked by hatemongers. Not everyone welcomes new technology with the same benevolent open arms.

After the Tay incident, game developer Zoë Quinn (#gamergate) tweeted:

It’s 2016. If you’re not asking yourself “how could this be used to hurt someone” in your design/engineering process, you’ve failed.


Microsoft, we didn’t think we’d be begging for Clippy-style happiness, but please don’t give us another round with Tay.
