[Image: Twitter mobile app mockup]
 
 

Twitter

Concept Mobile App Redesign

My Role

UX Designer

Tools

Pen & Paper, Sketch, InVision

Methods

Demographic Research
User Research
User Interviews
Affinity Mapping
Persona Ideation & Creation
UI Design
Wireframes & Mockups (Onboarding Experience)
Prototyping
Usability Testing & Iterations

During a 2-week sprint, our team of four UX designers was challenged to design ways to mitigate hate speech on Twitter while doing as little damage to First Amendment freedoms as possible.

Background: Since its launch in 2006, Twitter has become one of the largest social media platforms in the world. With 33.6 million users in 14 countries, it’s changed the way people consume news, interact with friends, and express opinions. In the last few years, Twitter has been under fire for allowing racists, conspiracy agitators, and bots to spread misinformation and tweet with impunity.

What Should We Consider?

 
[Image: questions we considered]
 

With these questions in mind, we began the initial stage of research.

I focused on learning current Twitter demographics and reading up on how active users feel about hate speech and bots on the platform. I also created a Twitter account and went through the onboarding process with my team, discussing the experience through the user’s perspective.

We used this information to construct an assumption journey map of the onboarding experience:

Twitter at a Glance

We conducted 15 interviews focusing on social media usage, expectations & experiences, and safety & accountability. Afterward, we created an affinity map to identify trends and pain points. Here are some statistics about our users:

[Image: user interview statistics]

Our 3 Key Findings

  1. Users find different things offensive

  2. Users are frustrated with the content on their feeds

  3. Users are unaware of the current filtering options

We were surprised to learn that safety wasn’t as big an issue as we had assumed: people didn’t feel threatened online; they were just annoyed that those spreading offensive content and misinformation weren’t being held accountable.

 
 

Users find different things offensive

We heard from people on both ends of the spectrum. Some users believed that people hide behind their computer screens, typing things they would never say out loud to another person and using the First Amendment as a shield to get away with being unkind and malicious.

On the other hand, we also spoke with users who felt that people just need to grow thicker skin and shouldn’t be offended by things they read online.

Users are frustrated with the content on their feeds

Almost all of our users expressed some degree of frustration with the content that appeared on their feeds. Alongside that frustration was a desire for more control over what shows up there and what doesn’t.

Users are unaware of the current filtering options

Digging into these content-related frustrations, we realized our users didn’t know about the filtering options that already exist on Twitter. They wanted more control, but they already had it; they just weren’t aware of it.

These discoveries allowed us to hit the ground running, brainstorming design changes that could transform Twitter into a kinder, friendlier, more intuitive app.

Who Are We Designing For?

I was responsible for creating personas based on the user research we did as a team. I came up with three: a new Twitter user, a Twitter-savvy micro-influencer, and a troll.

Check out the full personas here!

Our 3 Main Solves

  1. Cleaning up the onboarding process

  2. Creating a downvote button

  3. Designing a pop-up to deter trolls from posting hateful content

Cleaning up the onboarding process

We made a few changes to the onboarding experience to accommodate Nora, our new user:

• New user welcome pop-up

It was my idea to create a pop-up that welcomes new users and provides a short content warning for more sensitive users.

• Straightforward, intuitive filtering process

While the current Twitter flow walks users through selecting content they want to see, it isn’t easy to figure out how to block or mute words. We built word filtering directly into the onboarding process.
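As a rough illustration of what word muting does under the hood, here’s a minimal sketch; the names and logic are hypothetical stand-ins, not Twitter’s actual implementation:

```typescript
// Hypothetical mute-word filter: hide tweets containing any muted word.
function filterMuted(tweets: string[], mutedWords: string[]): string[] {
  const muted = mutedWords.map((w) => w.toLowerCase());
  return tweets.filter(
    (tweet) => !muted.some((w) => tweet.toLowerCase().includes(w)),
  );
}

// Example: a user who muted "spoiler" during onboarding.
console.log(filterMuted(["Big spoiler ahead!", "Nice weather today"], ["spoiler"]));
// -> ["Nice weather today"]
```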

[Image: mockups of the onboarding flow and downvote button]
 

Creating a downvote button

For Jamie, the everyday, Twitter-savvy user, we designed a downvote button.

We found that most Twitter users preferred not to block, mute, or report because those actions felt too drastic, yet they were still sometimes upset by what popped up on their feeds. The downvote button addresses this problem.

How it works:

  • Downvoting allows a user to react negatively to a post without completely banning it from their feed.

  • Downvoting sends the post to the bottom of their feed and adjusts the algorithm so that they see fewer posts with similar content.

The user is able to see less of the content they find offensive, but they aren’t completely removed from the conversation.
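To make the mechanic concrete, here’s a minimal sketch of how a downvote might feed into ranking. The scoring scheme and every name here are my assumptions; Twitter’s actual feed algorithm is not public.

```typescript
// Minimal sketch of downvote-aware feed ranking (all names hypothetical).
interface FeedItem {
  id: string;
  topics: string[]; // coarse content tags used to judge "similar content"
  score: number;    // base relevance score from the feed ranker
}

// Per-user tally of downvotes against each topic.
const topicPenalty = new Map<string, number>();

const BOTTOM_OF_FEED = -1e9; // finite sentinel that sorts below everything

// Record a downvote: drop the post itself to the bottom of the feed and
// remember its topics so similar content ranks lower from now on.
function applyDownvote(item: FeedItem): void {
  item.score = BOTTOM_OF_FEED;
  for (const topic of item.topics) {
    topicPenalty.set(topic, (topicPenalty.get(topic) ?? 0) + 1);
  }
}

// Adjusted score: base score minus accumulated penalties for its topics.
function adjustedScore(item: FeedItem): number {
  const penalty = item.topics.reduce(
    (sum, t) => sum + (topicPenalty.get(t) ?? 0),
    0,
  );
  return item.score - penalty;
}

// Build the feed in descending adjusted-score order.
function rankFeed(items: FeedItem[]): FeedItem[] {
  return [...items].sort((a, b) => adjustedScore(b) - adjustedScore(a));
}
```

The key design choice is that a downvote demotes rather than deletes: the post and its topics stay in the pool, so the user dials down similar content without being removed from the conversation.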

Designing a pop-up to deter trolls from posting hateful content

In order to stop Seth, a.k.a. the troll, we created a pop-up that would appear after someone had posted offensive content.

The pop-up would ask if the user had lost their password, implying that Twitter didn’t think they would post something so controversial.
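Here’s a minimal sketch of the trigger logic, assuming a crude keyword heuristic stands in for a real toxicity classifier; every name here is hypothetical:

```typescript
// Hypothetical post-submit check; the phrase list stands in for a
// trained toxicity model.
const FLAGGED_PHRASES = ["example slur", "example insult"];

function looksOffensive(text: string): boolean {
  const lower = text.toLowerCase();
  return FLAGGED_PHRASES.some((phrase) => lower.includes(phrase));
}

// Runs after a tweet is submitted; showPopup is whatever modal hook
// the app provides.
function onTweetPosted(text: string, showPopup: (message: string) => void): void {
  if (looksOffensive(text)) {
    showPopup("Did you lose your password? This doesn't sound like you.");
  }
}
```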

We were inspired by an MIT study about Facebook trolls:

  • After a troll posted something out of line, Facebook sent them a message asking if their account had been hacked, sneakily suggesting that there was no way the user would post something like that and hinting that someone else must have gotten into their profile.

  • This guilt trip worked! Trolls who received the “Have you been hacked?” message from Facebook actually posted less offensive content, so we decided to give it a try.

So could it work for us?

[Image: troll-deterrent pop-up mockup]

Iterations

Once we had our initial designs, we did a round of usability testing with Twitter users. Here is what we discovered:

The onboarding process is long and confusing

To address this, we made a few small UI changes:

  • We changed “name” to “username” because new users didn’t know whether the name they entered during onboarding would be their handle or whether they would be prompted to choose one later.

  • We made the profile picture bigger. The small picture and excess white space misled users into thinking they could upload multiple profile photos.

  • We added a radial progress indicator to show users where they were in the onboarding process.

[Image: before-and-after onboarding screens]

The downvote button should be a broken heart, not a thumbs down icon

[Image: downvote icon iterations, thumbs-down vs. broken heart]

While users approved of the downvote action, the thumbs-down icon we originally designed confused them.

With no complementary thumbs-up button, users weren’t sure whether the heart still meant “like” or whether it now meant “favorite” or “save.”

We changed the downvote icon to a broken heart to stay consistent with Twitter’s existing icons and UI.

 
 

The pop-up to stop trolls didn’t go over well with any of our users during usability testing:

  • Some were confused about what it meant

  • Others understood it but didn’t like it

  • Almost all agreed that it wouldn’t stop them from posting offensive content if they were trolling.

We brainstormed how we could change this pop-up to make it effective. We thought about locking a troll’s IP address so they couldn’t just create a new account after being blocked or muted, or restricting their account to read-only so they couldn’t post.

We kept getting stuck because we hadn’t actually been able to speak with any trolls, so we realized that we truly didn’t know what could stop one from posting offensive content.

The Seth persona remained incomplete because we realized we didn’t actually know him well enough to create a full one.

We didn’t want to ignore the trolls, the bots, or anyone promoting hate speech on Twitter. But we concluded that removing hate speech, trolls, and bots from the platform in just two weeks was unrealistic (and probably impossible), so instead we focused on giving users more intuitive control over the content they see.

What’s Next?

 
 

Although the changes we made were approved by users, I don’t feel like we really solved the problem. We didn’t actually remove or reduce hate speech on the platform; we just made it easier for users to ignore.

I kept circling back to the troll. I heard about trolls during most of my interviews, and we knew they existed, but we weren’t able to speak with one directly. In the end, we didn’t have adequate research to truly understand what might influence a troll’s behavior.

Going forward with this project, our next step should be speaking with a troll so that we can better understand where they are coming from and why they behave the way they do.