August 16, 2019

Write/Speak/Code 2019 Conference


I’m attending the awesome Write/Speak/Code (thanks Benchling for sponsoring!) and saw twelve talks today. Here are my top talks!

Designing Against Domestic Violence

Speaker: @epenzeymoog

This talk was my favorite of the day. Eva explored the different ways technology can enable abusers to exert greater control over their victims and conceal their abusive behavior. The best parts, though, were the concrete and actionable suggestions for technologists:

  • A couple gets a joint bank account. All of the security questions are about the man in the relationship, so he withholds the answers from his partner as a way of taking control of their finances and putting her on an “allowance.”
    • Concrete suggestions: I don’t remember them for this one.
  • A man uses his control over their shared IoT devices in their home to harass, gaslight, and spy on his partner. He changed the code on their door lock and insisted he hadn’t, turned the thermostat up and insisted he hadn’t, and used their Amazon Alexa’s “drop-in” feature to spy on her while he was out of town on business.
    • Concrete suggestions:
      • Prominently display records of who did what action and when, so that logs and evidence of behavior are accessible to non-tech-savvy people (see the sketch after this list)
      • Make all communication opt-in so that users can’t initiate conversations without agreement on both sides.
  • A man is physically abusive toward his pregnant wife. He knocks her to the ground and prevents her from seeing a doctor. She goes to her pregnancy-tracking and health apps to try to figure out what type of medical care to get, but they don’t have any injury- or violence-related features.
    • Concrete suggestions:
      • In pregnancy applications, plan for the existence of violence and build features that allow women to log violent interactions since they may affect the health of their child.
      • In health applications generally, allow logging of injuries with descriptions.
      • Run analyses of those descriptions to detect when a user may be experiencing abuse, and offer meaningful help and connections to resources if so.
  • A man exhibits controlling behavior toward his girlfriend, so she dumps him. He starts stalking her. She disconnects all of their shared apps, but he can still find her no matter where she goes. It turns out he had configured her car’s GPS to share its location with him. Once she turned that off, he couldn’t find her anymore.
    • Concrete suggestions:
      • Prominently display when location tracking is enabled, and with whom.
      • Periodically prompt users to review who has access to their location data, so they find out if someone has configured sharing without their knowledge. Make that information easy to find.
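
To make the logging suggestion above concrete, here’s a minimal sketch of what an activity record for a shared smart-home account could look like. Everything here (DeviceEvent, AuditLog, and so on) is my own invention for illustration, not any real product’s API:

```typescript
// Hypothetical audit log for a shared smart-home account. The goal is a
// plain-language history of who did what, and when, that a non-technical
// household member can read.

interface DeviceEvent {
  device: string;    // e.g. "front-door lock"
  action: string;    // e.g. "changed the entry code"
  actor: string;     // which account member performed the action
  timestamp: Date;
}

class AuditLog {
  private events: DeviceEvent[] = [];

  record(event: DeviceEvent): void {
    this.events.push(event);
  }

  // Most recent events first, rendered as readable sentences rather than
  // raw log lines.
  describeRecent(limit = 10): string[] {
    return this.events
      .slice(-limit)
      .reverse()
      .map(
        (e) =>
          `${e.actor} ${e.action} on "${e.device}" at ${e.timestamp.toLocaleString()}`
      );
  }
}

// Usage: every device action is recorded against the member who made it,
// so "he changed the lock code and insisted he hadn't" leaves evidence.
const log = new AuditLog();
log.record({
  device: "front-door lock",
  action: "changed the entry code",
  actor: "Account member A",
  timestamp: new Date(),
});
console.log(log.describeRecent().join("\n"));
```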

The entire talk was moving, upsetting, and powerful. One thing I hadn’t considered is that in abusive relationships, abusers very commonly have access to their victims’ passcodes, and can therefore exert control that’s difficult to manage or design away from a software point of view. Banks and other financial companies instead have to look for patterns that tend to map to abuse, similar to how they detect fraud.

Additionally, her format was so well-done! She used real case studies to frame examples of design that can be exploited by abusers, and interspersed those anecdotes with statistics about the prevalence of domestic violence. Best of all, she anticipated the audience’s emotional state after one of the more upsetting examples and took a breather to look at pictures of her cute dog.

Advocating for Salary Transparency

Speaker: @nancy_hawa

This talk was fascinating to me: beforehand I didn’t think transparent salaries were always a good idea, and now I’m, if not convinced, at least genuinely ambivalent. Her definition of transparent salaries (named, exact numbers, not aggregates or salary bands) and her account of how she convinced her company to adopt them were a great demonstration of persistence.

Making Good Trouble (without being fired)

Speaker: @duretti

This was a hilarious talk about how to implement change at work even when it’s risky to do so. The talk was good (

Key points:

  • Don’t do glue work unless you’re already considered highly technical: 🤔 not sure how I feel about this one, but maybe I’m just being too optimistic
  • Decide what’s more important to you: the credit or the result. You can get a lot done if you let someone more powerful do it and take the credit instead
  • Choose your battles wisely and address the real workplace you’re in, not the ideal one you want it to be

) but the most important part was when she described her parents by saying:

“Their love language is critique”

because I was laughing so hard I cried a little.

Building Software with Trust and Safety

Speaker: @_gallexi

Like the domestic violence talk, this one focused on how to keep people safe, but from the perspective of community moderation at GitHub. Lexi split the talk into three parts:

  • Protecting against behavior that GitHub identifies as toxic, aka abuse vectors (“like a security vulnerability but social”). She explained that any time you take in UGC (user-generated content), store it, and show that content to another user, you have a vector for abuse. Examples are comments and pull requests (obviously), but also the repos on users’ profile pages: GitHub displays repos you collaborate on right on your profile page, and malicious actors invited random users to projects with abusive names, forcing them to display those repos on their profiles. The solution was to introduce the ability to decline collaboration invitations, display what information collaborators will have access to, and provide easy ways to report abuse.
  • Discouraging behaviors that aren’t abusive, but that community maintainers identify as not conducive (this can vary from community to community), like temporary interaction limits that impose cooldowns on heated conversations (sketched after this list)
  • Encouraging behaviors that are conducive: community profiles, participation guidelines for new users, and issue templates for different types of issues
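
As a rough illustration of how a temporary interaction limit could work, here’s a small sketch. The shape of the API and all of the names are my guesses for the purpose of illustration, not GitHub’s actual implementation:

```typescript
// Hypothetical cooldown for a heated discussion thread: for a fixed window,
// only certain roles may comment.

type Role = "maintainer" | "contributor" | "new-user";

interface InteractionLimit {
  expiresAt: Date;
  allowedRoles: Set<Role>; // roles still permitted to comment
}

const limits = new Map<string, InteractionLimit>(); // keyed by thread id

// A maintainer imposes a cooldown: for the next `hours` hours, only the
// listed roles may comment on the thread.
function imposeLimit(threadId: string, hours: number, allowedRoles: Role[]): void {
  limits.set(threadId, {
    expiresAt: new Date(Date.now() + hours * 60 * 60 * 1000),
    allowedRoles: new Set(allowedRoles),
  });
}

function mayComment(threadId: string, role: Role): boolean {
  const limit = limits.get(threadId);
  if (!limit || limit.expiresAt < new Date()) return true; // no active limit
  return limit.allowedRoles.has(role);
}

// Usage: lock a heated thread to maintainers for 24 hours.
imposeLimit("issue-42", 24, ["maintainer"]);
console.log(mayComment("issue-42", "new-user"));   // false
console.log(mayComment("issue-42", "maintainer")); // true
```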

Key quote:

Any vector for abuse will be abused. If users don’t feel safe, they’ll leave your platform and go to a competitor. It makes business sense to protect these users.

As someone who loves GitHub and has been really happy to see a bunch of new features (ex: pointing to the best comment in long issue threads), I was delighted to see that they’re putting so much effort into moderation and protection.