Communal Blocklist - How should it work?

So, I was inspired by a Quinn Norton post (and various other harassment incidents on the web recently) to build an open-source, subscribe-able, communal blocklist (or mute list) that anyone can deploy.

The best technical approach for Twitter abuse I’ve heard of is Ella Saitta’s suggestion of subscribe-able block lists, which could be very like normal lists, but which would make sure you never see the sort of people your friends block. Trolls could shout all they wanted; no one of relevance would be able to see them. It would be a way to never manage to wander into the wrong neighborhood.

Quinn Norton - Context Collapse, Architecture, and Plows

But how should it work? I have two models in mind, and I know which I prefer, but I am looking for feedback. The two models are the follower model, and the master list model.

1. the follower model

The gist of the follower model is this: if user A subscribes to user B’s blocklist, every ‘troll’ user B blocks, user A will automatically block as well. Participation is invite only.

Pros:
Simple - almost no additional behaviors beyond current Twitter block mechanisms.
Automatic - a 'block’ self-propagates across a community.

Cons:
Unpredictability - the chain of follower-ship can grow long and lead to unexpected places. You may know the person you're following, but do you know who they're following?
Circularity - if person A follows person B, and vice versa, then anyone who's on their blocklist will never be able to get off of it. (This can probably be solved with clever programming.)
Lack of Nuance - if person B blocks an account for personal reasons (i.e. not due to harassment or bad behavior), there's no way of marking that distinction.
Filter-bubbly - see: http://en.wikipedia.org/wiki/Filter_bubble

Nuances

Blocking could be delayed, asking users to confirm whether they really want to block a particular account that their "followee" blocked. That's inelegant, though, and creates notification spam.
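To make the circularity point concrete: the propagation in the follower model is just a graph traversal, and a visited set is the "clever programming" that keeps a circular subscription (A follows B, B follows A) from looping forever. This is a minimal sketch with in-memory data structures; none of these names come from the actual repo.

```python
from collections import deque

def propagate_block(blocker, troll, subscribers_of):
    """Follower-model propagation: when `blocker` blocks `troll`,
    every transitive subscriber blocks `troll` too. The visited set
    guarantees termination even when subscriptions form a cycle.
    All names here are illustrative, not from the linked repo."""
    visited = {blocker}
    queue = deque([blocker])
    blocked_by = {blocker: {troll}}
    while queue:
        user = queue.popleft()
        for sub in subscribers_of.get(user, []):
            if sub not in visited:
                visited.add(sub)
                blocked_by.setdefault(sub, set()).add(troll)
                queue.append(sub)
    return blocked_by

# A subscribes to B's list, and B subscribes to A's (the circular case):
subs = {"B": ["A"], "A": ["B"]}
result = propagate_block("B", "troll1", subs)
# Both A and B end up blocking troll1, and the loop terminates.
```

Note this only fixes the infinite-loop half of the circularity problem; once the block has propagated around the cycle, nothing here un-propagates it.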

2. the master list model

The gist of a master list model is: a master list is created for a community, administrated by one or more authorities that have write access. Anyone can subscribe to the list, but only the admins can add to it.

Pros:
Curated - Given that the admins are the ones you trust, the list is assembled intentionally.
Automatic-ish - For subscribers, it is still automatic.

Cons:
Requires Maintenance - it takes work for the admins to keep the list up to date
Authoritarian - why should you trust the maintainers of the list? Who keeps them accountable?
Still Filter-bubbly - =(

Nuances

One way to lower the maintenance required is to have opt-out blocking for administrators. i.e. If admin A blocks troll B on Twitter, admin A gets a notification that says, “Unless you respond, troll B will be added to the blocklist in X minutes.”
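That opt-out flow for admins can be sketched as a pending queue with a deadline per account. This is an illustrative in-memory sketch (the grace period, function names, and data structures are all assumptions, not the repo's actual design):

```python
import time

GRACE_PERIOD = 15 * 60  # "X minutes" from above; 15 is an arbitrary example

pending = {}      # account -> deadline (unix timestamp)
blocklist = set()  # the shared, public list subscribers pull from

def admin_blocked(account, now=None):
    """Called when an admin blocks someone on Twitter: queue the
    account, to be promoted to the shared list unless vetoed."""
    now = now if now is not None else time.time()
    pending[account] = now + GRACE_PERIOD

def admin_veto(account):
    """The admin responded to the notification: keep the block personal."""
    pending.pop(account, None)

def flush_pending(now=None):
    """Promote any queued accounts whose grace period has elapsed."""
    now = now if now is not None else time.time()
    for account, deadline in list(pending.items()):
        if now >= deadline:
            blocklist.add(account)
            del pending[account]
```

So `admin_blocked("troll", now=0)` followed by `flush_pending(now=GRACE_PERIOD + 1)` lands "troll" on the shared list, while an `admin_veto("troll")` in between keeps it off.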

Another way to lower the maintenance required is to make “bootstrapping” easy, e.g. a one-time import of an admin’s existing blocklist.

It's debatable whether the admins' identities should be public by default. On the one hand, keeping them secret smacks of authoritarianism; on the other hand, publicizing an admin's identity invites harassment.

More thoughts

A couple features I think this will need regardless of which model this takes.

  1. One's following list always takes precedence. No one you already follow gets blocked automatically.
  2. The 'troll’ list is always public.
  3. The software is open source, and new instances can be easily and independently deployed.
  4. Opt-outs - the subscriber should always be able to disregard/override the blocking of an account. This opt-out list should be easy to push to and pull from.
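Features 1 and 4 combine into a simple rule for what a subscriber actually ends up blocking: everything pulled in via subscription, minus follows, minus explicit opt-outs. A sketch (names are illustrative):

```python
def effective_blocks(subscribed_blocks, following, opt_outs):
    """Compute what actually gets blocked for one subscriber:
    start from all accounts pulled in via subscription, then
    exempt anyone the user already follows (feature 1) and
    anyone they explicitly opted out of (feature 4)."""
    return set(subscribed_blocks) - set(following) - set(opt_outs)

blocks = effective_blocks(
    subscribed_blocks={"troll1", "troll2", "friend"},
    following={"friend"},
    opt_outs={"troll2"},
)
# → {"troll1"}
```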

I am leaning towards the master list model. It's a bit more work for the admins, but the overall system effects are more manageable, and it creates a more predictable experience.

Thoughts?

p.s. code is here: https://github.com/tonyhschu/communal-blocklist but it only has Twitter OAuth so far.