The Strata + Hadoop World Conference ran last week at the Javits Center in New York. I was fortunate enough to be sent there by Gawker.
Wednesday was the first day of keynotes. It was a sticky, humid day thanks to Hurricane Joaquin. I was thankful that the new, sparsely used Hudson Yards station meant I got to ride in air-conditioned comfort most of the way to Javits.
I walked in during the middle of the keynotes. I can’t say that I was particularly taken with any of them.
Session 1: Law, Ethics, and Open Data
I started by sitting in on a talk by Evan Selinger of RIT and Jules Polonetsky from the Future of Privacy Forum. We at Gawker have been talking about changes that may impact our data policies recently, so I was anxious to hear what they had to say.
The session, which was conducted without slides or visuals of any kind, was effectively a review of the current state of privacy law in the US, with a discussion of consumer boards tacked on at the end. I thought it was excellent.
- The US has no general consumer privacy law: privacy regulation is made up of a patchwork of industry-specific regulations (HIPAA, RFPA, COPPA, FERPA, etc).
- The speakers talked about a number of recent, famous privacy stories that had been in the news: “The Target Story” (as it’s universally known), the MIT gaydar study, OkCupid’s Christian Rudder using data to quantify racial preference, and Facebook manipulating what posts people see in order to measure the effect on mood. In particular they spoke about the involvement of the Cornell Institutional Review Board (IRB) with the Facebook study, and concerns around ‘IRB laundering’ by large organizations. This led into a discussion of consumer subject review boards, though it seemed like the idea isn’t fully hashed out yet.
- One point that the speakers made that I found especially interesting was that there may be a moral imperative to do product testing. With medication, we would think it’s crazy to expose people to a product on a large scale unless it has already been tested. We all need to test our products to see what the effects are.
My thought on this is that there is the issue of consent: human trials require informed consent, with potentially drastic consequences if it is not obtained. The FDA does have programs to allow for experimentation when it is impossible to obtain consent, though the regulations that I read specify that “participation in the research holds out the prospect of direct benefit to the subjects.”
I think that’s a reasonable standard. So in this case, manipulating FB feeds (without consent) with the intent to make people happier is OK, while doing so in order to make people less happy is not. Of course, who the subject of an experiment is isn’t always clear on a platform: one of the speakers mentioned that he never saw a friend’s posts about her breast cancer because people did not ‘like’ them enough for them to be promoted into his newsfeed, though he wanted that information.
- The speakers were realistic about privacy laws: namely, we aren’t getting major new privacy laws in this country anytime soon. However, most of us are working on platforms run by the big companies, and they (Apple/Google/FB/Microsoft) are restricting targeting and will make privacy decisions to protect their users for us.
- Also, the speakers told a story that they said is “famous in privacy-land”: during Justice Bork’s confirmation, a reporter looked into his video rental history and reported on it. This spooked Congress, and shortly afterward it passed the Video Privacy Protection Act. Direct displays of poor privacy to Congress are a great way to get regulated!
To see my next post on the conference, go here.