Digging Democracy
Lately, my fondness for Pierre Levy's Collective Intelligence ideas has kicked in again. This is partly due to some writing we did a few weeks ago about OPENSTUDIO and partly because a lot of recent goings-on seem to reflect those ideas. The most recent reflection comes by way of yesterday's Digg corruption scandal, which left me wondering about the trustworthiness of a collective trust.
For those who don't “digg”, here's a very, very brief description of what took place yesterday. For a more detailed account, you should definitely consult a more authoritative source. Digg is a technology news site (similar to Slashdot) that uses shared social bookmarking in place of editors. As people discover articles, they can “digg” them, indicating to other members (and friends) that they personally recommend the article. When an article collects enough diggs in a given period of time, it is promoted to the front page. The front page is, as you might expect, the place where a lot of people turn for the technology news of the day. The thought is that with the democratization of the editorial process comes the elimination of individual bias and the execution of a collective editorial will. [1]
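For illustration only, the promotion rule boils down to something like the sketch below. The threshold and time window are numbers I made up; as far as I know, Digg has never published the real ones.

```python
from datetime import datetime, timedelta

# Invented values; Digg's actual promotion algorithm isn't public.
PROMOTION_THRESHOLD = 50                 # diggs needed to reach the front page
PROMOTION_WINDOW = timedelta(hours=24)   # counted within this span of time

def should_promote(digg_timestamps, now=None):
    """Promote a story once enough diggs have arrived within the window."""
    now = now or datetime.utcnow()
    recent = [t for t in digg_timestamps if now - t <= PROMOTION_WINDOW]
    return len(recent) >= PROMOTION_THRESHOLD
```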
This idea of using a statistical approach built on a lot of human decisions to solve problems that have proved troublesome for artificial intelligence systems is one I find really appealing. But a few days ago, as evidence surfaced showing that some of the players in the democratic process were just a little too well orchestrated in their voting on Digg, I began to wonder what techniques are available to verify that the results produced truly reflect the collective will. In essence, is there a reliable way to detect this sort of mass collusion?
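I don't have an answer, but the crude approach that comes to mind is to look for accounts that turn up together on far more stories than chance would allow. A toy sketch, with an entirely arbitrary cutoff of five shared stories:

```python
from collections import Counter
from itertools import combinations

def suspicious_pairs(story_voters, min_shared=5):
    """
    story_voters maps a story id to the set of users who dugg it.
    Returns pairs of users who co-voted on at least `min_shared` stories,
    a crude (and noisy) signal of possible coordination.
    """
    pair_counts = Counter()
    for voters in story_voters.values():
        for pair in combinations(sorted(voters), 2):
            pair_counts[pair] += 1
    return [(pair, n) for pair, n in pair_counts.items() if n >= min_shared]
```

Obviously this blows up on stories with thousands of diggs, and it can't tell friends with shared tastes from an organized voting bloc, but it's the kind of signal only Digg itself is really in a position to compute.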
The original discovery that something was amiss at Digg seems to have been entirely providential: Macgyver over at forevergeek noticed an unlikely pattern of diggs on a couple of articles that were quickly promoted to the front page. After that, the plot thickened as articles describing the problems he had encountered were mysteriously removed, the submitter's account was suspended, and all subsequent submissions with links to forevergeek were refused. To make matters even worse, it seems that one of the founders of Digg is involved in some way. As is pointed out in Macgyver's original article, Kevin Rose appears in the list of eerily synchronized voters.
It is pretty clear that Digg has some issues that are undermining the democratic ideals upon which the site was supposedly founded. And it's a little disheartening to me that this type of abuse would have been almost completely undetectable had the “Digg Army” been made to vote in a more random fashion.
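To put some shape on that worry: what gives away the current behavior is its burstiness, a bloc of accounts all digging a story within a minute or two of each other. A check like the one below (thresholds invented) would flag it, but spread the same votes over a few hours in a shuffled order and it sails right through.

```python
def looks_like_a_burst(digg_times, bloc_size=15, max_span_seconds=120):
    """
    digg_times is a sorted list of Unix timestamps for one story's diggs.
    Flag the story if any `bloc_size` consecutive diggs landed within
    `max_span_seconds` of one another. Both thresholds are made up.
    """
    for i in range(len(digg_times) - bloc_size + 1):
        if digg_times[i + bloc_size - 1] - digg_times[i] <= max_span_seconds:
            return True
    return False
```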
For me, this drags out two questions that have plagued all practical democracy. First, how does one detect the sort of artificial cooperation described above? In the online world, it generally takes the form of scripts or bots that act according to the directives of others. In government elections, we see it in the form of large deceased-voter turnouts and other crafty identity tricks. Second, one can never truly account for the role of the infrastructure. In this case, it could be that Digg insiders have set up special provisions for themselves. These mechanisms often emerge as part of the debugging process to test out more realistic scenarios, and they're often hard to part with even in the production phase. [2]

We had some similar situations on OPENSTUDIO. One occurred when I used my inside knowledge of the system to create pieces with type at a time when the only publicly available tool couldn't. A lot of people assumed that I was playing on the same field and were amazed at how precisely I had duplicated the characters from certain popular typefaces. I felt pretty bad later, even though no one really complained. The truth is, I had created the pieces to prove that the shaky document format we were using could actually handle text down the road. I never considered the continued existence of those pieces after we went live. Essentially, it was an unintended side effect of some debugging.

The point is that as one of the system creators, it is hard not to disrupt the community in some way, even when it isn't intended. And as outsiders, we have little choice but to accept the system on blind faith. Transparency seems to help in both cases, but I wonder how much of that is predicated on our own faith in the panopticon effect. That is, we just assume that nobody will try to pull something while they're being watched. The truth, however, may very well be that most people assume that only the honest open up their systems. I've heard, for instance, that most drug traffickers consent to a search when asked by law enforcement. Refusal, I guess, is tantamount to confession in their minds. How many times do we actually probe into the inner workings of transparent systems? [3]
[1] There is another open question of whether this results in a stream of news where opposing views are cancelled out, leaving only a drab middle viewpoint. For now, we'll just assume that the collective editorial will works in everyone's favor. And to be fair, Digg also leverages personal social relationships, so the opportunity exists for communities to carry out their own collective editorial wills with a particular bias or perspective.
[2] To be clear: I am speculating; I have no idea if this is what happened.
[3] Though I am pointing out issues with transparent/open systems, I honestly believe they are ultimately far more reliable than closed systems. While I do think we tend to rely on the panopticon effect, there still exists the opportunity for review. Closed systems are entirely faith-based.