An editorial, "We'll Have to Check, Sir," in this morning's Times criticizes the apparent difficulties in purging incorrect information from the government's terrorist "watch list." It's not, at first glance, at all counter-intuitive that organizations put less energy in deleting data than they put into collecting and storing it. Further thought, though, and I think this is a point that's implicit in the editorial, suggests that databases that contain errors may be more costly than you would think. A catalog company, for example, that has stale addresses in its database wastes money when it sends catalogs to nowhere. We might guess that there's an equilibrium somewhere -- the cost of identifying and purging bad addresses stands in some relation to the cost of producing and sending an advertisement to a landfill. The problem comes when the costs are invisible or externalized or when a powerful entity operates on a "better safe than sorry" logic or has the "luxury" of imputing an infinite price for false negatives (e.g., since the cost of letting a terrorist get away is infinite, any effort wasted by a false positive is worth it).
But one wonders whether the Homeland Security policy would stand up to a thorough cost-benefit analysis. Unless the department has access to infinite resources, every time it detains a Ted Kennedy, or some other unfortunately named individual, it is diverting resources from its more important duties.
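The same arithmetic can be sketched for the watch list. The numbers below are entirely hypothetical; the point is the structure of the argument: once the cost of a false negative is treated as infinite, no amount of waste on false positives can ever tip the scale.

    # Hypothetical expected-cost comparison for a screening policy.
    # All figures are invented for illustration.

    COST_FALSE_POSITIVE = 1_000   # agent hours, delayed travelers, etc.
    DETENTIONS_PER_YEAR = 50_000  # false positives under a broad list

    def net_cost(cost_false_negative, p_attack_averted=1e-6):
        """Waste from false positives minus expected harm averted."""
        waste = COST_FALSE_POSITIVE * DETENTIONS_PER_YEAR
        benefit = cost_false_negative * p_attack_averted
        return waste - benefit

    # An infinite false-negative cost makes any policy look "worth it";
    # a large but finite estimate can flip the conclusion.
    print(net_cost(cost_false_negative=float("inf")))    # -inf: "worth it"
    print(net_cost(cost_false_negative=10_000_000_000))  # 49,990,000: waste wins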
But all this brings up a more general sociology-of-information question: how should we think about misinformation? I don't mean the act of spreading misinformation. Rather, how should we think about recorded information that is false? This includes incorrect information recorded in databases as well as information in circulation that is either false or has been stripped of context in a way that renders it uninformative. It looks like information, but it is not.
One approach is to think like a statistician or electrical engineer and describe it as error or noise. That's helpful, but we probably need to distinguish random noise -- high-entropy, "signal-less" gibberish -- from non-random, incorrect signals born of lies, mistakes, or unrecorded change (as when I move and don't notify everyone who has me in their database).
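One way to see why that distinction matters: only the first kind of error is detectable from the data alone. A minimal sketch in Python (the address is made up):

    import random
    from collections import Counter
    from math import log2

    def entropy(symbols):
        """Shannon entropy, in bits per symbol, of a sequence."""
        counts = Counter(symbols)
        n = len(symbols)
        return -sum(c / n * log2(c / n) for c in counts.values())

    random.seed(0)
    # Random noise: high entropy, carries no signal at all.
    noise = [random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(10_000)]

    # A stale record: lower entropy, structured, confidently repeated --
    # it looks exactly like information; it just happens to be wrong.
    stale = list("123 oak street, springfield" * 400)

    print(f"noise entropy:  {entropy(noise):.2f} bits/symbol")   # ~4.70
    print(f"record entropy: {entropy(stale):.2f} bits/symbol")   # lower

    # A wrong address is statistically indistinguishable from a right
    # one, so no filter applied to the data itself will flag it.

The random noise announces itself statistically; the stale address is, by any measure internal to the database, indistinguishable from a correct one. Catching it requires information from outside the record.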
How are the data miners thinking about this? What will we learn about the sociology of information by trying to port concepts from EE? More to come....