Stephen Downes’ new column on e-Learn does a great job of showing that solving the informational cascade problem is more challenging than I had presented it to be in my own article on the topic. In fact, his own analysis reveals that the problem may be harder to solve than even he himself suggests. The problem, as is often the case, is largely due to those pesky externalities, i.e., reality stubbornly refuses to conform to an elegant theory (and alas, Stephen is even more of a sucker for an elegant theory than I am).

Stephen writes:
Though Feldstein’s solution would certainly solve the cascade problem, it does so at the cost of adding substantial overhead. “Informational cascades can be prevented but generally only with deliberate and specific intervention,” he writes. But the cost of such intervention impairs the functioning of the network. For example, Feldstein suggests the employment of “active moderators who have the authority to direct the group’s information-sharing activities.” People would be, for example, stepped through a polling process such that they would decide simultaneously whether to adopt Plan A or Plan B, thus ensuring that no person is influenced by the choice of another.
The problem of coordination this raises is staggering. Suppose four people are ready to choose a plan but the fifth is not. Are the first four retarded in their progress, or is a hasty decision forced on the fifth? Moreover, it is not even clear that communications between the people can be managed in such a way; what prevents their use of backchannels (such as telephone calls or after-hours meetings) to circumvent the limitations imposed in the communications network? Further still, some activities are inherently serial. How could we conduct an ongoing activity such as stock-market purchases were all transactions required to be conducted at the same time?
Dead on. Even with small groups, the administrative overhead that is entailed by the solution I suggest is high and, in some cases, prohibitively so. And the problem multiplies dramatically as we scale up. In a class, where the instructors control the problems that the students must solve, we can also control the costs of the administrative overhead to a substantial degree. But this is much less true in a work environment. If you have ever participated in a real-world “coordinated” roll-out of an organization-wide initiative then you probably were nodding your head vigorously when reading the quote above.
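To make the cascade dynamic concrete, here is a toy simulation in the spirit of the classic sequential-choice models (my own illustrative sketch; the 70% signal accuracy and the simple majority rule are arbitrary assumptions, not anything from Stephen’s column or my article):

```python
import random

def run_cascade(n_agents=100, signal_accuracy=0.7, seed=None):
    """Sequential choice between Plan A (truly better) and Plan B.

    Each agent receives a private signal that points to the better
    plan with probability `signal_accuracy`, sees every earlier
    public choice, and goes with the majority of (public choices +
    own signal), trusting the signal on ties.
    Returns the fraction of agents who picked the wrong plan.
    """
    rng = random.Random(seed)
    choices = []  # public history: True = Plan A (correct), False = Plan B
    for _ in range(n_agents):
        signal = rng.random() < signal_accuracy  # True = signal says "A"
        votes_a = sum(choices) + signal
        votes_b = (len(choices) - sum(choices)) + (not signal)
        if votes_a != votes_b:
            choices.append(votes_a > votes_b)
        else:
            choices.append(signal)  # tie: follow your own signal
    return choices.count(False) / n_agents

# Fraction of 1,000 runs in which 90%+ of the group chose wrongly.
wrong = [run_cascade(seed=s) for s in range(1000)]
bad_cascades = sum(1 for w in wrong if w > 0.9) / len(wrong)
```

Even though every individual’s private information points to the right plan 70% of the time, a substantial minority of runs lock the entire group onto the wrong plan after just a couple of early choices. That is the whole pathology in miniature: the first two public choices can drown out everyone else’s private knowledge.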
Unfortunately, Stephen’s own solution requires that people act as perfect (or near-perfect) network nodes:
If you have no friends, your choices will not be influenced by your friends. But if you have one friend then your friend will have a disproportionate influence on you (the centralized authority model). If you have 100 friends, however, the influence of one friend is once again reduced to the point where that one opinion, by itself, is unlikely to sway your decision.
But research strongly suggests that people simply do not grow circles of friends/influence that are large enough to reach anywhere near 100 on many of the issues that matter most to them. (I used to know the numbers for various kinds of human network affiliations but I can’t seem to remember them or find them again. If anybody knows these numbers, please add a comment.) I may know 100 people, but I don’t pay attention to what they all say when picking a political candidate. (Network researchers account for this kind of limitation in their models with a massive fudge factor that they call “connection quality.”)
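Stephen’s dilution point can actually be made precise with a back-of-the-envelope calculation. If you simply follow the majority of an odd-sized circle of friends, one particular friend’s opinion decides your choice only when everyone else is split exactly evenly, and that pivot probability collapses as the circle grows (a toy sketch that assumes independent 50/50 opinions, which real friends’ opinions certainly are not):

```python
from math import comb

def pivot_probability(n_friends, p=0.5):
    """Chance that one particular friend's opinion decides your
    choice, assuming you follow the majority of an odd number of
    friends whose opinions are independent coin flips weighted p.
    That friend is pivotal exactly when the other n-1 friends
    split evenly between Plan A and Plan B."""
    k = (n_friends - 1) // 2
    return comb(n_friends - 1, k) * p**k * (1 - p)**(n_friends - 1 - k)

print(pivot_probability(1))    # 1.0  -- one friend, total influence
print(pivot_probability(3))    # 0.5
print(pivot_probability(101))  # ~0.08 -- one voice among a hundred
```

With one friend the pivot probability is 1.0; with 101 friends it drops to about 0.08. But that math only helps if people actually maintain and consult 100 independent opinions, which is exactly what’s in doubt.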
Furthermore, Stephen’s solution of increasing the number of network connections can be hobbled by the same realities that he pointed out would cause problems for my solution:
To return to the practical example set out by Feldstein, let’s look at the case of various managers opting for Plan A or Plan B. In the example, where there is a small number of managers, the problem isn’t simply that one manager is being influenced by the other, the problem is that the influence of the one has a disproportionate influence on the other. But instead of cutting off communication with the other manager (Feldstein’s solution), a more robust response would be to increase the number of managers with whom the first interacts. Thus, when one manager opts for Plan A, it will not automatically cause the other manager to opt for Plan A; the other managers’ inertia (or varied choices) counsels caution, and this allows for the influence of local knowledge to be felt.
In order to increase the number of managers with whom the decision-maker interacts before making the decision, you need to first wait until enough managers have opinions to put into your bias-reducing pool. This is exactly part of the solution I suggested which, as Stephen correctly points out, entails significant overhead. The seriality problem he raises is really just a symptom of a network model that is static rather than dynamic, and I don’t think Stephen has articulated a model that is any less static than mine. (Besides, what if you only have five managers?)
Stephen then raises another problem that he also fails to put to rest: power laws. He writes,
When we look at phenomena like the Kerry nomination, we see that the structure of the communication network that conveyed voter intentions was more like the manager model and less like a densely connected network. Voters did not typically obtain information from each other; they obtained information from centralized sources, such as broadcast agencies. These broadcasters, themselves sharply limited in the number of sources of information they could receive (and receiving it mostly from each other) were very quick to exhibit cascade properties, and when transmitted to the population at large, exhibited a disproportionate influence. Were the broadcasters removed from the picture, however, and were voters made aware of each others’ intentions directly, through bilateral rather than mediated communications, the influence of any one voice on the eventual vote would be minimized.
While I’d quibble and ask for some empirical validation of Stephen’s contention that “voters did not typically obtain information from each other,” in general, Stephen is right on once again. Information hub formation is inherent in scale-free networks. Hubs happen.
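“Hubs happen” is easy to demonstrate. Here is a minimal preferential-attachment sketch in the style of the Barabási-Albert model (the node count and the one-link-per-newcomer rule are simplifying assumptions of mine, not anything from Stephen’s column):

```python
import random

def preferential_attachment(n_nodes=2000, seed=42):
    """Toy Barabasi-Albert-style growth: each newcomer links to one
    existing node chosen with probability proportional to its degree.
    Returns the final degree of every node."""
    rng = random.Random(seed)
    targets = [0, 1]   # node i appears here degree(i) times
    degree = [1, 1]    # start from a single edge between nodes 0 and 1
    for new_node in range(2, n_nodes):
        chosen = rng.choice(targets)  # degree-proportional pick
        degree.append(1)
        degree[chosen] += 1
        targets.extend([new_node, chosen])
    return degree

deg = preferential_attachment()
# Typical node keeps one or two links; a few accumulate far more.
median_degree = sorted(deg)[len(deg) // 2]
hub_degree = max(deg)
```

Most nodes end up with one or two links while a handful accumulate a large share of the connections. That heavy-tailed degree distribution is precisely what makes broadcast-style hubs, and cascades through them, so hard to avoid.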
Stephen’s proposed solution to the power law problem is specialized RSS feeds rather than generic news hubs:
In my view, this will remain the case so long as access to content on the web is organized by Web site authors. Because of this, it remains difficult to find content on a particular topic, and readers will gravitate to a few sites that tend to cover topics in which they are interested rather than expend the time and effort to find items more precisely matching their interests. By drawing content from a wide variety of sites and organizing these contents into customized content feeds, the range of sites made available to a reader is much greater, decreasing the power law and reducing the probability of cascade phenomena. The shift from Web sites to blogs was, in effect, this sort of transition; the development of specialized RSS feeds will be a significant move in this direction.
This amounts to mass customization, and it’s not a new thing. The trend toward greater consumer choice among many diverse and specialized offerings has been visible in periodicals and cable TV for a long time now. And as Cass Sunstein points out in Republic.com, there’s good reason to believe that most people will choose to use their new ability to create customized collections from a diverse pool of information sources to reduce the diversity of opinions they are exposed to rather than increase it. We tend to want to create the informational equivalent of gated communities, only letting in the perspectives that we want to hear.
So the net result of Stephen’s analysis (in my opinion) is that the problem looks even more serious and less tractable than it did at the end of my own article. He pokes some holes that he can’t quite manage to sew closed again.
Pandora’s box is open.
Stephen Downes says
I spent an hour typing a 6446 character response. On submission, I got a notice saying that comments have a maximum length of 5000 characters – something it would have been useful to know beforehand. On going back, my 6446 characters had, of course, disappeared from the comments area.
The short version: we are not currently in the connected environment I describe. Indeed, I actually said that we get information from concentrated sources. So observations about the current state do not refute my theory.
I am not committed to a static start sort of model as you describe in your counterexample. Discussions, decisions – these are made in the context of a prior set of interactions on similar phenomena. You don’t ‘wait for people to express opinions’ – you base your decisions on the interactions that have happened so far.
My model presupposes large networks. It is therefore unreasonable to expect it to work in a network of five people. A five-person network is in trouble. Such a company should encourage collaboration, networking activities, anything to increase connections, because otherwise a destructive local cascade is inevitable.
The presumption in Sunstein’s remarks is that people currently get information from varied sources, but that this would cease if people selected their own content. I reject both sides of that. Information today is not varied. And the popularity of blog indices shows that people opt for varied sources.
My model does not “amount to” mass customization. The aggregation is the key, not the content selection: bringing 300 voices together and putting them into a single feed. The idea is to make many voices easier to handle, to reduce the search time. Customization makes the quantity easier to deal with. But it doesn’t reduce diversity.
Sorry I’m being terse…
Michael Feldstein says
Hey Stephen,
First of all, I’m very sorry about the character limit thing. As you know, my blog is relatively new. This is the first time the problem has come up. I have upped the limit to 40,000. Regarding the silent failure, I’m not sure why that happened. I just tested the problem in my own browser (Safari on Mac) and it simply stopped me from typing further at the character limit. If you let me know what browser and platform you are using, I’ll report the bug to pMachine. At any rate, thanks for sticking with it and re-submitting your comment.
On substance, I think it worth pointing out that you and I are starting with very different core use cases. I was primarily concerned with small groups (e.g., online classes and company task forces) and extrapolating up while you appear to be focused on the large communities that the Internet affords and extrapolating down. Each of our solutions tends to work better for the core use case and starts to fray around the edges as we get closer to…well…the edge cases. As you pointed out, the administration model I proposed becomes cost-prohibitive as you scale up. This is why, for example, Fishkin’s deliberative polls have never been applied on a really wide scale and probably never will.

Equally, though, saying that companies with five-person networks should encourage growth of connections, while true, isn’t pragmatic in many real-world situations. In my example for the article, the group of managers could have easily been regional managers for a company. There are plenty of companies that only have four or five regions in their geographical divisions. It’s a hard limit. And, while they could (and usually do) bring in outside perspectives to supplement, only those five people are going to have both the perspective and the skin in the game to make a proper evaluation. (Remember, one of the critical characteristics that drive efficient markets is that you really, *really* care about the outcome. It’s the best way to get humans to act as efficient network nodes.)

A five-person group is a task force, a working group, a class, a board of directors…. Five-person decision-making groups happen all the time. You can say that you will encourage activities that will break up cascades, but the only way to ensure that, lacking a sufficiently large network, is through some kind of a process. Which is what I proposed.
As for the Sunstein argument and your emphasis on aggregation…please take what I’m about to say in the most complimentary way possible (which is how I intend it), but Stephen…you’re not normal. One of the things that I admire about you is that (as OLDaily demonstrates) you seem to be able to read, digest, and write about maybe 3 times as many articles in a day as the best of the rest of us can manage in a typical week. You are a stunningly voracious and omnivorous consumer of information.
Most people couldn’t and wouldn’t read a feed with 300 voices. It’s possible that I have over 300 feeds in my RSS reader, but most days I don’t scan even half of the headlines that I get. And I’m not normal either; I’m pretty far out on the leading tail of the bell curve. As are the people who read the blog indices. “Popularity” is a relative term. So while I agree with you that today’s information sources are not varied (or, at least, not varied in the right way), I disagree that the people in the fat middle of the bell curve are going to tend to vary their information sources given the choice. The empirical evidence we have so far suggests the contrary.
I wish we were all the information athletes that you appear to be, but most of us aren’t. And lacking the ability and inclination to absorb massive quantities of diverse data points in the time available, most people logically (if instinctively) fall back on their trust networks. These networks are inevitably scale-free networks (as Malcolm Gladwell has shown us); power laws (and therefore informational cascades) are baked in.
Piers says
Hi Michael,
Interesting stuff (and good article btw). The numbers you’re after are (I think) the Dunbar numbers (12, 50-150, and 150+).
On preventing cascades, I suspect you’re probably right about them being intractable. One thing that occurred to me was that while we may not be able to prevent them, we might be able to stick on a band-aid of sorts.
Bystander apathy can, I think, be seen as a negative information cascade, in that there is a decision but it’s one to do nothing. And research has shown that the bystander effect can be mitigated by a) educating people about the effect, and b) explaining what they need to do to avoid adding to the effect. (There’s more here.) Perhaps the same could be done for other types of cascade?
At the beginning of your elearning piece you point to the vision of emergent utopia. Again, I think you’re right – it’s very much there – but the notion that people can be silly, lazy, selfish or worse doesn’t seem to sit well with it. Perhaps some human education needs to go hand in hand with these new decision tools? 🙂