First point: legislators have no business telling universities how they should be run, particularly when it comes to exclusively academic matters. I might concede some degree of oversight with regard to economic matters (which is why Boards of Governors exist), but academic matters are best decided by those who actually know something about academics: namely, academics. (Shocker!) I find it particularly interesting, incidentally, that the so-called "conservative" party, which decries legislative "interference" in the affairs of businesses, will gladly condone legislative interference in academic matters.
Second point: the academy must allow a certain number of cranks and crackpots. As long as there's no outright academic dishonesty going on (such as was, ultimately, the case with Ward Churchill), there cannot be any limitations on which views are permissible. This is for several reasons. First, anyone who has survived a PhD program has earned the right to be heard -- that's the point, after all: doctoral programs serve a gatekeeping function to keep genuine nuts out of serious academic discourse.
Furthermore, there is no possibility of foreknowledge as to which avenues of academic research will be fruitful, or, indeed, in which ways they could be fruitful. There seems to be a nice fantasy floating around in some people's minds to the effect that research monies can be "directed" towards "useful" research with a significant degree of confidence that something worthwhile will result. (Frankly, I wish it were that easy. It'd make my role a lot easier to fulfill.) Hence, so this line of reasoning goes, if we lack this confidence, then we should not fund the research. This is nuts. I've never met a researcher in any field who thought like this. At best, researchers think that the avenue they are pursuing may be interesting -- at worst, that it may keep them busy. Practical benefit and clear worth are the exception, not the rule. (So why, then, even have universities? Because sometimes the results are doozies.)
So, then, given that we can't know in advance what will and will not be a fruitful avenue of research, we should not limit what avenues can be researched (except in the minimal way of requiring an advanced degree -- i.e., if you want to participate in the discussion, you have to put some work in to show you deserve to be taken seriously). And, given that legislators don't know enough about academia to interfere in academic matters, legislators should not be trying to compel universities to fire controversial academics. That, really, is (my sense of) the nutshell case explaining why academics get really angry when legislators try to tell them what to do. It implies knowledge that doesn't exist, and expertise that isn't there.
The usual rejoinder at this point is that "dangerous" views should be barred from the classroom. I tend to think that this rejoinder is itself a dangerous view, the first step on the slippery slope to full-blown fascist censorship, but let's put that aside. Suppose we know a view is dangerous (which we can't). Suppose legislators do know enough to interfere in academic matters (which they don't). Should dangerous views then be censored? The answer is a simple "no". Even if I add in the assumption that there are some clear and uncontroversial criteria about what counts as a "dangerous" view, the answer is still "no".
I'm always surprised that the reason for this is unclear to some people: namely, that the best way to defuse dangerous ideas is to expose them to vigorous critique. Driving a dangerous idea "underground" is one of the surest ways I can see to turn it into a dogma and breed a fanatic loyalty to the idea. Take the case under discussion in the NYT article I linked to: the 9-11 conspiracy theories. There is a thriving underground discussion on whether 9-11 actually happened the way the 9-11 Commission claimed it did. (I'm willing to bet, incidentally, that the Commission was either a whitewash or was deliberately misled by the Bush administration. It would be simply unbelievable that this was the one time the Bush administration did something honest and sincere.) If this discussion is not exposed to the light, and consequently critiqued and scrutinized, there is a danger (although, I will accept, a small one) that the view will become inculcated in a small minority of the population. That is, there will be people who sincerely believe the most bizarre and unsupported claims about one of the worst terrorist attacks in my lifetime. (Just as there are people who sincerely believe the most bizarre and unsupported claims about one of the most vile acts of genocide in modern history, namely the Holocaust.)
That, on the face of it, is a bad outcome; worse, it is a bad outcome that comes about when a good outcome was sought; and, worst of all, the bad outcome is exactly what the good outcome was trying to avoid! That is, by trying to censor dangerous views (and prevent people from believing them), people end up believing the dangerous views. The only sure way I know to prevent people from believing things that it is actually dangerous for them to believe is by (1) repeatedly demonstrating that the views are wrong and/or ill-founded and (2) equipping people with the tools necessary to critically evaluate claims themselves. Both require exposure to dangerous views: the views cannot be refuted if they are not adequately understood, and critical evaluation skills cannot be fully developed if they have no targets. Hence, finally, we arrive at John Stuart Mill's claim (probably the only thing I really agree with Mill about):
The peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error.