Tuesday, June 26, 2007

Where I've gone.

I'm neither dead nor dying, I swear. But my dissertation proposal draft nears completion, and I really need to stay away from blogging until I finish that thing. (But after that....)

Also, the university has been fucking around with my funding. First it was supposed to start being paid on May 25th, then June 25th, and now it's this Thursday... we'll see what happens on Thursday.... Just imagine not having been paid regularly since the end of April and you'll see why this is a source of concern.

Finally, there's the whole Chris Benoit thing. If you haven't heard, surf over to Google News and search on the name. I don't normally care one way or the other about people I've never met (note my total lack of posting anything about Richard Rorty's recent death), but I respected Benoit for his ambition, his work ethic -- generally, his character. Now, I know that good people sometimes do bad things, but this was, by all accounts, a really good person, who did a really bad thing. It's disturbing, to say the least.

Wednesday, June 20, 2007

Bioplastics.

I haven't heard much in the ongoing climate change brawls about plastic, a pervasive material that we make, primarily, out of petrochemicals. Then today, lo and behold, I find this:
Carbon dioxide. Orange peels. Chicken feathers. Olive oil. Potato peels. E. coli bacteria. It is as if chemists have gone Dumpster diving in their hunt to make biodegradable, sustainable and renewable plastics. Most bioplastics are made from plants like corn, soy, sugar cane and switch grass, but scientists have recently turned to trash in an effort to make so-called green polymers, essentially plastics from garbage.


Cool.

Tuesday, June 12, 2007

Healthcare. A long post.

Another healthcare post, partially inspired by the last comment over here. I'll start by introducing the principles I consider important in the distribution of healthcare resources, then discuss how they might be realized in policy.

The essential healthcare problem is a matter of fairly deciding who loses. Healthcare resources (including not only money and physicians, but support from allied professions, physical resources (e.g., ORs, organs), and so on) are scarce -- sufficiently scarce that there isn't enough to go around. So, when deciding between various methods of distributing resources we have to view the decision as a matter of deciding who will not be treated, rather than who will; as who will continue to live in pain, rather than who won't; as who will die, rather than who will live.

Immediately, then, this rules out full-competition, private healthcare as exists (largely) in the United States. It is grossly unfair to decide that certain people will fail to be treated, will live in pain and will die because they lack the ability to purchase the services they need. The only argument to the contrary that is worth taking seriously is the libertarian argument. The libertarian position holds that resources can be disposed of as one wishes as long as the acquisition was just; so, if one has justly acquired the resources used to purchase healthcare resources, then the acquisition of the healthcare resources is also just.

A just acquisition, for most libertarians, is an acquisition made (at possibly some remove) with resources one originally received for one's labour. So, the chain would be: labour, receive pay, use pay to acquire goods ... use goods or pay to acquire medical resources. But the problems with the libertarian argument are legion. In this context, the most important is the labour-pay connection. Sellers of labour are very often on unequal footing with purchasers: the latter have tremendous power and leisure in their negotiations, while the former simply do not. Sooner or later, the seller of labour will have to accept what he is offered or die; the purchaser has no such limitation, insofar as he has already purchased much of others' labour and is capable of sustaining himself on the resources he currently controls. (The latter is a not-unreasonable assumption in the contemporary labour market.) At this point, a libertarian might reply that no interference in negative liberty is ever justified, except to correct a prior violation. That this is false follows from enriching the concept of liberty in a way most libertarians refuse: with positive liberty, the expansion of one's powers and abilities, and thus of one's liberty in ways of living. The contrast is not sharp, admittedly, but it amounts to something like this: a proponent of negative liberty considers a person free if he is not in chains, while the proponent of positive liberty considers a person free if he can achieve his goals.

Seen from this perspective, the injustice of the libertarian position is clear. Sellers of labour are not free, on the positive construal, because their achievement of their goals is blocked by the actions of another; sellers of labour are not in chains, but their options are curtailed by the demands of a fully free market. So, the libertarian argument cannot justify a free-market free-for-all when it comes to healthcare resources, because it cannot justify the system of labour and pay that operates underneath the healthcare market.

If not ability to pay, though, then what should be the principles by which we decide how to allocate healthcare resources? At a minimum, I would suggest, we should look to both consequentialist and deontological considerations: that is, we should look to the outcome of a given use of resources, and we should look to the need for a given use of resources. The former matters because of concerns regarding waste; when resources are as scarce as they are in the healthcare sector, we should be careful to use them as effectively as possible. The latter matters because of concerns regarding the respect and dignity of persons; even though resources are scarce, that does not justify cavalierly or callously dismissing particular patients from consideration simply because their outcomes would be worse than those of others. So, whatever system we adopt to allocate healthcare resources, it must somehow weigh outcome and need.

This may not seem to rule out ability to pay completely, though. One could suggest that ability to pay serve as a tie-breaker: when confronted with two patients who are equal in terms of outcome and need, ability to pay could serve as a morally-neutral way to resolve the tie between them. Of course, this just assumes that ability to pay is morally neutral; as argued above, though, given the problems with the current system of labour and pay, it is extremely doubtful that this is the case.

Moving down from the level of principle to the level of policy, there are two distinct questions. The first is how we distribute the pay for healthcare resources to those who provide the resources. After all, even granting that the labour market in general is unjust, as argued above, it doesn't follow that the healthcare-related labour market should also be unjust -- the opposite, in fact. The second is how to decide who gets to access these healthcare resources.

With regard to the first, the options seem to be either a free labour market or a regulated one. Clearly, a free labour market would simply institute the problems with free labour markets demonstrated above: sellers of labour would be under constant threat of violations of their positive liberty. So, what form should the regulation take? Free distribution of relevant information (e.g., physicians' complication records), of course; one of the causes of the liberty problem above is labour-sellers' lack of information regarding the purchaser. Minimum wages and other floors on the pay for providers of healthcare services also seem reasonable. The standard argument against price-fixing is that it leads either to surpluses (in the case of a price floor) or shortages (in the case of a price ceiling); however, there is already a shortage of trained healthcare service providers, so a price floor, which would tend to create a surplus, is actually a good thing. (FWIW, I have no ideas on how to solve the organ shortages, short of Max Headroom style murder-for-parts.) Whether or not there should be ceilings -- with the possibility that this sustains or worsens the current shortage -- relates to the second question, however: that is, how to decide who gets access to healthcare resources.
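
To make the surplus point concrete, here's a minimal sketch with invented linear supply and demand curves -- every number is a placeholder, purely for illustration, not an estimate of any real labour market:

# Toy linear labour market for healthcare providers; the curves and
# numbers are invented for illustration, not estimates of anything real.

def quantity_supplied(wage):
    # More providers train and work as the wage rises.
    return 100 + 2 * wage

def quantity_demanded(wage):
    # Employers hire fewer providers as the wage rises.
    return 400 - 3 * wage

# Market-clearing wage: 100 + 2w = 400 - 3w, so w = 60.
assert quantity_supplied(60) == quantity_demanded(60)

# A floor above the market-clearing wage: quantity supplied now exceeds
# quantity demanded -- a surplus, which is welcome given an existing
# shortage of trained providers.
floor = 80
print(quantity_supplied(floor) - quantity_demanded(floor))  # 260 - 160 = 100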

The pay for healthcare provision is, at least in part, borne by the consumers of healthcare resources. It follows from this that the prices for healthcare resources could serve as an access barrier; given the above claim that it is unfair to decide who loses healthcare resources on the basis of ability to pay (because ability to pay is determined by an unjust system), it would be unfair for prices to serve as a barrier to access. However, both the systems I will consider could adjust for pricing problems without putting a price ceiling in place.

Once the free-for-all of free-market healthcare is off the table, the remaining two options are public insurance and public health savings accounts (HSAs). Under a public insurance scheme, the public purse pays whatever the cost of the healthcare resources is. Under a public HSA scheme, the public purse contributes to a savings account, which is then drawn from in order to pay for healthcare resources. I envision the latter as a system whereby some fixed amount of money is reserved for every citizen under the government's jurisdiction, drawn from general tax revenues. Fixing the amount would prevent the wealthy from bulking up vast HSAs and consuming far more than their share of healthcare resources; if contributions tracked wealth, the middle-class and poor would exhaust their HSAs long before the wealthy did. The amount should probably be determined based on the average cost of the healthcare resources the average consumer needs (not uses; otherwise there's an incentive to overuse).

The former has good reason to put a price ceiling in place: since healthcare consumers never have any real idea how much their use of the healthcare system costs, and since healthcare service providers won't be losing customers due to their prices, a price ceiling is the only way to prevent the system from being drained dry. The latter, however, has no good reason to do so: since consumers would control their own HSAs, they could shop around for the healthcare providers they consider to offer reasonable prices for the service in question. One could also, if one chose, use non-HSA money to cover the difference between what the HSA money will cover and what one wants to purchase; since the HSA amount is determined by the average cost of average need, this does not necessarily deprive anyone of healthcare resources without justification. The possibility of physicians rushing into elite practices and avoiding the middle-class and poor is a real one, I admit, but I suspect that tax disincentives could be used to steer physicians toward providing healthcare resources to the masses. (Something like: if you only performed six surgeries last year, but each cost $1 million, your tax rate increases by 50%. If you're already in a tax bracket of, say, 30%, that would increase the rate to 45%, which would constitute a big kick in the pocketbook.) Given that there is a price floor in place, the other concern -- that back-alley, disreputable physicians would undercut competitors' prices and offer incredibly bad service to the vulnerable -- is eliminated: if a consumer must pay the same amount to the back-alley doctor as to a doctor providing better service, then why not go to the one with better service? (The incentive on the physician to provide better service would have to come from the profession; but that's an issue for another day.)

Finally, HSAs have the advantage of encouraging rationing of routine medical procedures, as once the account is empty, it will remain empty until topped up again -- which, if the money comes from income taxes, will only happen once a year.
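
A minimal sketch of how such a disincentive might be computed, using only the hypothetical numbers from the parenthetical above (the cut-offs for what counts as an "elite" practice are likewise made up):

# Toy version of the tax disincentive above; the thresholds and the 50%
# penalty multiplier are the hypothetical numbers from the example, not
# a proposal for actual rates.

def adjusted_tax_rate(base_rate_percent, procedures_per_year, avg_fee,
                      min_procedures=50, max_fee=100_000):
    # min_procedures and max_fee are invented cut-offs for an "elite"
    # practice: very few, very expensive procedures.
    if procedures_per_year < min_procedures and avg_fee > max_fee:
        return base_rate_percent * 1.5  # rate increases by 50%
    return base_rate_percent

# Six $1-million surgeries in a year, starting from a 30% bracket:
print(adjusted_tax_rate(30, 6, 1_000_000))  # 45.0 -- the kick in the pocketbook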

Overall, public HSAs seem to do a good job of balancing outcome and need, as long as the profession is doing its job and the tax system works to push healthcare providers to serve more than just the rich. Where HSAs clearly fail, though, is in catastrophic cases and in expensive, routine procedures. The amount saved in an HSA may not cover the costs of a significant healthcare problem, and it seems unjust to punish those who are greatly in need. Similarly, it seems unjust to punish those who need expensive, but routine, procedures by forcing them to pay for the care out of pocket. The latter is fairly easy to solve: on a case-by-case basis, a given person could petition to have their HSA amount increased; if the problem affects a whole group (for example, women), then the adjustment could be built into the HSA system from the beginning. The former is also fairly easy to solve: catastrophic cases are where an insurance scheme really shines, after all. So, the public insurance scheme would still exist, but as a back-up system to fund, on a case-by-case basis, necessary treatment that exceeded the amount contained in the HSA.
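
Putting the pieces together, here's a minimal sketch of the scheme as described -- fixed allotment, group-level adjustments, and the insurance backstop -- with every figure a placeholder:

# Toy model of the scheme: a fixed yearly allotment per citizen, with
# public insurance backstopping whatever the account can't cover.  All
# figures are invented placeholders.

AVERAGE_NEED_COST = 2_000  # allotment: average cost of average need

class PublicHSA:
    def __init__(self, group_adjustment=0):
        # group_adjustment: a built-in top-up for groups with predictably
        # higher routine costs (the across-a-group case above).
        self.balance = AVERAGE_NEED_COST + group_adjustment

    def pay(self, cost):
        from_hsa = min(self.balance, cost)
        self.balance -= from_hsa
        # Anything the HSA can't cover falls, case by case, to the
        # public insurance backstop.
        return cost - from_hsa  # amount billed to public insurance

account = PublicHSA()
print(account.pay(500))     # 0 -- routine care, covered by the HSA
print(account.pay(40_000))  # 38500 -- catastrophic, mostly backstopped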

I think that covers most of the basic principled and policy ground. Any thoughts, omissions or critical remarks are welcome.

Wednesday, June 06, 2007

The self.

This is scientism gone completely over the edge. The basic story is about psychological theories holding that personality (by which, I think, they mean "whatever makes you you" -- so, the self) requires narrative. The problem is the slavish reporting of frankly speculative theorizing as undisputed fact. What particularly sticks in my craw is the leap from the claim that some people, when pressed, will tell stories about their lives to the claim that everyone's self necessarily has a narrative structure. The problem is pretty clear: the former is a matter of conforming to a questioner's expectations, while the latter is not.

It's at least possible that, as Galen Strawson has argued on several occasions (The Self, The Self and the SESMET (.pdf)), narrative selves are only part of the story. There are also episodic selves, those of us (and I do mean "us", including myself) who don't see ourselves as living some kind of a story, with connected events and prevailing characters, but as a collection of largely-disconnected episodes. It's my theory -- I can't support it, so I should probably just call it a "hypothesis" -- that my lousy senses of direction and time are related to this. I have almost no awareness of how the things that happen in my life connect together, so it would be quite surprising if I had anything more than a vague idea of how much time is passing or what directions I need to go in order to reach a destination. I have to say I'm not quite as bad as Strawson, though; taking his word as sincere and accurate, he seems to be almost completely disconnected from his younger self.

Globe and Mail embarrasses itself. Also: water is wet.

Gah! What in the nine hells is this?! Jesus. I knew some of the US papers were embarrassing themselves vis-a-vis the new creation "museum" opened down thar, but the stupidity demonstrated by the G&M comes pretty close. Some choice quotes (WARNING: Not to be read while eating):
Debunking evolution in dinosaur land
...
"Evolution is a faith, so is creation. We were not there, they were not there. Which faith fits the facts?"
...
The theory of creation science is not as widely accepted in Canada as it is in the United States.
...
it's too bad that the creation museum and the Tyrrell museum couldn't work together. "The Tyrrell is good place, but it has its timelines all wrong," she said. "The world can't be billions of years old."
Of course, I can't forget the token two sentences at the end from a paleontologist (read: someone who actually knows what he's talking about). Because, after all, it's not really lazy hack journalism if you give the "other side" some minimal chance to make "their" case.

Bias in drug trials.

According to this, drug trials financed by pharmaceutical companies tend to produce the results the sponsors want. In particular, in trials of one statin drug vs. another, any given study produced results favourable to the manufacturer that sponsored the study. Gasp, etc. I can think of a good half-dozen explanations for this that aren't particularly conspiracy theory; but, just for kicks, I went to find the paper that tried to establish this: here.

According to the "methods and findings" section, almost all the trials examined contained methodological weaknesses and errors -- which would lead one to expect some non-experimental factor contaminating the data. In trials that were better -- with adequate blinding, for example -- it was less likely that the conclusion would be favourable to the sponsor's drug.

So, overall, this isn't a big lesson about keeping pharmaceutical companies from funding research or what have you. Instead, it's a lesson about doing the trials properly. (What's more worrying to me, honestly, is that bad studies ever got published. So much for peer review.) The paper itself suggests one such explanation of its results:
Some people have suggested that drug companies may deliberately choose lower dosages for the comparison drug when they carry out “head-to-head” trials; this tactic is likely to result in the company's product doing better in the trial.
In other words, if the trial design is deliberately botched, favourable results can be obtained.

Accepting science.

This is something I recommend with caution. It's an interesting discussion of the "naive physics" and "naive psychology" which we gain in childhood and which clash with scientific conclusions, and of how/why we may come to accept or reject scientific claims.

The caution is the rampant scientism in the piece: the claim, implicitly made, that whenever the former clashes with the latter, the latter is always right. A striking claim is this:
Dualism is mistaken — mental life emerges from physical processes.
I'll give the authors the benefit of the doubt and assume they don't realize what "emerges" means, metaphysically speaking; but, that said, the nature of the connection between the mental and the physical is still largely up for grabs. To my understanding, the dominant idea in philosophy of mind these days is functionalism, the view that mental states are, in some way, constituted by lower-level physical processes. But this is a staunchly dualistic view -- as I've presented it, a dualism of kinds of states, but it could be cashed out as a dualism of properties. You have to be a pretty hard identity theorist (a reductionist, really) or an eliminativist to think that any kind of dualism is wrong.

So, the mere fact that some scientists think dualism (whatever they mean by that) is false doesn't really come to anything, unless they have good reasons for the claim. Since the reasons aren't given in this piece, it's hard to say one way or the other whether they exist. Given the general collapse of identity theories in the past couple of decades, though, and the rise of functionalism, I doubt that they do.

The punchline to the discussion is this:
This resistance [to scientific claims] will persist through adulthood if the scientific claims are contested within a society, and will be especially strong if there is a non-scientific alternative that is rooted in common sense and championed by people who are taken as reliable and trustworthy.
That seems basically right: when there's controversy about a claim, there will likely be more people who don't accept it than if the claim is taken as obvious; similarly, when trustworthy sources dispute a claim, people will tend to side with them; and people tend to side with their own "common sense". What's hard to see, though, is why any of this should always be a bad thing. Again, there's this presumption that when science says it, it has to be right. Science very often is right, but not always, and not about everything; so, when there's pushback against a scientific claim, it's important to do more than try to debunk or dismiss the resistance via causal mechanisms, and examine the reasons against the claim.

That the authors don't see this is revealed here:
The community of scientists has a legitimate claim to trustworthiness that other social institutions, such as religions and political movements, lack. The structure of scientific inquiry involves procedures, such as experiments and open debate, that are strikingly successful at revealing truths about the world. All other things being equal, a rational person is wise to defer to a geologist about the age of the earth rather than to a priest or to a politician. ... [O]ne way to combat resistance to science is to persuade children and adults that the institute of science is, for the most part, worthy of trust.
So, when dealing with an issue about which a geologist is genuinely an expert, and priests and politicians are not, we should defer to the geologist. True, and a little obvious. But what about issues geologists aren't expert in? (Evolutionary theory, for example.) What about issues no kind of scientist is an expert in? (Literary criticism, say.) Who do we go to, then?

That, I think, is at the heart of the ongoing "evolution wars": the sense that biologists are speaking about something they actually don't have expertise in. It's a false charge -- biologists, when they're being biologists, aren't really saying anything about the value of human life or the nature of the soul or any of that, and thus are, indeed, speaking on issues they are expert in -- but it's an easy one to make stick. I suspect that's because of the difficulty of seeing what actually follows from a particular scientific claim, and what is only claimed to follow from it. Continuing the previous example: there's nothing in contemporary biology which proves the soul does not exist. You're perfectly free to go on believing in the soul while accepting everything contemporary biology tells you (i.e., although you may not have reason to believe in the soul, you don't, just from biology, have reason not to). There are those, both biologists and creationists, who claim otherwise -- that accepting biology requires rejecting the existence of the soul (and a bunch of other stuff, depending on the particular biologist or creationist in question).

This is why the "just trust us" solution is no solution at all. Careening from one sometimes-untrustworthy source to another is no way to sort out which claims one can rely upon as truths.

Open access.

I've blogged about this once before, but Michael Geist is at it again. Here he gives two proposals for enhancing "open access" to research, one sensible.

The sensible suggestion is that "raw, scientific data currently under [government] control" should be available freely. That makes a certain amount of sense to me. When public agencies generate data -- say (his example), Natural Resources Canada creating topographic maps -- then there's no good argument that I can see for not making the data publicly available.

Here's the other suggestion: "Ottawa must pressure the three federal research granting institutions to build open access requirements into their research mandates." It's not totally clear what he means by this. It could mean that NSERC, SSHRC and CIHR should encourage (whatever "encourage" means in this context) grant recipients to publish in open-access journals. This would create an annoying hurdle for those, like me, who aren't big fans of across-the-board open access, but there are already enough hurdles in the grant process that one more won't make a significant difference. So, on the whole, that would be a rather benign circumstance.

More dangerously, though, it could also mean that grant recipients would be required to publish in open-access journals. There are two problems with that as a policy. First, cutting off access to traditional journals and the like, at this stage of the game, shoots the value of the publication to hell. Indeed, why bother applying for a grant when its eventual product will have to be published in some journal no-one's heard of and no-one reads? Keep in mind that the point of publication, at least in the early stages of an academic career, is tenure, and low-profile publications matter less than high-profile ones.

Second, it misconstrues the relationship between researchers and grant-giving agencies. Researchers don't work for the grant agencies. If I'm hired as a researcher by some private institution, then there's a prima facie argument to be made that the work I produce is work-for-hire for that employer. But even if I'm such a private researcher, I don't owe the grant agency my work, as they don't employ me. If I'm at a university, then there's even less of an argument for someone other than me owning my work. So, research publications are a form of created work which is not under the control of the grant agency. Which means it is, initially, up to the creator(s) how they are distributed, and even if they are distributed at all. (I note with interest that there is no discussion in Geist's short post of CCA grants for artistic works. Is the creative control of an artist over his work more important than that of a researcher over his?)

Furthermore, the point of giving grants to researchers is not to make them into employees of the state, it's to encourage them to continue to research, and give them some level of autonomy in doing so (e.g., profs get a slight break from teaching, private researchers may be able to take a leave of absence, etc. and all can concentrate on their preferred work). This would be completely undercut by requiring grant recipients to publish in ways dictated by the grant agency. Indeed, again, why bother applying for a grant if you'll just end up working for someone else?

Finally, if the grant agencies try to dictate where research must be published, is there anything to stop them from dictating which research must be performed? I don't see why the descent down the slippery slope wouldn't happen.

Politics and law.

I saw this a little bit ago, and had to comment. The basic complaint is an old one, familiar to both left and right, that courts should not be "politicized". The short version of what I have to say is that it's pure bullshit. There's no hard line between law and politics (nor between law and morality).

The long version works like this. The standard story in philosophy of law, from H.L.A. Hart, is that law is a system of two broad kinds of rules. Primary rules are social standards (implicit or explicit) which prohibit, permit and require certain behaviours. "Don't sleep in the park" is a primary rule, as is "hold the door if someone's following behind you". A system solely of primary rules has some problems, though: you have no mechanism for determining precisely when a rule has been violated; you have no mechanism for changing or identifying rules; and you have no mechanism for determining when someone's punishment for breaking a rule was or was not sufficient. In short, it's a pretty vague and fuzzy system.

So, you add on a second level of rules: rules about the primary rules, which we may thus call secondary rules. Secondary rules tell you how to recognize a primary rule and how to enforce and change primary rules. A system of primary and secondary rules is a system of law.

There's a problem, though, when you try to pry this system of primary and secondary rules off from politics or morality, namely that there's no non-question-begging, non-arbitrary, non-ad hoc way to distinguish primary rules of law from rules of morality or rules of politics. For the sake of introducing some terms, you can't distinguish primary rules from principles (what is good/right or bad/wrong) or policies (instantiating what is good/right or bad/wrong in social context) or procedures (how to enact the policies). The problem with reducing law to rules is that politics and morality work on the basis of rules, too. Unless we want to introduce some strange hypothesis about how these rules differ in kind -- and I don't see what that would look like -- then it has to follow that morality, politics and law exist in the same "space". (I could probably throw etiquette in there, too.)

Now, this may be an objection to that model of law. But if it is, then what is the law supposed to look like such that moral and political commitments play no role in it? The only option I can see is a purely procedural one: law is just a matter of following strict rules. I'd call this the "chess model of law", as the rules of chess seem to work like this. That is, they're strict: the rules lay out exactly what a pawn can and can't do, and what a knight can and can't do, etc., and what constitutes a win, and what constitutes a loss, etc., and how to set up the board, etc, etc. Once we get a complete set of the rules of chess, we can, by following them precisely, play chess without ever having to introduce any non-chess-related considerations into the game. So, if law is a system of strict rules, it has to work like chess: once we know all the rules, we can just follow them precisely, play the "game" of law, and never have to inject any outside considerations.

This should be pretty obviously weird; here's why it's also wrong. Common law doesn't work like this. Common law is judge-made law (what the courts are supposed to be creating, at least in the Anglo-American tradition) and is contained in judicial decisions. But, as anyone who's had the (mis)fortune to read some judicial decisions knows, you can't just read off the common law from the decisions: you have to interpret them, "read between the lines", and the like. And what guides this process? There are really two options: the other source of law (statutes) or something extra-legal, like politics or morality (which would take us outside the strict-rule understanding of law). (I'm omitting custom as a source of law, as it seems to be of progressively less importance in contemporary legal systems. But the argument could easily be extended.)

So, let's look at statutes. Statutes are pretty direct, unlike judicial decisions: they actually say what the law is. The problem, though, is that the language they use is incapable of being as precise as it would have to be in order to constitute a set of strict rules. Strict rules have, as far as I can see, at least two necessary features: the concepts they use have clear necessary and sufficient conditions as their definitions, and the connections they draw between the concepts are expressible truth-functionally.

So, turning back to chess, we can define a pawn in terms of necessary and sufficient conditions: "a piece that occupies the second rank from the perspective of each player at the beginning of the game". That completely defines what a pawn is, but not what it does. So, we could introduce a rule defining a pawn's movement: "if there is a piece directly in front of a pawn, a pawn may not move; in all other cases, on the first move, a pawn may move two squares forward, on any other move, a pawn may move one square forward". These definitions would require augmentation with definitions of "rank" and "square" and what constituted "directly in front of", but these, I think, could also be provided.
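
To show how mechanical such strict rules are, here's the movement rule rendered as code -- a sketch of only the simplified rule stated above:

# The simplified pawn-movement rule above, rendered as a strict rule.
# Captures, promotion, en passant, etc. are among the augmentations the
# full rules of chess would supply; and I read "may move two squares"
# as also permitting the usual single step on the first move.

def legal_pawn_moves(rank, has_moved, square_ahead_occupied):
    # rank: the pawn's current rank, from its owner's perspective.
    if square_ahead_occupied:
        return []                    # "a pawn may not move"
    if not has_moved:
        return [rank + 1, rank + 2]  # first move: one or two squares forward
    return [rank + 1]                # any other move: one square forward

print(legal_pawn_moves(2, False, False))  # [3, 4]
print(legal_pawn_moves(5, True, False))   # [6]
print(legal_pawn_moves(5, True, True))    # []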

The question is whether we could ever do that for statutory law. Let's take an easy example: "No dogs allowed." For this to be a strict rule, we would need a set of necessary and sufficient conditions for the concepts deployed, and we would have to be able to express the rule in formal truth-functional terms. So, we don't have to define what "no" means, as it can be represented as a truth-functional negation of "dogs allowed". This means we need necessary and sufficient conditions for "dogs" and "allowed". Good luck with that: even "dog" is incredibly difficult to capture in terms of necessary and sufficient conditions. Do dogs have to have four legs? Must they have fur? Do they need to have tails? If we took a dog and swapped 1% of its DNA with DNA of a cat, would it still be a dog? How about 5%? 20%? What about chihuahuas, anyway -- aren't they just rats that make funny noises? And so on, and so forth.
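
To see the failure concretely, here's what an attempt at the strict-rule version might look like; the conditions inside is_dog are stand-ins, and the point is precisely that no such list survives the questions above:

# "No dogs allowed" as a strict rule: allowed(x) is just the
# truth-functional negation of dog(x).  The trouble is is_dog: the
# conditions below are placeholders, and any such list of necessary
# and sufficient conditions invites counterexamples.

def is_dog(animal):
    return (animal["legs"] == 4     # three-legged dogs?
            and animal["has_fur"]   # hairless breeds?
            and animal["has_tail"]  # docked tails?
            and animal["species"] == "Canis familiaris")  # 1% swapped DNA?

def allowed(animal):
    return not is_dog(animal)

tripod = {"legs": 3, "has_fur": True, "has_tail": True,
          "species": "Canis familiaris"}
print(allowed(tripod))  # True -- the strict rule waves an obvious dog through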

If we can't get the kind of definitions of concepts we need for a simple rule like "no dogs allowed", then it's highly unlikely we can get the definitions we need for more complex rules like, oh, say, the tax code. Which means that the strict rule understanding of law is wrong: we have to go extra-legal in order to understand what the law is trying to tell us to do. The best and most obvious source for extra-legal material that would be relevant is -- surprise! -- one of politics, morality or etiquette. That is, the other sources of social rules.

Excluding politics from this list would just be arbitrary (especially given the close ties between politics and morality). So, on the whole, it looks like politics must be connected with law.

Hilariously bad ideas about immigration.

This is too funny. The UK government is apparently manufacturing a big crisis of non-Britishness amongst its immigrant population. (When your best evidence of a problem with immigrants is the WAR ON TERRAH !!! and the BNP, then you're pretty fucking clueless.)

To solve this non-problem, they want (I swear, I couldn't make this up if I tried): a "citizenship pack" distributed to UK teenagers when they turn 18, including information on "democracy, volunteering and civic duties such as jury service"; a requirement that immigrants "demonstrate good behaviour and a willingness to integrate" in order to become British citizens; and a "Britain Day".

Actually, the middle one of those three may be a snark from the reporter. Lower in the article, there's mention of a points system whereby immigrants would earn points for time spent in the country, bringing in investment, following laws, etc., and lose points for criminality. There's also a good suggestion about local governments providing better access to English-language training and employment.

The citizenship pack, though, is clearly stupid. How many 18-year-olds will bother to read, let alone understand, any information in a "citizenship pack"? Particularly when the information covers such a broad spectrum of issues, most of which probably won't be pressing when a given teen receives the pack. Wouldn't it make more sense to inform new voters about how voting works when there's, say, an election actually going on? And to inform teens about jury duty when they're actually called? That is, when the information might actually be useful and, hence, actually absorbed?

Britain Day is even stupider. The component countries of the UK -- Scotland, Northern Ireland, England and Wales -- already have their own national days (St. Andrew's Day, St. Patrick's Day, St. George's Day, and St. David's Day, respectively). Granted, you probably won't get the day off for them, but is that really crucial? What do Canadians do on Canada Day (or Americans on Independence Day) that really requires time off from work? Further, the connection between a national holiday and furthering some sense of "community" or "nationhood" or what have you is extremely suspect. How does giving people time to avoid society at large help to integrate that society?