
Seeing the Model Rather Than the World

I was reading Dani Rodrik’s thoughts on “the sorry state of macroeconomics” and it reminded me of something Greg Mankiw wrote back in February listing a series of propositions on which economists generally agree. One of the items on his list was this:

“Cash payments increase the welfare of recipients to a greater degree than do transfers-in-kind of equal cash value.”

Those of us who majored in philosophy will immediately recognize that this proposition is meaningless without some consensus on what “welfare” is. And there’s definitely not going to be consensus on that. But I believe the way consensus is generated on this point is by economists agreeing to stipulate that “welfare” means “preference-satisfaction,” so that giving money to a nicotine addict with a vitamin C deficiency would do more to increase his welfare than giving him an orange would. With the money, he can buy cigarettes, and he wants a smoke! Perhaps you believe this is a good way of characterizing the concept of “welfare” (perhaps you think toddlers should be allowed to play unsupervised near the edge of steep cliffs, who knows), but nothing of interest here hinges on economics.

Once upon a time, a woman who lived on my block and had an EBT card suggested that she should buy some groceries for me and I should give her cash in exchange, presumably at a discount. There’s an interesting question in economics to be studied about what the cash value of an EBT dollar actually is, and the answer could, in turn, usefully inform our decisions about in-kind versus cash aid. But when the definition of welfare that’s best suited to making rational-agent models look clean leads, by definition, to the conclusion that cash benefits are superior, that’s not a very interesting conclusion at all.


This has, I’ll grant you, only a tenuous relationship to Rodrik’s post. But I think it points to a general problem: resolving questions that are too difficult (“what is the nature of human well-being?”) by first adopting a model that rules the question out by brute force, and then reading the model back onto the world, forgetting that you deliberately decided to leave an important question out in order to make the model work.