Monday 14 February 2011

Rational Choice - full section.

My book Understanding Public Policy: Theories and Issues (Basingstoke: Palgrave) has a section entitled 'The Main Debates between Rational Choice Advocates and Critics'. For the sake of space, I shortened this section and referred the reader here for a fuller discussion. Well, here it is (see the book for the references) ...

A complication with rational choice defences is that they are often stated too provocatively by their advocates. For example, the ‘instrumentalist-empiricist’ (MacDonald, 2003: 553) claim that models should be assessed not by the realism of their assumptions but by the accuracy of their predictions is ripe for abuse and misinterpretation. Friedman (1953: 14-15) states that the assumptions of hypotheses must be ‘descriptively false’ (because they assume, for the purposes of the inquiry, that other causal factors are unimportant) when what he means is that they are ‘good approximations’ whose worth should be determined by ‘one test’: whether the theory ‘yields sufficiently accurate predictions’. The aim of rational choice theory is not only to produce predictions that are consistent with the evidence but also to explain why: to argue that things happen because people act in the way described (Laver, 1997: 5). Therefore, much extraneous debate could be avoided if we stuck to describing assumptions as parsimonious rather than unrealistic; as an extraction of one essence of individual behaviour (Tsebelis, 1990: 32 uses the term ‘subset’; compare with Hindess, 1988: 113). As Ward (2002: 69) argues, models are ‘simplified representations of reality constructed with a view to improving our understanding’. Similarly, one aim of modelling is to explore the consequences of behaviour under particular conditions that may not have occurred but could occur in the real world (Laver, 1997: 4-5). This is not a claim that the assumptions of a model are deliberately unrealistic in the way we would commonly understand the phrase.
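
To make the point concrete, here is a minimal sketch (in Python; the two-candidate setup and the numbers are my own illustrative assumptions, not taken from the book) of a bare-bones Downsian spatial model. Its assumptions are ‘descriptively false’ in Friedman’s sense, since voters are reduced to nothing but positions on a single left-right scale, yet it extracts one essence of electoral competition and yields a testable prediction: a candidate standing at the median voter’s position defeats a candidate standing anywhere else.

import random

random.seed(1)
# Voters reduced to ideal points on a single left-right scale: a
# 'descriptively false' but parsimonious assumption.
voters = [random.uniform(0, 1) for _ in range(1001)]
median = sorted(voters)[len(voters) // 2]

def vote_share(a, b):
    # Share of voters strictly closer to candidate position a than to b.
    return sum(abs(v - a) < abs(v - b) for v in voters) / len(voters)

# The model's prediction: the median position beats any rival position.
for rival in [0.1, 0.3, 0.7, 0.9]:
    print(f"median vs {rival}: median wins {vote_share(median, rival):.0%}")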

Relatedly, Shepsle and Bonchek’s (1997: 16-17) argument that we do not need to know why individuals hold particular preferences (instead, we entertain ‘hunches and intuitions about that person’s motives’) requires qualification. The argument is not that such things are unimportant, but that their explanation is problematic when we analyse the ‘behaviour of large numbers of people’ (Elster, 1986: 16). There are two main solutions. The first is practical: set up a division of labour, with some models treating preferences as exogenous (caused by factors outwith the realms of the model) complemented by other models that explain preference formation (Dowding and King, 1995: 5; Parsons, 2005: 8-9; although Hampsher-Monk and Hindmoor, 2010 suggest that attempts to link the two are rare). The second is methodological: highlight the difference between ‘stated’ and ‘revealed’ preferences and argue that it is at least as legitimate to observe behaviour and impute motivations as it is to ask people why they behaved as they did. Since both methods are problematic on their own (people may lie or be mistaken when they explain their motives; people may act strategically and misrepresent their preferences when acting), neither should be relied on exclusively (Dowding and James, 2004: 188; Hampsher-Monk and Hindmoor, 2010: 57).
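
A toy example shows both the appeal and the fragility of the revealed-preference move (the data, names and functions below are my own illustration, not a method taken from the authors cited above). We observe which option an actor picks from each two-option menu and impute a ranking from those choices; the imputation recovers the true preference when choices are sincere, but strategic misrepresentation produces the wrong ordering.

from itertools import combinations

true_preference = ["A", "B", "C"]  # the actor's actual ranking, best first

def sincere_choice(menu):
    # A sincere actor picks the most-preferred option on offer.
    return min(menu, key=true_preference.index)

def impute_ranking(observed):
    # Rank options by how often each was chosen over something else.
    wins = {}
    for chosen, rejected in observed:
        wins[chosen] = wins.get(chosen, 0) + 1
        wins.setdefault(rejected, 0)
    return sorted(wins, key=wins.get, reverse=True)

# Observe sincere choices from every two-option menu.
observed = []
for menu in combinations(true_preference, 2):
    chosen = sincere_choice(menu)
    rejected = menu[0] if menu[1] == chosen else menu[1]
    observed.append((chosen, rejected))
print(impute_ranking(observed))   # ['A', 'B', 'C']: imputation recovers the truth

# A strategic actor who misrepresents (here, picking B over A) breaks the inference.
strategic = [("B", "A"), ("A", "C"), ("B", "C")]
print(impute_ranking(strategic))  # ['B', 'A', 'C']: the imputed order is now wrong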

Finally, we may see unconvincing responses when substantive models are assessed for their usefulness. In particular, Green and Shapiro (1994: 34) identify a tendency in the literature for ‘post hoc theorizing’ when the limits of models’ predictions become clear. While the modification of original models can be legitimate in the light of empirical evidence, it should be done in a particular way: the modified model should explain everything the old model explains and more, and/or produce new predictions ‘at variance’ with the old model (Dowding and James, 2004: 189; see boxes 7.4.2 and 7.4.3). One of the most striking attempts at this kind of modification is the concept of ‘nested games’ (box 7.3).
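
The intuition behind nested games can be shown in a few lines (the arena names and payoff numbers below are my own illustrative assumptions, not the content of box 7.3). A choice that looks irrational when we observe only the principal arena becomes optimal once we count the payoffs from a second, connected arena in which the first game is nested.

# A backbench MP decides whether to toe the party line or rebel.
# Payoffs in the principal arena (parliament), considered in isolation:
parliament = {"toe_line": 5, "rebel": 2}
# Payoffs in a connected arena (the MP's constituency):
constituency = {"toe_line": 1, "rebel": 6}

def best(payoffs):
    return max(payoffs, key=payoffs.get)

# Observing parliament alone, rebellion looks irrational ...
print(best(parliament))   # 'toe_line'
# ... but summing payoffs across the nested game reverses the verdict.
combined = {a: parliament[a] + constituency[a] for a in parliament}
print(best(combined))     # 'rebel'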
