“What!” I hear you cry, “single-user studies can’t be valid; even ethnographies have more than one user”. Well, that’s what I was saying before reading Dix 2010, which I covered last week. The critical thing that Dix sees as different is that – and I’m paraphrasing and using my own terms here – single-user studies can be used to scope extent, as opposed to our normal desire to support a point via a measure of magnitude of similarity across users; as a way of discovering outliers, as opposed to cases which harmonise with the sample data; and as a way of disproving the rule which all the other sample data seems to support.
This discussion of single-user data represents Dix’s main point – which is that we need new or modified Human Factors methodologies which are different from those borrowed from other domains, and which are valid and accepted within our field. We aren’t immature anymore; if we want to be treated as a grown-up field then it is time we acted like it. I’ve spoken about the need for this before – certainly the need for a journal publishing new and novel methodologies, as well as datasets and validated results from multiple sources.
Dix also notes that when we do get it right, we ignore our field’s previous work and start to reinvent the wheel:
…when one builds the justification of why something should work, the argument will not be watertight in the way that a mathematical argument can be. The data on which we build our justification has been obtained under particular circumstances that may be different from our own, we may be bringing things together in new ways and making uncertain extrapolations or deductions. Some parts of our argument may be strong and we would be very surprised if actual use showed otherwise, but some parts of the argument may involve more uncertain data, a greater degree of extrapolation or even pure guesswork. These weaker parts of the argument are the ideal candidates for focusing our efforts in evaluation. Why waste effort on the things we know anyway; instead use those precious empirical resources (our own time and that of our participants) to examine the things we understand least well. This was precisely the approach taken by the designers of the Xerox Star. There were many design decisions, too many to test individually, let alone in combinations. Only when aspects of the design were problematic, or unclear, did they perform targeted user studies. One example of this was the direction of scroll buttons: should pressing the ‘up’ button make the text go up (moving the page), or the text go down (moving the view)? If there were only one interpretation it would not be a problem, but because there was not a clear justification this was one of the places where the Star team did empirical evaluation … it is a pity that the wrong answer was used in subsequent Lisa design and carried forward to this day… 
So we don’t have novel methods, and when we do produce reproducible results, we just don’t use them.
- Dix, A. (2010). Human–computer interaction: A stable discipline, a nascent science, and the growth of the long tail. Interacting with Computers, 22(1), 13–27. doi:10.1016/j.intcom.2009.11.007