Tuesday, March 14, 2006

Diagnosis of Inferior Social Proclivity Disorder in Young Adult Patients: A Case Study

Rodgers N. Hart, F. Sinatra, and E. Fitzgerald, Lorenz Institute for the Advancement of Clinical Psychology

Note: This paper has also been accepted for publication in the Annals of reformat_songs.

Introduction

Inferior social proclivity disorder, or “trampiness”, is commonly mistaken for adjustment disorder not otherwise specified.1 However, this condition is surprisingly common in early post-adolescent patients, especially females.2 We examine the diagnosis and treatment of one patient, whom we shall refer to as Lady. When she began treatment, Lady was a 24-year-old who had referred herself to our private practice. She had become increasingly concerned over her difficulty in forming social relationships at her place of employment, a finishing school.

Initial Work

We spent several sessions simply becoming familiar with the patient,3 allowing the therapeutic relationship to coalesce, and listening to the cognitive-behavioral paradigms4 which the patient used to self-describe the internalities5 of her situation. Lady seemed to view herself through a neo-behavioralist6 lens, and attempted to leverage this paradigm to assert control over her situation. She would often attempt to defer meals until excessively late hours, although these attempts at control were never successfully realized due to her inability to stave off her hunger. Peculiarly, she was unusually consistent in her failures; she routinely ate dinner at exactly 7:55 in the evening. This led us to suspect anorexia nervosa (restricting type) in conjunction with obsessive-compulsive personality disorder.7 Her consistent timeliness at cultural events — she was a regular patron of the theatre — reinforced this notion.8 However, our experience with disorders of these spectra suggested that it would be premature to form anything more than a tentative diagnosis at this point.9 Using a hybrid talk therapy approach,10 we probed further.

Contraindications for Obsessive-Compulsive Personality Disorder

Further work with Lady led to the discovery that she exhibited several behaviors which contraindicated OCPD. First and foremost among these was a strong revulsion toward gambling and excessive personal grooming.11 Two contexts in which her coworkers often socialized were informal gambling nights with members of the local political establishment and outings to nightclubs with rigorous formal dress codes. Lady claimed that she felt excluded from these events due to her aversion to these activities. These occasions, in addition to serving as social bonding rituals, were used by her coworkers to exchange critical back-channel social collateral, or “gossip”.12

Contraindications for Anorexia Nervosa

We also found evidence that she did not have anorexia nervosa, or any other eating disorder. Eating disorders are typically characterized by the patient’s need for control over his or her environment, actualized as control over the frequency and manner of dietary events.13 In cases of these disorders, closer examination is expected to reveal a pattern of control mechanisms. However, Lady did not seem to have any extra-dietary retentiveness behaviors. She was almost alarmingly nonchalant about upcoming major life events and her financial situation. She hoped to leave California (her state of residence) at some point, stating a preference for a warmer, more arid climate, but neither had nor desired strategies for attaining this goal. On a smaller scale, she would often arrive for appointments with her hair in a state of disarray, claiming (when prompted) that it had been disturbed by the wind on the drive over, but making no attempt to correct it.

Diagnosis of Inferior Social Proclivity Disorder

We concluded that Lady was probably not suffering from OCPD or anorexia nervosa. We considered a diagnosis of generalized social anxiety disorder, but she genuinely did seem to desire to connect with her coworkers, and she was quite active in other social circles. Then, in one session, Lady revealed a key piece of information. She said that her avoidance of the contexts in which her coworkers preferred to socialize was probably a good thing, because her financial situation did not permit the expense of such nights on the town. She felt that her non-luxury automobile and other secondary socioeconomic characteristics placed her in a position of inferiority, and that she would be taken advantage of by the sophisticated and (in her view) unsavory characters who often accompanied her coworkers on these social outings. She wished to pursue a deeper connection with her coworkers, but she characterized their other associates as “sharpies” and “frauds.”

We then asked how her coworkers could maintain such extravagant lifestyles while she, in a similar job at the same place of employment, could not. Her response was the final piece of the puzzle, reinforcing the critical importance of a close reading of responses to even innocuous questions in talk therapy.14 She said that she had been offered many increases in salary, but had repeatedly turned them down because she “didn’t want the hassle.” This was a clear-cut case of ISPD: the patient was intentionally holding herself to an “inferior” social position, had difficulty functioning because of it, and did not perceive her assumed position as problematic.15

Motivating Factor Analysis

At this point we had diagnosed Lady, but this only really told us the “how” of her “trampiness”. Although it is often difficult or impossible to do so successfully,16 we elected to explore the motivating factors behind her disorder (the “why” of her “trampiness”). Such analysis often reveals additional disorders, or at least provides information which may prove invaluable in treatment. This analysis is still ongoing, and we have no results to report yet.

Treatment Plan

Treatment of Lady is currently ongoing. We are continuing talk therapy, both for its own merits and as a component of the aforementioned motivating factor analysis. We are also attempting to use a combination of cognitive behavioral therapy and desensitization to address some of her avoidance issues.17 We have had some preliminary success in exposing her to fast food sprayed with a solution that induces greater-than-normal nausea when consumed, and we have instructed her to bring gradually larger amounts of cash with her on her visits to our office. We hope to discuss the efficacy of these techniques in a future publication.


  1. A. Hasapemapetalan, B. F. Goodwrench; Misdiagnosis of Social Proclivity Disorders; Annals of the Bowling University Watercooler; 1973. 

  2. D. Sedaris, T. Mobile; Covariant Statistical Analysis via Modified Stochastic ANOVA of ISPD Demographics; Quarterly Christian Statistical Review; 2001. 

  3. F. Vuzayloya, R. Nachlin; Look Who’s Talking: Techniques for Patient-Therapist Acclimation; Proceedings of the Windsor University Conference on Clinical Techniques; 1999. 

  4. J. Evans, B. Wilson; Quantum Entanglement and the Cognitive-Behavioral Paradigm; Psychological Humourism; 1273. 

  5. B. Allen, M. Davis, L. Fracalossi, M. Sue; Internalities: A New Paradigm for Patient Perception Analysis, and its Applications for the Treatment of Inferior Fictive Disorder; Psychology Fortnight; 1999. 

  6. K. Reeves, A. Wachowski, L. Wachowski; A New Kind of Behavioralism; Zion Review of Psychology; 2235. 

  7. M. Tee, S. L. Jackson; Foolish Diagnoses: A Case Study of an Aviaphobic-Ophidiophobic Complex; Scientific Moldovan; 2004. 

  8. I. Asimov; The Endochronic Properties of Resublimated Thiotimoline; Astounding Science Fiction; 1948. 

  9. D. Savage, D. Iskowitz; It Happens To All Therapists: On The Avoidance And The Acceptance Of Premature Diagnosis; Journal of the Association for Computing Machinery; 2003. 

  10. P. Hanks, J. Pusteyevski, R. Jakenduf; Semantic Metrics for Evaluation of Talk Therapy Approaches; Psychological Linguistics; 2006. 

  11. U. Ulrich, D. Davidson, L. Richards, L. Rudolfo, B. Abrams; Coded Contraindications; Proceedings of the 30th Annual Hashimoto University Conference on Psychological Methodologies; 1986. 

  12. D. Wikiberg, S. Bunan; Byzantine Generals In Space: Network Theory, Social Dynamics, and Back-Channel Communications; RISKS Digest; 1997. 

  13. M. Powers; Controlling Massively Parallel High-Resolution Event Timers in Low-Memory Environments; Nature; 2000. 

  14. H. P. Grice; Logic and Conversation; Syntax and Semantics, Vol. 3; 1975. 

  15. T. Geisel (ed.); The Delightful Diagnostic Dictionary; Scholastic Books; 1960. 

  16. S. Hill, P. Graves, et al.; Administrative Disavowment in High-Stress Environments; Organizational Psychology; 1966. 

  17. C. Thulhu, Y. Sothoth, S. Niggurath; Inspiring Fear in Humankind; Applied Noneuclidianism Review; 1986. 

Wednesday, March 8, 2006

Introduction to Unit Testing

Notes for a lecture given to Brandeis University’s COSI 22a.

What Is Unit Testing, and Why Should I Care?

Unit testing is the process of writing tests for individual bits of your program, in isolation. A “bit” is a small piece of functionality. We’ll discuss how small later. How can you know whether or not your program works if you don’t test it? If you’ve ever lost points on a programming assignment because something didn’t work right, you could’ve saved yourself from that by testing your program.

If you go on to take COSI 31a, you will do better on the programming assignments if you write tests! More importantly, it’s a good habit to get into as a programmer. Having tests for your code turns programming from an art — “gee, it looks right and seems to work, I think I’m done” — to a science — “this is the evidence I have to support the claim that my program is behaving correctly.”

Unit testing is one of the easier ways to get into all the nooks and crannies of your code and make sure it’s doing the right thing. The act of writing tests often helps reveal areas where it isn’t clear what it means to do “the right thing.”

What to Test

To figure out what to test, start by thinking about what it means for your program to work. If you have a formal specification, that’s a great place to start. For your homework assignments, you’ve had such a specification: the Java API reference for whichever class you were supposed to be implementing.

You should also think about what all the different parts of the task are. You want at least one test for every public method in every public class. One way to measure the quality of unit tests is a metric called coverage. Coverage measures how much of your code is hit when you run your tests. Consider the following code for the function isNegative:

boolean isNegative(int n) {
    if(n > 0)
        return false;
    else
        return true;
}

If you wrote one test for this function, which tested n = -5, you would only have 50% branch coverage, because that test exercises only the else branch; the return false in the if branch is never executed. To achieve complete coverage, you also need a test for a positive n, say n = 5. Conceptually, you’re not fully testing the function if you only test that it returns true for negative numbers; you also need to test that it returns false for positive numbers. Otherwise, it could be replaced by a function that always returned true and your test suite (the collection of all of your tests) would have no idea! This is a common error I saw in the homeworks: a lot of people were doing things like only testing isEmpty() on an empty list.
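
To make this concrete, here is a minimal sketch of those two tests in JUnit (the framework mentioned later in these notes). The class name is just for illustration, and isNegative is inlined as a static helper so the example is self-contained.

import junit.framework.TestCase;

public class IsNegativeTest extends TestCase {
    // Inlined copy of the function under test, so this compiles on its own.
    static boolean isNegative(int n) {
        if(n > 0)
            return false;
        else
            return true;
    }

    // Covers the else branch: a negative number should return true.
    public void testNegativeNumber() {
        assertTrue(isNegative(-5));
    }

    // Covers the if branch: a positive number should return false.
    // Without this test, a function that always returned true would pass.
    public void testPositiveNumber() {
        assertFalse(isNegative(5));
    }
}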

There’s one trap I should mention here. If you’re writing your test suite and thinking about how to achieve maximum coverage, one way to do it is to look at the source for your class while you’re writing the test suite and walk through every method and branch. The problem with this is that it ties your test suite to the implementation details of your code. It’s important to think about the logical cases of the underlying problem you’re solving. Consider the isNegative example. What does it return for n = 0? According to a mechanical coverage check, you don’t need a test for that, since you’ve already tested both branches in the code. The zero case is easy to get wrong, though: it’s the boundary between negative and non-negative. A good rule of thumb is to always write specific tests for boundary conditions. The isNegative above does the wrong thing for zero, and it’s very easy to miss unless you explicitly check isNegative(0) (see the sketch below).

The way to figure out where the boundary cases, the corner cases, and the weird inputs that will give you problems lie is to have a detailed mental picture of what a particular method is supposed to do. If you understand what it really means to test whether a number is negative, it should occur to you that 0 is an interesting case to check. Think about ways to implement the functionality, and ways to implement it incorrectly. When comparing two lists, you should probably test not only cases like {1, 2, 3} == {1, 2, 3, 4}, but also {1, 2, 3, 4} == {1, 2, 3}, because catching one but not the other is a common mistake to make. Figuring out what the easy mistakes are is hard. Of course, figuring out the hard mistakes is harder.
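
Here is that boundary test, as a one-method sketch that could be added to the test class above. It fails against the buggy isNegative, which is exactly what we want it to do.

// Boundary test: zero is not negative, so this should pass.
// Against the buggy implementation above it fails, because
// if(n > 0) sends n = 0 into the "return true" branch.
public void testZeroIsNotNegative() {
    assertFalse(isNegative(0));
}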

Also make sure to test the side effects and error conditions. If a method is supposed to throw particular exceptions on particular invalid inputs, does it? If LinkedList.addAll(Collection) is supposed to return true to indicate that the list was modified, does it return false when the collection is empty? A well-written spec makes this job a lot easier. Look at the documentation for the method and make sure you’re testing that it does everything that the documentation specifies, and exactly what the documentation specifies.
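
For example, here is one way to check both of those claims against java.util.LinkedList, using the JUnit 3 idiom for asserting that an exception is thrown. The expected behaviors come straight from the List documentation.

import java.util.Collections;
import java.util.LinkedList;
import junit.framework.TestCase;

public class LinkedListSpecTest extends TestCase {
    // The List documentation says get() on an empty list throws
    // IndexOutOfBoundsException; this test fails if it doesn't.
    public void testGetOnEmptyListThrows() {
        LinkedList list = new LinkedList();
        try {
            list.get(0);
            fail("expected IndexOutOfBoundsException");
        } catch (IndexOutOfBoundsException expected) {
            // documented behavior; the test passes
        }
    }

    // addAll() returns true only if the list was modified, so adding
    // an empty collection should return false.
    public void testAddAllEmptyCollectionReturnsFalse() {
        LinkedList list = new LinkedList();
        assertFalse(list.addAll(Collections.EMPTY_LIST));
    }
}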

Another source of tests is bugs. When you find a bug, it indicates something that you forgot to test. When this happens, write a test case for it. You should do this before fixing the bug to verify that the test case fails when the bug is present. Then fix the bug and make sure that the test case starts passing. Things that you got wrong once are things that you’re liable to get wrong again as things change. These sorts of tests are called regression tests, because they’re testing that your quality is always moving forward and never regressing.

How to Test It

Take a look at the included PizzaTest class and Pizza documentation. I’ve written a package, Pizza, for determining a set of toppings that will make a group of people happy when they’re trying to order a pizza. Full source code for Pizza is on the web; see below for the URL.

The test suite is structured into groups of tests which test units of functionality. The simple classes, Topping and ToppingConstraint, have one group for each class. Pizza has a few different groups. I isolated each group so that it doesn’t depend on anything done in any of the other groups. Each group that needs to construct a Pizza initializes its own topping list. This way the test groups aren’t dependent on each other and a failure in one small area of the test suite won’t randomly break a bunch of tests that should work. In order for a test suite to be useful, you want it to help you figure out exactly what is failing. There are trade-offs, though. I use Topping.equals(Object), even in tests for completely unrelated things. These tests will break if Topping.equals is broken. It would be a lot of extra busywork to avoid using Topping.equals, and it couldn’t be done without tying myself to the internal makeup of Topping. I shouldn’t need to rewrite the entire test suite if another attribute is added to toppings! One solution to this would be to indicate in some fashion that some of the other groups of tests, such as the applyConstraints() tests, depend on the toppings() tests, and we shouldn’t even bother running the applyConstraints() tests if the toppings() tests fail. There are frameworks to help you write unit tests, such as JUnit, which allow you to express this.
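
To illustrate the isolation idea, here is a hypothetical sketch of what a group’s setup might look like (the real Pizza and Topping constructors may well differ); the point is only that the group builds its own fixture instead of reusing state from another group.

// Hypothetical sketch: the real Pizza/Topping API may take different
// arguments. The point is that this group constructs its own topping
// list, so a failure elsewhere in the suite can't contaminate it.
public void testApplyConstraintsGroup() {
    Pizza pizza = new Pizza();
    List toppings = new ArrayList();
    toppings.add(new Topping("onion", "vegetable"));    // hypothetical constructor
    toppings.add(new Topping("sausage", "meat"));
    pizza.setToppings(toppings);
    // ... the individual applyConstraints() tests would go here ...
}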

The first test group, mustSetToppings(), is testing that an error condition is generated under circumstances that it should be and not generated under other circumstances. It’s also a good example of how to test whether or not an exception is thrown.

The second test group, toppings(), tests the Topping class. It’s a fairly trivial class, but we test it anyway; it’s nice to not have to worry about whether or not it’s working. (The test suite can get things wrong too, of course, so don’t get overconfident.) Note that the way equality of toppings is defined, two toppings are equal only if they have both the same name and the same type. So the tests for Topping.equals(Object) cover cases where the toppings have the same name but different types and vice versa, not just a case where they’re completely different and a case where they’re completely identical. We also test those last two cases. This way, if part of equals is broken, we will know exactly what went wrong: if the name comparison returns false positives, the “different names, same types” test will fail; if the type comparison returns false positives, the “same names, different types” test will fail; and if either returns false negatives, the “completely identical” test will fail. Whichever tests fail point at exactly which comparison is broken.
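
In code, those directional cases might look something like this (again a hypothetical sketch; the real Topping constructor may take different arguments):

// Hypothetical constructor Topping(name, type); the real API may differ.
public void testToppingEquality() {
    Topping onion = new Topping("onion", "vegetable");

    // Same name, different type: fails if equals() ignores the type.
    assertFalse(onion.equals(new Topping("onion", "meat")));

    // Different name, same type: fails if equals() ignores the name.
    assertFalse(onion.equals(new Topping("pepper", "vegetable")));

    // Completely different, and completely identical, for good measure.
    assertFalse(onion.equals(new Topping("sausage", "meat")));
    assertTrue(onion.equals(new Topping("onion", "vegetable")));
}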

applyConstraints() is the most complicated test group. This makes sense: it’s testing the really hard bit of Pizza. The individual tests are straightforward; the tricky part was figuring out which tests there should be. To come up with those test cases, I spent a lot of time thinking about the different ways in which this could go wrong. I intentionally picked a loosely-specified problem to make this job more interesting. The problem that Pizza is attempting to solve, how it’s supposed to work, what sorts of results it should return… these are all somewhat open to interpretation. That’s often what you have to do when you’re programming. A lot of the time, you’ll get a vague problem, and you have to figure out how to solve it. Sometimes these are “business requirements” handed to you by your boss; sometimes it’s you thinking that it would be cool to do “foo”. The homeworks and labs have been based on fairly detailed specifications, and there have still been ambiguities! It took me around four hours to write all of this code (Pizza, PizzaTest, and PizzaMain), and at least an hour, maybe two, of that went to writing the applyConstraints() tests. Most of that time was figuring out what tests I needed to write!

Further Resources

These lecture notes and all associated source code are in the public domain.

Monday, February 13, 2006

The Design of Laptop "fn" Keys

On every PC laptop made in the past five or more years (ten or more?), many of the function keys (F1, F2, F3, …), and sometimes some of the other keys (the arrow keys in particular), serve two purposes. When pressed normally, they act as their respective key — F1 acts as F1, and so on. However, when pressed in conjunction with the “Fn” key, they perform a special function indicated by an icon on the key. Usually both the icon and the label on the Fn key will be blue (whereas the other key labels are white.) For instance:

[Image: keycaps showing blue Fn-modifiable labels]

Today, one of my professors tried to hook up his laptop to the projector and was befuddled when it didn’t work. As soon as I saw him struggling, I knew that the problem was that he had to turn on the external video out. PC laptops typically have three display output modes: internal LCD only, external (VGA, or sometimes DVI these days) connector only, or both internal and external simultaneously. To change the mode, one typically has to use the Fn function of one of the F keys (typically F5, F6, or F7); sometimes it can also be done through some buried option in the Display control panel.

The reason I knew that this was a problem is because almost every single professor who I’ve seen hook a laptop up to a projector has had to do this and had no idea what they had to do or how they were supposed to do it. The notion of hitting Fn in conjunction with some other key didn’t even seem to occur to them. Here’s something that’s a common thing to need to do, and laptop designers have tried to come up with a design that affords doing it (Fn is always next to Ctrl, so it should be natural to interpret it as a modifier key, and the color labels reinforce hitting Fn in conjunction with specific keys), but their design has failed, even after it’s been around for so many years and people have had a chance to get accustomed to it. Why doesn’t their design work? (And why do they keep using it?)

One problem with the “Fn+blue” design is that the labels on the keys are almost always terrible. Okay, the volume labels — often the Fn+arrow keys will control the volume or generate page up/page down/home/end — are pretty recognizable, but it’s easy enough to control the volume from within Windows, and a dark blue label on a black background doesn’t stand out (some keyboards are much worse about this than others; I’ve seen ones where you need to really squint to see that there’s anything there at all), so that label doesn’t tend to get noticed or remembered. Also, many professors, the group of users I’ve had the most opportunities to observe trying to hook up a laptop to a projector, don’t use sound on their laptops, so they’ve never needed to control the volume. The label for toggling the display mode is either a cryptic image, a slightly less cryptic image, or a confusing text label [keycap images]. Even those few users who know what a CRT is are unlikely to associate that label with trying to use a projector, since a projector is not a CRT.

Another problem is that users don’t expect to have to turn on the VGA output. It doesn’t match any of their experiences with plugging things in (most of them haven’t plugged in digital audio cables to their sound cards or receivers), and it doesn’t even match their experiences in plugging in monitors to desktops. It also isn’t very consistent. Sometimes it does just work, because they happen to have been in “internal+external” mode, and then for no apparent reason, they’ll have gotten switched to internal-only the next day.

Finally, I don’t think that users conceptualize Ctrl, Alt, etc. as modifier keys; that is, keys which, when pressed in conjunction with another key, change the behavior of that key in a predictable way. I think they get conceptualized as chording keys, or parts of a two-key combination. Ctrl-S doesn’t get conceptualized as “like pressing S, but I’m also pressing Ctrl so it will behave differently than the way I normally press S.” Instead, Ctrl-S turns into “pressing these two keys together to act as a different key entirely.” Users are right. The change in behavior produced by modifier keys is so rarely systematic (what does it mean to “Ctrl” something? To “Alt” it?), and the behavior produced by the combination bears so little resemblance to the normal behavior of the base key (the act of saving has nothing whatsoever to do with the act of producing the letter ‘s’), that there’s no reason to expect people to think of Ctrl/Alt/Fn as modifiers. It’s even hard to say what “Ctrl” means as an independent concept. Ctrl (in Windows) means “do something”, which is pretty meaningless. Alt (again, in Windows) seems to mean “shortcut to menus”, but most users don’t know about that either. The consequence of this is that users don’t feel that the behavior of chording keys is something they can predict. If Foo-S has nothing to do with either Foo or S, but does something completely novel, why should one be able to intuit what Bar-S might do? Fn actually does have a meaning — manipulate system hardware functionality in the way stated by the blue labels on the key caps — but meaning is not something that users expect to find.

The way Apple does things is different in a revealing way. First, Apple doesn’t use color to differentiate between the Fn function of its keys and the standalone behavior; everything is the same shade of gray. They use position instead (unmodified behavior on the left side of the key cap, modified behavior on the right.) Color would probably be better. Second, the modified and unmodified behaviors are the inverse of a PC’s: F4 by itself decreases the volume, and Fn+F4 produces F4. (The exception is the arrow keys, which are arrow keys unmodified and page up/page down/home/end in conjunction with Fn.) The only times I’ve ever had to use the F keys on a Mac are F5 to bring up autocomplete in Xcode and F12 for Dashboard, so it isn’t all that critical that the need to press Fn+F4 to get F4 doesn’t leap out at a naive user; the F keys get used a lot more in Windows (application menu items are often bound to F keys by default.) Third, it’s much easier to find the software display controls in Mac OS X — they’re obvious in the Displays system preferences panel, and if it’s something you do often, you can check a box in Displays to have it right in your menu bar. Finally, it’s much more likely to just work — automatically detect that you’ve plugged something in and start displaying to it — on a PowerBook than on any PC laptop I’ve seen. This is borne out by my experiences observing professors: PowerBook users are much less likely to need to do anything, and much more likely to be able to figure out what to do when they do.

[Image: PowerBook function keycaps]

Interestingly, many PC desktop keyboards are emulating the Mac model these days. The current generation of PC keyboards with “media” keys (e.g., a dedicated key to skip to the next track in WMP or open Internet Explorer) typically make the F keys serve double duty as the extended keys, and have an “Fn lock” which defaults to on and must be turned off to get the standard F behavior. Oddly, most of those keyboards don’t also have a momentary Fn key, which makes them a pain in the butt.