Extra topping? Black olives please!

Everyone has their own personal favorite straw man argument. Some realize that it's a straw man; most don't. Here's mine.

Human-Computer Interaction is all about intelligent interfaces that feel "natural" to a Human user. We would want the computer to figure out what we want to do and do it with minimal fuss. For instance, I would want the "Computer" to be able to understand exactly what I mean when I say "I think I want to watch a movie". It would also be really nice if the "Computer" would guess exactly what mood I'm in, figure out what movie I'd like to watch right now, and play it right away. While we're at it, it could figure out that I'd like to eat a personal pan pizza from Pizza Hut and order it online.

Notice something missing from the above story? There was the "Human" (me), and the "Computer" was right there too. Oh that's right: no "Interaction"! But the "Computer" did guess what you wanted and did it too, you say.

The "Interaction", that seems implicit in the above story, really is a one way street. The subservient "Computer" does what it been statistically trained to do. In technical terms, the "Human" emitted a signal, which was processed by the "Computer", interpolated it to figure out what movie I wanted to watch and extrapolated it to figure out that I might want a pizza and ordered it. Notice the absence of any affirmation or negation. The "Computer" has just been transformed to a fancy glorified signal processor.

Interaction is a two-way street. Interaction, in the technical sense, requires more than one participant (entity) to receive, process, understand and emit signals. Imagine the same scenario as before, but replace the "Computer" with another "Human". The outcome is no longer influenced just by the sheer observation of one participant: "I think I want to watch a movie". The outcome is now the result of an Interaction between the two participants. This Interaction will likely entail speech, hand gestures, facial expressions, discussion of similar events from the past ("Oh that Spielberg guy can't direct for nuts, remember ET?") and possibly a compromise on choices. It is very possible that at the end of this scenario, both our participants decide to ditch the idea of watching a movie and play a game or go out on a drive.

In the first example, one participant generated a single verbal signal while the other was only passively involved in the Interaction. The passive participant merely processed the single signal it received and acted. In the second example, both participants were actively involved: each generated a wide range of signals, processed the other's signals, analyzed them and emitted a reply signal.

Clearly, the second form of Interaction transferred a lot more information. We could model two-participant Interaction as a control process (Disclaimer: I don't know enough control theory or signal processing to be certain, though). It could be said that the choices and opinions of both participants in the second example were different at the end of the Interaction than before it. Some would even use the adjective "enriched".
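Hedged the same way (this is a toy, not a real control-theoretic model), here is a sketch of two-participant Interaction as a feedback loop. Each round, both participants emit a signal, process the other's, and update their own state, so the outcome can be an option neither opened with. The "concede your top choice" rule is invented for illustration:

```python
# Toy sketch of two-participant Interaction as a feedback loop.
# Real Interaction involves speech, gestures, expressions, shared
# history and much richer state; this only captures the loop shape.

def interact(prefs_a: list[str], prefs_b: list[str]) -> str:
    # Each participant holds ranked preferences (internal state).
    while prefs_a and prefs_b:
        if prefs_a[0] == prefs_b[0]:
            return prefs_a[0]              # a compromise both accept
        # Reply signals change *both* participants' states:
        prefs_a, prefs_b = prefs_a[1:], prefs_b[1:]
    return "go out on a drive"             # ditch the whole plan

print(interact(["E.T.", "play a game"], ["Jaws", "play a game"]))
# 'play a game' -- neither participant's opening choice
```

The point of the toy is the shape of the loop: the outcome is a function of the exchange itself, not of one opening signal.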

In either case, it can certainly be expected that this Interaction will strongly influence future Interactions. This is a critical difference between modeling HCI as signal processing and modeling HCI as a control system (see disclaimer above). The essence I'm trying to capture is that for HCI to be successful, we should expect and _demand_ that the "Human" actually interact with the "Computer". Every piece of software that incorporates AI in some form has a "training phase", where the explicit interaction is the "Human" teaching the "Computer". I would argue that this really isn't HCI.

[I work on "intelligent" [1] sketch recognition systems, with a special interest in collaborative sketch interfaces. I'm certainly no expert, though.]

Brandon Paulson and Tracy Hammond of the Sketch Recognition Lab at Texas A&M argue in their paper [2] that forcing users to learn the system, rather than having the system learn the user's intentions, places constraints on the user. I would think that both extremes are undesirable and that a balance between the two is important. When both the user and the system are able to understand each other, the equilibrium achieved is termed "natural", in the sense used at the beginning of this post.

Do I have a straw man? I don't know.

[1] I wish I knew what "intelligent" means with respect to sketch recognition. I suspect no one else knows either.
[2] This isn't an attack on their work, but an example of a sentiment I seem to encounter in many HCI publications. For the record, the SRL at Texas A&M produces some awesome sketch recognition work.

Coercion, bribery or ... election promises?

A big problem in any election (electronic or offline) is that of coerced or bought votes. In general, no election system or electoral authority can completely eliminate voter coercion. If a voter chooses to sell his vote and actually carries out his end of the deal, then it must be considered part of the due process of the election and his vote must be honored. If, on the other hand, the agent of coercion is unable to convince the voter, then a forced vote is an anomaly in the process.

Although not evident at first, the line between a "valid and freely chosen vote" and an "invalid and coerced vote" is very thin. Finally, how does one differentiate between coercion and campaigning? Try the following two statements:

"If you vote for me, I will pay you $100."
"If you vote for me, I will cut your taxes by $100."

Are they different? If so, how?