November 28th, 2009

I ran into Darcy Dahl today and we chatted a bit about my last post about Google; he disagreed with my take on the iPhone and on Wave, and made some good points, so I wanted to respond to what I understood of what he had to say. (Darcy, feel free to post your thoughts in your own words below. I'll also note, as background, that Darcy is a really interesting multimedia/video artist.) As I heard it, Darcy's biggest objection to Apple's design philosophy is that it is difficult, in his words, to get "lost": Apple tries to anticipate what users want to do and makes those tasks easier, cutting a "groove," so to speak, for them. The interface is so fluid that it disappears, but in doing so, as I understand his objection, it also forecloses the chance for people to feel uncertain, to not know where they are or what they want to do, and presumably to head off in directions different from the ones they thought they wanted.

I have a wide variety of responses to this (and again, I'm not sure I'm capturing the full extent of Darcy's thoughts here), but even this much raises a host of interesting issues.

First of all, I agree with the importance of getting lost: not knowing your bearings, having to figure things out for yourself and move forward. I think this is very important and powerful. I'm reminded of a story one of my old math professors told me about two professors he used to work with. One always gave brilliant lectures; the other always seemed confused and uncertain, though he produced perfectly good work. The interesting thing was that the one who gave the brilliant lectures didn't seem to produce very successful graduate students, while the one who was confused and uncertain produced a lot of great ones. My professor's theory was that the uncertain professor forced his students to think for themselves, and gave them the confidence that they, too, could do math, since it wasn't always so pat, so perfect, so cut and dried.

I think there's a lot to be said for this idea, and it's certainly true that Apple's interfaces are slick, clean, almost liquid. They certainly make it quite easy to do the things you want to do. But I have to say I don't think they thereby contract the space of possibility for their users relative to, say, an interface like Android's, and I'll try to explain why.

The big revolution in interface design in recent years has been user-centered design; that is to say, rather than thinking in terms of program features, functions, engineering considerations, database structure, and so on, you think about how people, as human beings, live in their full contexts: what their metaphors are, how they are situated in the world (not just with respect to the computer, but in the world as a whole, with their relationships, the things they want to do, the people they interact with, the tasks they want to accomplish), and you design with that in mind.

Now, it's quite possible to build interfaces which simply channel users into narrow grooves, as Darcy put it; i.e., to build an interface which only allows certain operations and not others, which assumes the user only wants to do a narrow set of tasks and either disallows or poorly supports everything else. This is a danger one has to guard against in any design.

But what I would argue is that, while I completely agree that keeping that openness to unanticipated use is crucial, the "old" way of doing design (designing with the "thing", i.e., the computer or the data, at the center) is not the way to achieve it. With engineer-driven approaches you don't get an interface without "grooves" which lets people use the device in unanticipated ways; instead you end up with an interface which has plenty of grooves, except they are unintentional: hard edges pushing you in directions dictated by random side effects of engineering decisions, or other unintended consequences of choices made by people who haven't spent much time thinking through their design from a user point of view. In other words, you don't end up with an open plain where users can launch off in any direction they want; you get a strange, alien landscape filled with unexpected barriers, walls, potholes, and chasms.

Google is by no means a terrible offender in this regard; as I mentioned in my post, they at least do extensive user testing. However, I don't believe user testing alone can produce great interfaces; it merely produces adequate ones. You need that magic ingredient, the ineffable design intuition, which can only come from lots of hard work and lots of talented designers working on your interfaces. Google doesn't have this and probably never will; from my first day at Google I was told, many times, that the company simply doesn't place that much emphasis on design.

To come back to a concrete case: Android versus the iPhone. I have owned and used nearly every portable device and operating system ever conceived, starting with the Wizard way back in the day: Windows CE, Symbian in many incarnations, Windows Mobile, Palm OS, Android, and the iPhone. I've written software for some of these devices, I've used them for a vast array of purposes and still do, and I even used to write a column about them. I have used them for everything from the standard address book/notepad/todo list organizer functionality to spreadsheets, word processing, GPS navigation, internet access for my laptop, scheduling, finances, email, music, video, photography, audio and video recording, and on and on, and I was doing many of these things long before it was easy or convenient to do so. Quite frankly, every single thing I've ever used any mobile device to do, I can do on the iPhone more easily, more fluidly, with less effort, and, most importantly, more flexibly. It's hard to really appreciate how usable Apple's interfaces are unless you've spent a significant amount of time trying to use a less well-designed one. Android (the best of the non-iPhone interfaces) doesn't open up new spaces for exploration; it simply puts up awkward roadblocks that force you to think the way the engineers who built the product think, something I can do fairly easily since I am an engineer, but not something that gives you more degrees of freedom.

This isn't to say that I disagree with Darcy's point about getting lost; it may well be that there's a design principle there which Apple doesn't sufficiently respect. Perhaps there are ways of opening up any interface to a larger space of possibility which haven't yet been explored sufficiently by the designers at Apple. But the answer to this isn't Android, and it certainly isn't Windows Mobile, Symbian, or any of the other mobile platforms. How one can "design for the unknown" is a fascinating question, and something that I think should be part of any designer's thinking.

But in practice I believe the iPhone actually does a better job in this dimension than the alternatives. The space of possibility of any set of affordances is effectively the cross product of all the affordances taken together, minus any limitations imposed by the interface as a whole (one could speak of the affordances as the vocabulary of the interface, and the space of possible combined interactions as its language). The very fact that most operations on the iPhone are intuitive and simple, with minimal fuss, means that the language of the interface is highly expressive. Yes, there are grooves in one sense: each affordance in the interface is easy to get to. But there are no grooves constraining how you combine them: you can combine these affordances in powerful and largely unrestricted ways. For example, I often use my iPhone to look something up online, buy tickets, switch to an app to post something on Twitter or Facebook, switch to Yelp to look up a restaurant, go to Maps to find a route, switch back to Facebook to tell someone where we're going to meet, check my email, copy something from an email message into a note (copy/paste was a late and welcome addition to the iPhone OS, and its absence was one of its only weaknesses), shoot a short video, upload it, share it, and so on. Just as with Twitter, simplicity actually expands the space of possible use.
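To make the vocabulary/language point concrete, here's a toy sketch in Python. It's purely my own illustration: the affordance names and the banned combination are hypothetical, not taken from any real interface. Treat the affordances as a small vocabulary and interaction sequences as the language, and watch how a single restriction on combining them prunes the space:

    # Toy model (hypothetical names): affordances as the "vocabulary" of
    # an interface, interaction sequences as its "language".
    from itertools import product

    affordances = ["browse", "copy", "paste", "map", "mail", "share"]

    # A restriction on combining affordances; banning ("copy", "paste")
    # crudely models a phone OS that shipped without copy/paste.
    forbidden_pairs = {("copy", "paste")}

    def language_size(vocab, forbidden, max_len):
        """Count interaction sequences up to max_len that avoid any
        forbidden adjacent pair: the cross product minus limitations."""
        total = 0
        for k in range(1, max_len + 1):
            for seq in product(vocab, repeat=k):
                if not any(pair in forbidden for pair in zip(seq, seq[1:])):
                    total += 1
        return total

    print(language_size(affordances, set(), 4))            # 1554 sequences
    print(language_size(affordances, forbidden_pairs, 4))  # 1434 sequences

The numbers themselves are beside the point; what matters is that one banned pair removes 120 of the 1554 possible sequences, because each hard edge multiplies through the whole space of combinations. That, in miniature, is why a single awkward affordance costs far more than a single interaction.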

Furthermore, I'd argue that the iPhone has inspired a vast array of people to create new potential uses for it via the App Store, though everyone would have to agree that having to live with Apple's somewhat arbitrary approval policies is highly problematic. Despite this, the App Store remains by far the most vibrant and successful source of mobile applications, offering a dizzying array of them.

Now, let's consider Wave. Wave is a new way of doing collaborative conversations, but structurally it retains many of the odd properties of email, as I mentioned before. You have to start a wave by sending it to a single person or by adding people one by one; it doesn't support easy hyperlinking; it doesn't handle groups of users easily; and so forth. There are lots of little things about the user interface which are unintuitive, repetitive, or confusing. I do like the basic design conceit of collaboratively editing a multimedia document, as I said before, but all of these other structural assumptions, the underlying email addressing scheme, and so on combine, in my mind, into a series of hidden assumptions and barriers to precisely the thing I understood Darcy to value: unanticipated use.

In other words, Wave imposes a set of structural assumptions which greatly restrict its utility. You can't use it as a collaborative wiki. The fact that it is always real time rules out a whole slew of potential uses: lots of people won't want a tool that records every little typo and withdrawn idea they type, and it's hard to look quickly through versions or at diffs because you have to sit and wait while it "plays back" the wave. It's difficult to manage group permissions; waves are either public or managed in a very fine-grained way via individual addressing, so you can't use it the way you'd use Twitter or Facebook. Rather than opening up many possibilities, the very complexity of Wave, and its underlying architecture, limits it to a narrow subset of possible uses. Because the designers of Wave really seem to have been thinking more about features than about users, the space of possible uses of Wave is actually far more restricted than it otherwise would be. User-centered design would, I believe, have vastly improved Wave, not only in its fluidity but in its utility.

But I started out this post with the intention of writing about what I think is right with Google. Even though I admire Apple's devotion to thinking of users as people embedded in a human context, I find its devotion to secrecy and closed systems both unnecessary and very unfortunate. Google is an open company, as I mentioned before, and it's very good for the world that they released Android, clunky though it is relative to Apple's product. The world needs a company like Google, and the world needs technology built and driven by a more open philosophy. What I would love to see, and where I would want to work, is a place that has Google's openness and its commitment to good engineering, but which borrows Apple's devotion to design: Apple believes in design, hires good designers (another problem with Google: they don't necessarily hire the best designers, and there aren't enough of them), and thinks of users as people living in the world with other people, not merely as users of software sitting in front of a computer. Such a combined company would be the best of both worlds, and it has yet to be born. I'm working on it.

3 responses to this post:
  1. Lawrence Wang says:

    Uncertainty: Getting lost is what should happen in your work, not in your tools. Having an intuitive email client doesn’t constrain the process of figuring out what to say…

    Wave: Wave is a crazy mess, yeah. But the key thing to keep in mind is that Wave is not finished; in fact, it’s only just getting started. This is the open source development process: let the public bang on it and expose its flaws; roll out improvements; iterate.

    I predict that if Wave ever takes off, it will be because people outside of Google with great design sense build an interface on top of it focused around a cohesive vision of user experience.

    “I’m working on it”: Do tell!

    November 29th, 2009 at 6:33 pm
  2. mitsu says:

I agree it's a prototype, but the problem is the prototype already has a ton of limitations built into it. The problem isn't just the UI; it's the entire conceptualization of the tool, the metaphors used, the structural assumptions. If something great comes out of Wave, I think it will simply be the idea of real-time multimedia document editing, transferred to a radically different context, with different addressing and different rules. Even the "playback" concept, while clever, imposes severe restrictions on utility… I'm not sure how a reskin could ever fix these problems.

    Someday, someone might make something cool with the underlying Wave API, but likely as not someone will just start over and make the next really cool thing.

    November 29th, 2009 at 6:48 pm
  3. mitsu says:

    (As for the working on it: I’ll be talking about that soon when we’re a bit farther along…)

    November 29th, 2009 at 7:10 pm
