“Distinguish Navigation From Content”

In my UI manifesto I mentioned a rule of good design: “distinguish navigation from content.” That means that for every piece of information your application presents, it should be immediately and unmistakably apparent whether that information is situational data (i.e. content, such as a particular webpage) or application functionality (i.e. navigation, like the address bar). This should be apparent from the very first use, without having to try it out.

Metaphorically speaking, if you run Safeway, don’t make an employee uniform that is simply a red shirt; if you do, then anybody who walks in wearing a red shirt could be mistaken for an employee. Make the distinguishing feature something that cannot be mimicked.

Screenshot from YouTube (yellow circle added, obviously).
Screenshot from YouTube.
Notice how there is no visual change whatsoever as the mouse hovers over the X button.

A real-life example from just this week: YouTube failed to do this. Notice this particular ad shows an “X” button in the upper right that indicates the ad can be closed. How do I, as a user, know that this is a real X button rendered by YouTube, and not an x drawn into the ad image itself to trick me into clicking the ad? I don’t. The X acts identically to the rest of the ad: the hover cursor is the same, no CSS changes, and the two possibilities are indistinguishable until I actually try it out.

How could you fix this? Make the X button partially extend outside the ad box in the upper right (an area I can infer the ad doesn’t have permission to draw to) and have its border highlight yellow when hovered (this assumes that YouTube ads are not mouse-responsive, which is true at the time of writing).

How I propose YouTube mid-reel ads should behave

What’s the big deal?

So, what is the big deal? After all, clicking the X in the YouTube ad does end up closing it; this seems pretty minor.

It’s a big deal because it’s a systemic problem following a very basic formula that is easy to fix. The general principle behind “distinguish navigation from content” is that the user always needs to be aware of who they are communicating with. 

An old virus trick, back when viruses spread across file-sharing networks, was the someMusic.exe trick: a trojan would be crafted with the default icon of Windows Media Player. Hence, if a user had file extensions hidden on a Windows PC (the default), the file would appear exactly like a music file and would be double-clicked readily. This is again an example of blurred communication: the operating system isn’t communicating unambiguously whether the icon you are seeing is a certification of file type or content picked by the application’s creator.

Can you figure out what’s going on here? It’s all about distinguishing the source of the content.

Thoughts on the equation of automated testing

Like any other principle, a type of testing justifies its existence by the time it saves you. That is, presuming that software quality is ensured by manual testing in the absence of automated testing, the criterion for deciding whether adding a test is appropriate is whether the time taken to write the test is less than the expected time saved by the test.

I want to touch on the notion of testing everything. Aside from being inefficient, I’m confident it’s not realistically possible. That is: I’d wager that for any real, used codebase with a set of tests one can write an alteration that will break the codebase while keeping all the tests passing.

If you accept the conclusion that testing everything isn’t plausible (and no, 100% code coverage certainly isn’t testing everything), then the question becomes when/where do we test? Well, obviously in cases when the time saved exceeds the time spent implementing the tests. But let’s break this formula down a little. What factors influence the time saved and inform the decision of which things to test?

  1. The probability that code will break (and the number of times)
  2. The difficulty of manual testing
  3. The probability that the test will catch the code breaking

This is a start, let’s break it down further:

  1. The probability that code will break (and the number of times)
    • Amount of change to be made to code
    • Difficulty of code
    • Team Size (i.e. people working on code who don’t understand it)
    • Intensity of external factors/dependencies (environments)
  2. The difficulty of manual testing
    • The variety of different cases in the piece of code (“branchiness”)
    • The visibility of a problem when present
  3. The probability that the test will catch the code breaking

Based on this list, a hypothetical case where implementing a unit test might be the most valuable: you have a complicated algorithm, built on advanced mathematics you don’t entirely understand, which you will need to optimize several times for speed. Moreover, other team members will be working on it too. It relies on special GPU processing with varying logic for different hardware. It also has branching logic that requires at least 8 test cases to ensure accuracy. Because the math is so complex, determining whether the function has answered correctly requires looking at every digit of a 15-digit number.

A hypothetical case where implementing a unit test might be the least valuable: you are putting one line of code into your app’s startup to set the background color to steel grey. It’s a basic one-liner. Nobody else will touch your code, and you know for a fact from your waterfall documentation that this requirement will never be altered. Your app only runs on a very specific device on one OS. There is no special logic involved. Every time you start up your app (which takes 0.01 seconds) you’ll immediately notice whether or not the background color is right.

I believe an expert engineer will always be doing this math with all code he writes, while keeping in mind a margin of error in his estimations. And I think any engineer who opts for a simpler equation is being simplistic and is, in that respect, the less advanced engineer.

The [limited] Role of Principles

A programming principle should only be followed when it provides more long-term benefit than any alternative. This may seem self-evident: a principle that doesn’t provide long-term benefit obviously isn’t a good principle.

This is something that mediocre problem-solvers don’t always understand: deviation from generally accepted principle can be a sign of weakness or, contrarily, a sign of strength.

To take an analogy: in chess it is said that a rook is worth five, a bishop three, a pawn one, and so on. These values aren’t anywhere in the rules of chess; they are just very good approximations, overall, that emerge from the game. About 90% of the time, you can use math based on these numbers to make good decisions.

However, 10% of the time other situational factors supersede this principle. Though there is an appeal to such simplicity, as one gets closer to mastering chess he must let go of these numbers and move to a more advanced situational calculus. The same applies to coding.
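The piece-value heuristic is easy to express in code, which is exactly why it is attractive. This is a sketch of my own; the values are the conventional ones, not rules of the game:

```python
# Conventional piece values -- approximations that emerge from play,
# not part of the rules of chess.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def trade_gains_material(piece_given: str, piece_taken: str) -> bool:
    """The ~90% heuristic: a trade is good if it wins material.
    The other ~10% of the time, positional factors supersede this,
    and no lookup table can capture them."""
    return PIECE_VALUES[piece_taken] > PIECE_VALUES[piece_given]

assert trade_gains_material("bishop", "rook")      # give 3, take 5
assert not trade_gains_material("rook", "knight")  # give 5, take 3
```

A one-line function is the whole principle; the mastery is in knowing when its answer is wrong.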

Even the purest of coding principles have exceptions: though you’ve probably never considered it, every time you copy-paste you participate in a violation of DRY.

I’d like to reiterate: every single (non-trivial) coding principle necessarily has an exception. Why? Because every principle incurs a cost in exchange for a situational benefit, and for any such principle we can imagine a scenario, however unlikely, where that situational benefit is entirely absent.

There is only one universal principle of problem-solving: the cost-benefit analysis. Every other principle only offers value as a heuristic approximation. This is the calculus by which an expert must weigh the relevance and generality of any other principle.

UI / UX Manifesto (part 1)

Every single application I use fails UI/UX in simple, fixable ways.

Part 1: An Analogy 

You travel 200 years into the future and walk into a bank. Fortunately, everything is in English. You recall having some savings that should have accumulated some interest by now, so you want to withdraw them. The first thing you do is look around the bank; nothing is familiar (1). So you look for somebody to ask for help. Though there are people, you don’t know who is an employee and who is not, since there are no consistent uniforms and there is no “counter” that would designate an employee-only zone (2). You walk up to a girl in grey and ask, “Hi, can you help me? I’m quite new here and am totally lost!”

The attractive young girl looks back and says, “Open all bookmarked accounts or Open all bookmarked accounts in table view?” You look at her blankly and she returns the blank stare before repeating herself exactly.

You realize she is an android. “Help?” you ask her. “Can I speak to your supervisor?” Nothing. You realize she is unable to communicate with you and also unable to help you find anybody who can (3). You decide you need to leave the bank and get help from outside. As you start to walk out the door, a guard blocks your path and asks, “Are you sure you want to terminate transaction initiation?” You have no idea what this means, but you’re uncomfortable, so you say yes and hustle away.

Perhaps you come away blaming yourself for not understanding. Even though you know that with time you could become an expert in such a place, you leave frustrated and preferring not to return.

Part 2: The Alternative

You walk into a futuristic bank. Their advanced technology recognizes that you have never been here before, but doesn’t aggressively nag you about it. You see several clearly distinct counters, each with one person in a special red outfit; you infer those are the employees. You walk up to the nearest station and say, “Hi, I’m new here; I need to check if I have an account.” Though full language-comprehension technology isn’t around, the android recognizes that you are a new person and that what you said was very far from an appropriate command. “I didn’t understand that. I am an automated account-opener robot. I recognize the commands ‘open account’, ‘close account’, and ‘help’. You can also always find help at our help desk.” She gestures.

You see that one of the stations has a HUGE sign labeled HELP hanging over it, and every desk has a sign over it (4). You feel a bit silly for not noticing it sooner. You go to the help desk and soon are watching futuristic video tutorials.

Part 3: The Lessons

  1. Follow a convention for design (preferably a common one).

    This is actually a point I have seen championed before in UI, yet it just doesn’t seem to be followed. It’s like going into a house and not realizing there was a door somewhere because the knob was so elaborate (or out of place) that it was unrecognizable.

    Chrome failing to follow any convention in its UI

    I’m going to pick on Chrome because it’s one of my favorite applications. Look at the debug toolbar. Could you realistically expect somebody to know how to disable the cache with that? Do you even remember how (perhaps you didn’t know it was possible)? It turns out the gear in the lower right (I don’t know why it’s separated from the other buttons) is effectively a menu, despite there being no visual indication of this.

  2. Distinguish navigation from content. 

    It should always be apparent to the user whether something is system-wide navigation, app-wide navigation, or custom content. This is usually done quite well and isn’t often a problem. But look at the picture below:

    Don’t you agree that the uninstall button is weirdly placed? Look at it for a while, and you may realize that it’s simply a bookmark to a page entitled “Uninstall Chrome,” but there’s no systematic way to know this.

    Chrome slightly blurring the line between content and menus

  3. All errors should give you unambiguous directions to their solution. Directing to an intuitive help interface is sufficient; the point is that somebody who’s never used your system should at the very least be able to systematically find their way to the information on how to use your system.
    DOS prompt failing to direct toward help on an error

    Case study: the DOS prompt. I remember, as a lad of 9, being at the magical black terminal that responded to secret passwords (called “commands”) consisting of unguessable combinations of characters. It was reminiscent of a haunted mansion in Scooby-Doo where the only way to get to a room is by playing the secret piano keys. I would type almost anything except the commands I knew (qbasic, dir, cd) and would get back an ominous “Bad command or file name.” Until one day, in an act of annoyance, I typed help (I meant it as more of an imperative at the machine than anything). All of the “Bad command or file name” messages should have mentioned this command! There was an amazing help tutorial back then in DOS (unlike now) that taught piping and everything, with enough examples for me to learn at that age with limited vocabulary.

    I want to take a second here and hammer this point. Qbasic was one of those “magic passwords” that I learned only through my brother’s friend; my whole foray into coding was almost stymied by terrible UI.

  4. Enumerate all available actions.

    This may be the biggest failing of modern UI. Going back to the Chrome example above, would you believe I didn’t know about that lower-right gear for months of using Chrome’s debug menu (and I certainly wasn’t the only one)? With almost any application, I routinely discover that there are hidden features available that I wish I had known existed sooner. Did you know you can drag a window to the left edge of your screen in Windows 7? Did you know you can drag tabs not only within one browser window but between browser windows/instances in Chrome? Did you know you can move a window with the keyboard by pressing Alt-Space (then down, then Enter, then the arrow keys)? Did you know you can see the memory usage of tabs in Chrome by pressing Shift-Escape? And don’t get me started on Gmail, Office, or Mac. This point is so in-depth, and I have so much to say on it, that it will take an entire post of its own to cover. Stay tuned for more.


Give it a ReST.

One term that comes up far too often in interviews is the highly applauded and highly misunderstood [1] [2] [3] ReST.

I’m not sure why it comes up in interviews, but here are a few guesses:

  1. As an indirect measure of how one stays up-to-date with the developer community.
  2. The interviewer is actually caught up in the trend himself, and believes that this “paradigm shift” is a crucial practice that can’t be learned in a matter of hours.
  3. They don’t know what else to ask, or they were asked it when they interviewed.

I think this particular question, and this type of question generally (even though I now know ReST so well that I nail it every time), are a bad idea.

  1. By sheer virtue of the fact that it comes up so much, a right answer can simply indicate that the candidate has done so many interviews that they looked up this particular idea after stumbling over it before.
  2. It’s not a direct measure of a candidate’s skill. Just as using browser version to infer a browser’s capabilities is inferior to simply directly testing the browser’s capabilities, such indirect measures of people are more error-prone.
  3. Staying absolutely up to the minute with the developer community isn’t important. In fact, by lagging behind a little (not too much) you miss out on a lot of flash-trends. Even if these trends do stick around (RoR, node.js) early adoption comes at a steep price.
  4. Knowing acronyms isn’t necessarily the best measure of being up-to-date. A question I hear too little, which is more important, is: “What’s your process for debugging an application?”
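Point 2’s browser analogy translates directly into Python terms (an illustration of mine, nothing ReST-specific): checking a capability directly is more robust than inferring it from a version number.

```python
import sys

# Indirect: infer a capability from a version number -- the equivalent
# of browser-version sniffing, and just as error-prone.
def isqrt_available_by_version() -> bool:
    return sys.version_info >= (3, 8)  # math.isqrt was added in Python 3.8

# Direct: test for the capability itself -- the equivalent of
# browser feature detection.
def isqrt_available_directly() -> bool:
    import math
    return hasattr(math, "isqrt")
```

On a standard interpreter both checks agree, but only the direct one keeps working when the assumption behind the version check changes.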

So I recommend ditching the ReST question, and really any question on specific acronyms or designs that could be learned in a matter of hours.