Simple Proposal to Reduce the Threat of Buffer Overflows

The buffer overflow is a comedy of errors. Though many different exploits arise from overflowed buffers, the majority rely on overwriting the return address that is stored on the stack for each function call. I consider this co-location of return addresses and ordinary data a fundamental problem.

Could we keep the return address away from these messy variables that have been overflowing for decades?

Yes. Why not simply have a second stack for return addresses?


Sounds great in theory, except that’s not how it works.

That’s how it could work. In fact, it’s fairly easy to make it work that way. Knowing the standard calling convention, we only need to amend a few things to keep the two separated:

  1. We need somewhere else to store this second stack.
  2. We need to maintain a pointer (a second ESP) to this other stack.
  3. We need to swap ESP to point to this other stack immediately before CALL and RETN and swap it back immediately after.

It’s fairly easy to create a proof-of-concept function.
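A minimal sketch in 32-bit x86 (NASM syntax). Every name in it (shadow_call, target, ret_stack, ret_esp, data_esp) is invented for illustration, and this single-threaded sketch ignores reentrancy, interrupts, and signal handlers:


; 1. Somewhere else to store the second stack
section .bss
ret_stack:  resb 4096

; 2. A pointer (a second ESP) into it, growing downward
section .data
ret_esp:    dd ret_stack + 4096
data_esp:   dd 0                    ; scratch slot for the ordinary ESP

section .text
shadow_call:
    mov  [data_esp], esp            ; 3. swap ESP onto the return stack...
    mov  esp, [ret_esp]
    call target                     ; ...so CALL pushes the return address there
    mov  [ret_esp], esp
    mov  esp, [data_esp]            ; ...and swap back immediately after
    ret                             ; (the wrapper itself still uses the ordinary stack)

target:
    mov  [ret_esp], esp             ; callee swaps to the data stack on entry,
    mov  esp, [data_esp]            ; so locals and buffers live far away from
    sub  esp, 64                    ; any return address (e.g. room for a buffer)
    add  esp, 64
    mov  [data_esp], esp            ; swap back just before RETN so that it
    mov  esp, [ret_esp]             ; pops from the return-address stack
    ret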

Okay, except I’m not gonna rewrite all my C with custom assembler in it.

The beauty of this solution is that it’s simple enough that it should have no meaningful side effects on normal programs. Thus it could be implemented simply by adding a new calling convention to the GCC compiler, plus a switch to set it as the default. That would let you rebuild your existing programs (e.g. building PHP from source) with this safety feature enabled.

WHY I love PHP Storm

It’s rare that I go out of my way just to celebrate something; I usually try to keep a balanced and nuanced view, and I don’t like the mindset of attachment. However, I wish somebody had introduced me to this tool sooner. Thus this article is on WHY I love PhpStorm (not merely that I love it).

Saying good things about an IDE is worthless if you don’t explain why, and nobody had ever explained WHY PhpStorm is so much better than the alternatives.

Just to help you make an informed decision on how much (or how little) my word is worth, these are the IDEs I’ve used enough to compare against for PHP and JS development:

Aptana, DreamWeaver, Eclipse, Netbeans, Notepad++, Zend Studio


The why:

  • Go to a file quickly (without using the mouse), using partial search with wildcards and an instantaneous dropdown: I can hit command-shift-N, type public/*user*.js, see a list of matching files, hit down to the correct one, press enter, and have it open, all with negligible latency.
  • Other shortcuts, like back and forward, are very useful and easily customizable.
  • I can search my workspace without using the mouse; it’s very quick, and it intelligently stops if it finds more than 1000 matches.
  • Spell-checks, with the ability to add to the dictionary. Because I’m a good speller, this functions both as an error catch (mistyped a variable name?) and as a way to make team interactions better (how annoying it is to use somebody else’s misnamed function every time).
  • The refactorings (rename, extract, and the like) are reliable.
  • It nails both PHP and JavaScript in one IDE.
  • It doesn’t crash. I leave it running for weeks, literally; I don’t close the app when I put my computer to sleep, often even over the weekend. Even after weeks it uses less than a gigabyte of RAM (no memory leaks) and is still fast.
  • Intelligently reads doc blocks to infer return types.
  • Can reliably “go to definition” (by keyboard shortcut, too) across files in both PHP and JS.

Javascript specifically:
  • Validates JS. Understands strict mode and ensures your code follows it where used.
  • Understands JSON correctly and validates on the fly.
  • Monitors for global variables and unused variables.
  • Perfectly understands prototypal inheritance for its autocomplete, etc.


“Distinguish Navigation From Content”

In my UI manifesto I mentioned a rule of good design: “distinguish navigation from content.” That means that for every piece of information your application presents, it should be immediately and unmistakably apparent whether that information is situational data (i.e. content, such as a particular webpage) or application functionality (i.e. navigation, like the address bar). This should be apparent from the very first use, without having to try it out.

Metaphorically speaking: if you run Safeway, don’t make the employee uniform simply a red shirt; if you do, anybody who walks in wearing a red shirt could be mistaken for an employee. Make the distinguishing feature something that cannot be mimicked.

Screenshot from YouTube, yellow circle added by me.

Screenshot from YouTube. Notice how there is no visual change whatsoever as the mouse hovers over the x button.


A real-life example from just this week: YouTube failed to do this. Notice this particular ad shows an “x” button in the upper right that indicates the ad can be closed. How do I, as a user, know that this is a real X button drawn by YouTube, and not an x painted into the ad image itself to trick me into clicking the ad? I don’t, because the X acts identically to the rest of the ad: the hover cursor is the same, no CSS changes, and the two possibilities are indistinguishable until I actually try it out.

How could you fix this? Make the X button partially leave the ad box in the upper right (an area I can infer the ad doesn’t have permission to draw to) and have it highlight its border yellow when hovered (this assumes YouTube’s ads are not mouse-responsive, which is true at the time of writing).
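For concreteness, a rough sketch in browser JavaScript of a player-drawn close button along those lines. The class name and structure are hypothetical; I’m not claiming this is how YouTube’s player actually works:


// Hypothetical player-level close button that ad content cannot mimic:
// it overhangs the ad box and changes style on hover.
var adContainer = document.querySelector(".ad-container"); // hypothetical markup
adContainer.style.position = "relative";   // so the button can overhang it

var closeBtn = document.createElement("button");
closeBtn.textContent = "x";
closeBtn.style.cssText = "position:absolute; top:-10px; right:-10px; border:2px solid transparent;";
closeBtn.onmouseover = function () { this.style.borderColor = "yellow"; };
closeBtn.onmouseout = function () { this.style.borderColor = "transparent"; };
adContainer.appendChild(closeBtn);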

How I propose youtube mid-reel ads should behave

What’s the big deal?

So, what is the big deal? After all, clicking the X in the YouTube ad does end up closing it; this seems pretty minor.

It’s a big deal because it’s a systemic problem, following a very basic formula, that is easy to fix. The general principle behind “distinguish navigation from content” is that the user always needs to know who they are communicating with.

An old virus trick, back when viruses spread across file-sharing networks, was the someMusic.exe trick: a trojan would be crafted to carry the default icon of Windows Media Player. If a user had file extensions hidden on a PC (the default), the file would look exactly like a music file and would readily be double-clicked. This is another blurring of communication: the operating system isn’t telling you unambiguously whether the icon you’re seeing is its own certification of the file type or content picked by the file’s creator.

Can you figure out what’s going on here? It’s all about distinguishing the source of the content.

Simple Accessors and Mutators in Javascript

Good OOD calls for accessors and mutators, which save you a lot of time in the long run by giving you hooks for running actions whenever a variable changes, as well as a layer of abstraction. Sometimes, though, it’s a pain in the ass to write accessors and mutators for variables on the off chance that we’ll ever need that abstraction. Here’s a way to make accessors and mutators painlessly:


function Accessor(name) { return function () { return this[name]; }; }
function Mutator(name) { return function (newValue) { this[name] = newValue; }; }

var MyClass = function() {
   this.someVariable = 10;
};

MyClass.prototype.getSomeVariable = Accessor("someVariable");
MyClass.prototype.setSomeVariable = Mutator("someVariable");

var example = new MyClass();
example.getSomeVariable();//10
example.setSomeVariable(12); 

We can even condense a little more:


// Assumes camelCase convention; JS has no built-in ucfirst, so we define one
function ucfirst(str) { return str.charAt(0).toUpperCase() + str.slice(1); }

function AddAccessors(Obj, name) {
  Obj.prototype["get" + ucfirst(name)] = Accessor(name);
  Obj.prototype["set" + ucfirst(name)] = Mutator(name);
}

AddAccessors(MyClass, "someVariable");

example.setSomeVariable(15);
example.getSomeVariable(); // 15

And if you don’t mind “polluting” the Object prototype it can get even tidier. However, this breaks jQuery (which you don’t want) and requires you to write good code (i.e. guard your for..in loops with hasOwnProperty), which I’m sure you do anyway if you read this blog.
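For illustration, a minimal sketch of that tidier variant and of why it bites. The name addAccessor is mine, and this is exactly the pattern the warning above is about:


// Sketch only: extending Object.prototype means every object, functions
// included, inherits addAccessor, so MyClass.addAccessor resolves.
Object.prototype.addAccessor = function (name) {
  this.prototype["get" + ucfirst(name)] = Accessor(name);
  this.prototype["set" + ucfirst(name)] = Mutator(name);
};

MyClass.addAccessor("someVariable"); // tidy: works on any constructor

// But the new property is enumerable, so any unguarded for..in loop
// (as in older jQuery internals) now sees it on every plain object:
for (var key in {}) { console.log(key); } // logs "addAccessor"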

Refactoring with Functional Programming Like a Boss in Javascript

An example case of refactoring JavaScript with functional programming.

Let’s consider an issue that recently came up in a game I was working on. I wanted to create a gradient across the time dimension (i.e. a smooth color transition over time). This can create a flashing effect, among other uses. To make this versatile, I decided I wanted a function taking two colors and a float (0 to 1) representing how much of color2 should be used in the mixture (the rest being color1). Thus partial_mix(c1, c2, 0) should give just c1, and partial_mix(c1, c2, 1) should give just c2.

A sample solution ***BAD CODE***:


/**
* A bad solution to this problem
*/
function partial_mix(c1, c2, pct_b) {
  var red1, green1, blue1, red2, green2, blue2;
  red1   = parseInt(c1.substr(0, 2), 16);
  green1 = parseInt(c1.substr(2, 2), 16);
  blue1  = parseInt(c1.substr(4, 2), 16);

  red2   = parseInt(c2.substr(0, 2), 16);
  green2 = parseInt(c2.substr(2, 2), 16);
  blue2  = parseInt(c2.substr(4, 2), 16);

  return mix(red1, red2, pct_b) + mix(green1, green2, pct_b) + mix(blue1, blue2, pct_b);
}

function mix(n1, n2, pct_b) {
  return (n1 + (n2 - n1) * pct_b).to2dhex();
}

Problem 1: Not DRY across red, green, and blue. We did the exact same thing to red, green, and blue, yet we copy-pasted to handle the redundancy. Not good. Better to use arrays (with the added benefit of easing the later addition of an alpha channel):


var x, color1 = [], color2 = [], result = [];
for (x = 0; x < 3; x++) {
  color1[x] = parseInt(c1.substr(2 * x, 2), 16); // substr takes (start, length)
  color2[x] = parseInt(c2.substr(2 * x, 2), 16);
  result[x] = mix(color1[x], color2[x], pct_b);
}
return result.join("");

Problem 2: The code assumes a 6-digit format. What if we want to be able to pass 3-character formats too (for either parameter, neither, or both)?

It should be apparent pretty quickly that it isn’t pretty to make the code we have versatile in this respect. However, by following the “component pattern” we can instead regularize our input so that it is always 6 digits, and leave our code intact:


var doubleString = function (str) { return "" + str + str; };
var toSixDigitColor = function (color) {
  if (color.length < 6) {
    return color.split("").map(doubleString).join("");
  }
  return color;
};

c1 = toSixDigitColor(c1);
c2 = toSixDigitColor(c2);
// ...
}
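For instance, toSixDigitColor("f0a") (an arbitrary example input of mine) doubles each character, yielding "ff00aa".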

Problem 3: We still duplicate the code performed on color1 and color2:

function mix_colors(c1, c2, pct_b) {
  var mix = function (c1, c2) { ... };            // closes over pct_b
  var doubleString = function (str) { ... };
  var toSixDigitColor = function (color) { ... };

  var shades = [c1, c2].map(function (color) {
    color = toSixDigitColor(color);
    // convert "rrggbb" into an array of 3 integer channel values
    return color.split("").chunk(2).map(function (a) { return parseInt(a.join(""), 16); });
  });

  return _.zip(shades[0], shades[1]).map2red(mix).join("");
}

And that’s what I call leveraging functional programming in JavaScript. We know it’s well-written code because when we ask what we’d have to do to add an alpha channel, we realize the answer is nothing; it’s versatile enough to handle it automatically (even when combined with the requirement of allowing shorthand colors). Notice I pull in underscore for zip and write two custom helpers: map2red and chunk.


Number.prototype.to2dhex = function () { return ("0" + parseInt(this, 10).toString(16)).substr(-2); };

// More on a more general solution to this later
Array.prototype.map2red = function (func) { return this.map(function (v) { return v.reduce(func); }); };

// Breaks an array into sub-arrays of length = size
// (this is begging to be rewritten as a right apply on divide composed with parseInt, passed to _.groupBy)
Array.prototype.chunk = function (size) {
  var tmp = [], x;
  for (x = 0; x < this.length; x++) {
    tmp[Math.floor(x / size)] = tmp[Math.floor(x / size)] || [];
    tmp[Math.floor(x / size)].push(this[x]);
  }
  return tmp;
};
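As an aside, here is one reading of the _.groupBy rewrite the chunk comment hints at (using Math.floor in place of parseInt-after-divide), plus a quick sanity check of mix_colors with values of my choosing; both are sketches, and the check assumes the elided mix body closes over pct_b as in the earlier versions:


// One possible _.groupBy-based rewrite of chunk: group elements
// by which size-sized bucket their index falls into.
Array.prototype.chunk = function (size) {
  var groups = _.groupBy(this, function (value, index) {
    return Math.floor(index / size);
  });
  return _.values(groups); // keys "0", "1", ... come back in order
};

mix_colors("f00", "0000ff", 0.5); // "7f007f": each channel meets halfway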

Thoughts on the equation of automated testing

Like any other principle, a type of testing justifies its existence by the time it saves you. That is, presuming that software quality is otherwise ensured by manual testing, the criterion for whether adding a test is appropriate is whether the time taken to write it is less than the time it can be expected to save.

I want to touch on the notion of testing everything. Aside from being inefficient, I’m confident it’s not realistically possible. That is: I’d wager that for any real, used codebase with a set of tests, one can write an alteration that will break the codebase while keeping all the tests passing.

If you accept the conclusion that testing everything isn’t plausible (and no, 100% code coverage certainly isn’t testing everything), then the question becomes when and where to test. Obviously, in cases where the time saved exceeds the time spent implementing the tests. But let’s break this formula down a little. What factors influence the time saved, and so inform the decision of what to test?

  1. The probability that code will break (and the number of times)
  2. The difficulty of manual testing
  3. The probability that the test will catch the code breaking

This is a start; let’s break it down further (and then, after the list, put rough numbers on it):

  1. The probability that code will break (and the number of times)
    • Amount of change to be made to code
    • Difficulty of code
    • Team Size (i.e. people working on code who don’t understand it)
    • Intensity of external factors/dependencies (environments)
  2. The difficulty of manual testing
    • The variety of different cases in the piece of code (“branchiness”)
    • The visibility of a problem when present
  3. The probability that the test will catch the code breaking
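To put rough numbers on it, a sketch of the arithmetic these factors feed into. Every name and value below is a hypothetical estimate of mine, not an established formula:


function testIsWorthIt(est) {
  // Factor 1: how many breakages to expect over the code's life
  var expectedBreaks = est.probabilityOfBreak * est.timesCodeWillChange;
  // Factors 2 and 3: what each breakage costs to catch by hand,
  // discounted by the chance the test actually catches it
  var expectedMinutesSaved = expectedBreaks
                           * est.probabilityTestCatchesBreak
                           * est.manualTestMinutesPerCheck;
  return expectedMinutesSaved > est.minutesToWriteTest;
}

testIsWorthIt({
  probabilityOfBreak: 0.3,         // per change made to the code
  timesCodeWillChange: 10,
  probabilityTestCatchesBreak: 0.8,
  manualTestMinutesPerCheck: 15,
  minutesToWriteTest: 30
}); // true: 0.3 * 10 * 0.8 * 15 = 36 minutes saved vs. 30 spent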

Based on this list, a hypothetical case where implementing a unit test might be the most valuable: you have a complicated algorithm, using advanced mathematics you don’t entirely understand, which you will need to optimize several times for speed. Moreover, other team members will be working on it too. It relies on special GPU processing with varying logic for different hardware. It also has branching logic that requires at least 8 test cases to ensure accuracy. And because the math is so complex, determining whether the function has answered correctly requires checking every digit of a 15-digit number.

A hypothetical case where implementing a unit test might be the least valuable: you are adding one line to your app’s startup to set the background color to steel grey. It’s a basic one-liner. Nobody else will touch your code, and you know for a fact from your waterfall documentation that this requirement will never change. Your app runs only on one very specific device on one OS. There is no special logic involved. And every time you start your app (which takes 0.01 seconds) you’ll immediately notice whether or not the background color is right.

I believe an expert engineer is always doing this math for all the code he writes, while keeping in mind a margin of error in his estimations. And I think any engineer who opts for a simpler equation is being simplistic and is, in that respect, the less advanced engineer.

The Role of Principles

A programming principle should only be followed when it provides more long-term benefit than any alternative. This may seem self-evident: a principle that doesn’t provide long-term benefit obviously isn’t a good principle.

This is something that mediocre problem-solvers don’t always understand: deviation from generally accepted principle can be a sign of weakness or, contrarily, a sign of strength.

To take an analogy: in chess it is said that a rook is worth five points, a bishop three, a pawn one, and so on. These values appear nowhere in the rules of chess; they are merely very good approximations that emerge from the game. About 90% of the time, you can use arithmetic on these numbers to make good decisions: trading a rook (five) for a bishop and a pawn (three plus one) is usually a losing exchange.

However, the other 10% of the time, situational factors supersede these numbers. Though there is an appeal to such simplicity, as one comes closer to mastering chess one must let go of the fixed values and move to a more advanced situational calculus. The same applies to coding.

Even the purest of coding principles have exceptions: though you’ve probably never considered it, every time you copy-paste you participate in a violation of DRY.

I’d like to reiterate: every single (non-trivial) coding principle necessarily has an exception. Why? Because every principle incurs a cost in exchange for a situational benefit, and for any such principle we can imagine a scenario, however unlikely, in which the situation that benefit depends on never arises, leaving only the cost.

There is only one universal principle of problem-solving: the cost-benefit analysis. Every other principle offers value only as a heuristic approximation. This is the calculus by which an expert must weigh the relevance and generality of any other principle.

A UI Case study: Chrome Developer Tools

(All pictures were taken at the time of writing; no doubt the features and interface will change with time.)

For this case study I’ll pick on Chrome, because it’s hands-down my favorite browser and because I want to elaborate on the criticism I made in my UI manifesto. It’s excellent in many ways, particularly the developer tools. And yet I still periodically learn about cool hidden features it has.

Case in point: http://www.igvita.com/slides/2012/devtools-tips-and-tricks/. It’s a mixed compliment to have an article written about your software entitled “Wait, [your software] can do that?” It’s great to impress with your feature set, but the fact that such an article needs to be written to unearth these capabilities is indicative of an underlying unintuitive UI.

My thoughts below:

How chrome dev tools looked at the time this post was authored.


My Complaints


A mockup of a redesign of chrome dev tools UI that fixes many of the issues