The Cult of the Fine Arts

When I first went to college my beliefs matched what I was told: “College is a place to learn facts, skills, and critical thinking.”

That seemed believable. Some subjects, like mathematics, really matched that description. My first sociology class was not like that. Sure, the topics were different, but the entire approach was different too. There was no presupposition that truth is inevitable if you stay rational and measure objectively. Rather, the professor kept adding moral and emotional intonation as he opined on justice and progress.

Philosophy class had a certain weirdness too. We’d be presented with theories from philosophers of ages past (“Mr ABC first theorized a smallest indivisible unit of matter, termed the atom”, “Kant’s moral theory was…”) and most students’ reactions were “Why are you telling me this? Only physicists can answer questions about subdividing matter,” and “Kant’s moral theory leads to obvious contradictions.”

Philosophy majors and teachers really didn’t like hearing that. They never really disagreed that it was true. They mostly called it “hubris.” Of course, as far as I know, philosophy concerns itself with what is true, not with what is hubris. Yet clearly these people had a very emotional relationship with “their” subject.

It’d be easy to just say those were weird professors, but then again in English class, and even film class, it was “It is crucial to study the great original films like Citizen Kane,” and in music, “Understanding classical, from which all other music derives, is the primary focus.”

When taken all together, it’s pretty plain to see that soft disciplines have this cult-ish tendency to put historical figures (and their works) on a pedestal. Part of that is “the canon,” this collection of books that nobody enjoys reading (or paintings that nobody enjoys looking at, or classical songs that few people listen to, or moral philosophies that nobody lives by).

I guess that’s fine and harmless. Until people start believing it. The problem with believing that Herman Melville was a “great author” is that “great” is one of those fluffy words that people say loudly and get duped into caring about. Moby Dick wasn’t widely appreciated during Melville’s lifetime, and his subsequent books were also harshly criticized. And I guarantee you that if that book were submitted for publication for the first time today it wouldn’t have a shot; the publishing industry is very return-on-investment focused. Writing a book “like Moby Dick” would all-around be a huge mistake.

My point is, the classics “cult” gives you this lie that if you want to be a “great writer” you need to study and emulate these past figures who have zero market potential today. This is true again of philosophy. And again of film. And sociology.

Let’s spell out the lies of “Classics”:

  • That you must study and revere those who came before you to succeed.
  • That there is this thing “greatness” that exists that only academics can declare, pay no mind to books that grip a whole generation (e.g. Harry Potter).
  • That someday there will be a “next” “Great American Novel” and that a student today could write it. (In my estimation people are more likely to start talking about “Great American Videogames” rather than novels, because they are the new most relatable artform).

I’m sure I’m not the only one to see this. What strikes me as most odd is the number of people who never enjoyed a classic yet seem content to defend the status quo (and send their kids to schools where they are forced to endure it).

The “ABC” Problem

Definition: The problem that technologies tend to do overlapping chunks of a process but do not interoperate.

Tool 1 does ABC (competitor tool also does ABC but uses YAML and stateless philosophy)
Tool 2 does BCD (competitor tool also does BCD, but has a preferable UI, but no role-based permissioning)
Tool 3 does CDE (competitor tool also does CDE, but also enables Foo but can’t support Bar)

But I just want ABCDE, with the minimal number of surprises. Unfortunately there’s no easy way to get it, and lots of complex ways to try to make it happen.

There are many combinations of tools that fit this description (e.g. Jenkins, Spinnaker, Kubernetes).

This isn’t something that’s easy to fix, but it’s worth naming.

Never solve hard problems

Turn hard problems into easy problems, and solve easy problems.

This is a quote I use a lot. I think it’s a quote that seems brilliant when you already understand it, and seems confusing at best when you don’t (and the people who already understand it weren’t the ones who needed to hear it). So let’s unpack it.

A lot of times a problem seems very tricky. Let’s take a toy example: “A number multiplied by 13, then decreased by 19, then increased by the original number, is 107. What is the number?” At first it may seem to be a problem for a math whiz, but once you know algebra any 8th grader can set it up and solve it: x * 13 – 19 + x = 107, thus x = 9.
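
The same mechanical translation can even be sketched in code. This is a minimal illustration using only the toy numbers above:

```scala
object SolvePuzzle extends App {
  // x * 13 - 19 + x = 107 rearranges to 14x = 126.
  // Solve the easy problem, not the hard one:
  val x = (107 + 19) / (13 + 1)
  println(x) // prints 9
}
```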

If you can be either the kind of person who does this in his/her head, or the kind of person who knows how to turn the statement into a basic formula and solve the formula, then be the second one. Every time.

Okay but why?

Well let’s move the metaphor from math to software. Would you rather be the genius engineer who can build a whole system in his head with perfect recall, so that you never need a comment, a test, or a design doc? Or would you rather be the engineer who has only an average memory, but writes great tests, comments, and design docs? Again, strive to be the second one, every time.

For one, almost all engineering problems get more complex with time. Even if your software sandcastle is easy to wrap your head around today (or your algebra problem can be solved in your head), there will come a point, as it grows, when it moves beyond what any one person can understand without a reference. Once this happens, the “manageable” system will suddenly be a completely unreadable horrorscape, and anybody approaching it for the first time from the outside will want to stay miles away from it, while the person who dealt with the complexity piece by piece over months or years may be blind to it.

Then of course, there’s everybody else who ever has to work with your code. If you turn a hard problem into easy problems, those easy problems can be verified or solved by those around you too. Suppose somebody else needs to make a change to a codebase: is it better to change a perfectly-efficient but completely undocumented codebase, or a simpler, documented one? The latter codebase is often more valuable.

Additionally, a service that is composed of simple, reusable, well-defined parts can often share those parts (e.g. a caching service) with other tools, and often those parts already exist as industry-standard tools. We live in a software era where there are many, many giants whose shoulders you can stand on.

Why does it need to be said?

Well, the truth is that culturally we always tell stories about the “inscrutable genius” who can “solve things in their head.” The Sherlock who makes a huge show out of guessing what somebody ate for breakfast last Tuesday based on what they’re wearing. We conflate hard-to-understand things with problems that require a great intellect.

Culturally, many people are more impressed by the kid who sits in the back of class, pretends not to try, and gets a perfect score showing no work than by the one who is methodical. That kid, who secretly loves to show off how smart they are, thrives on hard problems, because without hard problems they can’t show off their gift. They may even seek out, or create, hard problems (from games and puzzles to overwrought solutions). Hence they are more likely to write a codebase nobody else can touch. But the kid in the back of the class will never accomplish certain things with that approach — you’ll never build a bridge without taking care.


Strolling through my suburban neighborhood is a pleasant experience. The air is clean, friendly-looking people walk their dogs, the fall trees are changing, lawns are clean. Perfect.

At least that’s how I felt the first half-dozen times. Pretty soon, though, it still seemed very nice, but perhaps a bit predictable. Everything was kind of standard; there wasn’t much variety in the houses.

And it struck me that perhaps that was telling me something. That for the people owning these houses, a clean lawn, a small tree with its leaves swept, and shuttered windows were something they thought worth aspiring to. They were striving to be as tidy and inoffensive as their neighbors.

It occurred to me then what may be obvious. That the compulsion to fit in must be born of the fear that one is less than normal. Why else would one strive for regular?

But there is another option… being more than regular.

Perhaps it seems like hubris to not ask “Why aren’t I more normal?” and instead ask “Why isn’t normal more me?” I hope not. I hope for you that seems like a perfectly reasonable question.

“Ticket Monkey”

tl; dr – I hypothesize the physical and emotional distance of ticketing systems reduces job satisfaction and propose solutions.

Bear with me.

I remember when I was a kid I knew how to code. I didn’t get as good grades as I could have, I didn’t shine in a few other ways, but despite this I knew I could do something special. I could build interesting things and I could fix problems.

When it came to my first job I felt the same way. At the time I’d say I was still a bit “rough around the edges,” but I could fix problems, sometimes hard ones, and that made me useful. It felt great to be needed for the one thing I knew I was good at. There was nothing like somebody walking over to my desk to tell me about a problem that was making their lives hard, and to be able to fix it for them.

Pretty soon though a number of more senior engineers started speaking up. “We can’t have interruptions like this. We need to be more structured. We need to track this work. We need to put the asks in JIRA.” I had the same reaction a lot of people do, which was “Why do we need to track this? Do we not trust that everybody is working to the best of their ability?” I got the usual answers about the importance of estimation, planning, etc., and didn’t entirely understand, but accepted.

Years later, senior myself, I would insist – every code change must have an associated ticket. That isn’t to say I’m some kind of agile evangelist, but I certainly found ticketing systems indispensable, even for the smallest of teams. Somewhere along the way I had forgotten that I had ever felt otherwise.

Next role, I became the main SRE for a chaotic startup. Some people would hate it, but I loved it. The buzz and thrill of a war-room was incomparable to the transactional feature development that would eventually lead to an A/B release that might then lead to a small % improvement in a metric. I loved fixing things for other engineers who I saw in person, who were happy and impressed that I would fix their issue and improve the whole system to make the issue impossible in the future.

What I loved wasn’t inherent to being an SRE itself. Though a high-stakes role, it can also be as isolating as any other role. It can be waking up to an alarm at night to log in by oneself and fix an issue, file a ticket, and have nobody notice.

What I loved was face-to-face interaction and a shared emotional narrative.

From another angle – when I go to Whole Foods and watch the cashier scan my overpriced items, I wonder what they think. Though small talk with strangers is far from a strength for me, I try anyway to connect in whatever small way I can. I don’t think I could force a joke and trust the timing to line up with the variable checkout time. I try to be nice without being stiffly polite. It may mean bagging my own groceries, or resisting the instinct to stare at my phone. I can’t even say for sure that cashiers notice or like it. I simply presume that if I were in their shoes I wouldn’t want to feel like a machine.

The Answer

Well first, the question: How do we reconcile our desire to feel individually noticed and human with the business needs of standardization? I think this question is the most important piece of this article. I’ll make a proposal or two, to get the ball rolling, but the topic is wide-open.

  • What if tickets were filed normally through JIRA, but then the filer also had to come over to the desk of the person operating on the ticket and explain the issue face-to-face?
  • What if tickets are described in a way that stresses the emotional importance to the user of the feature?
  • What if end-user satisfaction and wishes, as captured by PMs who communicate with users, was shared with engineers?

How would you achieve it?

Scala Wishlist

I’d call myself a down-to-earth Scala engineer. By that I mean that I never say words like “covariant” unless I absolutely have to. I see programming languages as tools to make my life as easy as possible – I chose Scala because it lets me get the most output for the least input. I think this is worth stating up-front, because my initial impression of Scala was that it was daunting and geared toward academics, and I admit it does have some up-front costs.

What’s cool about Scala – things it makes easy


def showOff(): Unit = {
  val someList = List(1, 1, 2, 5, 5, 5, 7, 8, 25, 31, 231, 2335)
  val edgeCase = List[Int]()
  val uniqueCount = someList.distinct.length
  val secondToLast = someList.dropRight(1).lastOption
  val secondToLast2 = edgeCase.dropRight(1).lastOption // None

  import scala.collection.Searching._
  val res = // find index of 7 in logN time = Found(6)
  val res2 = => x * x).search(100)
  // in logN time, find where 100 stacks up against this list, while only squaring logN items = InsertionPoint(8)

  val prefixSums = someList.scan(0)(_ + _) // 0, 1, 2, 4, 9...
  val groups = someList.groupBy(_.toString.length) // map into a list of 1-digit nums, 2-digit nums, 3-digit nums
  val lookupLast = someList.zipWithIndex.toMap // build a HashMap lookup of int -> index of last occurrence
  val lookupFirst = someList.zipWithIndex.distinctBy(_._1).toMap // build a HashMap of int -> index of first occurrence

  val countBy = someList.groupBy(identity).mapValues(_.length) // build a lookup of value -> count of occurrences
  val matrix = Array.tabulate(10, 10)((y, x) => y * 10 + x) // creates a matrix of nums 0 -> 99
  val rowIsOdd = matrix(3).forall(_ % 2 == 1) // false
  val colIsOdd = (0 to 9).map(i => matrix(i)(1)).forall(_ % 2 == 1) // true

  val sqrtCache = collection.mutable.Map[Double, Double]()
  def memoizedSqrt(i: Double): Double = sqrtCache.getOrElseUpdate(i, Math.sqrt(i))
}

The above just shows that Scala has a bunch of helper functions that may seem trivial at first, but with mastery are outstanding building-blocks for all sorts of problems. Of course lodash achieves something similar for JavaScript, for example.

Type Safety

Next, the type system. I think Scala’s type system (particularly around Options and Futures) prevents a huge swath of bugs around edge-cases and concurrency by pointing out logic gaps at compile time. It takes longer to get Scala code to run, but once it does it’s far more likely to work correctly than the equivalent node code.
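
As a minimal sketch of what that looks like in practice (the lookup map here is purely illustrative), Option turns the “missing value” case into a compile-time obligation rather than a runtime surprise:

```scala
object OptionDemo extends App {
  // A hypothetical lookup that may or may not find a user.
  def findUser(id: Int): Option[String] =
    Map(1 -> "ada", 2 -> "grace").get(id)

  // findUser(3).length would not compile: Option[String] has no .length,
  // so the compiler forces us to handle the None case before using the value.
  val name = findUser(3).getOrElse("anonymous")
  println(name) // prints "anonymous"
}
```

The equivalent node code would happily return `undefined` and fail later, somewhere far from the lookup.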

What I wish were changed

The gimmes

A for loop that functions like the C-style for-loop. Many linked-list and tree operations I write (i.e. leetcode) end up having to use whiles, which is blah.

The ability for a collection to have more than Int.MaxValue items. This is again something that comes up more from leetcode, but given that some collections (like Ranges and views) don’t (or at least needn’t) allocate that memory, it’s actually quite reasonable. This issue rears its head when you want to .search a range of Longs, for example.

The culture

After the initial step of seeing map and reduce and thinking “cool, that looks handy,” some of us have run into people peddling a “necessary next step” of learning an exhausting thesaurus of interconnected academic buzzwords (covariant, functor, monoid…) that is really off-putting. It’s been my experience that learning what a monad is has had zero effect on my productivity, and is at best a neat philosophical abstraction. Part of this complexity originates from Scala’s type system, but the rest comes across as elitism and cargo-culting, fair or not.


Compile times

This is one of the real killers. I had a junior coworker who spent several hours trying to get his program to run, all while getting an unintelligible error. It turned out a brace was missing somewhere (or some other common mistake), which resulted in the compiler running for minutes and then failing. Theoretically Scala offers incremental compilation, but if it’s working, it isn’t fast enough. Improving this alone by 75% would be more important than any and all of the fixes in Scala 3.0.

Overcomplicated defaults (including libraries)

Again this is a sad one, because it would have been so easy for the standard libraries to cater toward beginners and only slowly expose complexity. I’ve seen tragic bugs (e.g. a super-fast language on a super-powerful machine misconfigured to only run 1 web-request at a time) that negate the entire value-promise of Scala when put in the hands of non-experts. I’ve seen startups with hundreds of requests a day create a whole nightmare-to-debug actor-powered service because Akka. Documentation, simplicity, and guiding novices away from power-tools could have a much higher payoff than oddities like LazyList.

Also, real-world concerns (cross-future UUIDs, seeing memory usage per-thread) seem to be an afterthought, though this is true of most languages too.

The emotional tax of bad recommendations

tl; dr: I find some web 2.0 content so draining it undermines the experience of entire platforms

I think we can all relate to those moments in life when you feel overwhelmed — beset on all sides by obligations or problems. In such times I try to turn to humor, distractions, and escape.

For over a year now I’ve known that certain social-media platforms I was trying to “escape” to were actually promoting stress-inducing content that was having exactly the opposite effect. “This is my problem,” I told myself, “I’m the one choosing to open these sites.”

Twitter I never even touched. Then I cut some news sites. Then LinkedIn and Nextdoor. The last stragglers are reddit, youtube, and news.ycombinator.

Reddit is the simplest illustration of this phenomenon. I have an exhausting meeting coming up, I need a 10-minute breather to clear my head, I open reddit for cat pictures, and boom, suddenly I’m witnessing a cop battering a protestor. My body has an immediate visceral response. So caught up am I in righteous indignation that I completely forget, in the moment, that my goal was to relax. “I need to care about this!” screams my body.

I pruned my reddit feed, and now I’m down to “r/madeMeSmile” and “r/HighQualityGifs.” Unfortunately, if I ever log out of the site I get this default feed, which is about 25% outrage-bait (e.g. a video of a group of people confronting a car driving on the sidewalk) (e.g. “Controversial law allows police to seize and sell cars of non-lawbreakers, keeping the proceeds”).

The math here is working against me. One cute picture of a cat does not negate one horrifying video of somebody slipping on the stairs and hurting themselves. Or in the case of HN, one intense rant is not negated by one thoughtful comment.

Youtube, though, is the platform I’d miss the most. I seem to have no control over my youtube feed, but ideally it’d be: nothing political, nothing with the words “destroyed”/“owned”/“idiot,” nothing about millennials/boomers, nothing where the thumbnail is a face-palm, nothing with the phrase “You won’t believe.”

I go to youtube for things like lockpicking videos, gameboy repair, primitive survival. Unfortunately, somehow I seem to get recommended a ton of Joe Rogan clips, unwanted fringe political videos, and a mix of other neutral unwanted content. And the comments are the worst.

Unfortunately, what youtube’s algorithm doesn’t understand is that one single inflammatory bad video recommendation (even if I don’t click it) may make my entire youtube experience negative. Most topics have an emotional impact on people, and web 2.0 needs to start accounting for this, or I suspect others will eventually find themselves booting platforms wholesale too.

If this were a youtube video, this is the point where I’d be telling you that this is some crisis to freak out about. But honestly it’s not really an urgent problem. I think there’s an opportunity for us as individuals to become increasingly aware of when we’re being baited/provoked and avoid platforms. I also think there’s an opportunity for new platforms or algorithms to form that prioritize giving the user the emotional experience that they are seeking.

A different github design

Earlier today HN reviewed a proposed github redesign and largely didn’t favor it.

I figured I’d take a stab at it.

See the proposed before-and-after pictures above. Here are some principles that seem intuitive to me, and maybe designers might consider them too.

  1. The visibility of an item should be proportional to its usefulness. This includes size, placement, brightness/colorfulness.
  2. There is always a flexible solution which caters to both experts and novices simultaneously.
  3. Hierarchy of UI should reflect conceptual hierarchy.

In practice here are the things I changed in my mockup, based on those 3 rules:

  • “Create new file” has no business being on the same line as “Clone repo.” It is a branch-specific operation next to a repo-specific operation.
  • The current-branch dropdown/button should be connected to the file-list widget. The file-list widget shows the files of that branch. The two are logically interdependent but visually separated.
  • Wiki and Insights are features I have never used on github and may never use. They should be hidden by default. They can intelligently show for repos that have ever once used those features.
  • The repo description shouldn’t be in the code tab. The repo description is an attribute of the repo itself.

Good luck on your design journeys.

My Brainf Quine

A quine is something simple to describe yet surprisingly challenging to implement – a program which outputs exactly its own source-code. It is something of a rite-of-passage for an engineering aficionado. Those of us who consider ourselves one level beyond aficionado are always looking to up the ante. I took two exploratory years off after high school, and remember them fondly. Those were the days I could explore anything I wanted. Time was so abundant and problems were so scarce that I’d take on challenges like quines recreationally.

It’s a magical place to be in, when any path feels possible and no obligation feels mandatory. It’s a time when one’s world-view is fully open, and interesting opportunities seem everywhere. It’s a time before traditional adulthood, where one can feel exhausted by unending obligations (cable bill, health insurance, change my oil, arrange my 401k, exercise more, sleep more, read more, relax more, set up a dentist appointment, pick up groceries, return that item, answer those emails from those family members).

Once we’re in the “real world” it can be a challenge to remember that initial feeling of possibility. Once the lion’s share of our time is spoken for, one may switch modes from expanding exploration to reduction. A mode where we filter our world into a functional place of checklists and routines to optimize staying afloat when our time, attention, and concern run short and we must ration them.

Anyways, I reminisce. But back in that era, one thing my friends and I would do is make coding challenges for each other. After a friend introduced me to “brainfuck,” a language with only 8 commands, all represented as single characters, I challenged him to write a quine in brainfuck. I can see by googling that many other people like us are out there, people who have been to that place where you are hungry for the next challenge to create for yourself.

Recently I found my quine from back then, and it brought back memories.




Sopping Wet — Today’s Software Ecosystem Isn’t DRY

Tl; Dr:

  • Everyone seems to understand DRY is good at the program level, but they don’t seem to understand it at the community level.
  • Examples of useless duplication include many programming languages, libraries, package managers, data-stores, tools
  • This community duplication reduces interoperability and slows productivity across the board

Section 1: Some examples

1. Why is there more than one unix/linux package manager? Do we really need a different package manager with the same commands but renamed for each programming language? Why do we need a distinct package manager for each distro?

2. Nobody seems to admit it, but PHP, Ruby, Python, and Javascript are the same language, with a little sugar added here or there and different libraries. More formally, I’d say that for 99% of lines of code written there’s a 1:1 translation between each of these languages (on a per-line basis). I get differences like curly braces vs indenting, but it does strike me as wet that each language has rebuilt so much core functionality (date parsing, database connectivity, html parsing, regex, etc). Wrapping libcurl, for example, is a great way to stay dry.

This leads to a scenario where “learning a language” is more about learning the libraries than anything else (e.g. “How do timezones work again in PHP?”)

3. Did MongoDB really need to exist as a standalone application? What if MongoDB had simply been a storage engine? The concept of a datastore that adapts its schema on-the-fly and drops relations for speed is okay, but does that justify creating an entirely new data-storage technology to that end? This means millions of engineers learning a new query syntax for a potentially temporary technology. The same goes for the security policy and all the DB drivers. There’s no reason all the tools to get visibility into the database (e.g. Sequel Pro) and back it up need to be reinvented. Plus, if it were just a storage engine, migrating tables to InnoDB would be easier.

The same point holds for Cassandra (which is basically MySQL with sharding and more sophisticated replication built in), Elasticsearch, and even Kafka (basically just the WAL of MySQL without columns). For example, a Kafka topic could be seen as a table with the columns: offset, value. Remember, storage engines can process different variations on SQL to handle any special functionality or performance characteristics as-needed (I do recognize what I’m describing is easier said than done, but recommend it nonetheless).

4. Overly-specialized technologies should not exist (unless built directly around a general technology). You ever see a fancy dinner-set, where for “convenience” people are offered 5 forks and spoons, each one meant to be used slightly differently for a slightly different task? That’s how I feel about overly-specialized technologies. For example, people seem to love job queues. What if all job queues were implemented on top of a SQL backend so that engineers get the normal benefits:

  1. engineers know how to diagnose the system if it fails because it’s a known system (e.g. performance issues, permissions)
  2. engineers can always query the system to see what’s happening because it’s using a standardized query language
  3. engineers can modify the system if necessary because it provides visibility into its workings
  4. engineers can use existing backup, replication, monitoring, and other technologies to store/distribute the queue (giving interoperability)
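
To make the idea concrete, here is a rough sketch of a job queue as one ordinary SQL table. Everything here is illustrative, not a real product: it assumes an H2 in-memory database on the classpath, and the table name, columns, and SQL dialect are all hypothetical choices.

```scala
import java.sql.DriverManager

object SqlJobQueueSketch extends App {
  // One plain table is the whole queue: standard SQL, standard tooling.
  val conn = DriverManager.getConnection("jdbc:h2:mem:jobs;DB_CLOSE_DELAY=-1")
  val st = conn.createStatement()
  st.execute(
    """CREATE TABLE jobs (
      |  id IDENTITY PRIMARY KEY,
      |  payload VARCHAR(1024),
      |  status VARCHAR(16) DEFAULT 'pending',
      |  claimed_by VARCHAR(64))""".stripMargin)

  // Enqueue is an INSERT anyone can write, back up, replicate, and monitor.
  st.execute("INSERT INTO jobs (payload) VALUES ('send-welcome-email')")

  // Claiming a job is an atomic UPDATE, visible to every existing SQL tool.
  st.executeUpdate(
    "UPDATE jobs SET status = 'running', claimed_by = 'worker-1' " +
    "WHERE id = (SELECT MIN(id) FROM jobs WHERE status = 'pending')")

  // Diagnosing the queue is just a query, in a language engineers already know.
  val rs = st.executeQuery("SELECT status, COUNT(*) FROM jobs GROUP BY status")
  while ( println(rs.getString(1) + ": " + rs.getLong(2))
}
```

Backup, replication, permissions, and monitoring then come for free from the database, which is exactly the interoperability point above.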

Section 2: What’s the result of all this?

  • Every time a brand-new hype technology is introduced, senior engineers are all set back years relative to junior ones (which is bad for senior engineers, good for junior engineers)
  • The ecosystem is set back as a whole (all tools, libraries that interact with the old technology are rebuilt for the new one)
  • The company is placed in an uncomfortable position because it now only has junior engineers in the given technology. When I was junior, I worked at a startup that accidentally lost most of their customers’ phone numbers because their PHP driver for Mongo would convert numeric strings to numbers, and phone numbers would overflow the default integer, resulting in no fatal errors but simply negative phone numbers.
  • The company runs the risk of being saddled with a technology that will be dropped (e.g. couchdb, backbone) and will require a rewrite back to a standard technology or be perceived as behind-the-times.
  • Slow-learning / part-time engineers must keep pace with the changing landscape or face irrelevance. Those that can’t learn several technologies a year will stumble.
  • Fast-paced engineers will lose half of their learning capacity on trivialities and gotchas of each technology’s idiosyncrasies (e.g. why can’t apache configs and nginx configs bear any resemblance to each other?). Once these technologies are phased out (e.g. now it’s mostly cloud ELBs) all of that memorization is for naught. It’s a treadmill effect – engineers have to sprint (keep learning new technologies) to move forward at all, and walk just to stay in place; if you can’t keep pace with the treadmill you fall behind. Quick aside – I think things moving to the cloud is probably one of the most outstanding dev benefits I’ve seen in my life. However it continues to illustrate the point: if different cloud providers don’t standardize, then the whole development community is slowed down.

Section 3: The exceptions

There are a few exceptions I can think of where a complete rebuild from scratch was an improvement. One would be Git. In a few months, one of the most prominent software geniuses of our era invented a source-control system so superior to everything else that it was adopted nearly universally within a few years, despite the intimidating interface.

The times a rebuild is justified seem to be when many of these criteria apply:

  • You’re a known and well-respected name that people trust so much the community might standardize on what you make (e.g. Linus Torvalds, Google)
  • The existing systems are all awful in fundamental ways, not deficient in easily-patchable ways. You’ve got the ability, time [and we’re talking at least a decade of support], and money to dedicate yourself to this project (git, aws, gmail, jquery in 2006)
  • You can make your system backward compatible (e.g. C++ allows C, C allows assembler, Scala allows Java, many game systems and storage devices can read previous-generation media) and thus can reuse existing knowledge, libraries, and tools
  • You’re so smart and not-average that your system isn’t going to have the myriad of unanticipated flaws that most of the software systems you want to replace do. For example, angular, backbone, and nosql are all things that might, in hindsight, have not been worth learning. Of the current high-buzz languages as of this writing (Go, Clojure, Haskell, Ruby) it’s open to speculation which will stand the test of time.
  • Your system is already built-in-to or easily-integrated-with existing systems (e.g. JSON being interpretable in all browsers automatically, moving your service to the web where it will work cross-platform and be accessible without installation)

Section 4: What can one do?

  1. Learn the technologies that have stood the test of time: linux cli, c++/java, javascript, SQL
  2. Wait years before adopting a technology in professional use for a major use-case – let other companies be the guinea pig
  3. Be judicious in usage of new technologies. For whatever reason, it’s culturally “cool” to know about the “next big thing,” but it’s better to be late and right than early and wrong.