Seth posted a great short article about how to completely win your space. It totally transformed the way I think about marketing. It's just a few paragraphs -- go read it and meet me back here.
A shallow interpretation of Seth's point is that "service makes the difference," but that just leads to the usual phrases that have been repeated so many times they've lost meaning. "We put customers first." "Expect the best, both before and after the sale." "We believe our role is to serve our customers." "Our customers are buying a solution, not a technology."
Blah blah. Everyone says that. What does it really mean?
Let's take that last vapid statement and make it concrete. It's true that no one wakes up in the morning and says, "Gee, I wish I could purchase, install, learn, and train my people on a new software tool!" Tools suck -- they're confusing, they rarely do exactly what you want, and tech support is often a nightmare.
In our case, our customers have one of several specific problems that they'd like to make go away. Perhaps they spend 20% of their time in code review on boring, wasteful chores like collecting and sending diffs around or scheduling meetings. If it's something this specific, perhaps the tool can sell itself -- if it clearly removes the drudgery, and during the trial the developers feel it saves more time than it wastes with its own idiosyncrasies, then it's worth it.
But usually our customers are in a more difficult spot. They need to reduce bugs or work with offshore teams, so they know code review is necessary. But how to do it?
This question -- how to do it -- is what keeps them up at night. They know they could easily blow 30% of their developers' time on this process. What if it doesn't result in reducing bugs? What if their developers hate it so much they revolt? How can they tell whether it's really working?
In fact, most people rightly believe the hardest thing about code review isn't the process, isn't metrics, isn't reports, isn't communication -- it's (a) how do we make sure we're not wasting time, and (b) how do we deal with the social effects of ego-full, sensitive geeks critiquing each other's work?
This is where your "customer service" comes in. At this point it's not about tech support, it's not about what features your tool has, and it's not about how good your sales guy is at "closing."
I often sit down with a customer for a few hours. I help them ask the right questions of themselves. I help them determine which code will give them the most bang for the buck so they can see some immediate results. I give them stories and even presentations about how gratifying code review can actually be, and how to foster an environment where code review is genuinely about mentoring, learning, and getting rid of bugs together.
No tool will do that, and I also think no on-line form, Wikipedia page, or case study will do that either. Yes, our tool is important in removing drudgery and making reports, but it’s a means to an end.
The interesting part is that we do all this mentoring completely for free. If you tried to objectively quantify the value of this advice and direction next to the value of the tool, the advice would probably win.
Maybe that's why we consistently win over every other tool in the market: we're the only ones who can provide that service, and we do it for free. It tangibly demonstrates that we really do want our customers to succeed, not just buy some seats.
And of course when they do succeed, why go purchase something else? The free advice alone is worth any price difference.
Thursday, November 29, 2007
How to be the most expensive product on the market
Categories: code review, selling software
Monday, November 5, 2007
How to build a checklist
CM Crossroads just published my article on how to build code review checklists. Half of it is about how not to build one.
Categories: code review, software development
It's not about in-scope/out-scope
The software project is behind schedule. The deadline is looming and everyone finally acknowledges -- too late -- that we have to start making concessions.
So there are two choices, right? Release later, or reduce scope by throwing out features.
Not so fast.
"Remove from Scope" is not the answer
Too often we developers are stuck on this "remove it from scope" mantra. Yes, completely throwing out features is a way to get to the release date, but this isn't necessarily the best thing for the product or the customers.
Ultimately our job as engineers and not just worker-bees is to make trade-offs. Not just in design or algorithms, but in the end user use-case and, yes, even to support marketing and sales with their promises and messages.
Not cutting, but changing
Five years ago I was on a project with an impossible deadline. We had to get a complete custom reporting system integrated into the product for its v3.0 release, and for uninteresting reasons we had just four months to do it. There was a spec that couldn't possibly be completed in time, but the deadline was firm because a customer contract was in place.
The project ended up a complete success. The system was demonstrated to the customer on time and was accepted. It was so popular, it became the "big finale" for product demos.
The reason it was a success is because we never posed the question: "What parts of the spec do you want to throw out?" Instead we would ask: "How can we allow the user to accomplish what the original spec wanted him to accomplish, but in a different, simpler way?"
Sure, that means less "stuff" -- a smaller absolute number of features and fewer choices. But it doesn't mean tossing items J-M -- it means reevaluating the entire spec from the point of view of the use-cases and marketing requirements.
Happy days
There's another benefit to this approach.
We're all used to the tension between development and sales. Sales wants more, faster. Requirements pushed from the outside are ambiguous, and when you deliver it's never the right thing.
The normal reaction from development is: "You're changing the rules, therefore we simply refuse to give in. Either stop changing the spec or change the deadline. It's a zero-sum game, and we won't be responsible if you make the sum larger."
But that's not helping sales OR the hapless customer. (Customers don't have hap, apparently.)
What helps is arriving at a better solution. The project drivers will be more willing to negotiate if you're going to meet them halfway. They know they've added to your schedule. If you show the genuine desire to get their root problems solved, they'll be more willing to bend on exactly how you're going to solve their problems.
Besides, we can all agree that "Customer Success" is the ultimate goal, and clearly negotiating for their success is better than cutting them off.
Categories: software development
Friday, October 26, 2007
Software is NOT like 6-Sigma
Six-Sigma, ISO 9001, and the like are processes used in mass-production lines to ensure vanishingly small errors.
It makes sense if you're Dunkin' Donuts producing 350,000 doughnuts per day. The problem is that some people insist these processes should also apply to software development.
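To make "vanishingly small errors" concrete, here's a back-of-the-envelope calculation. (The 350,000/day figure is from this post; the 3.4 defects-per-million-opportunities constant is the commonly quoted Six Sigma benchmark, not something from this post.)

```python
# Six Sigma quality is commonly quoted as 3.4 defects per
# million opportunities (DPMO).
DPMO = 3.4
doughnuts_per_day = 350_000

# At doughnut scale, that's barely one bad doughnut a day.
defective_per_day = doughnuts_per_day * DPMO / 1_000_000
print(f"{defective_per_day:.2f} defective doughnuts per day")  # prints "1.19 ..."
```

About one defective doughnut out of 350,000 -- that's the standard these folks want applied to software.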

"Why can't software development be more like electronics development?" "Six-Sigma means almost zero defects on massive scales, so why can't we use these techniques in software?" These are the kinds of questions you get.
And of course some people actually try. Defense contractors, CMMI devotees, and behemoth government-like organizations try.
But software development isn't like making 350,000 doughnuts. With doughnuts the thing you're making is completely understood and measurable, and the problems come in copying the Platonic form zillions of times without error. Difficult, I agree.
But in software you're making ONE massive, complex, changing, ill-understood thing, and making it ONCE. When we ship copies, they are perfect copies. Unless you're burning DVDs, in which case the act of burning might be best handled under Six-Sigma.
The problems are completely different.
For example, our software interfaces with other tools. When those tool vendors release new versions, it's possible that they break our interface with them. Therefore, even with perfect copying and no existing bugs, our software will STILL BREAK!
For example, in software, requirements are supposed to change -- possibly during development, definitely afterwards. Once it's decided that a bridge is supposed to go from A to B, it's done. Even if that decision turns out to be wrong, everyone accepts that we should either nuke the bridge or finish it anyway -- there's no concept of rebuilding the bridge in mid-air and still making the schedule. Not so in software -- this happens all the time.
For example, software's purpose changes over time. A doughnut is always supposed to be a doughnut. A home Wi-Fi router gets small firmware updates, but no one expects it to do more than what it did on the day they bought it. Most software projects change radically over their lifecycle. Windows 1.0 barely solved any problems; Windows 95 changed everything; and Vista barely looks like it came from the same company that released XP. Even Google Search -- the algorithms, the advertising, and the company goals are nothing like they were when it first became well known. But Dunkin' still primarily makes doughnuts (and coffee), and we expect nothing more from 50-year-old bridges now than we originally did.
There's more. Software developers themselves are a different breed from workers on the doughnut assembly-line floor. The world-view and skills required for software development turn over at least every 10 years (more often if you want to change jobs). And sometimes a new feature really is more useful to a customer than a bug-fix, though rarely does anyone admit it.
The point isn't that it's OK to have bugs in software, or that we can't do anything about it, or that we shouldn't try. It's just that blindly applying processes from other disciplines is not the answer.
Categories: software development
Bing! You've got Cash!
The worst ATM I've used is at the Austin airport.
That in itself is a weird statement. How different can (should?) an ATM be? Just give me cash fast, right?
First it asks what language I want. Fine.
Then it confirms: "You have selected English. Is this correct?" Yes.
But choosing "Yes" triggers a sound that is exactly the Windows "cannot click there" sound -- like when you click outside a modal dialog. It turns out the machine plays that sound every time you select something properly, and no sound if you make a mistake -- precisely the opposite of my Pavlovian conditioning. The whole experience is unnerving.
So then it asks whether I want "Fast Cash" or "Other Transactions." I choose fast. I get the usual list of $20, $40, ..., $100. But I need $150 or $200. Not an option. There's no way to say "enter a number." The only other option is "Cancel." So I hit "Cancel."
Not so fast! Now I'm prompted: do I want to "Exit" or "Start Over"? Start over, of course. Oh wait -- you need to swipe your card again and select your language. So "Exit" and "Start Over" do exactly the same thing. Great.
This time I pick "Other Transactions." The screen that follows is identical to the "Fast Cash" screen, except now there's also an option to type in a different amount of cash.
Sigh.
Does your application have more choices than it needs? Does your application ask questions it doesn't need to?
Sometimes it's more subtle. For example, maybe you have 10 independent user settings, but there are really just 3 combinations of those 10 which 90% of users really need. Why not provide one option with 3 choices instead of making the user figure that out?
Or take a middle road: Provide 3 choices, but also have an expandable "Advanced" or "Details" view where power users can tweak further.
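As a rough sketch of that middle road -- the setting names and presets below are invented for illustration, not from any real product -- define a few named presets over the full set of options, and let the "Advanced" view override individual fields:

```python
from dataclasses import dataclass, replace

# Invented example settings -- stand-ins for the "10 independent options."
@dataclass(frozen=True)
class ReviewSettings:
    email_on_comment: bool = True
    require_signoff: bool = False
    inline_diffs: bool = True
    auto_assign_reviewers: bool = False
    show_metrics: bool = False

# The 3 combinations most users actually need, exposed as one choice.
PRESETS = {
    "solo": ReviewSettings(email_on_comment=False),
    "team": ReviewSettings(require_signoff=True, auto_assign_reviewers=True),
    "regulated": ReviewSettings(require_signoff=True,
                                auto_assign_reviewers=True,
                                show_metrics=True),
}

def settings_for(preset: str, **advanced) -> ReviewSettings:
    """Start from a named preset; the Advanced view passes per-field overrides."""
    return replace(PRESETS[preset], **advanced)
```

A new user picks one of three presets and never sees the other questions; a power user expands "Advanced" and tweaks exactly the one field they care about.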
Not only will this cut down on your tech support, it also helps new users get going in the system with something consistent and sensible for them, which means your trials might go more smoothly.
So now that you've reached the end of this blog entry, would you like to Exit, or Start Over?
Categories: software development
December Dumbass
Don't be a schmuck.
This is Jackie Mason's advice about financial investments. Or how to buy a suit. Or how to treat your mother. I think it's his advice about everything.
He's right. Especially about software release dates.
In my infinite wisdom, last year I decided we needed to release Code Collaborator v2.0 at "the end of the year." The brilliant rationale: since everyone is on vacation at the end of the year, the office would be relatively quiet and we could get the release done.
What I didn't consider was that we would be on vacation too. And that the customers who we imagined were anxiously awaiting our release were also on vacation.
So after killing ourselves -- and slipping to Jan 20 anyway -- it turned out no one cared. We cared about our deadlines much more than anyone else did.
Even when you have people waiting for a release, remember that they'll upgrade when it makes sense for them. If they're in the middle of their own release cycle, they probably won't spend the time, take the risk, and get retrained on an upgrade. We're the same way -- a new version of the profiler we use came out during our 2.0 crunch, but there's no way we had time to mess with it then. Even if it's better.
So this time we released v4.0 beta on Oct 1, and we expect to exit beta before the end of the year. But if we don't, we don't care!
Categories: day in the life, software development
Thursday, October 4, 2007
800lb Gorillas Don't Care
Netflix just posted its first-ever decline in subscribers. They attributed this to Blockbuster -- the very company many thought Netflix would put out of business.
A year ago Blockbuster responded to Netflix's success with a similar program of their own, except that you could trade in DVDs at the store, which meant faster turnaround. It's a great way to leverage their advantage -- physical stores -- against a seemingly unstoppable business model.
And it worked. Or did it? Blockbuster reported an $82 million loss for the first half of 2007, attributing it to costs associated with the new plan. So they slowed Netflix, but it's hardly a success.
Small-business operators worry about this sort of thing, especially when evaluating ideas for new ventures. What if the 800lb gorilla wakes up and decides to compete with you? They have zillions of dollars to throw at it, and they probably don't even have to make a profit. How can anyone compete with that?
Stop worrying. In my opinion, this should never enter your mind, and here's why:
First, I'm assuming that you have your own niche, your own market, your own game. Something no one else -- certainly no one with a large, established business -- is doing. You're not going to go head-to-head with them out of the gate.
So assuming you're skirting the big guys, your worry is that they'll notice your activities and squash you.
For one thing, your market ought to be so small that a big company wouldn't care. Even a $500 million market is too small for a mega-corporation to attack -- even owning 50% of it wouldn't cover what it would cost to win it. As much as you might think this is personal, it isn't. They don't care about squashing you; they just want to make more money.
If you do grow large enough to validate this (new?) market -- perhaps proving it's bigger than anticipated, or that there's buzz to be had -- to the point where the Goliath is willing to spend hundreds of millions of dollars, then you've already won. You're already wildly successful, more than you ever thought possible.
So don't worry. The gorilla won't swat the fly. They don't get it, and that's in your favor!
Categories: selling software
