
Sunday
Nov 3, 2013

The U.S. Government is Developing A Useful Software Engineering Management Case Study


Obamacare’s tech rollout has crashed. As a software CEO with plenty of software background and absolutely no insight into the actual Obamacare project, I’m nevertheless noticing a few quick lessons peeking out from the media noise. They largely point to failures in project governance. So in that spirit, here are a few tests with which to assess the software projects presented to you. These tests actually derive from very old lessons in building software systems … this is just a new chance to air them out.

1. Dates without technical analysis. It appears that lawmakers set policy and software delivery dates without a functional scope or design of the software system. Further, over the first couple of years of the effort, lawmakers were still changing policy, the functional requirements, and other externalities around the system. You can’t set believable goals for engineering-dependent policies without a sufficiently precise analysis of the software engineering scope and effort. Obvious. So management vision has once again met engineering realities. And once again engineering realities have won. This mistake has been the end of more than a few MBAs and strategy consultants (and now lawyers).

 

2. Change without costs. The project was in flux but the dates were not. Functional requirements can and do change. Software design is by its nature iterative learning and ongoing adaptation. This is unremarkable. But there are always costs to change. These costs show up as (pick one or more): mid-project feature and usability compromises (also known as unhappy software users); performance and scalability problems from quick-fixing internal design assumptions (no time to start over!); quality problems from increased fragmentation of the software design and its testing (or from testing minimized to meet delivery deadlines); and productivity losses from all the refactoring and regression retesting. If your system requirements are “evolving” and the costs and dates are not moving, you should worry, because one, more, or all of the above are in your future.

 

3. Resource growth is not acceleration. Massive resources don’t necessarily yield proportional increases in software development output. The Obamacare dev effort has been remarkably expensive and resource intensive. Here is something that has long been understood in software development. Frederick Brooks defined it in The Mythical Man-Month (The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition). The metaphor: it is not true that because one woman can make a baby in nine months, nine women can make a baby in one month. On a more practical level, adding people to your software project will at some point increase coordination costs without accelerating the project (it can even slow it down). The trick is figuring out that tipping point. There was an optimal resource model and deliverables time frame for building this health care software platform. I wonder what it was.
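To make the tipping point concrete, here’s a back-of-the-envelope sketch in Python (my own toy arithmetic, not Brooks’s data): in a fully connected team, the number of pairwise communication channels grows roughly with the square of the head count, while the hands doing the work grow only linearly.

```python
# A toy illustration of coordination cost: every pair of people on a team is a
# potential communication channel, so channels grow quadratically with head count.
def communication_paths(team_size: int) -> int:
    """Pairwise channels in a fully connected team: n * (n - 1) / 2."""
    return team_size * (team_size - 1) // 2

for n in (5, 10, 20, 50, 100):
    print(f"{n:>3} people -> {communication_paths(n):>5} communication paths")
```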

 

4. Complex dynamic systems and the number and diversity of stakeholders. It’s hard to model economic systems in software. These systems and their actors (users) continuously evolve, and their actions vary widely. But software cannot have infinite and changing design use cases. Even if your software could be all things to all people, it won’t all be in Release 1.0. So you must constrain the software. It must do less. But alas, if it does less, the software will naturally constrain choices within the economic system and thus make it less dynamic, less innovative, and, as a consequence, less adopted - if not actively avoided and circumvented. You must whittle the stakeholders and use cases down to the critical core for a first-phase project. This health care systems project has done the opposite: all care to all people. It’s a governmental version of the 1990s mega-ERP software projects. That era saw massive corporate spending on mega enterprise resource planning systems and related process reengineering (along with lots of consultants, of course!). Then one day it became apparent that integrations and middleware between disparate, smaller best-of-breed systems beat deployment of, and adaptation (constraint?) to, large monolithic platforms ... and those mega-systems started to become diluted. An interesting software design question: if you build software that constrains users, will they stay constrained?

 

5. Staged rollouts. They just threw the big red switch to "on" for zillions of people. Very brave. In the real world, you stage software rollouts to mitigate risk. If your development team goes for an all-or-nothing deployment, remember that one bug delivered to 310 million people is, well, millions of customer support phone calls (and then lots and lots and lots of marketing to overcome the ill will). Your simple management rule is this: the cost of a bug grows exponentially after it gets deployed to customers. Look carefully at the scope of your team's testing and the open bugs before software rollouts. And regardless, stage rollouts to an initially small but expanding group of customers, while incrementally fixing and extending the software.
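For contrast, here is roughly what a staged rollout looks like in practice, as a hypothetical Python sketch (the user IDs, percentages, and bucketing scheme are all made up for illustration, not taken from any actual project): bucket users deterministically and expose the new system to a small, configurable percentage first, widening exposure only while error rates and support calls stay quiet.

```python
# A minimal staged-rollout sketch: bucket each user by a stable hash of their ID
# and enable the new system only for users below the current rollout percentage.
import hashlib

def in_rollout(user_id: str, percent: float) -> bool:
    """True if this user falls inside the current rollout percentage (0-100)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000      # stable bucket in [0, 9999]
    return bucket < percent * 100          # e.g. 1.0 (%) -> buckets 0..99

# Widen exposure in stages while watching error rates and support calls.
for stage in (0.1, 1.0, 5.0, 25.0, 100.0):
    exposed = sum(in_rollout(f"user-{i}", stage) for i in range(100_000))
    print(f"{stage:>5}% stage -> ~{exposed:,} of 100,000 users see the new system")
```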

6. Scalability and performance. It’s clear that server capacity planning and volume stress testing were insufficient for Obamacare (or, alternatively and less generously, the planning was sufficient, the problems were indicated, and it was rolled out anyway). More generally, when a system cannot handle user transaction volume, there are several possible causes: the first is the technology and tool stack used to build the software, the second is the software and database design itself, and the third is the network and server environment. Working backwards: money solves network and server problems easily; software re-engineering will slowly (and at some expense) improve performance, especially if structural rewrites are required; and a bad tech stack decision is a disaster that points to a redo (bad choices include the wrong programming languages, the wrong database technology, the wrong tool sets, etc.).
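As a sketch of the kind of volume stress test that should run long before launch day, here’s a minimal Python example (the URL, worker count, and request count are placeholders, and a real capacity test would be far more elaborate): hit one endpoint concurrently and look at the error count and latency percentiles.

```python
# A minimal load-test sketch: hammer one endpoint with concurrent requests and
# report errors and latency percentiles. All parameters below are placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import quantiles

URL = "https://example.com/health"   # hypothetical endpoint
WORKERS = 50                         # simulated concurrent users
REQUESTS = 500                       # total requests in the run

def timed_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
            ok = (resp.status == 200)
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(timed_request, range(REQUESTS)))

latencies = sorted(t for _, t in results)
errors = sum(1 for ok, _ in results if not ok)
cuts = quantiles(latencies, n=100)           # 99 percentile cut points
print(f"errors: {errors}/{REQUESTS}  p50: {cuts[49]:.3f}s  p95: {cuts[94]:.3f}s")
```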

 

I’m sure there’s more. What’s interesting about these tests is that any world-class software engineering manager knows them. So the troubles mean poor choices in hiring for project governance, or an organizational design so weak that project management could not control the processes and resources (and therefore the outcomes), or weak development methods and processes, or political management and goals simply overruling the dev teams. Taxpayer dollars are helping write a great software project management case study.

 

 

Thursday
Nov 18, 2010

Designing a Technology Product? Consider Its Return on Attention.

I just noticed Tom Davenport's book, The Attention Economy, sitting on my bookshelf. I read it a couple of years ago, and I've just flipped through it again.

Attention is still a cool topic.

Telecommunications software and services rule my world today. But a few years back, in Ernst & Young's consulting practice, I directed a strategy program for CIOs called "Navigating the New Technology Landscape". I ran it out of E&Y's Center for Business Innovation in Cambridge, MA. My focus on "attention" started with a series of NNTL conference discussions around "information overload" and its challenges to internet infrastructure. In those conversations, software product managers and their customers were heavily absorbed in matters of transaction volumes, database storage capacity, network capacity, and generally managing data quantity (scale) and quality. But it also became clear to me that these problems were really just short-term technology constraints.

My sense was that human attention (not disk space and bandwidth) was the permanently scarce resource, and as a result I shifted toward the idea that attention should be the foremost driver of software product design.

We set down a basic principle. Value (pick your own definition) was to be found by focusing information technology design and development around helping people efficiently get something from their invested attention. Ever willing to over-apply math within stochastic settings, I carried around a little formula in my notebook at the time: "a return on invested attention ("RoA") equals the value of acquired knowledge divided by the intellectual energy required to acquire it".

RoA = Knowledge Value / Energy Invested
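As a back-of-the-envelope illustration (the designs and scores below are made up, and in practice you would argue for weeks over how to score them), you can at least compare two product choices on the same arbitrary scale:

```python
# A toy RoA comparison with made-up scores: rate the knowledge value a design
# delivers and the attention (energy) it demands, then compare the ratios.
def return_on_attention(knowledge_value: float, energy_invested: float) -> float:
    return knowledge_value / energy_invested

designs = {
    "dense dashboard (everything on one screen)": (8.0, 10.0),
    "single daily digest of what changed":        (6.0,  2.0),
}

for name, (value, energy) in designs.items():
    print(f"{name}: RoA = {return_on_attention(value, energy):.1f}")
```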

This is all pretty obvious, but it was certainly not explicit in software design at the time. Instead, developers operated with the goal of alignment to business requirements and software usability. Usability, as then practiced, was screen-based Taylorism (time-and-motion analysis) applied to software interfaces. It was necessary but not by itself sufficient. We argued that software products that demanded little attention (and presumably returned a lot) had the best shot at success. Of course, me being so smart, did I invest in Google in those early years? Ahh. No. Apple? No. Do as I say, not as I do.

Tom Portante and I ultimately wrote a quick piece for the early (and very cool!) Wired Magazine. We also presented the subject at a couple of tech conferences before moving on. Davenport, a friend from those days, picked up the subject with co-author John C. Beck and ultimately wrote a book about it called "The Attention Economy" (by the way, a very belated thanks for the acknowledgement, Tom). Tom was, at the time, E&Y's thought leader in knowledge management and process reengineering and one of re-engineering's intellectual fathers worldwide. The book took a run at quantifying attention's value, establishing its place within the range of human (and business) activities, and it nodded toward the idea of structuring and directing (actually scripting) attention. His work significantly broadened my narrow product manager's view of the subject.

The problem with conceptual frameworks is that they are, well, conceptual. They frame thinking more than they direct action. As such, attention still frames much of what's happening today, but it won't tell you how to make something. But hey, those frames are interesting! Consider these questions: What is the comparative return on attention ("RoA") of a student's on-campus college experience versus online learning? What is the return on attention of a social network versus a friendly hello by phone or e-mail? Do Google Reader and Twitter have a higher return on attention than subscriber lists and web surfing? For the highest RoA, should I watch the news on TV or read Google News?

So what's my point? If you're building information technology products, return on attention remains a durable (though amorphous) design consideration. At the least, the subject ought to be the basis for a product manager stepping back, gathering all the features, functions, and user interactions, and then wondering: "Am I increasing or reducing the intellectual clutter in someone's day?" And: "How do I actually analyze and answer that question, and more specifically how do I adjust my product design?"

I wonder if my blog has a positive return on attention (resist the temptation to write me a clever e-mail).