The Prototype Maturity Model




"software evolving like darwin evolution of robot pencil sketch"

Moving Fast, Taking Shortcuts

Not all ideas are good ideas. In fact, most of your ideas will not be good. When it comes to building software, it doesn't make much sense to build a prototype of a new idea with all the best practices a full-fledged software product requires. This is why we rapidly prototype software for a new feature or product: we want to validate the idea as quickly as possible. This is the whole philosophy behind the minimum viable product, or MVP, that startups are always talking about. But the MVP doesn't just apply to startups. Even in the biggest companies, there are ideas that, once prototyped, could save employees hundreds of hours or the company millions of dollars.

So, you decide your idea might have some merit. You meet with the team and list out the core, minimal features. Everyone is on board, and your team has the green light to build a prototype. Now the hard decisions start to pile up.


[Image: software DNA, watercolor]

It's tempting to over-engineer. Our developer DNA is encoded to produce secure, scalable, and performant software. When trying to get a prototype out the door, there will be many decisions where we seemingly go against our better judgement. This is expected, but these decisions are typically not documented or revisited.

You take shortcuts in some areas. Maybe you are storing data on disk and not backing it up. Maybe the UI is bare-bones. Maybe there is one hard-coded user that uses basic authentication. It doesn't matter what the specific shortcuts are. The prototype has been released into the wild at a fraction of the time and cost it would have taken to develop it to your highest standards. You keep iterating, and your prototype starts to resonate with the intended audience. This is a good thing. The idea is validated, and your team prepares to launch the prototype as an internal tool, or maybe a customer-facing feature or product.

You may lose control of your prototype. It may get co-opted or inherited by another team.  The team may start to receive pressure from management or sales to push a product out the door faster than was originally expected. This is still a good thing.  

With your prototype getting more eyeballs and use, questions start to appear. Why did you choose that datatype? Why didn't you follow our best-practice guide for logging and observability? Why is there no admin interface for the support team? After you've been through this once or twice, you start to pad your estimates and code defensively. The time to get an MVP out the door gets longer, and your ability to iterate quickly suffers. This is a defensive CYA (cover your ass) approach. If your prototype is inherited by another group and they question your decisions, you could gain a reputation for writing poor code or making poor design decisions. You may even get defensive about those decisions and find yourself explaining the history of the project over and over to various people in different roles. This is a bad thing.

Communication

After seeing situations like this arise at companies of various sizes, I drafted the Prototype Maturity Model. You can think of it like a grading rubric you might have received in primary school. The twist is that there are no "bad" grades. The model is simply a tool that you and your team use to stay on the same page. When you are in the initial phases of developing an MVP, you can look at the rubric and decide which areas you want to focus on and which areas you can skimp on. Every few weeks, you and the team grade your prototype. This gives you a snapshot of the prototype's maturity over time.

Now, assume your prototype has been handed off to another team without much notice. You can point them to the model and your ongoing snapshots of its maturity. This allows them to understand the decisions you made without you having to list each and every one.

The PMM also works great as a way to discuss risk tolerance with managers. The model brings to light the gaps a software product may have. It also lays out plainly, for all stakeholders, what must be done to get the prototype within everyone's risk tolerance before a full launch.



Take a look at the aspects and ratings below, or view the model in Google Docs in matrix form. (Please, feel free to copy and modify it as you see fit!)

One or more engineers can give their rating, and the reasons for each rating should be documented. This helps when revisiting the ratings on a regular basis, or when another team picks up the project.


Let me know what you think! If you have any questions about the model, or thoughts, insights, or shared frustration with this sort of thing, I'd love to hear from you!

- Andy Glassman

The Prototype Maturity Model (PMM)

The PMM is a matrix of aspects and ratings. The aspects are broad categories of concerns that you typically encounter when developing software. A rating is a number from 0 to 3, with a short description of what each rating means. The aspects and ratings can be determined by you and your team; there is no "one size fits all" model.
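To make that concrete, here is a minimal sketch in Python of one dated snapshot of a prototype's maturity. This is purely illustrative and not part of the model itself: the aspect names come from the list below, but the structure, field names, and date are my own invention.

from datetime import date

# A snapshot is just a date plus a 0-3 rating per aspect, with optional notes
# explaining why each rating was given.
snapshot = {
    "date": date(2023, 1, 15),  # hypothetical date, for illustration only
    "ratings": {
        "Security": 1,
        "Observability": 0,
        "Support": 1,
        "Data - Backup / Restore": 2,
    },
    "notes": {
        "Security": "Single hard-coded user with basic auth; acceptable for the pilot group.",
    },
}

def summarize(snap):
    # Print each aspect's rating and a rough average for an at-a-glance view.
    ratings = snap["ratings"]
    for aspect in sorted(ratings):
        print(f"{aspect:<25} {ratings[aspect]}/3")
    print(f"Average: {sum(ratings.values()) / len(ratings):.1f}")

summarize(snapshot)

Grading regularly and keeping these snapshots side by side is what gives you the maturity-over-time view described above.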

Here are the aspects and ratings that I have personally used for various projects:

Note - I prefer to use 0-3, but it was just easier to use a numbered list here :)

Security

  1. Security of the application has not been considered.
  2. Security of the application has been considered, but there are significant gaps identified to bring it in line with industry standards.
  3. Security of the application has been analyzed internally, and major gaps have been closed.
  4. Security of the application has been implemented to industry standards, and has been tested by a 3rd party.  Security testing by a 3rd party is done on a regular basis.

Observability

  1. Observability of the application has not been considered.
  2. System is observable in manual, tedious ways, such as remote shell sessions. May require logging into a specific environment.
  3. System is observable using ancillary tools such as aggregated logging, and it's easy to search across services and environments.
  4. System is observable and utilizes distributed tracing across services, including infrastructure and cloud-based services.

Audit

  1. Auditability of the application has not been considered.
  2.  Key aspects of the system have been identified for auditability, but may require manual data querying.
  3. Key aspects of the system have been identified for auditability, and there is an easy / secure way for internal employees to access the audit information.
  4. Key aspects of the system are auditable, and there is an easy / secure way to access the records.  Records have a retention policy identified, and the archival process is automated. Retrieval of archived records is possible.

Support

  1. Support of the application has not been considered.
  2. Support is mostly done by engineers in an ad-hoc manner. May require direct access to server instances and direct credentials to the database.
  3. Key customer-facing support interactions have been identified, and there is a documented process to perform them. Engineers are not the only people who can perform support.
  4. Key customer-facing support interactions can be easily performed by a support employee, without direct access to servers or the database. Modifications made on behalf of another user are recorded for auditability.

Support Monitoring

  1. Support Monitoring for the application has not been considered.
  2. The application is mainly monitored ad-hoc by engineers (checking logs / QA)
  3. Issues are automatically captured and reported by a tool.
  4. Issues are automatically captured, and engineers are alerted when an issue arises.  

Performance

  1. Performance of the application has not been considered.
  2. The projected usage of the system has been documented, but no performance / load testing has been completed.
  3. A performance / load test has been established and has been performed.
  4. Performance / load testing is part of regression testing. SLAs on performance have been established, and are actively enforced.

Performance Monitoring

  1. Performance Monitoring of the application has not been considered.
  2. Key performance metrics have been identified, but no way to effectively measure them has been implemented.
  3. The capability to monitor performance exists, but key metrics have not been identified.
  4. The capability to monitor key metrics exists, and key metrics are captured and available (dashboard / alerting).

Availability / Scale

  1. Availability / Scale of the application has not been considered.
  2. Availability and scaling SLAs have been identified, but not yet implemented.
  3. Availability SLAs have been created and are actively monitored. Scaling approaches have been identified.
  4. Availability is actively monitored, and application can scale automatically under high demand (or has enough overhead to meet expected peak demand).

Customer Data

  1. How sensitive customer data is handled has not been considered.
  2. Sensitive customer data has been identified.
  3. Sensitive customer data has been identified, and policies / practices have been implemented to keep it secure.
  4. Sensitive customer data is identified and secured, and processes have been documented for what to do if there is a leak.

Compliance

  1. Compliance requirements have not been considered.
  2. Compliance research has been done, and possible compliance work has been identified.
  3. The app meets most of the necessary compliance requirements. Any compliance gaps are documented and exist in the work backlog.
  4. The app is compliant with all mandatory policies. A process is in place to ensure continued compliance with those policies.

Infrastructure - Environments

  1. Infrastructure has not been considered.
  2. App is deployable to one or more non-local environments.
  3. Application is deployable to production.
  4. New environments can be stood up quickly, and in an automated / repeatable fashion.

Infrastructure - Deployments

  1. Deployment has not been considered.
  2. App is manually deployable to an environment.
  3. App is automatically deployable to production, and a rollback process has been identified and exercised.  If a deploy causes downtime or interruptions to end users, these must be documented.
  4. App is able to be deployed with zero downtime or interruption to end users.

Infrastructure - Security

  1. Security has not been considered.
  2. Basic security has been considered, and implemented. (port blocking, IAM role evaluations)
  3. Process for continual evaluation of infrastructure security is created, and performed.  VMs and libraries are patched in a timely manner if exploits are identified.
  4. Automated testing / monitoring of infrastructure-level security is implemented. 3rd-party evaluation of security is routinely done. Software to monitor and alert on intrusions or exploitable images / libraries is used.

Data - Backup / Restore

  1. Backup and restore of data has not been considered.
  2. Manual backups of data are available in a secure manner.
  3. Automated backups are run in production, and a restore process has been documented.
  4. Automated backups are available, and the restore process has been documented, and executed in a test environment.

UI / UX

  1. The user interface, and user experience have not been considered.
  2. UI / UX has been initially designed.
  3. UI / UX has had end-user feedback, and some of it has been implemented. Accessibility has been considered and meets minimum WCAG guidelines.
  4. A process for continuous UI / UX improvement exists and is exercised. WCAG guidelines are met and are continually re-evaluated by a 3rd party.

Quality Assurance

  1. The quality of the application has not been considered.
  2. Basic unit / integration tests exist. 
  3. The majority of QA is performed manually by engineers or other employees.
  4. Some QA is automated, mostly happy-path flows. Manual QA is managed with a test case / test suite manager such as TestRail for repeatability. Regression and key user flows have automated test suites that alert on failure or block deploys.
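Once you have ratings like these, one way to use them (again, just an illustrative sketch; the ratings below are invented) is to diff the latest snapshot against a target profile agreed on with stakeholders, in the risk-tolerance spirit described earlier, so the remaining pre-launch work is explicit:

def gaps(current, target):
    # Return the aspects where the current rating falls short of the agreed target.
    return {
        aspect: (current.get(aspect, 0), wanted)
        for aspect, wanted in target.items()
        if current.get(aspect, 0) < wanted
    }

# Hypothetical ratings, for illustration only.
current = {"Security": 1, "Observability": 2, "Support": 0, "Compliance": 1}
target = {"Security": 3, "Observability": 2, "Support": 2, "Compliance": 3}

for aspect, (have, want) in sorted(gaps(current, target).items()):
    print(f"{aspect}: {have} -> {want} before launch")

The output is a short, concrete to-do list that managers and engineers can reason about together, rather than a debate about individual shortcuts.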

