As part of our promise to keep you in the loop on our ongoing involvement with the FDA Pre-Certification Pilot program, here's the next post on how we're thinking.
In a prior post, we shared some of the "Excellence Principles" and "Common Validating Principles" that the FDA asked us to think about as part of the program. There's some great stuff in there, but it's a little different than how we think about building software. Here's our take… What do you think?
101 questions to ask yourself — "Am I building high quality health software?"
Inspired by "The Joel Test," Joel Spolsky's 12 Steps to Better Code from Joel on Software, here's a list of questions to ask yourself about how you build your medical or healthcare software.
There is an assumption here that you are building medical software, meaning that if you screwed up badly in how you built it, someone might get hurt, or, at scale, public health would be compromised. This is a little different than Facebook's old "Move Fast and Break Things" mentality, which, to their credit, was later replaced with "Move Fast with Stable Infra." To paraphrase, we might say "Move Fast while Ensuring Safety and Efficacy with Stable Infrastructure."
The underlying philosophy here is this: when it comes to software, iteration is best for the public health. There are, of course, limits: you wouldn't hook the first generation of an automated insulin delivery algorithm up to someone without thoroughly testing it. Risk analysis is important, too: you need to understand the probability and potential impact of what might go wrong, and think through what you can do to mitigate the risk.
But the rest of this is all about building good software, and most of it is what great software companies do routinely, even ones that aren't building medical software. The question for regulatory bodies like the FDA is: "How do companies that deliver great software measure themselves, and know that their systems are working?"
Here's what we think about at Tidepool. We hope it will help you write amazing software that enables people to live healthier and less burdensome lives. Please give us your feedback by email or just comment on this post! We are always learning. How do you build great medical software?
Cheers,
Howard
Please note: This is also available here as a Google Doc. Feel free to suggest edits or comment on the doc with your thoughts, or make a copy and use it as you see fit.
How to use this survey: Check off the things you do well. Be honest with yourself - if you wouldn't proudly publish how you do it on the front page of Hacker News or Stack Overflow, then don't count it. Then count up the things you are doing well:
- 101: Perfect score. Please share what you are doing publicly so that we all can aspire to build awesome software like you do.
- 90–100: You are crushing it. You take software quality really seriously and likely understand what it takes to deliver Class III-grade medical software that carries little risk to the public health.
- 80–89: Not too shabby! There are probably some things you should still work on, but unless your software carries with it risk of harming someone if something were to go wrong, you are probably in great shape.
- 70–79: Getting there, but keep at it. If you can show that your software won't cause harm even if things go terribly wrong, then this may be OK. But you better have a very good risk mitigation strategy to make sure that's the case.
- Below 70: Before you deliver your software for public use, you better fix up some things.
PS: At Tidepool, we don't claim to do all of this well. We are very proud of our software and development processes, but even more proud of being self-critical and adopting a mindset of continuous improvement. We will soon publish a version of this document in which we answer these questions ourselves.
PPS: I know, I know. It's not really 101 questions, and the point system is a little arbitrary. You get the idea. This will be a living document that evolves as others in the community share how they do things.
Writing Code and Building Software
Builds
- Do you have a repeatable build system? Can you internally replicate a build from any point in time (e.g. based on a user complaint with a specific version)? (See the sketch after this list.)
- Do builds happen quickly and automatically with every check-in, e.g. using a continuous integration system like Jenkins, Travis CI or CircleCI?
- Do you permanently store (and back up) your build artifacts, including dependencies?
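To make this concrete, here's one small piece of the puzzle: stamping every build with the exact commit it came from, so a complaint about a specific version can be traced back to source. This is a minimal TypeScript sketch assuming a Node toolchain and a git checkout; the `dist/build-info.json` path is just an example:

```typescript
// stamp-build.ts — write a build-info file into the artifact directory
// so every build is traceable to an exact commit and time.
import { execSync } from "node:child_process";
import { mkdirSync, writeFileSync } from "node:fs";

const buildInfo = {
  // The exact commit this artifact was built from.
  commit: execSync("git rev-parse HEAD").toString().trim(),
  // A traceable build should only ever come from committed code.
  dirty: execSync("git status --porcelain").toString().trim().length > 0,
  builtAt: new Date().toISOString(),
  version: process.env.BUILD_VERSION ?? "0.0.0-dev",
};

if (buildInfo.dirty) {
  throw new Error("Refusing to stamp a build from a dirty working tree");
}

mkdirSync("dist", { recursive: true });
writeFileSync("dist/build-info.json", JSON.stringify(buildInfo, null, 2));
console.log("Stamped build:", buildInfo);
```

Your CI system would run something like this on every check-in, right before archiving the artifacts and their dependencies.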
Releases and deployment
- Can you easily tell what you released and when? Is a given build/release reconstructable?
- Do you use SemVer or a similar mechanism for clearly identifying releases? If a user were to report a problem with a specific release, could you reconstruct it deterministically? (See the sketch after this list.)
- Do you maintain release notes so you can easily see what changed in any given release?
- Can you release to a test environment that is separate from your production environment?
- Can you do A/B testing of new functionality? On multiple environments?
- Are your automated tests robust/complete enough that you can do Continuous Deployment?
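On the SemVer question, the classic gotcha is that version components compare numerically, not alphabetically, so a naive string comparison thinks 1.9.3 is newer than 1.10.0. Here's a minimal sketch of doing it right in TypeScript; in practice you'd probably reach for an existing semver library:

```typescript
// semver.ts — minimal semantic-version parsing and comparison,
// enough to answer "which release is this user actually running?"
type SemVer = { major: number; minor: number; patch: number };

function parse(version: string): SemVer {
  const m = /^(\d+)\.(\d+)\.(\d+)/.exec(version);
  if (!m) throw new Error(`Not a semantic version: ${version}`);
  return { major: Number(m[1]), minor: Number(m[2]), patch: Number(m[3]) };
}

// Negative if a is older than b, zero if equal, positive if newer.
function compare(a: string, b: string): number {
  const [va, vb] = [parse(a), parse(b)];
  return va.major - vb.major || va.minor - vb.minor || va.patch - vb.patch;
}

console.log(compare("1.10.0", "1.9.3") > 0); // true — numeric, not alphabetical
```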
Requirements and functionality traceability
- Do you clearly document the requirements or use cases for each piece of functionality that you write code for? Can you clearly trace code changes back to those requirements?
- How do you know that the code only does what it is supposed to do without side effects? Is the code clean and readable, written in a consistent style? Does it have unit / system / integration tests?
Code quality and code review
- Do your engineers perform peer reviews of each other's code, or do pair programming?
- Do you use coding standards? Are they documented somewhere where everyone can find them? Does everyone follow them?
Version control
- Do you use a software version control system (e.g. Git / GitHub, Subversion, Perforce Helix, Mercurial)? (Seriously? Subtract 10 points if you don't…)
- Do you use a clearly defined and documented branching strategy? Is all new functionality developed in a separate branch? Is the merge tested before it is integrated with mainline?
Dependency management
- You probably depend on a lot of other software. Would your tests catch it if that underlying software changed in a way that could break your assumptions, e.g., if a math or date library changed its behavior? (See the sketch after this list.)
- Do you have a way to reconstruct a build with external dependencies? Would you be able to reconstruct a build from a year ago if a dependency were not available?
- Do you manage dependencies in a repeatable, reproducible way so that you don't inadvertently get an update that you weren't expecting? (e.g., use of yarn.lock)
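One way to catch a dependency shifting underneath you is a "canary" test that pins down the exact behaviors you rely on, so an unexpected change fails CI loudly instead of corrupting data quietly. A sketch assuming a Jest-style test runner; the mg/dL-to-mmol/L conversion is the kind of assumption diabetes software like ours leans on:

```typescript
// dependency-canary.test.ts — pin down behavior we rely on from the
// platform and libraries, so an unexpected upgrade fails CI loudly.
// Assumes a Jest-style runner that provides test() and expect().

test("UTC date formatting still behaves as our reports assume", () => {
  const d = new Date(Date.UTC(2018, 0, 31, 12, 0, 0)); // months are 0-based
  expect(d.toISOString()).toBe("2018-01-31T12:00:00.000Z");
});

test("rounding still behaves as our unit conversion assumes", () => {
  // 180 mg/dL should display as 10.0 mmol/L, rounded to one decimal.
  const mmol = Math.round((180 / 18.0182) * 10) / 10;
  expect(mmol).toBe(10);
});
```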
Listening to your users: Functionality, user experience, and usability
- Do you build working, functional prototypes?
- Do you test the prototypes with typical users and incorporate their feedback prior to delivery, e.g. during alpha and beta programs, if not ongoing?
- Do you do interviews with real users on a regular basis? Do you document the results of those interviews and collate the results back into your product requirements? (If you do more than 1 per week, on average, prove it and give yourself up to 5 points here.)
- Do you continue to test your software with real users after shipping to production? Do you incorporate feedback on a regular basis?
- Do you do "hallway" usability testing with real users?
- Do you do formal usability testing with real users?
Quality and testing
Automated testing
- Do you have an automated test harness that runs with every build?
- Do your automated tests run quickly enough that they are useful to developers during development iteration?
- Are you able to simulate scenarios (e.g. fake device input data) without involving real users? (See the sketch after this list.)
- Can you automatically simulate use of your software/device without involving real users?
- When a bug occurs, do you ask yourself why an automated test didn't catch it, and if possible add a new test?
- Do you have a policy around what and how code gets tested? Unit tests? Integration tests? Functional tests?
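Here's what simulating device input might look like in miniature. Everything here (the `Reading` type, the two-readings debounce rule) is illustrative, not a real Tidepool API; the point is that safety logic gets exercised against synthetic scenarios long before a real user ever goes low:

```typescript
// cgm-alerts.test.ts — exercise alerting logic with simulated device
// data instead of real users. All names here are hypothetical.
type Reading = { time: number; mgdl: number };

// Alert when two consecutive readings are below threshold (debounces
// a single noisy sensor reading).
function shouldAlertLow(readings: Reading[], thresholdMgdl = 70): boolean {
  for (let i = 1; i < readings.length; i++) {
    if (readings[i - 1].mgdl < thresholdMgdl && readings[i].mgdl < thresholdMgdl) {
      return true;
    }
  }
  return false;
}

// Simulate a steady drop: 5-minute readings falling 10 mg/dL per step,
// from 160 down to 50.
const falling: Reading[] = Array.from({ length: 12 }, (_, i) => ({
  time: i * 5 * 60_000,
  mgdl: 160 - i * 10,
}));

console.assert(shouldAlertLow(falling) === true, "must alert on a sustained low");
console.assert(shouldAlertLow(falling.slice(0, 6)) === false, "no alert while in range");
```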
Manual testing
- Are there documented manual tests for functionality that cannot be tested automatically?
- Do you conduct Alpha and Beta programs and document the results?
- When a new bug is found that could not have been caught by automated tests but could have been caught by manual tests, do you add a new manual test or review your testing process?
Bug tracking
- Do you have a bug tracking system and a single place where all bugs are tracked?
- Is your risk analysis process formally documented and does it get used for all bugs?
- Do you have a mechanism for prioritizing bug fixes along with new work?
- Do bugs from your Alpha and Beta program (pre-market) get documented, quantified and incorporated into your process?
- Do all bugs reported by your users (post-market) across all possible inbound systems (support desk, phone, social media) get documented, quantified and incorporated into your process?
Processes and continuous process improvement
Risk analysis
- Do you have a documented process for quantifying the risk for every feature, bug or complaint? (See the sketch below.)
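Even the quantification can be boringly mechanical, which is a good thing. Below is a sketch of a probability-by-severity matrix in the spirit of ISO 14971-style risk analysis; the scales and cutoffs are illustrative and should come from your own documented process:

```typescript
// risk.ts — a minimal probability-by-severity risk matrix. The scales
// and cutoffs here are illustrative placeholders, not a standard.
type Probability = 1 | 2 | 3 | 4 | 5; // 1 = rare, 5 = frequent
type Severity = 1 | 2 | 3 | 4 | 5;    // 1 = negligible, 5 = catastrophic
type RiskLevel = "acceptable" | "investigate" | "unacceptable";

function assess(probability: Probability, severity: Severity): RiskLevel {
  const score = probability * severity;
  if (score >= 15) return "unacceptable"; // fix before release
  if (score >= 6) return "investigate";   // needs a documented mitigation
  return "acceptable";                    // track, fix in normal course
}

// A crash that silently drops a dose record: infrequent but severe.
console.log(assess(2, 5)); // "investigate" — mitigation required
```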
Corrective and preventive action
- Do you have regular reviews of your processes, e.g. sprint retrospectives or post-release retrospectives?
- Do you document the things in your process that could be improved?
- Do you implement those things, and then later measure their effectiveness?
- Do you have a mechanism for prioritizing process fixes amongst all of the other work that needs to happen?
Process documentation
- Is your software development process documented in such a way that a new person can come up to speed and follow it with minimal reliance on "tribal knowledge"? (Add 1 bonus point if you are an open source project and an outside developer can come up to speed and build your project with minimal help.)
- Is your documentation publicly available so that anyone, even people outside your organization, can inspect and comment on it?
Organizational empowerment
- If an employee or partner wanted to escalate an important issue internally, would they be welcomed? Is vocally raising issues internally encouraged? (If an employee would be shunned in any way, subtract 1.)
- Is it clear whom outside your organization an employee could escalate to, e.g., a board member or the FDA?
- Are there clear mechanisms for employees and partners to raise issues? Do those issues get documented and prioritized against all other work?
Operational excellence
Cybersecurity
- Do you encrypt all data at rest and in transit? (See the sketch after this list.)
- Are all secret keys stored in a protected place and is it easy to rotate them? Is the process documented?
- Is your software digitally signed? Do you have a mechanism for knowing if your software got tampered with, especially for software running on devices?
- Do you offer 2-factor authentication for your users?
- Do all of your employees use 2-factor authentication for all activities?
- Do you use an external agency to do penetration testing?
- Do you have an active Responsible Disclosure Program?
- Is your code open source and available for public review?
- Do you maintain configuration info, including deployment keys, independent of source code? Is access to those keys limited to specific people who do software deployments?
- Are your servers locked down to configurations that keep ports and network access limited as much as possible?
- Do you review available security patches and update configurations on a regular basis?
- Do you know about OWASP (https://www.owasp.org)? Do you review its guidance in relation to your software?
- Do you document and prioritize security issues based on a documented risk analysis process?
- Are all of your security issues documented in one place?
- Do you evaluate and prioritize security issues regularly along with all of the other work that needs to be done?
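On the encryption-at-rest question, here is roughly what the table stakes look like with Node's built-in crypto: authenticated encryption (AES-256-GCM) with the key supplied from the environment rather than from source control. This is a sketch only; real key management, key IDs for rotation, and where the ciphertext lives are your system's job:

```typescript
// encrypt.ts — authenticated encryption at rest with AES-256-GCM.
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// The key comes from the environment (better: a secrets manager),
// never from source control. 32 bytes, hex-encoded.
const key = Buffer.from(process.env.DATA_KEY_HEX ?? "", "hex");
if (key.length !== 32) throw new Error("DATA_KEY_HEX must be 32 bytes of hex");

function encrypt(plaintext: string): Buffer {
  const iv = randomBytes(12); // unique per message; never reuse with a key
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Store iv + auth tag + ciphertext together; all three are needed to decrypt.
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]);
}

function decrypt(blob: Buffer): string {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const ciphertext = blob.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // tampered data fails here, loudly
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

console.log(decrypt(encrypt("bg: 112 mg/dL"))); // round-trips
```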
Continuity of operation
- Are your servers in multiple data centers or availability zones?
- Do you create regular backups and have you documented and tested the restoration process?
- Do you use a high-availability, fault-tolerant system like AWS, Google App Engine or Rackspace (as opposed to trying to build/host your own systems)?
- Do you use automated logging and alerting systems? Do you have a 24x7 ops team or an on-call rotation? (See the sketch after this list.)
- Do you have multiple, fault-tolerant instances of your production environment? Will your app/service keep working fine if hardware goes down?
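A concrete starting point for the logging-and-alerting question is a health endpoint that your load balancer and monitoring systems can poll. A minimal sketch using Node's built-in HTTP server; `checkDependencies` is a placeholder for real database/queue/upstream checks:

```typescript
// health.ts — a minimal health endpoint for load balancers and alerting.
import { createServer } from "node:http";

async function checkDependencies(): Promise<boolean> {
  // Placeholder: ping your database/queue/upstreams here and
  // return false (which surfaces as a 503) on any failure.
  return true;
}

createServer(async (req, res) => {
  if (req.url === "/health") {
    const healthy = await checkDependencies();
    res.writeHead(healthy ? 200 : 503, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ healthy, version: process.env.APP_VERSION ?? "unknown" }));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
```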
Ongoing feedback and post-market analysis (a.k.a. post-market surveillance)
Post-market quality analysis
- Do you have a system that allows your users to intuitively and easily submit issues, complaints or support tickets?
- Do you have an automated mechanism for collecting issues that your users are having (e.g. via logging or crash reporting)? (See the sketch after this list.)
- Can users easily identify what version of your software they are running?
- Do you analyze those issues on a regular basis, including doing risk/hazard analysis?
- Do you create and prioritize new tasks based on the analysis, including the risk?
- Does your software automatically report back when it encounters errors?
- Do you log issues with your software and analyze them regularly?
- Have you ever had a report of a critical, severe or catastrophic hazard due to your software? (Subtract 10 points or more for each one.)
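Here's a miniature of automatic error reporting, tagged with the exact version the user is running so the build can be reconstructed. The `/api/crashes` endpoint and payload shape are hypothetical placeholders (a hosted crash-reporting service fills the same role), and it assumes Node 18+ for the global `fetch`:

```typescript
// crash-report.ts — phone home on uncaught errors, tagged with the
// exact version the user is running. Endpoint and payload are
// hypothetical; never include PHI in a crash report.
const APP_VERSION = process.env.APP_VERSION ?? "unknown";

async function reportCrash(error: Error): Promise<void> {
  try {
    await fetch("https://example.com/api/crashes", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        version: APP_VERSION, // lets you reconstruct the exact build
        message: error.message,
        stack: error.stack,
        occurredAt: new Date().toISOString(),
      }),
    });
  } catch {
    // Reporting must never crash the app in turn; drop the report.
  }
}

process.on("uncaughtException", (err) => {
  void reportCrash(err).finally(() => process.exit(1));
});
```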
Post-market outcomes-based analysis
- Do you claim specific outcomes based on your software? If so, answer the following questions. (If not, give yourself 5 free points. This means that your software is, for example, a "Medical Device Data System" (MDDS) or an Electronic Health Record system: you are just moving information around, not making any medical recommendations or claims.)
- Can you reference published, peer-reviewed studies that show your claimed results?
- Could someone else run a study and replicate your results? (Be honest! If you prevent researchers from doing comparative studies of your product, subtract 2 points.)
- Has someone else replicated your results?
- Does your logging/metrics system gather data that allows you to validate your claimed results?
- Do you have a process for following up with the public if your software might not support your claims, e.g. via social media or email campaign?
- Do you make it easy for people to let you know how it's going with your product, e.g., via email or social media?