The headlines were unanimous: The mobile app used for tallying results during the Iowa Democratic Caucus was an unmitigated failure. Not only did it delay the outcome of the vote count, it cast a shadow on the integrity of the voting process.
What went wrong? Could it be, as opined by Vox, that “using an app to tally election results wasn’t such a good idea”?
Or was this simply a case of great idea, poor execution?
In this piece, we’ll examine what went wrong with the Iowa Democratic Party’s app, what a good app would have looked like, and how government agencies, political bodies, and other high-pressure groups can avoid the same mistakes.
How Not to Develop and Deploy an App
The problem surfaced hours after the caucuses ended. The Iowa Democratic Party had not reported results, citing inconsistencies in the reporting data. Officials were quick to say the delay was not caused by a hack or intrusion.
Still, speculation surfaced about possible security problems with the technology. On Twitter, stories raising concerns about the caucus app’s vulnerabilities resurfaced. One of the top concerns cited in those pieces centered on the plan for caucus volunteers to download the app directly to their phones, which made it difficult to ensure the safety of the devices.
As the hours ticked by, chaos ensued, with the campaigns of two candidates claiming victory as the field headed east for the New Hampshire Democratic Primary on February 11.
What went wrong? As it turns out, quite a lot:
- Caucus field staff claimed the app wasn’t working properly. Some could not download the app. Others couldn’t sign into it, and still others complained that the backup method, reporting by phone, wasn’t letting their calls through.
- Cybersecurity experts and academics said the app was not tested at statewide scale or vetted by the Department of Homeland Security’s cybersecurity agency.
- And even if the app were working, reports suggest, the rollout of the tool was so badly botched that those responsible for reporting via the app were never trained on how to use it.
- The app was not deployed through traditional app stores or even sideloaded using an enterprise certificate. Instead, it was distributed through mobile testing platforms: Apple’s TestFlight and a similar platform that serves both iOS and Android, called TestFairy. App developers and large software makers typically use these platforms for mobile apps that are still in beta (i.e., not quite finalized), distributing the beta version of the software without having to go through the rigorous App Store and Play Store review processes.
- The app was distributed using TestFairy’s free tier, not its enterprise one. The developers didn’t pay for the TestFairy plan that comes with single sign-on authentication, unlimited data retention, and end-to-end encryption. Instead, they used the version anyone can try for free, which deletes app data after 30 days and limits the number of test users that can access the app to 200.
In short, it appears that the app was rushed into use long before the necessary testing, training, and due diligence had taken place.
Rules of the App Development Road
Hindsight being what it is, it’s easy for observers to shake their heads and say, “We would never do that.”
But, how can they make sure? The key is to know what is required to develop an app the right way:
- A rigorous, formal authorization process
- A formal risk model
- A formal threat model for the application
For example, developers working on federal websites must go through the security authority within their respective agency. That person confirms the application has gone through the FedRAMP certification process for cloud-hosted environments. The process confirms whether apps meet a standard set of 350-450 controls.
Some would argue there ought to be a similar formal process in place when software designed for government or public use is developed by external vendors. Unfortunately, the added work involved in meeting such standards would also add considerable cost to every project.
Fortunately, many developers still use those federal standards to establish clear frameworks while designing and testing their projects. Working from the outside in, these developers consider such questions as “What boundary protections does this software need?” and “How does this software interface with other systems?”
Ready to Launch
State primaries and caucuses call for an app that is both secure and able to withstand heavy, bursty demand, which is verified through rigorous load testing (i.e., putting realistic demands on a system and measuring its response). An app may work beautifully during standard functional testing but collapse completely once thousands of people start to use it at the same time.
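To make the idea concrete, here is a minimal sketch of what a load test measures. This is an illustrative harness, not the caucus app’s or Mobomo’s actual tooling: it fires many concurrent requests at a stand-in endpoint (here, a function that simply sleeps) and reports latency percentiles, which is the kind of signal that reveals whether a system degrades under concurrency.

```python
import concurrent.futures
import statistics
import time

def load_test(request_fn, concurrency=10, requests_per_worker=10):
    """Fire concurrent requests at `request_fn` and collect per-request latencies."""
    def worker(_):
        times = []
        for _ in range(requests_per_worker):
            start = time.perf_counter()
            request_fn()  # in a real test this would be an HTTP call to the app
            times.append(time.perf_counter() - start)
        return times

    latencies = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        for result in pool.map(worker, range(concurrency)):
            latencies.extend(result)

    latencies.sort()
    return {
        "requests": len(latencies),
        "median_s": statistics.median(latencies),
        "p95_s": latencies[int(len(latencies) * 0.95)],  # 95th-percentile latency
    }

# Stand-in "endpoint" that just sleeps briefly, so the sketch is self-contained.
stats = load_test(lambda: time.sleep(0.001))
print(stats)
```

In a real engagement, the stand-in lambda would be replaced by requests against a staging environment sized like production, and the concurrency would be scaled to the expected statewide peak rather than the handful of users seen in everyday testing.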
As an example, Mobomo’s own load-testing procedures proved invaluable during the NASA.gov webcast of the “Great American Eclipse” in 2017. The NASA site typically has 10,000 – 50,000 simultaneous viewers. But on the day of the eclipse, NASA streamed the all-day event, which generated five to six times the streaming traffic of that year’s Super Bowl.
The NASA site performed superbly even under these conditions in part because it was designed to meet the FedRAMP continuity of operations planning controls. These controls ensure that if one aspect of the software goes down, backup systems bypass the problem and the software continues to function with minimal interruption to the user.
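The failover behavior those controls require can be sketched in a few lines. The function and channel names below are hypothetical, chosen only to illustrate the pattern: retry transient failures on the primary reporting channel, then fail over to a backup so the user sees minimal interruption.

```python
import time

def report_results(payload, channels, retries=2, base_delay_s=0.05):
    """Try each reporting channel in order, retrying transient failures
    with exponential backoff before failing over to the next channel."""
    errors = []
    for channel in channels:
        for attempt in range(retries + 1):
            try:
                return channel(payload)
            except ConnectionError as exc:
                errors.append(str(exc))
                time.sleep(base_delay_s * (2 ** attempt))  # back off, then retry
    raise RuntimeError("all reporting channels failed: " + "; ".join(errors))

# Hypothetical channels: the primary is down, the backup accepts the report.
def primary_api(payload):
    raise ConnectionError("primary API unreachable")

def backup_api(payload):
    return {"status": "accepted", "precincts": payload["precincts"]}

result = report_results({"precincts": 42}, [primary_api, backup_api])
print(result)
```

The point is not the code itself but the design choice it embodies: the failure of one component is anticipated, detected, and routed around automatically, instead of leaving users stuck redialing a phone line.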
And that’s the whole point of developing an app for a high-volume, high-pressure task like reporting data from caucus sites or primary voting stations: using the best technology, the best processes, and the highest levels of expertise to make an app that works so smoothly and accurately that one would never realize the complexity and rigor that went into developing it.