The Security Implications of Open Source Software (OSS)

The open-source software (OSS) movement has its origins in the earliest days of computing, when it was the norm for pioneering programmers to collaborate freely, sharing their code in order to learn from each other. The advent of commercial software in the 1970s and '80s, and the huge profits that companies like Microsoft were able to reap from “closed” software ecosystems, initially pushed OSS to the side. However, the publication in 1997 of Eric Raymond’s essay The Cathedral and the Bazaar led to a resurgence of the idea and was a factor in the release of Netscape Communicator as free software, source code that became the basis for Mozilla Firefox and Thunderbird. Today, OSS is an extremely important component of many world-class software products and applications, but the road to widespread acceptance of open source as an effective tool for developing commercial software has not always been smooth. Licensing terms are frequently a barrier to OSS adoption, and those concerns are valid, but gone are the days when most software development shops forbade its use. (When I worked for IBM in the early '90s, developers were not officially allowed even to use open-source editors such as GNU Emacs to write code, due to concerns that the GPL license would “contaminate” the IBM software products built with them!)

But what about security, you say? Is OSS a cure or a curse when it comes to software vulnerabilities? And since the source code for OSS is by definition public, is there any point in applying app-hardening protections such as obfuscation? I’d like to address these questions in this post.

A singular benefit of OSS (besides the fact that it’s “free”, of course) is that its source code is subject to public scrutiny and review: many eyeballs mean fewer bugs and higher quality. Essentially, you’re crowd-sourcing your development and QA teams to the entire world (assuming the OSS is important enough to attract that much interest). For critical security-related components, e.g. crypto libraries like OpenSSL, this should mean that vulnerabilities in applications using those components are rarer, and that those which are discovered are quickly addressed by patches from the community of open-source developers. Proprietary commercial software, by contrast, is only as good as the development team that creates it, and is subject to cost and schedule pressure, which often means that quality (and security!) end up being neglected.

Unfortunately, that sword cuts both ways: just as open-source developers can be the source of fixes, they can also be the source of bugs. A good example is the so-called Heartbleed bug, which was introduced into OpenSSL in 2012 by a Ph.D. student and went undiscovered for another two years. This serious vulnerability allowed attackers to read sensitive memory on both clients and servers that used the flawed library (as security guru Bruce Schneier put it, “On a scale of 1 to 10, this is an 11”). Due to the ubiquity of OpenSSL, it is estimated that over 300,000 public servers were vulnerable to Heartbleed, and there are many examples of security breaches that resulted, including the theft of Social Insurance Numbers from the Canada Revenue Agency.
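For readers curious about the mechanics, here is a heavily simplified sketch of the Heartbleed pattern in C. This is not the actual OpenSSL source; the function and variable names are made up for illustration, but the essence is the same: the length field comes from the attacker, and the missing bounds check lets memcpy() read past the real payload into adjacent heap memory, which is then echoed back.

```c
#include <string.h>

/* Simplified sketch of the Heartbleed pattern (NOT the real OpenSSL code).
 * A TLS heartbeat request is: 1 byte type, 2 bytes payload length, payload.
 */
void heartbeat_reply(const unsigned char *req, size_t req_len,
                     unsigned char *reply)
{
    /* Attacker-controlled 16-bit payload length taken from the request. */
    size_t payload_len = (size_t)((req[1] << 8) | req[2]);
    const unsigned char *payload = &req[3];

    /* The vulnerable code effectively did this, with no check of
     * payload_len against the actual record length:
     *
     *     memcpy(reply, payload, payload_len);
     *
     * The fix is to validate the claimed length first and silently
     * discard malformed requests (as the real patch does):
     */
    if (3 + payload_len > req_len)
        return;

    memcpy(reply, payload, payload_len);
}
```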

But Heartbleed highlights another problem with OSS: even when vulnerabilities are identified and fixed, it can take years for the patched library to make its way into all the affected applications, at least in part because developers don’t apply much scrutiny or effort to “free” software and may neglect to pick up new versions. Three years after the Heartbleed fix was available, it was estimated that over half of the vulnerable public servers were still running the flawed version of OpenSSL.
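One low-cost mitigation is to have your application report, or even sanity-check, the OpenSSL version it actually loads at run time, since the shared library on a deployed system can lag far behind the headers you built against. Here is a minimal sketch using the OpenSSL 1.1.0+ API (older releases expose the same information via SSLeay()):

```c
#include <stdio.h>
#include <openssl/opensslv.h>  /* compile-time version macros */
#include <openssl/crypto.h>    /* OpenSSL_version(), OpenSSL_version_num() */

int main(void)
{
    /* Version baked in from the headers at compile time. */
    printf("Built against: %s\n", OPENSSL_VERSION_TEXT);

    /* Version of the library actually loaded at run time. */
    printf("Running with:  %s\n", OpenSSL_version(OPENSSL_VERSION));

    /* A mismatch may indicate a stale (possibly unpatched) library. */
    if (OpenSSL_version_num() != OPENSSL_VERSION_NUMBER)
        fprintf(stderr, "Warning: header/library version mismatch!\n");

    return 0;
}
```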

Although Heartbleed happened by accident (the author of the bug was, of course, mortified!), there is also the possibility of a vulnerability being introduced into OSS intentionally (so-called “supply-chain poisoning”). Given the sheer number of code changes submitted for review, it is always possible that one of them is malicious, and the gate-keeping process can never be perfect.

Another question that often comes up: “Is there any point in applying application-hardening techniques to OSS?” After all, the source code is publicly available, so in principle the hardest part of reverse engineering (recovering the original design) is already done. Certainly, some software-protection techniques such as code obfuscation may seem less useful with OSS, since they completely preserve the semantics of the original code. In the case of Heartbleed, the vulnerability was baked into the library’s handling of the TLS heartbeat extension: once known, that class of vulnerability would still be present even if the code were obfuscated. However, software protection can indeed improve the security of OSS application code by making it difficult to determine the binary code location corresponding to a specific source-code location, which can protect against exploits that require modification of code or data. RASP (Runtime Application Self-Protection) is potentially even more important, since it can prevent dynamic (runtime) tampering and can detect and/or neutralize attack tooling such as hooking frameworks. It is also important to remember that OSS may be mixed with proprietary code, and protecting ALL of your code makes it difficult to distinguish the openly available code from the rest. Thus, it is always best to apply code obfuscation generally, even if you make use of OSS library code.
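To give a flavor of what a RASP-style check looks like, here is a minimal (and deliberately naive) sketch in C that hashes the in-memory bytes of a security-critical function and compares the result against a known-good value, to detect inline hooks or code patches. Everything here is hypothetical: sensitive_operation, CHECK_LEN, and EXPECTED_HASH are placeholders, and a real product would generate the reference hash post-build, run many overlapping checks, and protect the checking code itself.

```c
#include <stdint.h>
#include <stdio.h>

#define CHECK_LEN     64                     /* bytes of code to hash      */
#define EXPECTED_HASH 0xdeadbeefcafef00dULL  /* injected post-build (stub) */

/* Simple FNV-1a hash over a byte range. */
static uint64_t fnv1a(const uint8_t *p, size_t n)
{
    uint64_t h = 0xcbf29ce484222325ULL;      /* FNV offset basis */
    while (n--) {
        h ^= *p++;
        h *= 0x100000001b3ULL;               /* FNV prime */
    }
    return h;
}

/* The function we want to protect against runtime patching/hooking. */
int sensitive_operation(int x) { return x * 2; }

/* Returns nonzero if the function's code bytes match the reference hash.
 * Note: reading code through a function pointer is not strictly portable
 * C, but is the common approach on mainstream platforms. */
int code_is_intact(void)
{
    const uint8_t *code = (const uint8_t *)(void *)sensitive_operation;
    return fnv1a(code, CHECK_LEN) == EXPECTED_HASH;
}
```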

In conclusion, I would emphasize that using OSS, particularly for security-centric code such as cryptography and communications protocols, is a great way to ensure that you’ve got a well-designed and thoroughly tested basis for your application: you are leveraging a global community of expert developers who really care about “getting it right”. That said, when using OSS it is very important to monitor for updated versions, particularly patches for vulnerabilities as they are identified, and to have an update strategy for pushing new versions of your application out to users. And finally, it is vital to make use of application-hardening techniques, including code obfuscation, data and resource encryption, and RASP: anything you can do to make reverse engineering and tampering more difficult will make your application more secure.


This is a great write-up. I often wonder about obfuscating the open-source libraries used in an application in such a way that it deters an attacker from identifying the specific library version and then looking up its known CVEs (if the developers are not actively updating the library).


Indeed, it would be feasible to scan apps for the signatures of vulnerable OSS library code as you suggest, and code obfuscation and string encryption would defeat those scans. There would likely be other telltale traces of a vulnerable library (such as version-reporting features) that obfuscation would not remove, so honestly, monitoring for CVEs in your OSS components and having (and using!) an update strategy is a must.
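To make that concrete, here is a toy illustration of how string encryption hides such fingerprints: rather than embedding a plaintext version banner (an easy signature for scanners), the sketch below stores it XOR-encoded and decodes it only on demand. The key and helper names are hypothetical, and real obfuscators generate the key and do the encoding automatically at build time, with much stronger schemes than a single-byte XOR.

```c
#include <stdio.h>

#define XOR_KEY 0x5A  /* hypothetical build-time key */

/* "OpenSSL 1.0.1f" pre-XORed with XOR_KEY (done by a build script),
 * so the plaintext banner never appears in the binary. */
static const unsigned char enc_banner[] = {
    0x15, 0x2a, 0x3f, 0x34, 0x09, 0x09, 0x16, 0x7a,
    0x6b, 0x74, 0x6a, 0x74, 0x6b, 0x3c, 0x00
};

/* Decode the banner into caller-supplied storage only when needed. */
static const char *decode_banner(char *buf, size_t buflen)
{
    size_t i;
    for (i = 0; i + 1 < buflen && enc_banner[i] != 0; i++)
        buf[i] = (char)(enc_banner[i] ^ XOR_KEY);
    buf[i] = '\0';
    return buf;
}

int main(void)
{
    char buf[sizeof enc_banner];
    printf("%s\n", decode_banner(buf, sizeof buf));  /* OpenSSL 1.0.1f */
    return 0;
}
```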


BTW, a very interesting post from cryptography guru Bruce Schneier on this topic:
https://www.schneier.com/blog/archives/2020/12/open-source-does-not-equal-secure.html

A couple of interesting quotations: “GitHub found that 94% of projects now rely on open source components, with close to 700 dependencies on average.”

And: “On average, vulnerabilities can go undetected for over four years in open source projects before disclosure. A fix is then usually available in just over a month.”

This confirms that an effective vulnerability-monitoring and update strategy is essential when using OSS.


Another interesting data point from the RISKS Digest about vulnerabilities in open-source TCP/IP stacks for embedded/IoT devices: Amnesia:33 — Critical TCP/IP Flaws Affect Millions of IoT Devices