I initially wanted to talk only about concepts related to my own code: how to secure it as best I can.
As strange as it may sound, the first thing I thought of when I started typing was not directly related to the code I write, but to the code I use without questioning its source or its security. I was thinking about the issues I have with external code. Who is responsible for handling them? How am I dealing with vulnerabilities? Which ones do I need to mitigate, and which ones do I sometimes have to accept? And finally, what is the easiest way for me to search for vulnerabilities? Sometimes I feel more like a risk manager than a software architect! For this reason, I decided to elaborate on the word SECURE in the context of the Software Development Life Cycle (SDLC) and one of its phases, Development/Coding.
Anyway, I don’t know how I got all those magical dependencies. But when I tried to remove them from the folder tree to test my Single Page App (SPA), I got a surprise. I was unable to remove them because my Operating System could not manage them. One of the modules was written in a language I can’t read (meaning it was not in French, English, Italian or Spanish or, if one of my good friends gives me some help, German), and neither could my Operating System. Long story short, it worked after a while, but then I found a vulnerability in that framework. I couldn’t update the affected module because my Cloud Hosting Provider didn’t provide the update for its Virtual Machine packages. Honestly, my extraordinary 4-line application is still vulnerable, precisely because I thought it was secure.
I thought it was, before I knew it was vulnerable to something. Now, is it? Of course not, if there is a known exploit (an exploit is code or a tool that takes advantage of a known vulnerability or bug in software). But if there is no exploit available, is it still secure? Now I must decide what to do to manage the risk. The funny part is that I can hear one of my good friends saying to me: “Hey Steph, maybe it’s because I am not an IT security expert, but it seems to me that it was not safe and secure before, if you know now that it was vulnerable back then!”
It is more philosophical than technical, in my humble opinion, but that imaginary friend could be both right and wrong. What I am saying is simple: if everybody knows that you have a lot of money in your safe, but nobody knows that your combination is 00000 (the default passcode), then who will try to steal your money? In that case, you are vulnerable, but your exposure may be lower, because the thieves will probably focus on an easier target; more likely one who tells everybody that a mattress is the best place to keep money. But when nobody knows about a vulnerability in software I am using… can I say that I was vulnerable to an exploit that did not exist at that time?
Is a vulnerability without an exploit still dangerous? Maybe. Maybe a little. Maybe not. We could start a big discussion on exposure, but obviously, my code was not 100% secure.
I think we use some of those qualitative values because quantitative values are sometimes too difficult to define within the scope of projects with limited time and resources. Honestly, I don’t believe in 100% secure code anyway. Of course I could test my code (and I do), but I deeply believe that, by definition, testing means testing what you already know you must test. If a vulnerability is not yet known, you can’t test for it! Right?
For me, I really like the short definition of Application Security (Secure Coding is a part of it) in Wikipedia: “Application security encompasses measures taken throughout the code's life-cycle to prevent gaps in the security policy of an application or the underlying system (vulnerabilities) through flaws in the design, development, deployment, upgrade, or maintenance of the application”. I like it because it is based on ISO/IEC 27034-1:2011. In that definition, measures are the actions/controls we must take to achieve Secure Coding Principles. For me, controls equal Risk Management.
The truth is that we sometimes involuntarily create risks by using external code, for instance open source frameworks, modules, web services, etc. (and the proprietary ones too!). Vulnerabilities do not take sides; they are not selfish. They love to sneak in, hide and spread everywhere. There is no battle between open source and proprietary for them! The battle is about keeping them outside of our realm. And the risk does not come only from external code dependencies; I could also add operating systems, build tools, external access, shell access, databases, scripts, access rights, etc.
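One small, concrete way to control what we let into our realm is to pin a checksum for every external artifact when we first review it, and refuse anything that no longer matches. A minimal sketch, assuming a hypothetical helper and made-up archive contents purely for illustration:

```python
import hashlib

# Hedged sketch: verify_artifact is a hypothetical helper; the archive
# bytes and pinned digest below are made up for the example.
def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact matches the digest pinned at review time."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

artifact = b"pretend this is a downloaded module archive"
pinned = hashlib.sha256(artifact).hexdigest()  # recorded when first reviewed

assert verify_artifact(artifact, pinned)          # untouched: accepted
assert not verify_artifact(b"tampered!", pinned)  # modified: rejected
```

This does not find vulnerabilities, of course; it only guarantees that what you run today is the same code you reviewed yesterday, which is a precondition for any of the risk decisions discussed below.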
Being a target for all of the above, I minimally need to set, or help my team set, the level of security we can afford to live with. As a group, we need to make our best effort to validate the vulnerabilities of what we use and include, until we reach the ROI acceptance level. This is where we define what “secure” means for the software or system we are building.
We spend a lot of time evaluating risks (well, I hope some of us do at least) through code reviews, multiple analyses, best practices, security patterns, etc. We want to manage the risk at a certain level. But at which level? A good balance lies somewhere between obsessively reading every line of everything we use and doing nothing at all. This is another good place to decide what “secure” will mean for your project.
The question is: what should we do to avoid all of this work and still reach the required level of security? Should we reinvent the wheel each time? Maybe recode our own programming language? Build every framework we use? I am not sure that would help the business, or ensure that we keep our jobs.
A solution: we calculate ROI with the R&D Director. We mitigate. We enforce due care and due diligence. That responsibility sits on everyone’s shoulders, not just one or two people; we are all responsible for whom and what we let into our home, or into our code in the present case.
But times change. It is simplistic, but it is so true. What was true 20 years ago is certainly not true today. I still remember a 1980s movie in which Japanese landline videophones scared me, because the person on the other end could see you! I was so stressed that I might answer the phone in my birthday suit, and it could be my mom on the other end!
Think about OpenSSH. Who would have believed that this monster of reliability, with a great reputation, well tested, analyzed, re-analyzed and re-re-analyzed by our best “cryptobrains”, and deployed almost everywhere, was vulnerable to a plaintext recovery attack?
Can we say that all those developers and engineers didn’t do their job of managing the risk? At some point, you can’t do everything. These are the side effects of living in reality, with limited time and resources. People need to trust, at a certain level, what others did. If not, you are living in a bubble where there are no vulnerabilities, but no interactions either. This is why we have so many security principles available to us.
It is always hard for me to trust what I don’t own, because it reminds me that “you can’t fix what you don’t understand”. But we need to let it go (and get some sleep) after a while. It is after that acceptance period that I begin to trust, for good or bad reasons, though I sometimes forget that.
Although we can’t prevent all possible attacks or patch every imaginable vulnerability, I strongly suggest at least trying to mitigate the known risks/vulnerabilities down to your own acceptance level (nota bene: risk avoidance is not a well-accepted risk management practice). Many tools are available to us, and now that you know they exist, you can’t avoid them anymore (it is like risk; I just wanted to write down the word risk two more times!).
During the planning of this article, I used a principle from the Secure Coding Principles section of the Open Web Application Security Project (OWASP): “Don’t trust services”! This principle is only one of many good practices we need to take care of, such as minimizing the attack surface area, failing securely, separation of duties, and building security in as a design task.
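In code, “Don’t trust services” mostly means validating whatever an external service sends back before letting it flow into your application, and failing securely when it looks wrong. A minimal sketch, assuming a hypothetical pricing service whose response shape is entirely my invention:

```python
# Hedged sketch of OWASP's "Don't trust services" idea: parse_price and
# the "price" field belong to a hypothetical service, not a real API.
def parse_price(payload: dict) -> float:
    """Accept only a plain non-negative number; reject anything else."""
    price = payload.get("price")
    # bool is a subclass of int in Python, so exclude it explicitly
    if isinstance(price, bool) or not isinstance(price, (int, float)):
        raise ValueError("untrusted service returned a non-numeric price")
    if price < 0:
        raise ValueError("untrusted service returned a negative price")
    return float(price)

assert parse_price({"price": 9.99}) == 9.99
# parse_price({"price": "free"}) raises ValueError instead of letting a
# bogus value propagate through the rest of the application.
```

The point is the posture, not the helper: the error path refuses the data outright rather than guessing a default, which is what “failing securely” looks like at this scale.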
If you are interested, here are three websites where I search for possible vulnerabilities in the software, tools and modules I use.
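Some of these databases can also be queried programmatically. As a hedged sketch, here is how one might build a query for the OSV.dev vulnerability database; the endpoint and payload shape are an assumption based on OSV's public API documentation at the time of writing, and the package name and version are just examples:

```python
import json

# Hedged sketch: the OSV v1 query shape below is an assumption taken
# from OSV's public docs; verify it before relying on this in practice.
def build_osv_query(name: str, ecosystem: str, version: str) -> str:
    """Build the JSON body for a POST to https://api.osv.dev/v1/query."""
    return json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    })

body = build_osv_query("lodash", "npm", "4.17.15")
# POST `body` to https://api.osv.dev/v1/query (e.g. with urllib.request);
# a non-empty "vulns" list in the response means known vulnerabilities
# exist for that exact version.
```

Scripting this kind of lookup into a build pipeline turns the occasional manual search into a routine check, which is exactly the low-effort mitigation argued for above.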
Please use those tools liberally. Your software’s users will be very grateful!