Increasing Your Company’s Security by Encouraging Responsible Disclosures


There’s always a gamble for security researchers when disclosing vulnerabilities to companies. Is the company going to read the report? How will they react to someone testing their security? Where should the report even be sent? These are some of the questions a researcher weighs before making a decision that can financially impact the company they are trying to help.

When a researcher dons their white hat and assumes the role of ethical hacker, it is their duty to report an issue regardless of how they think the company will react. Sometimes a researcher cannot find a way to reach the right person at a company, or believes the company will never acknowledge the report. For these reasons, a researcher may drop the vulnerability entirely and never follow up to see it fixed. Whether the researcher or the company is to blame, the next person to stumble upon the vulnerability may have more malicious intentions, and that is an outcome that could have been prevented.

When millions of people view your website, at least a couple of thousand of them (not counting the bots and scanners blindly trawling the web for common vulnerabilities) are poking and prodding at it for security issues. You have to accept that known vulnerabilities in your website are out in the wild and that some of them are actively being abused. Those same vulnerabilities may well have been discovered by researchers but never reported. This raises the question: how do you convince researchers to responsibly disclose the flaws they discover instead of moving on or abusing them?

Some people mess with security for fun; others just want to see the world burn. Targeted attacks are to be expected, but I have found that many web attacks are opportunistic: a vulnerability is only abused when the attacker has something to gain from it. If the person has no interest in exploiting it, they will either report it or forget about it. If they have malicious intent, everyone has a price, and sometimes paying them not to attack your website is cheaper than cleaning up the mess afterwards. Either way, monetary incentives can encourage people to responsibly disclose security issues whether they were going to abuse them or not.

It’s no surprise that, a week after Twitter announced a no-reward responsible disclosure program on HackerOne, a cross-site scripting vulnerability in TweetDeck was used to attack Twitter. Twitter encouraged security researchers and hackers to look at its systems but gave little to no incentive to actually report what was discovered. The attack could have had far worse consequences and cost Twitter financially. Remember when Fox News’ Twitter account was hijacked and began posting false stories about Obama? Hijacks like that cause real economic damage and have even briefly rattled the stock market.

If there had been a monetary incentive, the attacker might have thought twice about abusing the vulnerability discovered in TweetDeck. People raise their eyebrows and ask why a company would pay a few thousand dollars for a simple XSS vulnerability. The short answer is that a cross-site scripting vulnerability on a popular social media website has the potential to cause millions of dollars in damage and ripple through the global economy. It is cheaper to spend a few thousand dollars to fix an issue quietly and efficiently than to pay far more having employees clean up the mess while engineers drop what they are doing to code and deploy a hotfix.

A lot of smaller companies simply do not have money to throw at security. It is hard enough to survive as a start-up or small business, and spending non-existent time and money on security may not seem like an option. It is a tough reality to face, but leaving your company unsecured is a roll of the dice that could cost you financially. Smaller companies have had their reputations destroyed, or have even gone bankrupt, because of security flaws they could have eliminated. This article is not about threat models or how to secure your company, but putting in a little effort to show that you care about security and appreciate responsible disclosures can help load that roll of the dice in your favor.

So what steps can be taken to fix this?

Companies:

Get a bounty program

Bug bounty programs have been around on the web for years. There are many platforms you can use to get started, such as HackerOne and Bugcrowd. Have the money? Create your own program like Facebook, GitHub, Microsoft, and Google did. But if you do, make sure you do it right, because a bounty program can actually do your company more harm than good if you don’t run it well.

Not everyone who gets involved in bounty programs is the media-defined “bad guy” hacker. Sometimes they are employees of companies that use your product. Some are aspiring security researchers trying to start a professional career. Offer to increase the reward if the bounty is donated to charity; this encourages security professionals who are not looking for money to take a look at your products. It makes both you and the researcher look good, and it supports charity!

Don’t want a bounty program or can’t afford to organize one? Have a section of your website dedicated to security.

If I find a vulnerability in your application and want to report it, I need to be able to find where to report it, and I want to be sure you will react with gratitude instead of hunting me down. Dedicate an entire page to explaining your responsible disclosure policy and where to send reports; it shows that your company genuinely cares about security.
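One lightweight way to make that information discoverable is a short plain-text file served from a well-known location, a convention known as security.txt. The sketch below is only an illustration; the domain, contact address, and URLs are placeholders for whatever your company actually uses.

    # Served from https://example.com/.well-known/security.txt (placeholder domain)
    Contact: mailto:security@example.com
    Policy: https://example.com/security/disclosure-policy
    Acknowledgments: https://example.com/security/thanks
    Preferred-Languages: en

Even if you never adopt the file itself, those same few pieces of information, a contact address, a link to your disclosure policy, and a place where reporters are thanked, belong somewhere a researcher can find them in under a minute.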

There is usually very little for someone to gain from reporting a security issue. It does not hurt to throw the researcher a little swag, even just a t-shirt, as a thank-you for helping secure your company. A security researcher who feels appreciated will spread the message to their friends, followers, and community, and that goes a long way towards building trust with the public.

Security Researchers:

Do your best to clearly communicate the issue you discovered and make sure it gets to the right people.

If you discover a security issue and report it to a company’s marketing or art department, do you really think they are going to know what to do with your e-mail? Even customer support may have no idea what you are saying. I am being facetious, but the point is that a lot of people do not know application security.

If you have no choice but to report it to a non-technical department or employee, be concise. Explain what you are reporting, state where the message should be forwarded, and include a detailed write-up of what is vulnerable. As a bonus, include a non-malicious demonstration or proof of concept of the vulnerability you are claiming exists.
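To make the “non-malicious” part concrete, here is a minimal sketch of the kind of proof of concept that could accompany a reflected XSS report. The target URL and the q parameter are hypothetical placeholders, and the marker string is harmless; the script only checks whether input is echoed back without HTML escaping.

    # Minimal, non-destructive check for a reflected XSS claim.
    # The URL and the "q" parameter are hypothetical placeholders; the marker
    # is benign and only proves that input is reflected without HTML escaping.
    import requests

    TARGET = "https://example.com/search"    # placeholder: the page being reported
    MARKER = '"><em>poc-marker-1234</em>'    # unique, harmless, easy to search for

    def reflected_unescaped(url, marker):
        """Return True if the marker appears in the response body unescaped."""
        response = requests.get(url, params={"q": marker}, timeout=10)
        return marker in response.text

    if __name__ == "__main__":
        if reflected_unescaped(TARGET, MARKER):
            print("Input is reflected without escaping; attach this request to the report.")
        else:
            print("Marker was escaped or not reflected; no evidence of reflected XSS here.")

A demonstration like this proves the flaw exists without popping alert boxes in other users’ browsers or touching any data you do not own, which keeps the report firmly on the responsible side of the line.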

These reports will eventually get forwarded to an engineer. Nothing is worse for an engineer than receiving a report that their code is vulnerable and having no idea what you are trying to communicate. If they cannot figure it out, the response you get is not going to be the one you are looking for.

Don’t publicly disclose an issue just because the company won’t respond; you probably haven’t tried hard enough to get it fixed, and abusing it is not the appropriate next step.

This happens all the time, and it is the direct opposite of responsibly disclosing a security issue. It is especially a problem with bigger companies that run monetary bounty programs and have response times that can stretch into months. If someone feels that the vulnerability they reported is not being taken seriously, they may abuse it to send the company a message that it exists.

This completely defeats the purpose of a bounty program; simply put, do not do it. You are not going to get a bounty reward and you are causing additional problems for the company.

Only very rarely would I go against this rule and say you should abuse the vulnerability in order to get it fixed. More often than not, your bounty report describes a small vulnerability that the public does not know about. If you have a severe vulnerability, such as remote code execution or the ability to read the entire customer database, and the company is not reacting within a week or two, it is very likely that you have not communicated the issue properly or that the information was not forwarded to the right people. Try to find another way to reach the company that is not going to incite a social media or news inferno that damages it.

 

Takeaway

If there is one thing you take away from this article, remember that not all “script kiddies,” “hackers,” or “security researchers” are clearly defined by their labels. The people who report issues in your products may even be your customers. It is up to you, the company, to encourage those people to report what they find if you care about the security of your product and your customers. It is up to you, the security researcher, to act like a professional and understand the consequences of your actions.