Persistent XSS is Not Self-XSS

Participating in bounty programs over the past few years, I have seen a lot of discrimination against what has been dubbed self cross-site scripting (self-XSS). This is a form of XSS that can only be triggered by the victim, either because the server protects the injection point or because the attack is strictly client-side, leaving an attacker no way to force a victim to execute it.

Lately I have seen programs state that they do not accept any form of self-XSS. I will walk through some scenarios that explain the various types of self-XSS, their impacts, and how they can be exploited, hopefully debunking the misconception that these are not vulnerabilities.

Scenario 1: DOM Based Self-XSS

DOM Based XSS is when you have the ability to execute JavaScript by only using JavaScript. It is entirely client-side, and in some cases the payload may never be sent to the server.


DOM Self-XSS: A text input that executes JavaScript and never gets sent to the server because it’s not attached to a request. An attacker cannot force you to load this without clickjacking.


function setName() {
    var inputTxt = document.getElementsByName('firstName');
    var nameElem = document.getElementById('name');
    // The input value is written straight into the DOM without encoding.
    nameElem.innerHTML = "Hello, " + inputTxt[0].value;
}

What is your first name? <input type="text" name="firstName" /> <input type="button" onclick="setName()" value="Set Name" />

<div id="name"></div>




As you can see, there is no way to interact with it other than putting the payload in yourself. Even with clickjacking, the victim has to be tricked into manually dragging and dropping the payload text into the input inside a hidden iframe. The chances of this being exploited are pretty low; malicious attackers are going to move on and find an attack that is easier to work with.

When is this considered self-XSS? If X-Frame-Options is not being used correctly and the website can be placed into an iframe, it is considered a clickjacking vulnerability that can be chained into XSS. If X-Frame-Options is set correctly, this is considered self-XSS as there is no way for an attacker to force a victim to execute the XSS vulnerability.
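Since framing protection is what decides the classification, here is a hedged sketch of what denying framing might look like. It assumes an Express-style Node server with a made-up middleware name (`denyFraming`); the real application's stack is unknown.

```javascript
// Hypothetical middleware that forbids framing entirely, which cuts
// off the clickjacking-to-XSS chain described above.
function denyFraming(req, res, next) {
    res.setHeader('X-Frame-Options', 'DENY');
    next();
}

// Minimal stand-in response object to show the header being set:
var headers = {};
var res = { setHeader: function (name, value) { headers[name] = value; } };
denyFraming({}, res, function () {});
console.log(headers['X-Frame-Options']); // prints: DENY
```

With `DENY` (or `SAMEORIGIN`) in place, the drag-and-drop clickjacking route disappears and the bug is squarely in self-XSS territory.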

Regular DOM XSS: Here is an example of a non-self DOM based XSS. Text is parsed from the #hash part of a URL, which does not get sent to the server as part of the request. An attacker can still force you to load a URL with a payload in it.

function getHashes() {
    var aURL = window.location.href;
    var vars = {};
    var hashes = aURL.slice(aURL.indexOf('#') + 1).split('&');
    for(var i = 0; i < hashes.length; i++) {
        var hash = hashes[i].split('=');
        if(hash.length > 1) {
            vars[hash[0]] = hash[1];
        } else {
            vars[hash[0]] = null;
        }
    }
    return vars;
}

var hashes = getHashes(), redirect;

if(hashes["r"]) {
    redirect = hashes["r"];
} else {
    redirect = "";
}

window.location = redirect;

In this example, you are still able to force a user to execute it by loading a URL. It parses the URL, looks for #r=, and will redirect to the text specified in the r variable if it is set. You can exploit this by putting: #r=javascript:alert(1);. There are a couple different ways you could force someone to execute this, the most common being to hide it in an iframe:

<iframe src="http://fakedomain/#r=javascript:[payload here]"></iframe>
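To make the flow concrete, here is the same parsing logic with the URL passed in as a parameter so it can run outside a browser (the fakedomain URL is just a stand-in):

```javascript
// Same hash-parsing logic as above, parameterized for demonstration.
function getHashes(aURL) {
    var vars = {};
    var hashes = aURL.slice(aURL.indexOf('#') + 1).split('&');
    for (var i = 0; i < hashes.length; i++) {
        var hash = hashes[i].split('=');
        if (hash.length > 1) {
            vars[hash[0]] = hash[1];
        } else {
            vars[hash[0]] = null;
        }
    }
    return vars;
}

// The attacker-controlled fragment ends up as the redirect target:
var hashes = getHashes("http://fakedomain/#r=javascript:alert(1)");
console.log(hashes["r"]); // prints: javascript:alert(1)
```

Because the fragment is fully attacker-controlled and `window.location` accepts `javascript:` URLs, whatever follows `#r=` executes in the victim's session.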

As you can see there are two drastically differing impacts for the DOM self-XSS and regular DOM XSS. One requires a lot of complexity and the likelihood of it being exploited is extremely low. If X-Frame-Options is enabled, it may prevent it from being exploitable entirely. In the regular DOM XSS example, you have your cookie-cutter XSS that can be exploited a bunch of different ways.

Scenario 2: Stored (or Persistent) Self-XSS

Stored XSS is when a user sends data to a server, the server saves that data, and then delivers it back to clients in a later request. There may be a request to update data that only an authenticated user will see, such as a profile setting that is only displayed on an edit profile page. Even if an attacker modifies their own edit profile page, they have no way of forcing the victim to view it. If the victim views the edit profile page, they will see data for their own account rather than the attacker's.

It is fairly common to see these vulnerabilities because most stored XSS are sent to the server in state-changing requests. These requests are usually protected against Cross-Site Request Forgery (CSRF) attacks. That means many of the requests to inject an XSS payload are protected with a randomized token that an attacker needs to know in order to get a victim to execute the request.


Let's say there is a profile page with a location input. The developers never expose the profile location anywhere on the website except the edit profile page. The input's value is set to whatever you send to the server, but you are the only person who can read that value back.




<h1>Edit Profile</h1>

<form method="post" action="/update">
<input type="hidden" name="csrftoken" value="e9c196c01a40916a122584a14a68caa2" />

<p>Username: <input type="text" name="username" value="ziot" /></p>
<p>Email: <input type="text" name="email" value="" /></p>
<p>Location: <input type="text" name="location" value="California" /></p>

<p><input type="submit" value="Update Profile" /></p>
</form>

When you update your profile, you can put an XSS payload into it. There is no way for you to force anyone else to view your edit profile page without making them log into your account. Your goal is to send an XSS payload and use their account, so making them use your account defeats the purpose of the exploit.

Most people will take this scenario and say: make them send the POST request to update their own account and set the location value. After it is set, you redirect them to the edit profile page and it will execute. E.g.

<form method="POST" action="/update" id="csrf-form" enctype="application/x-www-form-urlencoded">
  <input type="text" name="username" value="foo" />
  <input type="text" name="email" value="" />
  <input type="text" name="location" value='"><script>alert(1);</script>' />
</form>
<script>document.getElementById("csrf-form").submit();</script>

Why this doesn't work: The request is protected against Cross-Site Request Forgery (CSRF) and requires you to know the csrftoken set for the authenticated user's session.

Without having the csrftoken for the victim, the attacker cannot force them to execute that request. This is an example of stored self-XSS that many programs will reject. You must first have access to the victim's account in order to exploit it. That makes sense, right? If you already have the victim's account, you don't need to use XSS to force them to execute an action, as you can already log into it.

Consider this example:

You found another vulnerability but it is a Reflected XSS attack. That means you need to force a user to load a malicious URL that contains the XSS payload inside of it. This is what it looks like:




<div class="profile">
<p>Username: "><script>alert(1);</script></p>
</div>
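A hedged sketch of the kind of server-side rendering that would produce this output (assumed for illustration; the real application's code is unknown):

```javascript
// Hypothetical vulnerable renderer: the attacker-controlled username
// parameter is echoed into the page without any HTML encoding.
function renderProfile(username) {
    return '<div class="profile">\n' +
           '<p>Username: ' + username + '</p>\n' +
           '</div>';
}

// The payload comes back verbatim inside the profile markup:
console.log(renderProfile('"><script>alert(1);</script>'));
```

Any URL that sets that parameter now executes attacker JavaScript in the victim's browser, which is exactly the foothold the stored "self-XSS" was missing.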

Now that you have a Reflected XSS vector, you can use it to inject a payload and escalate to Stored XSS. The "self-XSS" is no longer self, because you can force a user to execute the request and gain persistence. How do we do this? Inject JavaScript that hijacks the user's csrftoken and executes the POST request against their account.


$.get("/edit-profile", function(data) {
    var csrf = $("input[name='csrftoken']", data).val();
    var user = $("input[name='username']", data).val();
    var email = $("input[name='email']", data).val();
    var xss = '"><script>alert(1);</script>';
    $.post("/update", { username: user, email: email, location: xss, csrftoken: csrf });
});





You now have a persistent attack against a user and can force them to execute the payload by visiting the /edit-profile URL.

Let's say you report both of these to a bounty program, but the Stored XSS is marked invalid. They decide to fix only the Reflected XSS. The argument is generally that the root vulnerability is the Reflected one, because the Stored attack will no longer be exploitable once the Reflected vulnerability is fixed. All the while they are ignoring the fact that the user is still affected by the Stored "self-XSS" because they decided not to fix it.

Why? Just fix it.

I guarantee almost every major bug bounty program out there, such as Facebook, Twitter, Uber, and Google, has received at least one Reflected XSS report. Not a single company is going to say they are 100% secure against XSS attacks. There is a precedent set that a company can be vulnerable to XSS, therefore a Stored "self-XSS" may actually be exploitable when chained with another XSS vulnerability. And this is only one example. Consider these other scenarios:

  • An admin account is compromised, but the attacker wants to retain access to it. The attacker finds a persistent XSS vulnerability and stores the payload on the account. The company resets the admin's account and the attacker loses control over it. Every time an admin logs into that account and the XSS payload fires, the attacker is able to force the admin to execute any admin action they want.
  • There is a csrftoken leak in one of the requests, which gets sent to any website the attacker specifies. The attacker forces an admin user to leak their csrftoken and is then able to force the admin into sending a state-changing POST request containing the persistent XSS payload.
  • etc.

What I think we as a community should collectively consider:

  • Cross-Site Scripting is bad, from both a security and an engineering perspective. The number of self-XSS reports a company receives should be small enough that they should just fix the problem. If they appreciate the time the researcher spent hunting for vulnerabilities and reporting an issue, they should just pay the researcher.
  • It's a real vulnerability with real impact even if the reality is that it probably won't be exploited. If bounty programs paid based on attack probability, the payouts would be a lot lower across the board. Not all SQL injections will have critical impact, but they're almost always paid as a Critical/P1 vulnerability.
  • The same way that program owners will not pay for clickjacking without a demonstrated vulnerability, they should probably also require the researcher to demonstrate that the Stored "self-XSS" is exploitable. If the researcher has already reported at least one Reflected XSS or csrftoken leak, you shouldn't require them to do it every single time. The precedent has already been set: the researcher (or a malicious attacker) could have sat on that vulnerability and chained the two together.