Top .NET Web App Security Threats: A Conversation with Troy Hunt

Editor's Note: Welcome to .NET Rocks Conversations, excerpts from the .NET Rocks! weekly Internet audio talk show. This month's excerpt is from show 735, with Troy Hunt, a software architect and Microsoft MVP for developer security, author of OWASP Top 10 for .NET Developers, and creator of ASafaWeb -- the Automated Security Analyzer for ASP.NET Websites. Troy joins Carl and Richard in a discussion of the most prevalent security vulnerabilities that .NET and ASP.NET developers must deal with in their applications.

Carl Franklin: So, OWASP Top 10 for .NET Developers was your landmark series. Tell us about this.

Troy Hunt: OWASP is the Open Web Application Security Project. These guys have put together an open, not-for-profit foundation to talk about web security in a very generic, technology-agnostic fashion. They've laid down a bunch of risks; they call these the top 10 application security risks. This is a really fantastic resource, but I wanted to get something that was a little bit more specific to the .NET community.

CF: Sure.

TH: I really wanted to get thorough and create a resource that people could use that was very practical. I also wanted to just get under the covers and see how these things work -- so we know, for example, that passwords that are only hashed and not salted are at risk of being cracked, but how do you actually do that? What does it look like? Same for something like intercepting unencrypted traffic. How do you actually do it? I want to go through and actually break the thing and then fix it and show everybody, OK, well, this is how in our favorite programming language we actually mitigate against these risks. And that's where the series came from.

CF: Can you give us a walk-through of the topics?

TH: Sure. There are 10 different vulnerabilities. We start out with injection. Injection may not only mean SQL injection for us. It might mean LDAP injection or any sort of query language injection where we can actually manipulate the execution of the query in a malicious way. So in a .NET world, we're looking at things like parameterized SQL. We're looking at Entity Framework or any other sort of ORM that breaks things down into parameters rather than allowing an attacker to concatenate input into a query.
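
For readers who want to see what that looks like in code, here is a minimal sketch of a parameterized query in ADO.NET; the connection string, table, column, and method names are placeholders rather than anything from Troy's series.

    using System;
    using System.Data.SqlClient;

    static class CustomerQueries
    {
        public static void PrintMatchingCustomers(string connectionString, string userSuppliedName)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Id, Name FROM Customers WHERE Name = @name", connection))
            {
                // The value travels as a parameter, so it's treated purely as data
                // and never concatenated into the SQL text.
                command.Parameters.AddWithValue("@name", userSuppliedName);

                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(reader["Name"]);
                    }
                }
            }
        }
    }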

So number two, we go on to cross-site scripting. I think most people are probably pretty familiar with that. Keep in mind, these are in order of prevalence as well. So cross-site scripting is pretty prevalent.

CF: Cross-site scripting is what happens when somebody gets ahold of the JavaScript that's running in your browser when you click link x where you think it's going to take you, I don't know, to the next page or to the thing that you're reading, and it takes you to some website where some crazy stuff is installed. That happens mostly when you don't have a firewall. Can a firewall prevent cross-site scripting, or does it go through firewalls?

TH: No, it's not really a firewall thing. Cross-site scripting will hit you in a couple of different ways. There's reflected cross-site scripting, where somebody will give you a link -- maybe it pops up in an email in Outlook or something -- and in the classic cross-site scripting example, that link has some JavaScript embedded in the URL, which will execute in the browser. That works simply because the application is taking that query string from the URL and writing it directly to the page, so an attacker can actually start to control the markup or the JavaScript on the page.

CF: So it basically happens when a website gets hacked?

TH: Yeah, you can consider it a hack, but that particular cross-site scripting example doesn't necessarily mean any files on the server are being compromised or anything on the server is being accessed at all. In fact, the app is really just doing what it was designed to do. It's taking input in the URL and putting it on the page. The other form of cross-site scripting is what you call a persistent cross-site scripting attack, where the attacker's markup actually gets saved into the database -- which can then do a similar thing.
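
A common mitigation for the reflected case Troy describes is to encode untrusted input before writing it to the page. Here's a rough sketch for a Web Forms code-behind, assuming a hypothetical Literal control named litSearchTerm:

    using System;
    using System.Web;

    // Sketch only: litSearchTerm is a hypothetical control on the page.
    protected void Page_Load(object sender, EventArgs e)
    {
        // Vulnerable pattern: the raw query string value goes straight into the
        // markup, so a <script> payload in the URL executes in the victim's browser.
        // litSearchTerm.Text = Request.QueryString["q"];

        // Mitigated: HTML-encode the untrusted value so it renders as plain text.
        litSearchTerm.Text = HttpUtility.HtmlEncode(Request.QueryString["q"]);
    }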

So number three out of the top 10 is a little bit generic: broken authentication and session management. Keeping in mind again that OWASP is pretty technology agnostic, if we put this in a .NET context, the sort of things that might lead us to broken authentication and session management might be things like rolling your own authentication system and not using the built-in membership provider. A lot of the time, when people try to create their own security controls, particularly when they're redundant with the really good ones that are built into the framework, that can actually present a risk of a session being hijacked and other nasty things happening in the end. So what I've talked about in my writing is, hey, look, let's try and use as much of the goodness that's in the framework as possible. The membership provider gives us, straight out of the box, the ability to register, to log on, to reset passwords, and to store passwords as salted hashes. All this stuff is in there already.

CF: Right.

TH: The great thing about it is, not only is this a nice thing for security, it saves you a lot of time. You're in there, and you've got the whole thing up and running in five minutes, an entire membership authentication model.
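
As a rough illustration of how little code that takes, registering and validating a user against the built-in membership provider can look something like this; the method and argument names are illustrative only.

    using System.Web.Security;

    // Somewhere in a registration/logon flow (sketch only):
    public void RegisterAndValidate(string userName, string password, string email)
    {
        // The provider stores the password as a salted hash according to the
        // membership settings in web.config.
        Membership.CreateUser(userName, password, email);

        // Later, at logon time, validate whatever the user typed in.
        bool validCredentials = Membership.ValidateUser(userName, password);
    }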

Richard Campbell: But it's also strong. Even if an exploit successfully grabs your stored credentials, they can't be used because they're salted. They've already done that in the membership provider.

TH: Yes. So the persistent storage of that password is salted -- it's stored as a salted hash. If someone grabs that database, you're safe.
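
The provider handles this internally, but for the curious, producing a salted hash yourself might look roughly like the following sketch using PBKDF2 via Rfc2898DeriveBytes; the salt size and iteration count are illustrative, not a recommendation from the interview.

    using System.Security.Cryptography;

    // What the provider does for you, in spirit: derive a salted hash rather
    // than storing the password itself.
    public static void HashPassword(string password, out byte[] salt, out byte[] hash)
    {
        // The constructor generates a random 16-byte salt; 10,000 iterations
        // slows down brute-force attempts against the stored hashes.
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, 16, 10000))
        {
            salt = pbkdf2.Salt;         // store alongside the hash
            hash = pbkdf2.GetBytes(32); // store this; the plain-text password is never kept
        }
    }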

So I look into other things that fall into broken authentication and session management, things like maximizing password strength. Now this is obviously not a .NET-specific thing; it's just good common practice. And there are a lot of other things . . . making sure we never email passwords in plain text and being conscious about things like "remember me" functions. This is one of the tradeoffs that comes up with security. Things like a "remember me" function are awesome for usability -- you stride straight in, so it's a very, very valid feature. But then you stop and think, OK, well, that means somebody else could come along and use the machine. And it's the same when we talk about things like session timeouts or sliding expiration.

CF: Right.

TH: They're nice little usability things, but if we can tighten those up a little bit, we're going to be a bit more secure.

CF: Bottom line: Use the native bits. Don't roll your own.

RC: Yes. As soon as you're writing more, you're in trouble. What do you consider a secure password these days? I'm surprised how many websites now insist on between six and 12 characters, must have an uppercase, must have a lowercase, must have a punctuation [mark], must have a number. I don't think that that's a secure password.

TH: Well, this one is a bit of a -- it's probably one of those "religious" debates. From my perspective, particularly for us and for the people who are listening to this podcast, we have a lot of online accounts. I think I have something like 150 online accounts in my password manager. So that's a whole heap of accounts, and I have absolutely no hope of remembering those passwords. The best I can do is create a few really strong passwords that I use in the instances where I have to type them in -- so when I log on to my PC, for example. Then for everything else I use a password manager and create crazy big long random passwords.

I think there's got to be a little bit of a balance. You can't just, unfortunately, have one password for everything. The reality is that's not going to work. But you can have a few, and for the vast majority, don't try to remember them, put them in a password manager.

CF: All right, let's get back to the list of top 10 application security risks. I believe we were on number four, insecure direct object references.

TH: Yeah. Insecure direct object reference is when we're exposing a reference to an internal object via an externally visible resource. So, for example, you might be on a banking site, and you'll see an account number in the URL. That account number will naturally map back to some sort of key in the database. Now the risk with insecure direct object references is that if I start manipulating that account number, can I pull somebody else's bank account back? This is what happened with Citibank last year. Citibank allowed someone to just start manipulating the URL, and, hey, I've got somebody else's bank account here.

RC: Oops.

TH: So that was a real worry, particularly when we're talking about natural sort of keys or incrementing keys where it's easy to just add one to a long number and then suddenly you get a different record. The underlying problem with insecure direct object reference is access control. So there wasn't the proper access control to say, "Hey, is the person who's authenticated in this session actually allowed to access that record?" That's fundamentally the thing that needs to be done right.
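
In an ASP.NET MVC controller, that kind of access control check might look something like this sketch; the repository, its method, and the OwnerUserName property are hypothetical.

    using System.Web.Mvc;

    // Hypothetical MVC action: only hand back the account if it belongs to the
    // user who is actually signed in.
    [Authorize]
    public ActionResult Details(int accountId)
    {
        var account = _accountRepository.GetById(accountId); // hypothetical repository

        if (account == null || account.OwnerUserName != User.Identity.Name)
        {
            // Don't leak whether the record exists; just refuse the request.
            return new HttpStatusCodeResult(403);
        }

        return View(account);
    }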

The other option with an insecure direct object reference is to use what's called an indirect object reference map, where you have a map which might persist in the session. That map would say, "OK, let's take that bank account number and keep it internal, and let's map it to a nice cryptographically random key that we expose externally. So nobody can go through and change it to another logical or natural sort of key, and then we'll throw the whole thing away at the end of the session so that no one else can use it later."
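
A bare-bones version of that indirect reference map, held in session and keyed by a random value, might look like the following fragment; internalAccountId and keyFromUrl are placeholder variables for this example.

    using System.Collections.Generic;
    using System.Security.Cryptography;
    using System.Web;

    // Hand the browser a random key instead of the real account number, and keep
    // the translation table in session so it dies with the session.
    byte[] randomBytes = new byte[16];
    using (var rng = new RNGCryptoServiceProvider())
    {
        rng.GetBytes(randomBytes);
    }
    string publicKey = HttpServerUtility.UrlTokenEncode(randomBytes);

    var map = Session["AccountMap"] as Dictionary<string, int>
              ?? new Dictionary<string, int>();
    map[publicKey] = internalAccountId;   // the real key never leaves the server
    Session["AccountMap"] = map;

    // On a later request, translate the incoming key back; a key from another
    // session (or one an attacker made up) simply doesn't resolve.
    int realAccountId = map[keyFromUrl];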

RC: Yeah.

TH: So -- cross-site request forgery, also often referred to as CSRF. The idea of a CSRF attack is that it tries to trick someone's browser into issuing a malicious request. An example of that might be where, on a banking site, we want to transfer money, and when we click the submit button, we're probably making a POST request with the form parameters.

A CSRF attack would be, well, what if we can trick the browser into making exactly that same request, but we launch it from somewhere else? So, for example, if an attacker could stand up a website, they might just use a bit of JavaScript on that site to make the same sort of POST with the right form parameters. If they can get someone's browser to execute that POST . . . maybe they use a bit of cross-site scripting to make it happen, maybe they use a bit of social engineering and send a really attractive-sounding tweet to the person with a shortened link -- that could work. You know, this is the thing: these little link shorteners that can obfuscate and hide the nasty things behind them are a nice sort of launching pad for this sort of attack.

So to mitigate CSRF, we've got a couple of different approaches. One thing we can do is use a synchronized token. The synchronized token pattern sets a unique ID somewhere in the form and then makes sure that the unique ID is actually submitted with the form request. So the page which is receiving that request would say, hey, did you send me the right ID -- which is probably persisted in session on the way through -- to make sure that what comes in with the form is what I expected to get when you submitted it. Now the reason that works is because an external site which has got some sort of static launch pad for the malicious request is not going to have that token. So it adds that little bit of randomness into the process. That works fine in a Web Forms app. Mind you, if you're in MVC, then you've got the HTML helper, which is the anti-forgery token.
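
In a Web Forms app, a hand-rolled version of that synchronized token pattern might look roughly like this; hidToken is a hypothetical HiddenField control and the session key name is made up for the example.

    using System;

    // Sketch for a Web Forms page that renders and then receives the form.
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // Issue a random token, remember it server-side, and emit it in the form.
            string token = Guid.NewGuid().ToString("N");
            Session["CsrfToken"] = token;
            hidToken.Value = token;
        }
        else if ((string)Session["CsrfToken"] != hidToken.Value)
        {
            // The posted token doesn't match the one issued to this session,
            // so treat the request as forged.
            throw new InvalidOperationException("Potential CSRF detected.");
        }
    }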

CF: Yeah.

TH: So it's really easy to drop that in in MVC, and then you can just decorate the controller action which receives that POST with the ValidateAntiForgeryToken attribute, and you've got a nice little native synchronized token.
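
Concretely, in MVC that can look something like the following; the Transfer action and its view model are hypothetical examples, not code from the interview.

    using System.Web.Mvc;

    // In the Razor view, inside the form: @Html.AntiForgeryToken()
    // emits the hidden token field and the matching cookie.

    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Transfer(TransferViewModel model) // hypothetical action and view model
    {
        // If the token is missing or doesn't match, MVC rejects the request
        // before this code ever runs.
        return RedirectToAction("Confirmation");
    }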

RC: Nice.

TH: So those work well as technical controls. And then there are more social controls, things like a captcha, so you could always put a captcha in there. Now, they drive people nuts, so then you've got the usability tradeoff.

But a captcha is a pretty secure way of avoiding CSRF, and the browsers are also getting better at defending against things like CSRF. Where a browser sees that a request is coming from another origin -- coming from another site -- it's now sort of saying to the site, "Well, hang on a second, this might not be such a good thing." And they're doing the same sort of thing with cross-site scripting as well, so fortunately the client is getting a little bit smarter.

There's much more! You can find the full interview with the rest of the top 10 .NET security vulnerabilities at dotnetrocks.com/default.aspx?showNum=735.

Richard Campbell and Carl Franklin are the voices and brains behind .NET Rocks! They interview experts to bring you insights into .NET technology and the state of software development.
