Security Purity

Defend Your Site against Common Hack Attacks







By Steve C. Orr


Movies often portray hackers as evil super-geniuses or prodigy whiz kids. While there might be a few in the real world who almost live up to that mythical status, the vast majority of modern-day hackers are actually script kiddies who know little to nothing about the inner workings of computer systems. Instead, they tend to find some pre-written malicious code or script on the Internet and tinker with it, sometimes getting results that harmonize with their dubious goals.


Indeed, there is very little originality left in the hacking world these days; it is much more evolutionary than revolutionary. Virtually every variety of hacker exploit falls into one of several distinct, known categories (see Figure 1). By familiarizing yourself with these common attack vectors, you can craft resilient software that will not easily fall victim to such exploits. Read on and you'll also be introduced to some free Microsoft software tools you can download to help secure your software's borders.


Social Engineering
Coaxing passwords and other valuable information from unsuspecting users through seemingly innocent conversation.
Defenses: User education, two-factor authentication.

Dictionary Attack
Cracking passwords by trying every word in a dictionary.
Defenses: Require strong passwords, limit the number of failed retry attempts, two-factor authentication.

Brute Force
Cracking passwords by trying every possible combination of characters.
Defenses: Require strong passwords, limit the number of failed retry attempts, two-factor authentication.

Replay Attack
Network traffic is recorded and replayed later by a hacker after being adjusted to meet their dubious goals.
Defenses: SSL, secure session management, authentication at every application layer, analysis of your own network traffic, the RegisterRequiresViewStateEncryption page method, threat modeling.

Man-in-the-Middle (Phishing)
Intermediate software that pretends to be a third-party Web site in order to collect passwords, credit card numbers, etc.
Defenses: SSL, secure session management, authentication at every application layer, analysis of your own network traffic, the RegisterRequiresViewStateEncryption page method, threat modeling, user education.

Bots
Software that pretends to be a human Web site user and consumes resources without permission.
Defenses: Turing-test technologies, such as CAPTCHA.

Denial of Service
Web servers are overwhelmed with dummy requests designed to consume CPU cycles.
Defenses: Web farms with failover capability, code that fails early and intelligently.

Code Injection
SQL Injection, Cross Site Scripting (XSS), XPath Injection, etc. Causes a hacker's code to execute on other users' computers.
Defenses: Use parameterized ADO.NET SQL queries, encode output, validate and filter input with white lists, set the Page's ViewStateUserKey property, use the Microsoft Anti-Cross Site Scripting Library.
Figure 1: Protect your site against these common attack vectors by implementing appropriate defensive measures.


Password Extraction

Most computer systems require only a username and password for entry. Lists of usernames are easily obtained, as they are typically not considered secret. Passwords, on the other hand, tend to be a bit more of a challenge to acquire.


Social engineering is the art of convincing users to willingly divulge their passwords. Hackers may be stereotyped as nerds with few social skills, but some of them (or their associates) do indeed have the gift of gab. You may be surprised how easy it can be to extract passwords and other sensitive information from unsuspecting employees with nothing more than friendly conversation. If schmoozing fails, the manipulator might try switching to a more intimidating technique, such as demanding information on behalf of the CEO, FBI, or other authority figures. Technology alone cannot overcome such unusually extraverted data extraction techniques; employee training is a key factor to help ensure workers don't naively divulge sensitive information.


Dictionary attacks are repetitive automated attempts to uncover a user s password by sequentially supplying every word in a customized dictionary as their password until a login attempt is finally successful.


Brute force attacks are the hacking technique most often illustrated in movies. Entry into a system is obtained by software that sequentially tries every possible combination of characters until a password is finally accepted.


Smart companies use two-factor authentication as a countermeasure to the above exploits. Dictionary and brute force attacks are also mitigated by enforcing strong password requirements for users. (Strong passwords contain long combinations of uppercase and lowercase letters, along with numbers and symbols.) Additionally, password entry failures should be logged, and action should be taken after a certain number of sequential bad password entries. For example, if a user has entered their password incorrectly 10 times in a row, this is a highly suspicious situation indicative of a possible dictionary or brute force attack. In such situations, many modern and secure software systems cause a user's account to become locked out for a certain period of time or until the user checks in through some other process. During the lockout period, all attempts to log in to that user's account are met with rejection.
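The lockout behavior described above can be sketched as a small helper class. LockoutPolicy, its 10-attempt threshold, and its 30-minute lockout window are illustrative choices, not part of any ASP.NET API:

```csharp
using System;
using System.Collections.Generic;

// Illustrative in-memory lockout tracker (hypothetical class): after 10
// consecutive bad passwords, the account locks for 30 minutes.
public class LockoutPolicy
{
    private const int MaxFailures = 10;
    private static readonly TimeSpan LockoutWindow = TimeSpan.FromMinutes(30);

    private readonly Dictionary<string, int> _failures = new Dictionary<string, int>();
    private readonly Dictionary<string, DateTime> _lockedUntil = new Dictionary<string, DateTime>();

    public bool IsLockedOut(string userName, DateTime now)
    {
        DateTime until;
        return _lockedUntil.TryGetValue(userName, out until) && now < until;
    }

    public void RecordFailure(string userName, DateTime now)
    {
        int count;
        _failures.TryGetValue(userName, out count);
        _failures[userName] = ++count;

        if (count >= MaxFailures)
            _lockedUntil[userName] = now + LockoutWindow;  // begin the lockout period
    }

    public void RecordSuccess(string userName)
    {
        _failures.Remove(userName);  // a good login resets the failure counter
    }
}
```

A real implementation would persist the counters (so a hacker cannot reset them by recycling the application) and log every failure for later analysis.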


Packet Peeping

Modern applications pass tons of data across many kinds of networks. Internal company networks, wireless networks, and the Internet are all leveraged for the services they provide. With all of these data packets flying around, there are many opportunities for hackers to intercept them and pick them apart.


Replay attacks consist of previously recorded network traffic that is later rebroadcast by a hacker instead of the original user. If the replayed network traffic includes a user's login attempt, then a lucky hacker may essentially end up logged in as that user. If the user's password was sent as plain text in that network request, the hacker can easily extract it and use it at will. In many cases, the hacker can alter the original network traffic before replaying it to accomplish whatever sinister goals they may have.


An alternate form of replay attack involves using an HTTP analyzer tool, such as Fiddler, to see exactly what is being submitted from the browser to the server (see the sidebar to learn more about Fiddler). These page requests can be recorded, altered, and resubmitted to the server. This can result in the Web server receiving malformed, unanticipated page requests that can cause the server to respond in unexpected ways. Web developers should keep this in mind when designing Web pages and Web services to ensure sufficient resilience is built in. Additionally, they should analyze their own traffic using a tool such as Fiddler to help foresee any potential problems.


Man-in-the-middle attacks are similar to replay attacks, but are more complex. The hacker inserts their computer between a user and the Web site the user was intending to visit (this is commonly accomplished by phishing, although there are other ways, too). The hacker's machine intercepts the traffic coming from the user and extracts useful information, such as passwords and credit card numbers. In some cases, the hacker then bails out because he has the information he sought, but this is often a giveaway that something suspicious happened. More sophisticated tactics involve forwarding the traffic (sometimes slightly modified) to the Web site the user was intending to visit. Replies from that Web site can be similarly intercepted and examined before being sent back to the user. If done properly, neither the user nor the Web site will have any idea that anything unusual happened until later, perhaps, when the gathered information is used maliciously.


The most common defense against network snooping like replay and man-in-the-middle attacks is Secure Sockets Layer (SSL). All network communication is securely encrypted when SSL is used, rendering network snooping rather pointless. Because this encryption requires a lot of processing (which slows things down considerably), it tends to be used only on pages that deal with sensitive information, such as passwords and credit card numbers. Users visiting an SSL-secured page will see a little lock icon in their browser, and the page's address will start with "https:" instead of simply "http:".


Sensitive ASP.NET pages should also consider calling the Page.RegisterRequiresViewStateEncryption method to ensure all data being stored in ViewState is encrypted. The extra server processing required for this feature is worth it for pages that deal with information that users (or hackers) should not see.
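A minimal sketch of that call follows; CheckoutPage is a hypothetical page class, and the call is placed in Page_Init so it runs early in the page life cycle, before rendering begins:

```csharp
using System;
using System.Web.UI;

// Hypothetical code-behind for a page that handles sensitive data.
public partial class CheckoutPage : Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        // Request that this page's ViewState be encrypted before it is
        // rendered into the hidden __VIEWSTATE field.
        RegisterRequiresViewStateEncryption();
    }
}
```

Because the encryption happens per request, only the pages that truly need it pay the processing cost.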


Bots are infectious bits of software that pretend to be human Web site users. They can replay network traffic, surf Web sites, sign up for user accounts, and even send out spam through those accounts (via e-mail, messages, etc.). The worst bots can even take over a user's computer completely, turning it into a remote-control zombie at the mercy of the originating hacker. Oftentimes these brainwashed computers are then used to execute denial of service attacks or other evil deeds. Bots are usually stopped cold by Turing test technologies, such as CAPTCHA (for more on CAPTCHA, see CAPTCHASP).


Denial of service attacks happen when a Web site is bombarded with phony page requests. When successful, such an attack will overwhelm a Web site's servers, causing the Web site to be unable to respond to legitimate page requests from legitimate users. In effect, the hacker has brought down the Web site, making it inaccessible to the world. To help mitigate such problems, large-scale Web sites should have server farms to share the load amongst many computers. No matter how large or small the Web site, it should ideally be designed with a scalable architecture that can respond to changing numbers of page requests and allow flexible deployment scenarios. It's also good to write intelligent code that fails as early as possible. For example, a quick validation of input parameters could immediately weed out a suspiciously malformed request, instead of blindly passing invalid parameters to a pointless, time-consuming database query.
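That fail-early principle can be sketched as a simple guard clause; the class name, method name, and the specific limits here are illustrative choices, not from any framework:

```csharp
using System;

public static class RequestGuard
{
    // Reject absurd paging requests in microseconds, before any expensive
    // work begins. The 0..10,000 page and 1..100 rows-per-page limits are
    // arbitrary illustrative values.
    public static void ValidatePaging(int pageIndex, int pageSize)
    {
        if (pageIndex < 0 || pageIndex > 10000)
            throw new ArgumentOutOfRangeException("pageIndex");
        if (pageSize < 1 || pageSize > 100)
            throw new ArgumentOutOfRangeException("pageSize");
        // Only plausible requests ever reach the costly database query.
    }
}
```

A hacker who floods the server with requests for a billion rows per page burns almost none of the server's CPU, because each request dies at the guard clause.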


Complex software applications often consist of many layers (presentation, business objects, data access, etc.) that may eventually be scattered across many servers at many locations. Every method on every object in every layer is a potential hacker target, especially those that are publicly exposed. Therefore, security should be carefully considered when designing every method on every object in every layer. This already complex task is complicated further by the fact that there are usually many users in many roles performing many kinds of functions. For large applications, it can be virtually impossible to perfectly analyze the nearly infinite number of ways that users (and hackers) could potentially interact with the system unless you have some good tools to help you. One such tool is the free Microsoft Threat Analysis and Modeling Tool (see Figure 2), available for download from the Microsoft Application Threat Modeling home page.


Figure 2: Microsoft's free Threat Analysis and Modeling Tool can ferret out security holes while you're still in the design phase, and identify potentially undiscovered holes in your existing applications, as well.


Code Injection

Code injection is a common exploit that causes a hacker's code to be executed on another computer. This kind of exploit is widely covered elsewhere, so I won't dive into it too deeply, but I will cover the basics and point you toward more detailed references.


SQL injection is caused by a hacker entering carefully constructed bits of SQL code into an unsuspecting input field. Perhaps the user is normally expected to enter a ZIP code, but, instead, a hacker enters some SQL into that field. This exploit can only happen when a Web site blindly accepts unvalidated user input and concatenates it into a dynamically constructed SQL string. Always use ADO.NET parameter objects for supplying parameters instead of dynamically concatenating user input directly into a SQL string.
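The parameterized approach can be sketched as follows; the connection string, table, and column names are hypothetical, and the dangerous alternative is shown only in a comment for contrast:

```csharp
using System.Data.SqlClient;

public static class StoreLookup
{
    public static void PrintStoresInZip(string userSuppliedZip, string connectionString)
    {
        // DANGEROUS: "SELECT StoreName FROM Stores WHERE Zip = '" + userSuppliedZip + "'"
        // SAFE: the parameter value travels out-of-band and is never parsed as SQL,
        // so input like "98101'; DROP TABLE Stores;--" is treated as a literal string.
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT StoreName FROM Stores WHERE Zip = @Zip", conn))
        {
            cmd.Parameters.AddWithValue("@Zip", userSuppliedZip);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    System.Console.WriteLine(reader.GetString(0));
            }
        }
    }
}
```

Besides closing the injection hole, parameterized queries also let SQL Server reuse its cached execution plan for the statement.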


Cross Site Scripting (XSS) is also caused by hackers entering carefully constructed malicious input into unsuspecting fields, but instead of SQL code, bits of JavaScript are entered into the field. In situations where one user's input is later displayed to other users (such as a wiki or message board), the script is then executed on those other users' computers. In many cases, this code operates under the security context of that user or the Web site they are visiting. Any security warnings that may pop up are often ignored by users because they trust the site they are visiting and because they don't know a hacker has compromised it.


XPath injection is another variation on the above techniques, where bits of XPath code are inserted into an unvalidated input field. In cases where that input is blindly concatenated into an XPath statement, it can completely change the execution of that statement, potentially releasing reams of private data to the hacker.


To avoid code injection attacks, user input should always be validated and user-generated output should always be encoded.


User input should be validated on the client side with JavaScript for such reasons as usability and efficiency. User input should always be validated on the server side, as well, as hackers have various tricks for getting around client-side validation. ASP.NET s validation controls (see Validate User-entered Data) are useful for this because they are flexible and provide both client-side and server-side validation.


User input should be validated against a white list instead of the more common black list approach. A black list contains a list of characters or other input characteristics that are not permitted. The problem is, a clever hacker can often figure out what is contained in that list, then carefully construct their malicious input to avoid those triggers. Conversely, a white list contains a more limited list of characters that are allowed; all other input is rejected.
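A white list check can be as small as one regular expression; the class and method names here are illustrative, and the pattern assumes a five-digit US ZIP code field:

```csharp
using System.Text.RegularExpressions;

public static class InputWhiteList
{
    // White list: exactly five digits is the only acceptable shape for this
    // field. Anything else, including embedded SQL or script, is rejected.
    private static readonly Regex FiveDigits = new Regex(@"^\d{5}$");

    public static bool IsValidZip(string input)
    {
        return input != null && FiveDigits.IsMatch(input);
    }
}
```

Notice that the check never tries to enumerate what is forbidden; it only describes what is allowed, so there is no list of triggers for a hacker to tiptoe around.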


In case some malicious script does sneak its way into a shared content store, all user-entered content displayed in a Web page should first be encoded. Methods such as Server.HtmlEncode and Server.UrlEncode can be used to help ensure the content is displayed as text and not executed as script. Unfortunately, these classic methods use a black list approach that only encodes four potentially malicious characters. Any hacker who knows this can carefully construct malicious script that avoids those four characters and still cause you problems. This is why I recommend using Microsoft's free Anti-Cross Site Scripting Library. It provides intuitive encoding and decoding functions that use a far more secure white list approach, which stops hackers in their tracks. The Anti-Cross Site Scripting Library is a free download from Microsoft.
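Usage is a near drop-in replacement for the classic methods. This sketch assumes the AntiXss.HtmlEncode method exposed by the library version current when this was written, and lblComment is a hypothetical Label control on the page:

```csharp
using Microsoft.Security.Application;  // the free Anti-Cross Site Scripting Library

public partial class MessageBoardPage : System.Web.UI.Page
{
    protected void DisplayComment(string userComment)
    {
        // White-list encoding: every character not on the known-safe list is
        // encoded, so an injected <script> block renders as harmless text
        // instead of executing in other users' browsers.
        lblComment.Text = AntiXss.HtmlEncode(userComment);
    }
}
```

The call site looks almost identical to Server.HtmlEncode, which makes retrofitting an existing code base a mostly mechanical exercise.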


Armored Authentication

When authenticating a user, essentially the question that needs to be answered is: Who are you? Unfortunately, the existence of dishonest people necessitates the answers be verified. The best verifications are obtained by having the user provide proof of knowledge and/or possessions.


Knowledge-based verification requires information that (theoretically) only the user knows. It is most often implemented in the form of a password, although secret question/answer pairs are a popular backup in cases where the user has forgotten their password.


Possession-based verification requires users to provide something that (theoretically) only they have. Modern companies tend to implement possession-based verification in one of two ways: biometrics or smart cards.


Biometrics is a burgeoning industry. Because your fingerprints are unique to the fingers that only you possess, a scan of them is a convincing way to verify your identity. Fingerprint readers are commonly built into many laptops these days. Retinal eye scanners seem to be more common in the movies, but they do exist sporadically in the real world, too. Similar systems are catching on that use voice and face recognition. While biometrics are indeed valuable, they are not infallible. For example, some scanners are better than others about telling the difference between a real finger and a fake one with prints on it that were lifted from a recently touched drinking glass. You should also consider that any scan of body parts is (like all data) ultimately broken down into a series of ones and zeros inside the computer. If a hacker is able to obtain a copy of your unique pattern and replay it into the system, then they've effectively stolen your identity, possibly forever (you can't get new fingers).


Smart cards are the only widely accepted form of possession-based verification besides biometrics. Employees are given identification badges that look much like credit cards. Each card has a uniquely coded tamper-resistant computer chip embedded within it that is associated only with its assigned owner. When a user wishes to enter a secure building or computer system, they insert their smart card into a card reader to verify their identity. Many people are more comfortable with smart cards than biometrics because smart cards are less invasive and they can be immediately deactivated and replaced if lost or stolen.


The most secure approach is to combine two or more of the above verification techniques, preferably with a mix of both knowledge-based and possession-based verification. Security experts agree that two-factor authentication is a must for any organization that wishes to pride itself on rock-solid security. For example, Microsoft requires its workers to provide both a password and a successful smart card scan before being granted access to some sensitive internal resources.


Two-factor authentication makes a hacker's job nearly impossible. Even if they somehow extract your password, they cannot do anything with it because they don't have your smart card. Conversely, if they manage to successfully pickpocket your smart card, they'll continue to suffer in futility without your password. You could even take things a step further by implementing three-factor authentication. There are some new smart cards on the horizon that have built-in fingerprint scanners. A two-factor possession-based verification device like that, combined with (knowledge-based) password verification, would be as close to 100% secure as any modern-day data guardian could hope to achieve.


Because not all computer systems deal with top-secret, highly sensitive data, there are some other less-secure verification techniques that are often considered good enough for more casual scenarios. For example, verification e-mails are often sent to the e-mail address associated with a user. The user must then reply or click a link within the e-mail to prove they received it, thereby verifying they are indeed the user associated with that e-mail address. Keep in mind that e-mail is not especially secure. E-mails often bounce through countless servers all over the globe before finally reaching their final destinations after being cached in many places. This exposes numerous scenarios for interception. Because of this, you should avoid sending passwords via e-mail. Even if your system stores no sensitive data, be considerate of the possibility that your users may reuse passwords across systems, so you may be inadvertently exposing a more valuable password than you think.


It should go without saying that passwords should be stored only in encrypted form, and should remain in encrypted form as much of the time as possible, even in memory at run time. I suggest hashing passwords before storage so they are computationally infeasible to reverse (even by you).
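A minimal sketch of salted password hashing follows, using the .NET SHA256 class. PasswordHasher is an illustrative name; the per-user random salt defeats precomputed dictionary ("rainbow table") lookups, and an iterated scheme such as Rfc2898DeriveBytes would be stronger still:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class PasswordHasher
{
    // Hash the salt + password bytes with SHA-256 and return a Base64 string
    // suitable for database storage. Store the salt alongside the hash;
    // verification simply re-hashes the supplied password and compares.
    public static string Hash(string password, byte[] salt)
    {
        byte[] pwd = Encoding.UTF8.GetBytes(password);
        byte[] combined = new byte[salt.Length + pwd.Length];
        Buffer.BlockCopy(salt, 0, combined, 0, salt.Length);
        Buffer.BlockCopy(pwd, 0, combined, salt.Length, pwd.Length);

        using (SHA256 sha = SHA256.Create())
        {
            return Convert.ToBase64String(sha.ComputeHash(combined));
        }
    }
}
```

At login time you never decrypt anything; you hash what the user typed and compare it to the stored value, so even a stolen database yields no plain-text passwords.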


If you must store credit card numbers, then of course they should also be encrypted, but I strongly recommend you avoid storing credit card numbers at all. It's impossible to accidentally expose data that you do not possess, and therefore you cannot be held financially liable for the potentially devastating effects that such a security breach could cause. Once a credit card has been processed, delete all records of it, except perhaps the last four digits (for reference purposes).



There is no such thing as 100% secure. In the never-ending cat and mouse game between hackers and security professionals, every measure is eventually met with a countermeasure. Whenever valuable new technologies are released, new security holes are eventually discovered in them. In the unlikely event a perfect piece of software is ever written, it may still end up vulnerable through the components upon which it depends. For example, third-party controls, the underlying operating system, or the network infrastructure may have security holes that allow hackers to gain administrator access and wreak unlimited havoc.


Even though it's impossible to achieve a 100% secure software system, a good developer will strive to get as close as possible to that goal. Armed with the information covered in this article, you can craft software secure enough to deflect hackers toward easier targets elsewhere.


Steve C. Orr is an ASPInsider, MCSD, Certified ScrumMaster, Microsoft MVP in ASP.NET, and author of the book Beginning ASP.NET 2.0 AJAX by Wrox. He's been developing software solutions for leading companies in the Seattle area for more than a decade. When he's not busy designing software systems or writing about them, he can often be found loitering at local user groups and habitually lurking in the ASP.NET newsgroup. Find out more about him on his Web site, or e-mail him at [email protected].



