Exam 3 Study Guide

The one-hour study guide for exam 3

Paul Krzyzanowski

April 2024

Disclaimer: This study guide attempts to touch upon the most important topics that may be covered on the exam but does not claim to necessarily cover everything that one needs to know for the exam. Finally, don't take the one hour time window in the title literally.

Last update: Sun Apr 21 13:27:15 EDT 2024

Application Sandboxing

Application sandboxing provides a controlled environment to safely execute potentially harmful software, minimizing system-wide risks. It restricts program operations based on predefined rules, allowing only certain actions within the system. This method is crucial for running applications from unknown sources and is extensively utilized by security researchers to monitor software behavior and detect malware. Sandboxes enforce restrictions on file access, network usage, and system interactions, offering a fundamental layer of security by controlling application capabilities at a finer granularity than traditional methods like containers or jails.

We previously looked at isolation via jails and containers, which use mechanisms that include namespaces, control groups, and capabilities. These constitute a widely used form of sandboxing. However, these techniques focus on isolating an application (or group of processes) from other processes, restricting access to parts of the file system, and/or providing a separate network stack with a new IP address.

While mechanisms like jails and containers, which include namespaces, control groups, and capabilities, are great for creating an environment to run services without the overhead of deploying virtual machines, they do not fully address restricting what normal applications can do. For instance, they cannot block a process from calling a non-privileged system call. We want to protect users from their applications: give users the ability to run apps but define rules to restrict what those apps can do on a per-app basis, such as opening only files with certain names or permitting only TCP networking.

Sandboxing is currently supported on a wide variety of platforms at either the kernel or application level.

Application sandboxing via system call interposition & user-level validation

An example of a user-level sandbox is the Janus sandbox. Application sandboxing with Janus involves creating policies to define permissible system calls for each application. Janus uses a kernel module to intercept these calls and sends them to a user-level monitor program that decides whether to allow or block the call based on the configured policy file. Challenges include maintaining system state across processes and handling complex scenarios like network and file operations, pathname parsing, and potential race conditions (TOCTTOU issues).
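The monitor's decision logic can be sketched as a policy table consulted on every intercepted call. This toy Python sketch is illustrative only: the rule format, names, and default-deny behavior are assumptions, not Janus's actual policy language.

```python
# Sketch of Janus-style user-level policy evaluation (illustrative only;
# the real system intercepts calls via a kernel module). All names below
# are hypothetical.

ALLOW, DENY = "allow", "deny"

def make_policy(rules):
    """rules: list of (syscall_name, predicate) pairs, checked in order.
    A predicate receives the syscall arguments and returns True to allow."""
    def decide(syscall, args):
        for name, predicate in rules:
            if name == syscall:
                return ALLOW if predicate(args) else DENY
        return DENY          # default-deny anything with no matching rule
    return decide

# Example policy: open files only under /tmp, permit only TCP sockets.
policy = make_policy([
    ("open",   lambda a: a["path"].startswith("/tmp/")),
    ("socket", lambda a: a["type"] == "SOCK_STREAM"),
])
```

With this policy, `policy("open", {"path": "/etc/passwd"})` is denied while an open under `/tmp/` is allowed, mirroring the per-app rules described above.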

Application sandboxing with integrated OS support

A better alternative to having a user-level process decide whether to permit system calls is to incorporate policy validation in the kernel. Some operating systems provide kernel support for sandboxing. These include the Android Application Sandbox, the iOS App Sandbox, the macOS sandbox, and AppArmor on Linux. Microsoft introduced the Windows Sandbox in December 2018, but this functions far more like a container than a traditional application sandbox, giving the process an isolated execution environment.

Seccomp-BPF (SECure COMPuting with Berkeley Packet Filters) is a Linux security framework that enables limits on which system calls a process can execute. It uses the Berkeley Packet Filter to evaluate system calls as “packets,” applying rules that govern their execution. Though it doesn’t provide complete isolation on its own, Seccomp is an essential component for constructing robust application sandboxes when combined with other mechanisms like namespaces and control groups.
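The filtering model can be sketched as a table mapping system call numbers to actions, mirroring in spirit the allow/errno/kill outcomes a seccomp filter can return. This is a simulation, not the real BPF machinery, and the syscall numbers used are assumptions for illustration.

```python
# Toy model of how a seccomp-BPF filter classifies system calls.
# Real filters are BPF programs installed with prctl()/seccomp();
# these string constants only mirror the kernel's actions in spirit.

SECCOMP_RET_ALLOW = "allow"
SECCOMP_RET_ERRNO = "errno"   # fail the call with an error code
SECCOMP_RET_KILL  = "kill"    # terminate the offending process

def make_filter(table, default=SECCOMP_RET_KILL):
    """table maps syscall numbers to actions; anything else gets default."""
    def evaluate(syscall_nr):
        return table.get(syscall_nr, default)
    return evaluate

# Hypothetical numbering: allow read(0)/write(1)/exit(60), make socket(41)
# fail with an error, and kill the process for anything else.
filt = make_filter({0: SECCOMP_RET_ALLOW, 1: SECCOMP_RET_ALLOW,
                    60: SECCOMP_RET_ALLOW, 41: SECCOMP_RET_ERRNO})
```

The default-kill choice reflects the usual practice of whitelisting the few calls a sandboxed process legitimately needs.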

Process virtual machine sandboxes: Java

The Java Virtual Machine (JVM) was designed to run compiled Java applications in a controlled manner on any system regardless of the operating system or hardware architecture. The JVM employs three main components to ensure security:

  1. Bytecode Verifier: It scrutinizes Java bytecode before execution to confirm that it adheres strictly to Java’s standards and contains no security violations, such as bypassing access controls or array bounds checks.

  2. Class Loader: This component safeguards against the loading of untrusted classes and ensures the integrity of runtime environments through Address Space Layout Randomization (ASLR), maintaining the security of essential class libraries.

  3. Security Manager: This enforces protection domains that define permissible actions within the JVM. It intercepts calls to sensitive methods, verifying permissions against a security policy, which can restrict actions like file and network access, preventing operations not allowed by the policy.

Building an effective sandbox in Java has proven complex, highlighted by persistent bugs, especially in the underlying C libraries and across different JVM implementations. Moreover, Java’s allowance for native methods can bypass these security mechanisms, introducing potential risks.

Virtual Machines

As a general concept, virtualization is the addition of a layer of abstraction to physical devices. With virtual memory, for example, a process has the impression that it owns the entire memory address space. Different processes can all access the same virtual memory location and the memory management unit (MMU) on the processor maps each access to the unique physical memory locations that are assigned to the process.

Process virtual machines present a virtual CPU that allows programs to execute on a processor that does not physically exist. The instructions are interpreted by a program that simulates the architecture of the pseudo machine. Early pseudo-machines included o-code for BCPL and P-code for Pascal. The most popular pseudo-machine today is the Java Virtual Machine (JVM). This simulated hardware does not even pretend to access the underlying system at a hardware level. Process virtual machines will often allow “special” calls to invoke system functions or provide a simulation of some generic hardware platform.

Operating system virtualization is provided by containers, where a group of processes is presented with the illusion of running on a separate operating system but, in reality, shares the operating system with other groups of processes – they are just not visible to the processes in the container.

System virtual machines allow a physical computer to act like several real machines, with each machine running its own operating system (on a virtual machine) and applications that interact with that operating system. The key to this machine virtualization is not to allow each operating system to have direct access to certain privileged instructions in the processor. These instructions would allow an operating system to directly access I/O ports, MMU settings, the task register, the halt instruction, and other parts of the processor that could interfere with the processor’s behavior and with the other operating systems on the system. Instead, a trap and emulate approach is used. Privileged instructions, as well as system interrupts, are caught by the Virtual Machine Monitor (VMM), also known as a hypervisor. The hypervisor arbitrates access to physical resources and presents a set of virtual device interfaces to each guest operating system (including the memory management unit, I/O ports, disks, and network interfaces). The hypervisor also handles preemption. Just as an operating system may suspend a process to allow another process to run, the hypervisor will suspend an operating system to give other operating systems a chance to run.
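The trap-and-emulate idea can be illustrated with a toy dispatch loop: unprivileged instructions run directly, while privileged ones trap to a hypervisor that emulates them against per-guest virtual device state. The instruction set here is invented for illustration.

```python
# Toy trap-and-emulate loop. A "guest" executes unprivileged instructions
# directly; privileged ones trap to the hypervisor, which emulates them
# against virtual (not physical) device state. Instruction names are made up.

class Hypervisor:
    def __init__(self):
        self.virtual_io_port = 0          # per-guest virtual device state

    def trap(self, guest, instr, operand):
        if instr == "out":                # emulate a privileged I/O write
            self.virtual_io_port = operand
        elif instr == "hlt":              # emulate halt: stop only this guest
            guest.halted = True

class Guest:
    PRIVILEGED = {"out", "hlt"}

    def __init__(self, vmm):
        self.vmm, self.acc, self.halted = vmm, 0, False

    def run(self, program):
        for instr, operand in program:
            if self.halted:
                break
            if instr in self.PRIVILEGED:
                self.vmm.trap(self, instr, operand)   # trap to the hypervisor
            elif instr == "add":                      # unprivileged: direct
                self.acc += operand
```

Note that the guest's halt stops only that guest, just as a real hypervisor prevents one operating system from halting the physical processor.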

The two configurations of virtual machines are hosted virtual machines and native virtual machines. With a hosted virtual machine (also called a type 2 hypervisor), the computer has a primary operating system installed that has access to the raw machine (all devices, memory, and file system). This host operating system does not run in a virtual environment. One or more guest operating systems can then be run on virtual machines. The VMM serves as a proxy, converting requests from the virtual machine into operations that get sent to and executed on the host operating system. A native virtual machine (also called a type 1 hypervisor) is one where there is no “primary” operating system that owns the system hardware. The hypervisor is in charge of access to the devices and provides each operating system drivers for an abstract view of all the devices.

Security implications

Virtual machines (VMs) provide a deep layer of isolation, encapsulating the operating system along with all the applications it runs and files it needs within a secure environment separate from the physical hardware. In contrast to lighter confinement methods such as containers, a compromise within a VM affects only that VM, much as a compromise of one physical machine is contained to that machine.

Despite this isolation, VMs can still pose risks if compromised. Malicious entities can exploit VMs to attempt attacks on other systems within the same physical environment, leveraging the shared physical resources. Such scenarios underscore potential vulnerabilities in even well-isolated environments, highlighting the need for vigilant security practices across all layers.

A specific threat in such environments is the creation of covert channels through side channel attacks. These channels exploit system behaviors like CPU load variations to clandestinely transmit information between VMs, bypassing conventional communication restrictions. This technique reveals how attackers can bridge gaps between highly secure and less secure systems, manipulating physical resource signals to communicate stealthily.
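Such a load-based covert channel can be simulated: the sender encodes each bit as a period of high or low CPU load, and the receiver averages load samples over each period and applies a threshold. The load values below are synthetic, not real measurements.

```python
# Simulation of a load-based covert channel between two VMs: the sender
# modulates CPU load (busy = 1, idle = 0); the receiver samples the load
# and thresholds it. All values are synthetic.

import random

def send(bits, samples_per_bit=4):
    """Produce a synthetic load trace: high load for a 1, low load for a 0."""
    trace = []
    for b in bits:
        base = 0.9 if b else 0.1
        trace += [min(1.0, max(0.0, base + random.uniform(-0.05, 0.05)))
                  for _ in range(samples_per_bit)]
    return trace

def receive(trace, samples_per_bit=4, threshold=0.5):
    """Average the samples in each bit period and compare to a threshold."""
    bits = []
    for i in range(0, len(trace), samples_per_bit):
        window = trace[i:i + samples_per_bit]
        bits.append(1 if sum(window) / len(window) > threshold else 0)
    return bits
```

In practice the receiver would infer load indirectly (e.g., by timing its own work), and noise from other tenants forces a much lower bit rate; the averaging-and-threshold structure is the essential idea.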


Malware

Malware is a term that refers to any malicious software that is unintentionally installed on a computer system. Malware can be distributed in various ways: viruses, worms, unintentional downloads, or trojan horses. It may spy on user actions and collect information on them (spyware), or present unwanted ads (adware). It may disable components of the system or encrypt files, undoing its damage if the owner pays money (ransomware). The software may sit dormant and wait for directives from some coordinator (a command and control server), who assembled an arsenal of hundreds of thousands of computers ready to do its bidding (for example, launch a distributed denial of service, DDoS, attack). Some software might be legitimate but may contain backdoors – undocumented ways to allow an outsider to use that software to perform other operations on your system.

Functions of malware

Malware can perform a variety of functions:

Destruction and denial of service: Wiper malware can delete files or format the entire file system, even deleting the operating system. Denial of service (DoS) attacks can flood a network or server with requests to make services unavailable to legitimate users. Other forms of DoS attack can lock users out of their computers or destroy devices.

Exfiltration: Exfiltration refers to stealing data. Malware can upload confidential files, authentication credentials, and messages. Spyware can track a user’s activity, acquiring browsing history, messages being sent or received, and file accesses; capturing keyboard operations via keyloggers; and capturing camera and microphone inputs.
Bots are processes that are deployed by an attacker and usually sit dormant. They periodically contact a Command & Control (C&C) server that, at the right time, can give them directions for an attack. These directions will often require downloading additional software needed for an attack. Attackers can deploy bots across millions of compromised computers, creating an army of them that is called a botnet. This is instrumental in carrying out distributed denial of service (DDoS) attacks or compute-intensive crypto mining.
A backdoor is a type of malicious code that, once installed, allows an attacker remote access to a computer or network while remaining hidden. This access typically bypasses normal authentication processes, giving attackers the ability to remotely control the affected system, steal sensitive data, or deploy additional malware. For example, a backdoor in a computer system could allow an attacker to remotely execute commands, manipulate files, and monitor user activities without detection and without logging onto the system.
Ransomware is software that will typically lock users from being able to access their system or encrypt their files, demanding payment to re-enable access or avoid disclosure. It may include running a wiper to delete data permanently if the ransom isn’t paid. There are various forms of ransomware, which include:
  • Crypto ransomware: Denial of service malware that encrypts files or storage devices.
  • Locker ransomware: Denial of service malware that locks users out of their devices.
  • Extortion ransomware: Exfiltrates data to a remote site and threatens to expose it.
  • Double extortion ransomware: Exfiltrates data to a remote site before encrypting it and threatens to disclose it if the ransom isn’t paid.
Adware is generally non-destructive but is unwanted. It automatically displays or downloads advertising material such as banners or pop-ups when a user is online. It’s often bundled with free software or services, providing revenue to developers while offering the software at no cost to the user. Adware may compromise privacy by tracking user behavior to target ads more effectively.

Malware Infiltration mechanisms

There are various ways in which malware gets onto a system but the mechanisms fall into two categories:

  1. An attacker exploited some vulnerability to enable the malware to be installed.
  2. You installed the malware unknowingly.

Zero-day vulnerabilities refer to software flaws that are unknown to those who would be interested in mitigating the vulnerability, such as the vendor. The term “zero-day” indicates that the developers have zero days to fix the issue because it has already been exploited in the wild. These vulnerabilities are highly sought after by attackers because they are effective until discovered and patched.

Example: If a hacker discovers an unknown vulnerability in a web browser that allows unauthorized administrative access and this flaw is exploited before the developer becomes aware and fixes it, that is a zero-day vulnerability.

N-day vulnerabilities, also known as known vulnerabilities, refer to software flaws that have been publicly disclosed and for which a patch is often available. The “N” in N-day represents the number of days that have elapsed since the vulnerability was disclosed. Unlike zero-day vulnerabilities, N-day vulnerabilities are already known to vendors and cybersecurity professionals, and patches or workarounds are typically developed to mitigate them.

Example: A vulnerability in an operating system that allows elevation of privileges is reported and patched. If attackers exploit this vulnerability after the patch is released, it is considered an N-day vulnerability, as the patch availability makes it “known.”

Worms and viruses

A virus is a type of malware that attaches itself to a legitimate program and requires human interaction, such as running the infected program, to spread and execute its malicious activities.

Conversely, a worm is a standalone malware that self-replicates and spreads independently across networks without the need for attachment to a specific program or human interaction. For example, a worm might exploit vulnerabilities in a network to spread itself, while a virus might spread via email attachments opened by unsuspecting users.

The distinction from a virus is that a worm runs as a standalone process while a virus requires a host program.

The popular use of both terms, worm and virus, has often blurred the distinctions between them. People often refer to any malware as a virus. Their malicious effects can be similar.

Malware components

Key components of malware include:

Infection Mechanism: The method by which malware spreads or inserts itself into a system, such as through email attachments or exploiting vulnerabilities.

Packer: A tool that compresses or encrypts malware to evade detection from anti-virus software, often making it harder to analyze or identify the malware.

Dropper: A small helper program that installs the main malware, often avoiding detection by not containing the malicious code itself.

Payload: The part of malware designed to perform malicious actions, ranging from data theft to system damage.

Trigger: A condition or event that activates the malware’s payload, like a specific date or user action.

File infector viruses

A file infector virus is a type of malware that attaches itself to executable files and spreads by modifying other executable files it can access. When an infected file is launched, the virus is executed, usually performing malicious actions while also seeking other files to infect. This used to be the dominant mechanism for malware propagation in the early days of PCs but is more challenging with systems where users have restricted permissions or where the OS validates the digital signature of applications and drivers.

Infected flash drives

Malware can spread through USB devices in several ways:

  1. Unprotected USB Firmware: Some malware targets the firmware of USB devices, which can be rewritten to include malicious code. When such a compromised device is plugged into any computer, the malware in the firmware can activate and cause the USB device to, for example, behave like a keyboard in addition to a storage device and send keyboard events to invoke a shell and run commands.

  2. USB Drop Attack: This method involves intentionally leaving infected USB drives in public or easily accessible places. Unsuspecting individuals who find and use these drives on their computers inadvertently trigger malware installation.

  3. Malicious Software or Links: USB drives may contain files that, when executed, install malware directly, or they may include links that lead to malicious websites. Opening these files or following these links can initiate the download and installation of harmful software.

Macro viruses

Macro viruses are a type of malware that embed themselves in documents and are executed when the document is opened. They are commonly written in Visual Basic for Applications, targeting Microsoft Office applications. Once activated, they can infect not only the document in which they reside but also other documents, spreading rapidly. These viruses can perform a series of operations from simple annoyances to damaging actions like corrupting files or sending data to third parties.

Even though Microsoft would present a warning about macros, users often explicitly permit them because they believe the content they are accessing is legitimate. Microsoft patched bugs that allowed macros to run without the user’s authorization but, as of 2022, attackers still found ways around these barriers.

Social engineering

By far the most common way that malware enters a system is via deception: the legitimate user of the system installed it unknowingly. This uses a social engineering attack to convince the user that it is in his or her interest to install the software. Social engineering is the art of manipulating, influencing, or deceiving a user into taking some action that is not in his/her or the organization’s best interest.

Attackers exploit human psychology rather than technical hacking techniques to infiltrate systems. This can involve phishing emails, pretexting, baiting with infected media, or any form of communication designed to elicit trust, provoke fear, or create urgency, leading individuals to reveal passwords, install malware, or open malicious links.

Any information the attacker can get about a user can help an attacker create a more convincing social attack. The term pretexting refers to using a concocted scenario to contact a user and get additional information (e.g., an attacker can pretend to be a caller from the IT department or a high-level manager from another location to try to extract information; with some rudimentary information, the attacker can mention some employee, department, or project names to sound like a true insider).

Phishing is a type of cyber attack that involves tricking individuals into revealing sensitive information or downloading malware by masquerading as a trustworthy entity in electronic communications, typically through emails that appear to come from reputable sources. Spear phishing is a more targeted version of phishing, where the attacker chooses specific individuals or organizations and tailors the message based on their characteristics, job positions, or other personal information to increase the likelihood of success. This specificity makes spear phishing significantly more effective and dangerous than generic phishing.

Credential stuffing

An attacker may obtain collections of stolen email addresses (or usernames) and passwords. Since people often use the same name and password on multiple systems, this often gives the attacker access to services on other websites on which the user has accounts. Accounts for banking sites are, of course, particularly valuable since they can be a direct conduit for transferring money. This attack is called credential stuffing.

In some situations, such as getting access to a user’s email accounts, an attacker can log onto the systems or services as the owner of the account and install malware, monitor the internal organization, and even send email, disguised as the user (e.g., contact other employees or friends), which becomes a powerful social engineering attack.

Where does malware live?

File infector virus

A file infector virus is a virus that adds itself to an executable program. The virus patches the program so that, upon running, control will flow to the virus code. Ideally, the code will install itself in some unused area of the file so that the file length will remain unchanged. A comparison of file sizes with the same programs on other systems will not reveal anything suspicious. When the virus runs, it will run the infector to decide whether to install itself on other files. The trigger will then decide whether the payload should be executed. If not, the program will appear to run normally.

Bootloader malware

A bootkit, also known as a boot sector virus, is a type of malware that infects the master boot record (MBR) or similar critical startup sectors of a computer. It loads itself before the operating system starts, giving it high-level control over the system and making it extremely difficult to detect and remove. Bootkits are often used to bypass operating system security measures and provide persistent access to the infected machine, even surviving system reinstalls if the MBR is not specifically cleaned.

JavaScript and PDF files

JavaScript, like Visual Basic, has evolved into a full programming language. Most browsers have security holes that involve JavaScript. JavaScript can not only modify the content and structure of a web page but can connect to other sites. This allows any malicious site to leverage your machine. For example, scripts can perform port scans on a range of IP addresses and report any detected unsecured services.

PDF (Portable Document Format) files would seem to be innocent printable documents, incapable of harboring executable code. However, PDF is a complex format that can contain a mix of static and dynamic elements. Dynamic elements may contain JavaScript, dynamic action triggers (e.g., “on open”), and the ability to retrieve “live” data via embedded URLs. As with Visual Basic scripts, PDF readers warn users of dynamic content but, depending on the social engineering around the file, the user may choose to trust the file … or not even pay attention to the warning in yet-another-dialog-box.


Trojan horses

A Trojan horse is a program with two purposes: an overt purpose and a covert one. The overt purpose is what compels the user to get and run the program in the first place. The covert purpose is unknown to the user and is the malicious part of the program.

For example, a script with the name of a common Linux command might be added to a target user’s search path. When the user runs the command, the script is run. That script may, in turn, execute the proper command, leading the user to believe that all is well. As a side effect, the script may create a setuid shell to allow the attacker to impersonate that user or mail off a copy of some critical data. Users install Trojans because they believe they are installing useful software, such as an anti-virus tool (BTW, a lot of downloadable hacker tools contain Trojans: hackers hacking wannabe hackers). The side effects of this software can include activating cameras, enabling key loggers, or deploying bots for anonymization servers, DDoS attacks, or spam attacks.

Trojans may include programs (games, utilities, anti-malware programs), downloading services, rootkits (see next) and backdoors (see next). They appear to perform a useful task that does not raise suspicion on the part of the victim.


Backdoors

A backdoor is software that is designed with some undocumented mechanism to allow someone who knows about it to be able to access the system or specific functions in a way that bypasses proper authentication mechanisms. In many cases, they are not designed for malicious use: they may allow a manufacturer to troubleshoot a device or a software author to push an update. However, if adversarial parties discover the presence of a backdoor, they can use it for malicious purposes.


Rootkits

A rootkit is software that is designed to allow an attacker to access a computer and hide the existence of the software … and sometimes hide the presence of the user on the system.

Historically, a basic rootkit would replace common administration commands (such as ps, ls, find, top, netstat, etc.) with commands that mimic their operation but hide the presence of intruding users, intruding processes, and intruding files. The idea is that a system administrator should be able to examine the system and believe that all is fine and the system is free of malware (or of unknown user accounts).

User mode rootkits
A user mode rootkit involves replacing commands, interposing libraries, intercepting messages, and patching commonly-used APIs that might divulge the presence of the malware. A skilled administrator may find unmodified commands or import software to detect the intruding software.
Kernel mode rootkits
A kernel mode rootkit is installed as a kernel module. Being in the kernel gives the rootkit unrestricted access to all system resources and the ability to patch kernel structures and system calls. For example, directory listings from the getdents64 system call may not report any names that match the malware. Commands and libraries can be replaced and not give any indication that malicious software is resident in the system.
Hypervisor rootkits
The most insidious rootkits are hypervisor rootkits. A hypervisor rootkit is a type of rootkit that attacks virtualized environments by targeting the hypervisor layer that controls the virtual machines. By infecting the hypervisor, the rootkit can gain control over all the virtual machines running on the host, enabling it to monitor and manipulate operations on these machines. This level of control makes detection and removal exceptionally challenging, as the rootkit can hide its presence from both the operating system and antivirus programs running on the virtual machines.

Deceptive web sites

Quite often, malicious links in phishing attacks direct the user to a web site in order to obtain their login credentials. These sites masquerade as legitimate sites. The Proofpoint study mentioned earlier found that for every legitimate website, there are 20 malicious sites that mimic it. Registering domain names that are slight misspellings of legitimate ones is known as typosquatting. Such sites can masquerade as banking sites, Google/Microsoft/Apple authentication pages, videoconferencing plugin downloads, etc.

File serving sites, including those that host software or provide services such as PDF or mp3 conversion, are often ad-sponsored. Some of the ads on these sites, however, are designed to look like download links and can trick a user into clicking on the ad instead of the link for the actual content.


Malware was particularly easy to spread on older Windows systems since user accounts, and hence processes, ran with full administrative rights, which made it easy to modify any files on the system and even install kernel drivers. Adding file protection mechanisms, such as a distinction between user and administrator accounts, added a significant layer of protection. However, malware installed by the user runs with that user’s privileges and has full access to all of that user’s files. Even if some files are read- or write-protected, malware running as the file owner can simply change those DAC permissions.

Systems then took the approach of warning users when a program wanted to install software or asked for elevated privileges. Social engineering hopes to convince users that they actually want to install the software (or view the document), so they will happily grant permissions and install the malware. MAC permissions can stop some viruses, since the malware will not be able, for instance, to override write permissions on executable files, but macro viruses and attacks on ordinary user files remain a problem.

In general, however, studies have shown that by simply taking away admin rights (avoiding privilege escalation) from users, 94% of the 530 Microsoft vulnerabilities that were reported in 2016 could be mitigated and 100% of vulnerabilities in Office 2016 could be mitigated.

Anti-virus (anti-malware) software

There is no way to recognize all possible viruses. Anti-virus software uses two strategies: signature-based and behavior-based approaches.

With signature-based systems, anti-virus programs look for byte sequences that match those in known malware. Each bit pattern is an excerpt of code from a known virus and is called a signature. A virus signature is simply a set of bytes that make up a portion of the virus and allow scanning software to see whether that virus is embedded in a file. The hope is that the signature is long enough and unique enough that the byte pattern will not occur in legitimate programs. This scanning process is called signature scanning. Lists of signatures (“virus definitions”) have to be updated by the anti-virus software vendor as new viruses are discovered. Signature-based detection is used by most anti-virus products.
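Signature scanning amounts to substring search over a file's contents against a database of known byte patterns. A minimal sketch, using made-up signatures (real products index many thousands of signatures with faster multi-pattern algorithms such as Aho–Corasick):

```python
# Minimal signature scanner: report which known signatures (byte patterns)
# appear in the given contents. The "virus definitions" below are invented
# for illustration.

SIGNATURES = {
    "demo-virus-a": bytes.fromhex("deadbeef90909090"),
    "demo-virus-b": b"\x4d\x5a\x51\x52\x56\x57",
}

def scan(contents: bytes):
    """Return the names of all signatures found in the contents."""
    return [name for name, sig in SIGNATURES.items() if sig in contents]
```

This also makes the vendor's maintenance burden concrete: every newly discovered virus requires distributing an updated `SIGNATURES` table to every installation.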

A behavior-based system monitors the activities of a process (typically the system calls or standard library calls that it makes). Ideally, sandboxing is employed to ensure that the suspected code runs within a sandbox, or even in an interpreted environment within a sandbox, so that it cannot cause real damage. Behavior-based systems try to perform anomaly detection. If the observed activity is deemed suspicious, the process is terminated and the user alerted. Sandboxed, behavior-based analysis is often run by anti-malware companies to examine what a piece of suspected malware actually does and whether it should be considered a virus. A behavior-based system can identify previously-unseen malware, but these systems tend to have higher false positive rates of detection: it is difficult to characterize exactly what set of operations constitutes suspicious behavior.
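One simple form of behavior-based detection can be sketched as learning the set of system-call bigrams observed in normal runs, then flagging any trace that contains an unseen bigram. This toy model (the call names are invented) also shows where false positives come from: any legitimate but previously unseen behavior is flagged.

```python
# Sketch of anomaly detection over system-call traces: learn the bigrams
# (adjacent call pairs) seen during normal runs, then flag traces that
# contain a bigram never seen in training.

def bigrams(trace):
    """The set of adjacent (call, next_call) pairs in a trace."""
    return set(zip(trace, trace[1:]))

def train(normal_traces):
    model = set()
    for t in normal_traces:
        model |= bigrams(t)
    return model

def is_suspicious(model, trace):
    # Suspicious if the trace contains any bigram absent from training data.
    return bool(bigrams(trace) - model)
```

Real products use far richer models (argument values, frequencies, longer n-grams), but the trade-off is the same: a looser model misses malware, a tighter one flags benign behavior.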

Malware Countermeasures

Some viruses will take measures to try to defend themselves from anti-virus software.

Signature scanning countermeasures

A common thing to do in malware is to use a packer on the code, unpacking it prior to execution. Packing can be one of several operations:

  • Simply obscure the malware payload by exclusive-oring (xor) it with a repeating byte pattern (exclusive-oring the data with the same byte pattern reconstructs it).
  • Compress the code and then uncompress it upon loading it prior to execution.
  • Encrypt the code and decrypt it prior to execution.
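The xor technique above works because xor is its own inverse. As a toy illustration (all names here are hypothetical), the same routine both packs and unpacks a payload:

```python
def xor_pack(data: bytes, key: bytes) -> bytes:
    """XOR every byte with a repeating key; applying it twice restores the data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"malicious payload bytes"
packed = xor_pack(payload, b"\x5a\xc3")          # scanners no longer see the original bytes
assert packed != payload
assert xor_pack(packed, b"\x5a\xc3") == payload  # unpacking is just packing again
```

The packed bytes carry no trace of the original signature; only the tiny unpacking routine (and whatever key it holds) stays constant.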

All of these techniques will change the signature of a virus. One can scan for a signature of a compressed version of the virus but there are dozens of compression algorithms around, so the scanning process gets more complicated.

With encryption (xor is a simple form of encryption), only the non-encrypted part of the virus contains the unpacking software (decryption software and the key). A virus scanner will need to match the code for the unpacker component since the key and the encrypted components can change each time the virus propagates itself.

Polymorphic viruses mutate their code each time they run while keeping the algorithm the same. This involves replacing sequences of instructions with functionally identical ones. For example, one can change additions to subtractions of negative numbers, invert conditional tests and branches, and insert or remove no-op instructions. This thwarts signature scanning software because the byte pattern of the virus is different each time.
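The substitutions above can be sketched on a toy instruction set (the interpreter and opcode names here are invented for illustration): each mutation changes the "byte pattern" of the program without changing what it computes.

```python
import random

def run(program, acc=0):
    """Tiny interpreter: ADD/SUB adjust an accumulator, NOP does nothing."""
    for op, arg in program:
        if op == "ADD":
            acc += arg
        elif op == "SUB":
            acc -= arg
    return acc

def mutate(program):
    """Replace additions with subtractions of negatives and sprinkle in no-ops."""
    out = []
    for op, arg in program:
        if op == "ADD":
            out.append(("SUB", -arg))   # functionally identical substitution
        else:
            out.append((op, arg))
        if random.random() < 0.5:
            out.append(("NOP", 0))      # no-op insertion changes the pattern further
    return out

original = [("ADD", 5), ("SUB", 2), ("ADD", 7)]
variant = mutate(original)
assert run(variant) == run(original)    # same behavior
assert variant != original              # different signature every generation
```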

Access control countermeasures

Access controls help but do not stop the problem of malware. Containment mechanisms such as containers work well for server software but are usually impractical for user software (e.g., you want Microsoft Word to be able to read documents anywhere in a user’s directories). Application sandboxing is generally far more effective and is a dominant technique used in mobile software.

Trojans, deceptive downloads, and phishing attacks are insidiously difficult to defend against since we are dealing with human nature: users want to install the software or provide the data. They are conditioned to accepting pop-up messages and entering a password. Better detection in browsers & mail clients against suspicious content or URLs helps. However, malware distributors have been known to simply ask a user to rename a file to turn it into one that is recognized by the operating system as an executable file (or a disk image, PDF, or whatever format the malware comes in) that may otherwise be filtered by the mail server or web browser.

Sandboxing countermeasures

Viruses are unlikely to get through a sandbox (unless there are vulnerabilities or an improper configuration). However, there are areas where malware can try to work around sandboxing:

  1. Vendor examination
    Anti-virus vendors often test software within a tightly configured sandboxed environment so they can detect whether the software is doing anything malicious (e.g., accessing files, devices, or the network in ways it is not supposed to). If they detect that they do have malware, they will dig in further and extract a signature so they can update and distribute their list of virus definitions. Viruses can try to get through this examination phase by setting a trigger to keep the virus from immediately performing malicious actions or to stay dormant for the first several invocations. The hope is that the anti-virus vendors will not see anything suspicious and the virus will never be flagged as such by their software.

  2. User configuration (entitlements)
    Virtually all mobile applications, and increasingly more desktop/laptop applications, are run with application sandboxes in place. These may disallow malware from accessing files, devices, or the network. However, it never hurts to ask. The software can simply ask the user to modify the sandbox settings. If social engineering is successful, the user may not even be suspicious and not wonder why a game wants access to contacts or location information.

Network Security

The Internet is designed to interconnect various networks, each potentially using different hardware and protocols, with the Internet Protocol (IP) providing a logical structure atop these physical networks. IP inherently expects unreliability from underlying networks, delegating the task of packet loss detection and retransmission to higher layers like TCP or applications. Communication via IP involves multiple routers and networks, which may compromise security due to their unknown trust levels.

The OSI model helps describe the networking protocol stacks for IP:

  1. Physical Layer: Involves the actual network hardware.
  2. Data Link Layer: Manages protocols for local networks like Ethernet or Wi-Fi.
  3. Network Layer: Handles logical networking and routing across physical networks via IP.
  4. Transport Layer: Manages logical connections, ensuring reliable data transmission through TCP, or provides simpler, unreliable communication via UDP.

Each layer plays a critical role in ensuring data is transmitted securely and efficiently across the internet.

Data link layer

In an Ethernet network, the data link layer is handled by Ethernet transceivers and Ethernet switches. Security was not a consideration in the design of this layer, and several fundamental attacks exist at it. Wi-Fi also operates at the data link layer and uses the same address structure as Ethernet. It adds encryption on wireless data between the device and the access point. Note that the encryption is not end-to-end between hosts but only to the access point.

Switch CAM table overflow

Sniff all data on the local area network (LAN).

A CAM table overflow attack exploits the self-learning mechanism of a network switch, which uses a content addressable memory (CAM) table to map MAC addresses to switch ports for efficient packet forwarding. By flooding the switch with fake MAC addresses, an attacker can overflow the CAM table. Once the table is full, the switch behaves like a hub, broadcasting packets to all ports, thus allowing the attacker to intercept data. To protect against this, port security can be configured to limit the number of MAC addresses allowed on a port, preventing unauthorized devices from overwhelming the CAM table.
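The learning behavior described above can be modeled in a few lines (the class and names here are a hypothetical simulation, not real switch firmware): once the CAM table fills with fake entries, legitimate destinations are treated as unknown and flooded to every port.

```python
class Switch:
    """Toy model of a switch's MAC-learning and forwarding behavior."""
    def __init__(self, cam_capacity):
        self.cam = {}                  # CAM table: MAC address -> port
        self.capacity = cam_capacity

    def learn(self, mac, port):
        if mac in self.cam or len(self.cam) < self.capacity:
            self.cam[mac] = port       # once full, new addresses are never learned

    def forward(self, dst_mac):
        return self.cam.get(dst_mac, "FLOOD")   # unknown destination: send out all ports

sw = Switch(cam_capacity=1024)
for i in range(10_000):                # attacker floods frames with fake source MACs
    sw.learn(f"fake-{i}", 7)
sw.learn("victim-mac", 1)              # table is full; the victim's MAC is never learned
assert sw.forward("victim-mac") == "FLOOD"   # traffic to the victim now reaches the attacker too
```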

VLAN hopping (switch spoofing)

Sniff all data from connected virtual local area networks.

A VLAN hopping attack exploits VLAN (Virtual Local Area Network) configurations to gain unauthorized access to multiple VLANs. VLANs segregate network traffic for enhanced security and efficiency. Since switches can connect to other switches, VLAN trunking, managed via the IEEE 802.1Q standard, allows multiple VLANs to share a single physical connection between switches.

Attackers can perform switch spoofing by emulating a switch, tricking a real switch into thinking it’s connected to another switch. This allows the attacker’s device to receive traffic across all VLANs. Defending against such attacks involves configuring managed switches to restrict trunking to authorized ports.

ARP cache poisoning

Redirect IP packets by changing the IP address to MAC address mapping.

An ARP cache poisoning attack exploits the Address Resolution Protocol (ARP), which is used by the operating system to map IP addresses to MAC addresses. Attackers can respond falsely to ARP queries or send gratuitous ARP responses not associated with a request, claiming their MAC address corresponds to another device’s IP address. This corrupts the ARP caches of devices on the network.

Defenses include Dynamic ARP Inspection on switches, which verifies ARP packets against a trusted list, and static ARP entries to prevent unauthorized changes.
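The core weakness is that classic ARP is stateless: a host updates its cache on any reply, solicited or not. A minimal sketch (the cache structure and addresses are hypothetical):

```python
arp_cache = {"10.0.0.1": "aa:aa:aa:aa:aa:aa"}   # gateway's real MAC

def on_arp_reply(cache, ip, mac):
    """Classic ARP accepts replies even when no request was sent (gratuitous ARP)."""
    cache[ip] = mac

# Attacker sends a gratuitous ARP claiming the gateway's IP maps to the attacker's MAC:
on_arp_reply(arp_cache, "10.0.0.1", "ee:ee:ee:ee:ee:ee")
assert arp_cache["10.0.0.1"] == "ee:ee:ee:ee:ee:ee"   # frames for the gateway now go to the attacker
```

Dynamic ARP Inspection works precisely by refusing to apply an update like this unless it matches a trusted IP-to-MAC binding.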

DHCP spoofing

Configure new devices on the LAN with your choice of DNS address, router address, etc.

DHCP spoofing attacks target the Dynamic Host Configuration Protocol (DHCP), which networks use to assign IP addresses and network configuration parameters to devices dynamically.

The attack begins with a DHCP Discover message, which devices broadcast to find DHCP servers. Malicious actors respond to these messages before legitimate servers, directing devices to use attacker-specified DNS or gateway settings. This redirection allows attackers to intercept, manipulate, or block data.

The problem is challenging to mitigate because of the trust placed in network broadcasts and the speed of response. A defense mechanism, DHCP snooping, helps by validating DHCP messages on network switches and blocking unauthorized DHCP offers, thereby safeguarding against malicious server responses.

Network (IP) layer

The Internet Protocol (IP) layer is responsible for getting datagrams (packets) to their destination. It does not provide any guarantees on message ordering or reliable delivery. Datagrams may take different routes through the network and may be dropped by queue overflows in routers.

Source IP address authentication

Anyone can forge the source address of an IP datagram.

One aspect of the design of IP networking is that there is no source IP address authentication. Clients are expected to use their own source IP address but anybody can override this if they have administrative privileges on their system by using a raw sockets interface.

This enables an attacker to forge messages to appear that they come from another system. Any software that authenticates requests based on their IP addresses will be at risk.

Anonymous denial of service

The ability to set an arbitrary source address in an IP datagram can be used for anonymous denial of service attacks. If a system sends a datagram that generates an error, the error will be sent back to the source address that was forged in the query. For example, a datagram sent with a small time-to-live, or TTL, value will cause a router that is hit when the TTL reaches zero to respond back with an ICMP (Internet Control Message Protocol) Time to Live exceeded message. Error responses will be sent to the forged source IP address and it is possible to send a vast number of such messages from many machines (by assembling a botnet) across many networks, causing the errors to all target a single system.
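The reflection works because error messages are addressed to whatever source the datagram claims, not to the machine that actually sent it. A hypothetical sketch:

```python
def icmp_error_destination(datagram):
    """Routers send ICMP errors back to the datagram's *claimed* source address."""
    return datagram["src"]

# Attacker forges the victim's address as the source and picks a tiny TTL:
forged = {"src": "203.0.113.9", "dst": "198.51.100.1", "ttl": 1}
assert icmp_error_destination(forged) == "203.0.113.9"   # the error lands on the victim
```

Repeating this from thousands of bots across many networks concentrates all the error traffic on the victim while revealing nothing about the true senders.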


Routers are computers with multiple network links and often with special-purpose hardware to facilitate the rapid movement of packets across interfaces. They run operating systems and have user interfaces for administration. As with many other devices that people don’t treat as “real” computers, there is a danger that routers will have simple or even default passwords. Moreover, owners of routers may not be nearly as diligent in keeping the operating system and other software updated as they are with their computers.

Routers can be subject to some of the same attacks as computers. Denial of service (DoS) attacks can keep the router from doing its job. One way this is done is by sending a flood of ICMP datagrams. The Internet Control Message Protocol is typically used to send routing error messages and updates and a huge volume of these can overwhelm a router. Routers may also have input validation bugs and not handle certain improper datagrams correctly.

Route table poisoning is the modification of the router’s routing table either by breaking into a router or by sending route update datagrams over an unauthenticated protocol.

Transport layer (UDP, TCP)

UDP and TCP are transport layer protocols that allow applications to establish communication channels with each other. Each endpoint of such a channel is identified by a port number (a 16-bit integer that has nothing to do with Ethernet switch ports). The port number allows the operating system to direct traffic to the proper socket.

UDP, the User Datagram Protocol, is stateless, connectionless, and unreliable. As we saw with IP source address forgery, anybody can send UDP messages with forged source IP addresses.

TCP (Transmission Control Protocol) is a stateful, connection-oriented, and reliable protocol used in network communications. Being stateful, TCP keeps track of the connection’s state through sequence numbers, ensuring that packets are ordered correctly and no data is lost. As a connection-oriented protocol, TCP establishes a connection using a three-way handshake process before any data transfer. This handshake involves SYN (synchronize), SYN-ACK (synchronize acknowledgment), and ACK (acknowledgment) packets to synchronize and acknowledge connection establishment.

TCP’s three-way handshake not only establishes a connection but also initializes sequence numbers, which are crucial for ensuring data integrity and order. The process starts when the client sends a SYN packet to the server with a random initial sequence number. The server responds with a SYN-ACK packet, acknowledging the client’s sequence number by adding one, and provides its own random initial sequence number. The client completes the handshake by sending an ACK packet, acknowledging the server’s sequence number. This exchange of sequence numbers sets the foundation for a reliable, ordered data transmission.
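The sequence-number arithmetic of the handshake can be traced explicitly (a simulation with dictionaries standing in for TCP segments; field names are illustrative):

```python
import random

# Each side picks a random 32-bit initial sequence number (ISN).
client_isn = random.getrandbits(32)
server_isn = random.getrandbits(32)

syn     = {"flags": "SYN",     "seq": client_isn}
syn_ack = {"flags": "SYN-ACK", "seq": server_isn,
           "ack": (client_isn + 1) % 2**32}            # acknowledges the client's ISN
ack     = {"flags": "ACK",     "seq": (client_isn + 1) % 2**32,
           "ack": (server_isn + 1) % 2**32}            # acknowledges the server's ISN

assert syn_ack["ack"] == (syn["seq"] + 1) % 2**32
assert ack["ack"] == (syn_ack["seq"] + 1) % 2**32
assert ack["seq"] == syn_ack["ack"]   # data flow picks up exactly where the handshake left off
```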

TCP’s use of random initial sequence numbers is critical for security. By starting with a random sequence number, TCP mitigates sequence number prediction attacks, where an attacker predicts the sequence numbers of packets to spoof legitimate packets or hijack a session. This randomness helps in maintaining the integrity and security of the data exchange process.

SYN flooding

SYN flooding attacks target the TCP three-way handshake by flooding a server with SYN packets, often from spoofed IP addresses, leading to server resource exhaustion and service unavailability.

SYN cookies defend against SYN flooding attacks by having the server create an initial sequence number that is a cryptographic hash of the source and destination IP addresses and ports, along with a secret number. This allows the server to verify the legitimacy of incoming ACK packets without needing to store state information prematurely, thus preventing resource exhaustion. By encoding this connection-specific information into the sequence number, the server ensures that only clients completing the valid handshake can establish a connection.
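A minimal sketch of the idea, assuming SHA-256 as the hash (the secret, addresses, and helper names are hypothetical; real kernels also fold in a coarse timestamp and encode MSS bits):

```python
import hashlib

SECRET = b"server-only-secret"   # known only to the server

def syn_cookie(src_ip, src_port, dst_ip, dst_port):
    """Derive the server's ISN from the connection 4-tuple plus a secret; no state is stored."""
    msg = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(SECRET + msg).digest()[:4], "big")  # 32-bit seq

# On SYN: reply with seq = cookie and remember nothing.
cookie = syn_cookie("198.51.100.7", 40000, "203.0.113.1", 443)

# On the final ACK: recompute the cookie and allocate state only if ack == cookie + 1.
def ack_is_valid(ack, src_ip, src_port, dst_ip, dst_port):
    return ack == syn_cookie(src_ip, src_port, dst_ip, dst_port) + 1

assert ack_is_valid(cookie + 1, "198.51.100.7", 40000, "203.0.113.1", 443)
assert not ack_is_valid(cookie + 2, "198.51.100.7", 40000, "203.0.113.1", 443)
```

A flood of spoofed SYNs costs the server nothing but outgoing SYN-ACKs; only a client that actually receives the SYN-ACK can echo back the right acknowledgment.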

TCP Reset

A somewhat simple attack is to send a RESET (RST) segment to an open TCP socket. If the server sequence number is correct, then the connection will close. Hence, the tricky part is getting the correct sequence number to make it look like the RESET is part of the genuine message stream.

Sequence numbers are 32-bit values. The chance of successfully picking the correct sequence number is tiny: 1 in 2³², or approximately one in four billion. However, many systems will accept a large range of sequence numbers approximately in the correct range to account for the fact that packets may arrive out of order, so they shouldn’t necessarily be rejected just because the sequence number is incorrect. This can reduce the search space tremendously, and an attacker can send a flood of RST packets with varying sequence numbers and a forged source address until the connection is broken.
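The arithmetic shows why the acceptance window matters so much (the 64 KB window size below is an illustrative assumption; actual window sizes vary):

```python
seq_space = 2**32       # possible sequence numbers
window = 65_535         # receiver accepts any RST whose seq falls within its window

# Stepping through the space in window-sized jumps guarantees one guess lands in-window:
guesses_needed = seq_space // window
print(guesses_needed)   # 65537 packets instead of ~4 billion
```

Sixty-five thousand packets can be sent in seconds on a fast link, which is why blind RST injection against long-lived connections (such as BGP sessions) is a practical attack.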

Routing protocols

Autonomous Systems (AS) are sets of IP addresses that are under the control of a single network operator. The Border Gateway Protocol (BGP) is the protocol used by external routers at each AS to exchange routing information with each other. BGP enables an AS to determine the best routes for sending network traffic and manage the pathways by which data packets travel across the Internet, thus ensuring efficient and reliable routing.

BGP Hijacking

BGP hijacking, also known as route hijacking, involves maliciously redirecting internet traffic by corrupting the routing tables used by Border Gateway Protocol (BGP). An attacker misleads other networks into believing that the best route to specific IP addresses goes through their malicious system. This can be used to intercept, inspect, or redirect internet traffic to fraudulent sites.

BGP Path Forgery attacks manipulate the Border Gateway Protocol (BGP) by falsely advertising optimal paths to specific network destinations. This type of attack exploits BGP’s trust-based nature, which lacks mechanisms for path verification, leading to traffic being misrouted through the attacker’s network. These actions enable the attacker to intercept or manipulate data traffic.

BGP Prefix Forgery involves malicious actors advertising unauthorized IP prefixes via BGP. By advertising more specific prefixes than those used legitimately, attackers can divert traffic to themselves. BGP favors the most specific route available, making this a particularly effective method for redirecting traffic. This can lead to data interception or denial of service as traffic is misrouted to the attacker’s network.

Two security measures that were added to BGP were RPKI and BGPsec. RPKI (Resource Public Key Infrastructure) enhances BGP security by allowing networks to use public keys and digital signatures to verify that a network is authorized to announce specific IP prefixes, thus preventing invalid route announcements. However, RPKI’s effectiveness is limited by partial adoption and the need for network operators to maintain accurate and up-to-date certificate information.

BGPsec secures BGP by providing cryptographic validation of the entire AS path, not just the origin. This helps prevent path manipulation attacks. The main drawbacks of BGPsec include its increased complexity, higher computational overhead, and slow adoption.

Domain Name System (DNS)

The Domain Name System (DNS) is a hierarchical service that maps Internet domain names to IP addresses. A user’s computer runs the DNS protocol via a program known as a DNS stub resolver. It first checks a local file for specific preconfigured name-to-address mappings. Then it checks its cache of previously-found mappings. Finally, it contacts an external DNS resolver, which is usually located at the ISP or is run as a public service, such as Google Public DNS or OpenDNS.

We trust that the name-to-address mapping is legitimate. Web browsers, for instance, rely on this to enforce their same-origin policy. However, DNS queries and responses are sent using UDP with no authentication or integrity checks. The only check is that each DNS query contains a Query ID (QID). A DNS response must have a matching QID so that the client can match it to the query. These responses can be intercepted and modified or just forged. Malicious responses can return a different IP address that will direct IP traffic to different hosts.

A solution called DNSsec has been proposed. It is a secure extension to the DNS protocol that provides authenticated requests & responses. However, few sites support it.

Pharming attack

A pharming attack is an attack on the configuration information maintained by a DNS server, either modifying the information used by the local DNS resolver or modifying that of a remote DNS server. By changing the name-to-IP-address mapping, an attacker can cause software to send packets to the wrong system.

The most direct form of a pharming attack is to modify the local hosts file to add a malicious name-to-address mapping. Alternatively, malware may modify the DNS server settings on a system so that it would contact an attacker’s DNS server, which can provide the wrong IP address for certain domain names.

DNS cache poisoning (DNS spoofing attack)

DNS queries first check the local host’s DNS cache to see if the results of a past query have been cached. A DNS cache poisoning attack, also known as DNS spoofing, involves corrupting the DNS cache with false information to redirect users to malicious websites. In the general case, DNS cache poisoning refers to any mechanism where an attacker is able to provide malicious responses to DNS queries, resulting in those responses getting cached locally.

JavaScript on a malicious website can perform a DNS cache poisoning attack. This attack takes advantage of the fact that a DNS response for a subdomain, such as a.bank.com, can contain information about a new DNS server for the entire bank.com domain.

The browser requests access to a legitimate site but with an invalid subdomain. For example, a.bank.com. Because the system will not have the address of a.bank.com cached, it sends a DNS query to an external DNS resolver using the DNS protocol.

The DNS query includes a query ID (QID) x1. At the same time that the request for a.bank.com is made, JavaScript launches an attacker thread that sends 256 responses with random QIDs (y1, y2, y3, …). Each of these DNS responses tells the querying system that the DNS server for bank.com is at the attacker’s IP address.

If one of these responses happens to have a matching QID, the host system will accept it as truth that all future queries for anything at bank.com should be directed to the name server run by the attacker. If the responses don’t work, the script can try again with a different subdomain, b.bank.com. The attack might take several minutes, but there is a high likelihood that it will eventually succeed.

Summary: An attacker can run a local DNS server that will attempt to provide spoofed DNS responses to legitimate domain name lookup requests. If the query ID numbers of the fake response match those of a legitimate query (trial and error), the victim will get the wrong IP address, which will redirect legitimate requests to an attacker’s service.
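The "high likelihood" claim follows from simple probability (the number of tries below is an illustrative assumption):

```python
# Each forged burst sends 256 responses guessing a 16-bit query ID (65,536 values).
p_miss_per_try = 1 - 256 / 65_536       # probability one burst fails to match the QID
tries = 400                             # one burst per fabricated subdomain (a.bank.com, b.bank.com, ...)
p_success = 1 - p_miss_per_try ** tries
print(f"{p_success:.2f}")               # ≈ 0.79 after 400 attempts
```

Each individual burst succeeds with probability only 1/256, but because every failed attempt can be retried with a fresh subdomain, the cumulative odds climb quickly.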

DNS Rebinding

Web application security is based on the same-origin policy. Browser scripts can access cookies and other data on pages only if they share the same origin, which is the combination of URI (protocol), host name, and port number. The underlying assumption is that resolving a domain name takes you to the correct server.

The DNS rebinding attack allows JavaScript code on a malicious web page to access private IP addresses in the victim’s network. The attacker configures the DNS entry for a domain name to have a short time to live (TTL). When the victim’s browser visits the page and downloads JavaScript from that site, that JavaScript code is allowed to interact with the domain thanks to the same origin policy. However, right after downloading the script, the attacker can reconfigure the DNS server so that future queries will return an address in the internal network. The JavaScript code can then try to request resources from that system since, as far as the browser is concerned, the origin is the same because the name of the domain has not changed.

Summary: short time-to-live values in DNS allow an attacker to change the address of a domain name so that scripts from that domain can now access resources inside the private network.
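The trick is that the browser's notion of "origin" is the domain *name*, while the address behind that name can change between lookups. A hypothetical simulation of a TTL-abusing resolver:

```python
import itertools

# Attacker-controlled DNS with a short TTL: the answer changes between lookups.
answers = itertools.chain(["203.0.113.5"], itertools.repeat("192.168.1.10"))

def resolve(name):
    return next(answers)   # first lookup: attacker's public server; after TTL expiry: internal host

origin_ip = resolve("evil.example.com")    # browser fetches the malicious script from here
rebound_ip = resolve("evil.example.com")   # same name, so same origin; now an internal address
assert origin_ip == "203.0.113.5"
assert rebound_ip == "192.168.1.10"        # the script's requests now reach the private network
```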

Distributed Denial of Service (DDoS) attacks

  • Purpose: Disrupt the normal functioning of a system, making it unavailable to users.
  • Impact: Can cause system crashes, unresponsiveness, or significant slowdowns.

Key Characteristics of DoS Attacks

Exploit vulnerabilities in system software or configurations. Attackers seek situations where a small effort on their part causes a large amount of processing on the target. These attacks commonly involve flooding the network with traffic.

Techniques Used in DoS Attacks

  1. Exploit Asymmetries: Target scenarios where processing requests is more taxing than sending them.
  2. Fake Return Addresses: Use spoofed addresses to make tracing difficult and avoid managing response traffic.
  3. Response Redirection: Set the return address of a request to the target, causing innocent services to bombard the target with responses.
  4. Amplification: Send small queries to services that respond with much larger data, increasing the volume of traffic aimed at the target.
  5. Botnets: Use a network of compromised devices to generate massive attack traffic.
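The amplification technique (item 4) is easy to quantify; the sizes below are illustrative assumptions in the range of real DNS amplification attacks:

```python
query_bytes = 60        # small request (e.g., a DNS query with the victim's spoofed source address)
response_bytes = 3_000  # large answer the server sends to the spoofed victim
amplification = response_bytes / query_bytes
assert amplification == 50.0   # each attacker byte becomes 50 bytes aimed at the target
```

Combined with a botnet (item 5) and spoofed return addresses (items 2-3), modest attacker bandwidth turns into overwhelming traffic at the victim.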

Distributed DoS (DDoS) Attacks

DDoS attacks use multiple systems distributed globally to flood the target with traffic. These attacks generally fall into two categories:


  • Volumetric Attacks: Flood the target with massive amounts of data to consume bandwidth.
  • Packet-per-Second Attacks: Overwhelm the processing capacity of network devices with high rates of packets.

Defensive Strategies

  • Overprovision Bandwidth: Maintain more bandwidth than typically needed to absorb higher traffic volumes.
  • Rate Limiting: Implement limits on traffic rates to prevent overconsumption of resources.
  • Blackhole Routing: Divert and drop traffic identified as malicious.
  • Network Redundancy: Ensure availability with multiple networks.
  • Disable Unnecessary UDP Services: Reduce exposure to UDP-based attacks by disabling irrelevant services.

Virtual Private Networks (VPNs)

Network tunnels serve the purpose of securely transmitting data between different network segments or over the internet by encapsulating the data packets within the protocol of an underlying network. This enables moving data across networks that might not otherwise support the same communication protocols, creating a communication channel over a public network infrastructure. For example, an IP packet on a local area network that is directed to a local IP address at a branch can be encapsulated within an IP packet that is sent to the router at that branch office, which would then extract the packet and route it on the internal network.

A tunnel provides connectivity but not security. A VPN (Virtual Private Network) is created by adding security to a network tunnel. This usually involves encrypting the encapsulated packet and adding a message authentication code (MAC) to ensure that any data transmitted between the endpoints remains confidential and secure from potential eavesdropping or modification. Additionally, VPNs employ authentication methods to verify the identities of the endpoints, further securing the data exchange within the tunnel.

IPsec (Internet Protocol Security) is a set of VPN protocols used to secure Internet communications by authenticating and encrypting each IP packet in a data stream. Communications in IPsec use one of two main protocols: AH (Authentication Header) or ESP (Encapsulating Security Payload).

AH ensures data integrity and authenticity by adding a message authentication code (MAC) to each datagram but does not provide encryption.

ESP provides the same assurance of integrity as AH but also adds encryption in addition to a MAC, ensuring the confidentiality, integrity, and authenticity of data.

IPsec can operate in two modes: Transport and Tunnel. Transport mode encrypts only the payload of the IP packet, leaving the header untouched, and is suitable for end-to-end communication between hosts. Tunnel mode encrypts the entire IP packet and encapsulates it within a new packet, and is used mainly for gateway-to-gateway communications, such as VPNs, where entire packets need to be protected as they traverse untrusted networks.

IPsec supports the use of:

  • HMAC for message authentication.
  • Diffie-Hellman key exchange to create random session keys for HMAC and encryption while assuring forward secrecy.
  • Symmetric encryption of data using AES for the ESP protocol.
  • X.509 digital certificates or pre-shared keys for authentication of endpoints.

Transport Layer Security (TLS)

Virtual Private Networks (VPNs) operate at the network layer to connect entire networks, tunneling all IP traffic without differentiating between specific data streams. This approach does not directly provide application-to-application secure communication. In contrast, Transport Layer Security (TLS), evolved from Secure Sockets Layer (SSL), operates above TCP to provide authentication, integrity, and encryption directly to applications. TLS preserves the sockets interface, allowing developers to implement network security transparently. Applications like web browsers use HTTPS, which incorporates TLS for secure communication over HTTP.

TLS has been designed to provide:

Data encryption
Symmetric cryptography is used to encrypt data.
Key exchange
During the authentication sequence, TLS performs a Diffie-Hellman key exchange so that both sides can obtain random shared session keys. From the common key, TLS uses a pseudorandom generator to create all the keys it needs for encryption and integrity.
Data integrity
Ensure that we can detect whether data in transit has been modified or new data has been injected. TLS includes an HMAC function based on the SHA-256 hash for each message.
Authentication
TLS authenticates the endpoints prior to sending data. Authentication can be unidirectional (the client may just authenticate the server) or bidirectional (each side authenticates the other). TLS uses public key cryptography and X.509 digital certificates as a trusted binding between a user’s public key and their identity.
Interoperability & evolution
TLS was designed to support different key exchange, encryption, integrity, & authentication protocols. The start of each session enables the protocol to negotiate what protocols to use for the session.

TLS sub-protocols

TLS operates through two main phases: the handshake protocol and the record protocol.

  1. The handshake protocol (authentication and setup):
    During the handshake protocol, the client authenticates the server using X.509 digital certificates and digital signatures. They then use Ephemeral Diffie-Hellman key exchange to create a common key. This provides forward secrecy to the communication session.

  2. The record protocol (communication):
    Following the handshake, the record protocol encrypts application data using the agreed-upon symmetric encryption algorithm, ensuring confidentiality and using a hashed message authentication code (HMAC) to ensure message integrity as data is transmitted between the server and client.
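The record protocol's integrity check can be sketched with Python's standard library (the key and data here are placeholders; in TLS the key is derived from the handshake's shared secret):

```python
import hashlib
import hmac

key = b"session-integrity-key"     # derived from the handshake in real TLS
record = b"application data"

tag = hmac.new(key, record, hashlib.sha256).digest()   # sender appends this MAC to the record

# Receiver recomputes the MAC and compares in constant time:
ok = hmac.compare_digest(tag, hmac.new(key, record, hashlib.sha256).digest())
tampered = hmac.compare_digest(tag, hmac.new(key, b"tampered data", hashlib.sha256).digest())
assert ok and not tampered   # any modification in transit changes the MAC
```

Without the key, an attacker cannot compute a valid tag for modified or injected data, which is what gives the record protocol its integrity guarantee.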


Firewalls

A firewall protects the junction between an untrusted network (e.g., the external Internet) and a trusted network (e.g., an internal network). Two approaches to firewalls are packet filtering and proxies. A packet filter, or screening router, determines not only the route of a packet but whether the packet should be dropped, based on contents in the IP header, TCP/UDP header, and the interface on which the packet arrived. It is usually implemented inside a border router, also known as the gateway router, that manages traffic flow between the ISP and the user’s network. The basic principle of firewalls is to never have a direct inbound connection from an originating host on the Internet to an internal host; all traffic must flow through a firewall and be inspected.

The packet filter evaluates a set of rules to determine whether to drop or accept a packet. This set of rules forms an access control list, often called a chain. Strong security follows a default deny model, where packets are dropped unless some rule in the chain specifically permits them.
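Rule-chain evaluation with default deny can be sketched in a few lines (the rule format and field names are hypothetical, loosely modeled on iptables-style chains):

```python
# First matching rule wins; if nothing matches, the default is to drop.
RULES = [
    {"proto": "tcp", "dst_port": 443, "action": "accept"},   # allow HTTPS
    {"proto": "tcp", "dst_port": 22,  "action": "accept"},   # allow SSH
]

def filter_packet(packet, rules=RULES):
    for rule in rules:
        if all(packet.get(k) == v for k, v in rule.items() if k != "action"):
            return rule["action"]
    return "drop"                     # default deny: no rule explicitly permitted it

assert filter_packet({"proto": "tcp", "dst_port": 443}) == "accept"
assert filter_packet({"proto": "udp", "dst_port": 53}) == "drop"
```

Default deny means an administrator's mistake is an inconvenience (something legitimate is blocked) rather than a vulnerability (something malicious is allowed).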

First-generation packet filters implemented stateless inspection. A packet is examined on its own with no context based on previously-seen packets.

Second-generation packet filters track TCP connections and other information from previous connections. These stateful packet inspection (SPI) firewalls allow the router to keep track of outstanding TCP connections. For instance:

  • They can block TCP data traffic if a connection setup did not take place to avoid sequence number prediction attacks.

  • They can track that a connection has been established by a client to a remote server and allow return traffic to that client (which is essential for any interaction by someone inside the network with external services).

  • They can track connectionless UDP and ICMP messages and allow responses to be sent back to clients in the internal network. DNS queries and pings (ICMP echo-reply messages) are examples of these.

  • They also understand the relationship between packets. For example, when a client establishes an FTP (file transfer protocol) connection to a server on port 21, the server establishes a connection back to the client on a different port when it needs to send data.

Packet filters traditionally do not look above the transport layer (UDP and TCP protocols and port numbers).

Third-generation packet filters incorporate deep packet inspection (DPI), which allows a firewall to examine application data as well and make decisions based on its contents. Deep packet inspection can validate the protocol of an application as well as check for malicious content such as malformed URLs or other security attacks. DPI is often considered to be part of Intrusion Prevention Systems. Examples are detecting application-layer protocols such as HTTP and then applying application-specific filters, such as checking for suspicious URLs or disallowing the download of certain ActiveX or Java applets.

Deep Packet Inspection (DPI) firewalls evolved to Deep Content Inspection (DCI) firewalls. These use the same concept but are capable of buffering large chunks of data from multiple packets that contain an entire object and acting on it, such as unpacking base64-encoded content from web and email messages and performing a signature analysis for malware.

Application proxies

Application proxies act as intermediaries for specific applications. They inspect and filter traffic at the application layer, ensuring that only valid protocol traffic passes between networks. By validating data exchanges against known protocols, they enhance security by preventing protocol-specific attacks. When running on dual-homed hosts, these proxies benefit from an added layer of isolation; one network interface connects to the public network and the other to the private network, thereby controlling and monitoring all inbound and outbound communication effectively.

Screened subnets

In a typical firewalled environment using a screened subnet architecture, two distinct subnets are established: the DMZ (demilitarized zone) for externally accessible services like web and mail servers, and another for internal systems shielded from external access. Traffic control and security are enforced by screening routers. The exterior router manages access to the DMZ, filtering incoming traffic to allowed services, while the interior router controls traffic from the DMZ to the internal network, ensuring only necessary communications pass. This setup can be simplified using a single router with detailed filtering rules for each interface to accomplish the same function.

Deperimeterization and zero trust

The trustworthiness of systems in internal networks diminished as people moved their laptops and phones between different environments, users installed random software on their systems, systems had to access cloud services, remote work became common, and there was an increased likelihood of malware getting installed on any computer in a company’s network. The breakdown of a secure boundary between a trusted internal and untrusted external network is called deperimeterization.

This shift led to the development of the Zero Trust model, which does not assume internal network traffic is automatically safe. Instead, it enforces strict identity verification and least privilege access for every user and device, regardless of their location relative to the traditional network perimeter.

Host-based firewalls

Firewalls generally intercept all packets entering or leaving a local area network. A host-based firewall, on the other hand, runs on a user’s computer. Unlike network-based firewalls, a host-based firewall can associate network traffic with individual applications. Its goal is to prevent malware from accessing the network. Only approved applications will be allowed to send or receive network data. Host-based firewalls are particularly useful in light of deperimeterization. A concern with host-based firewalls is that if malware manages to get elevated privileges, it may be able to shut off the firewall or change its rules.

Intrusion detection/prevention systems

An enhancement to screening routers is the use of intrusion detection systems (IDS). Intrusion detection systems are often part of DPI firewalls and try to identify malicious behavior. There are three forms of IDS:

  1. A protocol-based IDS validates specific network protocols for conformance. For example, it can implement a state machine to ensure that messages are sent in the proper sequence, that only valid commands are sent, and that replies match requests.

  2. A signature-based IDS is similar to a PC-based virus checker. It scans the bits of application data in incoming packets to try to discern if there is evidence of “bad data”, which may include malformed URLs, extra-long strings that may trigger buffer overflows, or bit patterns that match known viruses.

  3. An anomaly-based IDS looks for statistical aberrations in network activity. Instead of having predefined patterns, normal behavior is first measured and used as a baseline. An unexpected use of certain protocols, ports, or even amount of data sent to a specific service may trigger a warning.
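A signature-based IDS at its simplest is substring matching over packet payloads. The sketch below uses a handful of made-up signatures for illustration; production systems use efficient multi-pattern matchers over thousands of signatures:

```python
# Hypothetical sketch of signature-based (misuse) detection: scan a
# payload for byte patterns that match known-bad signatures.
signatures = [b"../../", b"<script>", b"\x90\x90\x90\x90"]  # illustrative only

def scan(payload: bytes) -> list:
    """Return the list of signatures found in the payload."""
    return [sig for sig in signatures if sig in payload]

scan(b"GET /../../etc/passwd HTTP/1.1")  # [b"../../"]  -- flagged
scan(b"GET /index.html HTTP/1.1")        # []           -- clean
```

Anything that matches no signature is considered good, which is exactly the misuse-based model described below.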

Anomaly-based detection implies that we know normal behavior and flag any unusual activity as bad. This is difficult since it is hard to characterize what normal behavior is, particularly since normal behavior can change over time and may exhibit random network accesses (e.g., people web surfing to different places). Too many false positives will annoy administrators and lead them to disregard alarms.

A signature-based system employs misuse-based detection. It knows bad behavior: the rules that define invalid packets or invalid application layer data (e.g., ssh root login attempts). Anything else is considered good.

Intrusion Detection Systems (IDS) monitor traffic entering and leaving the network and report any discovered problems. Intrusion Prevention Systems (IPS) serve the same function but are positioned to sit between two networks like a firewall and can actively block traffic that is considered to be a threat or policy violation.

| Type | Description |
|------|-------------|
| Firewall (screening router) | 1st-generation packet filter that filters packets between networks. Blocks/accepts traffic based on IP addresses, ports, and protocols |
| Stateful inspection firewall | 2nd-generation packet filter. Like a screening router but also takes into account TCP connection state and information from previous connections (e.g., related ports for protocols like FTP) |
| Deep Packet Inspection firewall | 3rd-generation packet filter. Examines application-layer protocols |
| Application proxy | Gateway between two networks for a specific application. Prevents direct connections to the application from outside the network; responsible for validating the protocol |
| IDS/IPS | Can usually do what a stateful inspection firewall does, plus examine application-layer data for protocol attacks or malicious content |
| Host-based firewall | Typically a screening router with per-application awareness. Sometimes includes anti-virus software for application-layer signature checking |
| Host-based IPS | Typically allows real-time blocking of remote hosts performing suspicious operations (port scanning, ssh login attempts) |

Web security

Early Web Browsers: Initially, browsers could only deal with static content. Because of this, they weren’t a useful target of attacks and security efforts were mainly directed at server-side attacks through malformed URLs, buffer overflows, and similar vulnerabilities.

Modern Browsers: As browsers evolved, they became more complex, with support for cookies, JavaScript, DOM, CSS, AJAX, WebSockets, and multimedia. All this introduces new security challenges since scripts can communicate over the network, access page contents, and modify them. WebAssembly and Google Native Client (NaCl) enable the execution of sandboxed binary software in browsers, enhancing performance but providing additional challenges in ensuring isolation and proper behavior.

Web security model

The web security model is designed to protect both users and providers of web applications by managing how scripts interact with different web resources. Central to this model is the Same-Origin Policy, which allows scripts running on web pages to only access data from the same site that delivered them.

The term same-origin refers to a policy where two resources are considered to be of the same origin if they have the same scheme (protocol), hostname, and port number. The policy helps to prevent malicious scripts on one site from obtaining access to sensitive data on another site through the user’s browser, thereby protecting user data and privacy.
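The origin comparison can be expressed directly: extract the (scheme, hostname, port) triple from each URL and compare. This sketch uses Python's standard `urllib.parse` and fills in default ports so that `http://example.com` and `http://example.com:80` compare equal:

```python
# Sketch of the same-origin check: two URLs share an origin only if
# scheme, hostname, and port all match.
from urllib.parse import urlsplit

def origin(url: str) -> tuple:
    parts = urlsplit(url)
    # Substitute the scheme's default port when none is given explicitly.
    port = parts.port or {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a: str, b: str) -> bool:
    return origin(a) == origin(b)

same_origin("http://example.com/a", "http://example.com:80/b")  # True
same_origin("http://example.com/", "https://example.com/")      # False (scheme)
same_origin("http://example.com/", "http://www.example.com/")   # False (host)
```

Note that the path plays no role: two pages on the same site are always the same origin, while a subdomain is not.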

Under the same-origin policy, each origin has access to common client-side resources that include:

  • Cookies: Key-value data that clients or servers can set. Cookies associated with the origin are sent with each HTTP request.

  • JavaScript namespace: Any functions and variables defined or downloaded into a frame share that frame’s origin.

  • DOM tree: This is the JavaScript definition of the HTML structure of the page.

  • DOM storage: Local key-value storage.

Any JavaScript code downloaded into a frame will execute with the authority of its frame’s origin. For instance, if cnn.com loads a script from jQuery.com, the script runs with the authority of cnn.com.

Passive content, which is non-executable content such as CSS files and images, has no authority. This normally should not matter since passive content does not contain executable code, but there have been attacks that embedded code in passive content and caused that content to be treated as active.

Cross-origin content

A page may load content from multiple origins. The same-origin policy defines that JavaScript code loaded from anywhere runs with the authority of the frame’s origin. Content from other origins is generally not readable or writable by JavaScript. For example:

  • A frame can load images from other origins but cannot inspect that image.

  • A frame may embed CSS from any origin but cannot inspect the CSS content.

  • A frame can load JavaScript, which executes with the authority of the frame’s origin but if the code is downloaded from a different origin, it is executable but not readable.

Cross-Origin Resource Sharing (CORS) is a security feature that allows web applications running at one origin to request resources from a different origin. This is an extension of the Same-Origin Policy, which restricts such cross-origin requests by default. CORS provides a way for server administrators to specify who can access their resources and under what conditions. This is done through HTTP headers in which the server tells the browser which sites should be treated as if they share the same origin. For example, when a user downloads a page, a server on example.com can send an HTTP header that contains:

Access-Control-Allow-Origin: http://www.example.com

which tells the browser that the URL http://www.example.com will be treated as the same origin as the frame’s URL (e.g., http://example.com).
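The browser-side decision is a simple comparison against that header. This sketch (the function name is made up for illustration) captures the core rule: the response is exposed to the page only if the header names the requesting origin or is the wildcard `*`:

```python
# Sketch of the browser's CORS check on a cross-origin response.
def cors_allows(requesting_origin: str, allow_origin_header: str) -> bool:
    """Return True if Access-Control-Allow-Origin permits this origin."""
    return allow_origin_header == "*" or allow_origin_header == requesting_origin

cors_allows("http://www.example.com", "http://www.example.com")  # True
cors_allows("http://evil.com", "http://www.example.com")         # False
```

Real CORS adds preflight (OPTIONS) requests, allowed methods, and credential rules, but this accept/reject comparison is the heart of the mechanism.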

Cookies

Cookies are small pieces of data, name-value sets, sent from a website and stored on a user’s web browser. Every time the user loads the website, the browser sends relevant cookies back to the server to notify the website of the user’s previous activity.

Cookies serve three primary purposes on the web:

Session Management: Cookies can store login information, shopping cart data, and other details that keep track of user sessions, allowing users to pick up where they left off on previous visits without needing to re-enter information.

Personalization: They store user preferences, such as themes, language settings, and location, to tailor the browsing experience to the user’s needs and preferences.

Tracking: Cookies are used to monitor and analyze user behavior over time, helping websites and advertisers gather insights into browsing habits, which can be used for targeted advertising and optimizing the user experience.

There are two main types of cookies based on their lifetime:

Session cookies: These are temporary cookies that remain in the cookie file of your browser only until you close the browser.

Persistent cookies: These remain in the cookie file of your browser for much longer (though how long will depend on the specified lifetime of the specific cookie). They are used to remember your preferences within an application and remain on your desktop after you close your browser.

A browser will handle cookies for multiple web sites (origins) and various parts of a site.

Browsers send and receive cookies but cookies don’t quite use the same concept of an origin. Cookies are bound by a scope that includes the domain and path where they were set. A cookie associated with a specific domain and path will only be sent to the server when a request is made that matches its scope. The domain attribute specifies which domain the cookie belongs to, while the path attribute restricts the cookie to a specific directory. A server at example.com might set a cookie with a path of /blog to ensure that the cookie is only sent when accessing parts of the site within the /blog directory. This provides a degree of isolation that can prevent cookies from being sent across different contexts, which can be important for security and compartmentalization of user sessions and preferences.

Third-party cookies: Placed by websites other than the one you are currently visiting, often by advertisers to track browsing history.

Security implications arise because cookies can store sensitive information such as user IDs, passwords, login state, and other personal details that might be exploitable. To enhance security, cookies often incorporate:

  • HttpOnly flag: This makes the cookie inaccessible to client-side scripts, reducing the risk of cross-site scripting (XSS) attacks.

  • Secure flag: This restricts the transmission of cookies to secure (HTTPS) connections, preventing them from being intercepted during the transmission over unsecured networks.
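A server enables both flags simply by appending attributes to the Set-Cookie header. A minimal sketch (the cookie name and value are placeholders):

```python
# Sketch of emitting a session cookie with the HttpOnly and Secure
# attributes, per RFC 6265. Scripts cannot read it, and the browser
# will only send it back over HTTPS.
def session_cookie_header(name: str, value: str) -> str:
    return f"Set-Cookie: {name}={value}; HttpOnly; Secure; Path=/"

session_cookie_header("session_id", "abc123")
# "Set-Cookie: session_id=abc123; HttpOnly; Secure; Path=/"
```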

Cross-Site Request Forgery (CSRF)

Cross-Site Request Forgery (CSRF) is an attack that tricks a web browser into executing an unwanted action on a web service where a user is authenticated. An attacker crafts a malicious website or email with requests to a vulnerable web service where the user is already logged in. When the user interacts with the malicious content, the browser makes requests to the application, sending cookies with the user’s credentials, as if the user themselves made the request.

For example, if a user is logged into their banking site and then clicks on a deceptive link that requests a funds transfer, the banking site might process that request as legitimate. This vulnerability exploits the trust that a web application has in the user’s browser, and the mitigation often includes implementing anti-forgery tokens which must accompany each transaction, ensuring that the request was intentionally made by the user.

There are several defenses against Cross-site request forgery:

  • The server can validate the Referer header on the request. This will tell it whether the request came via a link or directly from a user (or from a link on a trusted site).

  • The server can require some unique token (an anti-CSRF token) to be present in the request. For instance, visiting netflix.com might cause the Netflix server to return a token that must be passed to any successive URL. An attacker will not be able to create a static URL on her site that will contain this random token.

  • The interaction with the server can use HTTP POST requests instead of GET requests, placing all parameters into the body of the request rather than in the URL. State information can be passed via hidden input fields instead of cookies. This doesn’t solve the problem but gives the attacker the challenge of getting the victim to click on a malicious web page that can run a script to post a request rather than simply present a URL that contains parameters for the desired action.
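The anti-CSRF token defense can be sketched with Python's `secrets` module. The session-to-token mapping and function names are illustrative; the key properties are that the token is unguessably random and is validated with a constant-time comparison:

```python
# Sketch of the anti-CSRF token defense: the server issues a random
# token with each page and rejects state-changing requests that do not
# echo it back.
import secrets

issued_tokens = {}   # session_id -> token (illustrative in-memory store)

def issue_token(session_id: str) -> str:
    token = secrets.token_urlsafe(32)   # unguessable by an attacker
    issued_tokens[session_id] = token
    return token

def validate(session_id: str, submitted_token: str) -> bool:
    expected = issued_tokens.get(session_id)
    # compare_digest avoids leaking information via timing differences
    return expected is not None and secrets.compare_digest(expected, submitted_token)

t = issue_token("sess1")
validate("sess1", t)          # True: the real form echoed the token
validate("sess1", "forged")   # False: attacker could not know the token
```

A static URL crafted on the attacker's site cannot contain the right token, so the forged request fails validation.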

Clickjacking

Clickjacking is a malicious technique of tricking a web user into clicking on something different from what the user perceives, effectively hijacking the clicks meant for another page. This is done by overlaying a transparent iframe over a visually appealing element, such as a video play button or a survey form. The user believes they are interacting with the genuine site, but the click is being routed to a hidden frame, leading to potential unauthorized actions, such as liking a page, sharing personal information, or enabling microphone access.

There are several ways for a web programmer to defend against clickjacking. JavaScript code can be added to a web page to prevent it from being framed. This script checks if the current window is the topmost window, and if it’s not, it can force the page to break out of the frame. Alternatively, an HTTP header can indicate whether a browser should be allowed to render a page in an iframe.
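The header-based defense mentioned above amounts to the server attaching one or two response headers. A minimal sketch of the headers a server might emit (`X-Frame-Options` is the older mechanism; the `frame-ancestors` directive of Content-Security-Policy supersedes it):

```python
# Sketch of anti-clickjacking response headers: both tell the browser
# not to render this page inside a frame on another site.
def anti_clickjacking_headers() -> dict:
    return {
        "X-Frame-Options": "DENY",
        "Content-Security-Policy": "frame-ancestors 'none'",
    }
```

With these headers present, the attacker's transparent iframe never renders, so there is nothing to overlay.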

Input sanitization problems

Any user input needs to be parsed carefully before it can be made part of a URL, HTML content, or JavaScript. Consider a script that is generated with some in-line data that came from a malicious user:

<script> var x = "untrusted_data"; </script>

The malicious user might define that untrusted_data to be

Hi"; </script> <h1> Hey, some text! </h1> <script> malicious code... x="Bye

The resulting script to set the variable x now becomes

<script> var x = "Hi"; </script> <h1> Hey, some text! </h1> <script> malicious code... x="Bye"; </script>
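One defense for this script context is to serialize the untrusted value as a JSON string literal, which escapes the embedded quote, and additionally encode `<` so the data cannot contain a literal `</script>` that terminates the block. A sketch using the standard library (the extra `replace` is a common hardening step, since `json.dumps` does not escape `<` by itself):

```python
# Sketch of context-aware escaping for untrusted data embedded in a
# <script> block: JSON-encode the string, then encode '<' as \u003c.
import json

untrusted = 'Hi"; </script> <h1> Hey, some text! </h1> <script> evil(); x="Bye'
safe = json.dumps(untrusted).replace("<", "\\u003c")
script = "<script> var x = " + safe + "; </script>"
# 'safe' contains no raw quote or '<', so it cannot break out of the
# string or close the script element.
```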

Cross-site scripting

Cross-site scripting (XSS) is a web security vulnerability that enables attackers to inject malicious scripts into content from otherwise benign and trusted websites. This is achieved by manipulating web applications to return user input as part of their pages. XSS attacks occur when an application includes untrusted data without proper validation or escaping, allowing attackers to execute scripts in the victim’s browser context. This can lead to hijacked sessions, defacement of websites, or redirection to malicious sites.

The two main types of XSS are reflected and persistent (stored). Reflected XSS attacks involve crafting a URL that contains the malicious script. When the URL is visited, the script executes within the context of the user’s browser. Persistent XSS, on the other hand, stores the malicious script in the web server (like in a database – for example, user reviews or blogs), and the script is then served to users in the content of a normal page.

XSS can be avoided via input sanitization, which is the process of cleaning and validating user input, ensuring it’s safe for processing by the web application and not executable as code. This often involves escaping special characters, stripping out unwanted script elements, or using secure input handling frameworks to prevent the insertion of untrusted HTML content.
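For HTML contexts, the standard library's `html.escape` performs this escaping: special characters become entities, so injected markup renders as inert text. A minimal sketch:

```python
# Sketch of output escaping for HTML context: html.escape converts
# &, <, >, and quotes into entities so the browser renders them as
# text instead of parsing them as markup.
import html

user_input = '<script>alert("xss")</script>'
html.escape(user_input)
# '&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;'
```

Escaping must match the output context (HTML body, attribute, JavaScript, URL); applying the wrong escaper for the context can still leave an injection open.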

Homograph (homoglyph) attacks

Homograph attacks take advantage of characters that look alike to deceive users. For example, the domain “paypaI.com”, where the final letter of the name is a capital ‘I’ instead of a lowercase ‘l’, mimics “paypal.com” and can be used in phishing scams to make users believe they are going to a valid website.

Unicode is a comprehensive system designed to represent over 128,000 characters, covering almost all of the world’s scripts and symbols, including alphabets like Latin, Greek, Cyrillic, and scripts for languages such as Arabic, Hindi, Chinese, and many more, along with emojis and ancient scripts.

Unicode’s design, which allows visually similar or identical characters from different scripts, poses risks for deception attacks. The introduction of IDNs (Internationalized Domain Names) allows the use of Unicode characters in domain names, which has further facilitated deceptive practices by enabling the creation of domain names that visually mimic legitimate ones but use characters from different scripts. For instance, using a Cyrillic ‘a’ instead of the Latin ‘a’ can mislead users into thinking they are accessing a familiar website. The characters look identical but are different to a DNS service.

Websites like “wikipedia.org” can be mimicked using characters from non-Latin scripts, such as Greek or Cyrillic, to create visually indistinguishable yet technically different URLs, misleading users and potentially leading to phishing or other forms of cyber fraud.
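Python's `unicodedata` module makes the deception easy to demonstrate: two visually identical letters have different Unicode names, and a crude mixed-script check (the helper below is illustrative, not a complete IDN validator) flags suspicious domains:

```python
# Sketch of spotting a homograph: character names reveal that two
# visually identical letters come from different scripts.
import unicodedata

latin_a = "a"      # U+0061
cyrillic_a = "а"   # U+0430 -- looks identical in many fonts

unicodedata.name(latin_a)      # 'LATIN SMALL LETTER A'
unicodedata.name(cyrillic_a)   # 'CYRILLIC SMALL LETTER A'

def scripts_used(label: str) -> set:
    # Crude check: take the first word of each letter's Unicode name,
    # which identifies its script (LATIN, CYRILLIC, GREEK, ...).
    return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

scripts_used("pаypal")   # {'LATIN', 'CYRILLIC'} -- a mixed-script red flag
```

Browsers apply similar (more sophisticated) heuristics before displaying an internationalized domain name, falling back to its punycode (`xn--…`) form when it looks deceptive.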

Tracking via images

The same-origin policy treats images as static content with no authority. It would seem that images should not cause problems. However, an image tag (IMG) can pass parameters to the server, just like any other URL:

<img src="http://evil.com/images/balloons.jpg?extra_information" height="300" width="400"/>

The parameter can be used to notify the server that the image was requested from a specific page. Unlike cookies, which users can sometimes disable, images are rarely blocked from loading.
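On the server side, the query string of the image URL is trivially parsed into tracking data. The parameter names here are made up for illustration:

```python
# Sketch of what a tracking server extracts from a pixel request:
# the query string identifies the page (and possibly the user).
from urllib.parse import urlsplit, parse_qs

img_url = "http://evil.com/images/balloons.jpg?page=checkout&user=visitor42"
parts = urlsplit(img_url)
params = parse_qs(parts.query)
params["page"]   # ['checkout']
params["user"]   # ['visitor42']
```

Combined with the cookie the browser sends alongside the request, the server can tie this page view to a persistent user identifier.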

An image itself can be hidden by setting its size to a single pixel … and even making it invisible:

<img src="https://attacker.com/onebyone.png" height="1" width="1" />

These images are called tracking pixels or spy pixels.

When a browser loads an image:

  • The server that hosts the image is contacted with an HTTP GET request for the content.
  • Any cookies for that server will be sent by the browser.
  • Any extra information that’s part of the image URL will be sent. This information can, for example, identify the website or page that is hosting the content.
  • The server logs the time and IP address that requested the image.
  • The HTTP headers also identify the browser version, operating system, and type of device.

A server can use the image request to identify the specific page and read a cookie to obtain a unique ID for the user. The ID can serve as a key into an object store or database that records every page the user visited, enabling tracking of the user’s visits across different pages.

Last modified April 18, 2024.