Network Security

Core Internet Protocols

Paul Krzyzanowski

April 3, 2022


The Internet was built using a packet switching architecture. This means that all data is broken up into chunks called packets. Each packet contains a header that identifies where it came from and where it’s going.

The Internet was designed to support the interconnection of multiple networks, each of which may use different underlying networking hardware and protocols. The Internet Protocol, IP, is a logical network built on top of these physical networks.

Individual networks under IP are connected by routers, which are computing elements that are each connected to multiple networks. They receive packets on one network and relay them onto another network to get them toward their destination. A packet from your computer will often flow through dozens of networks and routers that you know nothing about on its way to its destination. This poses security concerns since you know nothing about the trustworthiness of those routers and networks.

IP assumes that the underlying networks support packet switching but do not provide reliable communication. IP provides best-effort packet delivery; the network tries to get the packet to the destination but guarantees neither reliable delivery nor in-order delivery of messages. It is up to higher layers of the IP software stack (either TCP or the application) to detect lost packets.

Networking protocol stacks are usually described using the OSI layered model. For the Internet, the layers are:

  1. Physical. This represents the actual hardware: the cables, connectors, voltage levels, modulation techniques, etc.

  2. Data Link. This layer defines the local area network (LAN). In homes and offices, this is typically Ethernet (802.3) or Wi-Fi (802.11). Ethernet and Wi-Fi use the same addressing scheme and were designed to be bridged together to form a single local area network.

  3. Network. The network layer creates a single logical network and routes packets across physical networks. The Internet Protocol (IP) is responsible for this. Two versions are currently deployed: IPv4 and IPv6. IPv4 was first deployed in 1983. It supports 32-bit addresses and we have already run out of IPv4 addresses that can be allocated. IPv6 was created as a successor and uses 128-bit addresses. It was first deployed in 2012 but has been slow to gain adoption in places where IPv4 is in widespread use, such as the U.S., since systems on an IPv4 network would not be able to communicate directly with systems on an IPv6 network.

  4. Transport. The transport layer is responsible for creating logical software endpoints (ports) so that one application can send a stream of data to another via an operating system’s sockets interface. TCP uses sequence numbers, acknowledgement numbers, and retransmission to provide applications with a reliable, connection-oriented, bidirectional communication channel. UDP does not provide reliability and simply sends a packet to a given destination host and port. Higher layers of the protocol stack are handled by applications and the libraries they use.

Data link layer

In an Ethernet network, the data link layer is handled by Ethernet transceivers and Ethernet switches. Security was not a consideration in its design, and several fundamental attacks exist at this layer. Wi-Fi also operates at the data link layer and added encryption on wireless data between the device and access point. Note that Wi-Fi’s encryption is not end-to-end, between hosts, but ends at the access point.

Switch CAM table overflow

Sniff all data on the local area network (LAN).

Ethernet frames are delivered based on their 48-bit MAC address. IP addresses are meaningless to Ethernet transceivers and to switches since IP is handled at higher levels of the network stack. Ethernet was originally designed as a bus-based shared network; all devices on the LAN shared the same wire. Any system could see all the traffic on the Ethernet. This resulted in increased congestion as more hosts were added to the local network.

Ethernet switches alleviated this problem by using a dedicated cable between each host and the switch and extra logic within the switch. The switch routes an Ethernet frame only to the Ethernet port (the connector on the switch) that is connected to the system that contains the desired destination address. This switched behavior isolates communication streams: other hosts can no longer see the messages flowing on the network that are targeted to other systems.

Unlike routers, switches are not programmed with routes. Instead, they learn which computers are on which switch ports by looking at the source MAC addresses of incoming Ethernet frames. An incoming Ethernet frame indicates that the system with that source address is connected to that switch port.

To implement this, a switch contains a switch table (a MAC address table). This table contains entries for known MAC addresses and their interface (the switch port). The switch then uses forwarding and filtering:

When a frame arrives for some destination address D, the switch looks up D in the switch table to find its interface. If D is in the table and on a different port than that of the incoming frame, the switch forwards the frame to that interface, queueing it if necessary.

If D is not found in the table, then the switch assumes it has not yet learned what port that address is associated with, so it forwards the frame to ALL interfaces.

This procedure makes the switch self-learning: the switch table is empty initially and gets populated as the switch inspects source addresses.

A switch has to support extremely rapid lookups in the switch table. For this reason, the table is implemented using content addressable memory (CAM, also known as associative memory). CAM is expensive and switch tables are fixed-size and not huge. The switch will delete less-frequently used entries if it needs to make room for new ones.

The CAM table overflow attack exploits the limited size of this CAM-based switch table. The attacker sends bogus Ethernet frames with random source MAC addresses. Each new address adds an entry to the switch table, eventually filling the table and displacing legitimate entries. With the CAM table full, legitimate traffic will now be broadcast to all links. A host on any port can now see all traffic.
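The self-learning, forwarding, and overflow behavior described above can be sketched as a toy model. All names here are invented for illustration; a real switch implements the table in CAM hardware and evicts less-frequently used entries rather than an arbitrary one.

```python
class ToySwitch:
    """A self-learning switch with a fixed-size MAC address table."""

    def __init__(self, ports, table_size):
        self.ports = set(ports)
        self.table = {}                  # MAC address -> switch port
        self.table_size = table_size

    def receive(self, src_mac, dst_mac, in_port):
        """Handle one frame; return the set of ports it is sent out on."""
        # Self-learning: record which port the source address lives on,
        # evicting an entry if the table is already full.
        if src_mac not in self.table and len(self.table) >= self.table_size:
            self.table.pop(next(iter(self.table)))
        self.table[src_mac] = in_port

        # Forwarding and filtering: a known destination goes out one
        # port; an unknown destination is flooded to every other port.
        if dst_mac in self.table:
            out = self.table[dst_mac]
            return set() if out == in_port else {out}
        return self.ports - {in_port}
```

Once an attacker's stream of bogus source addresses has pushed the legitimate entries out of the table, frames addressed to those hosts are flooded to every port, including the attacker's.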

Countermeasures for CAM table attacks require the use of managed switches that support port security. These switches allow you to configure individual switch ports to limit the number of addresses the table will hold for each switch port.

Another option to prevent this attack is to use a switch that supports the 802.1X protocol. This is a protocol that was created to improve security at the link layer. With 802.1X in place, all traffic coming to a switch port is initially considered to be “unauthorized”. The switch redirects the traffic, regardless of its destination address, to an authentication server.

If the user authenticates successfully, the authentication server then configures a rule in the switch that will allow traffic coming from that user’s MAC address to be accepted by the switch. The port becomes “authorized” for that specific address. This is a common technique that is used to allow users to connect to public access wireless networks.

VLAN hopping (switch spoofing)

Sniff all data from connected virtual local area networks.

Companies often deploy multiple local area networks in their organization to isolate users into groups on separate networks. This isolates broadcast traffic between groups of systems and allows administrators to set up routers and firewalls that can restrict access between these networks. Related users can all be placed on a single LAN. For instance, we might want software developers to be on a physically distinct local area network from the human resources or finance groups. Partitioning different types of employees onto different local area networks is good security practice.

However, users may be relocated within an office and switches may be used inefficiently. Virtual Local Area Networks (VLANs) create multiple logical LANs using a single switch. The network administrator can assign each port on a switch to a specific VLAN. Each VLAN is a separate broadcast domain so that each VLAN acts like a truly separate local area network. Users belonging to one VLAN will never see any traffic from the other; it would have to be routed through an IP router.

Switches may be extended by cascading them with other switches: an Ethernet cable from one switch simply connects to another switch. With VLANs, the connection between switches forms a VLAN trunk and carries traffic from all VLANs to the other switch. To support this behavior, a VLAN trunking protocol was created: the IEEE 802.1Q standard, which defines an extended Ethernet frame format. 802.1Q simply takes a standard Ethernet frame and adds a VLAN tag that identifies the specific VLAN number from which the frame originated.
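The tag itself is a 4-byte insertion. Here is a minimal sketch of how an 802.1Q tag fits into an untagged frame, using Python's struct module (a real frame would also carry a recomputed frame check sequence, which hardware handles):

```python
import struct

def add_vlan_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag into an untagged Ethernet frame.

    The 4-byte tag goes between the source MAC address and the original
    EtherType: the TPID 0x8100, then a 16-bit TCI holding a 3-bit
    priority, a drop-eligible bit (left 0 here), and the 12-bit VLAN ID.
    """
    assert 0 <= vlan_id < 4096
    tci = (priority << 13) | vlan_id
    tag = struct.pack("!HH", 0x8100, tci)
    # dst MAC (6 bytes) + src MAC (6 bytes) | tag | EtherType + payload
    return frame[:12] + tag + frame[12:]
```

An attacker performing switch spoofing does the reverse: it reads and writes frames in exactly this format, choosing whatever VLAN ID it likes.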

A VLAN hopping attack employs switch spoofing: an attacker’s computer sends and receives 802.1Q frames, and the switch will believe that the connected computer is another switch and consider it to be a member of all VLANs on the system.

Depending on switch tables and forwarding policies, the attacker might not receive all the traffic, but the attacker can make that happen by performing a CAM overflow on the switch. The attacker’s computer will receive all broadcast messages, which often come from services advertising their presence. The attacker can also create and inject Ethernet frames onto any VLAN. Recall that all higher-level protocols, such as UDP, are encapsulated within Ethernet frames.

Defending against this attack requires a managed switch where an administrator can disable unused ports and associate them with an unused VLAN. Auto-trunking should be disabled on the switch so that each port cannot become a trunk. Instead, trunk ports must be configured explicitly for the ports that have legitimate connections to other switches.

ARP cache poisoning

Redirect IP packets by changing the IP address to MAC address mapping.

Recall that IP is a logical network that sits on top of physical networks. If we are on an Ethernet network and need to send an IP datagram, that IP datagram needs to be encapsulated within an Ethernet frame. The Ethernet frame has to contain a destination MAC address that corresponds to the destination machine or, if the destination address is on a different LAN, the MAC address of a router. Before an operating system can send an IP packet, it needs to figure out what MAC address corresponds to that IP address.

There is no relationship between an IP and Ethernet MAC address. To find the MAC address when given an IP address, a system uses the Address Resolution Protocol, ARP. The sending computer creates an Ethernet frame that contains an ARP message with the IP address it wants to query. This ARP message is then broadcast: all network adapters on the LAN receive the message. If a computer receives this message and sees that its own IP address matches the address in the query, it then sends back an ARP response. This response identifies the MAC address of the system that owns that IP address.
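To make the message concrete, here is a sketch of the 28-byte body of an ARP who-has request for IPv4 over Ethernet, following the standard ARP field layout. Building the bytes requires no privileges; actually transmitting them would require raw network access, which this sketch does not attempt.

```python
import socket
import struct

def arp_request(sender_mac: bytes, sender_ip: str, target_ip: str) -> bytes:
    """Build the payload of an ARP request. The target MAC field is
    zeroed because it is the unknown being asked about; the enclosing
    Ethernet frame would be addressed to ff:ff:ff:ff:ff:ff (broadcast)."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                            # hardware type: Ethernet
        0x0800,                       # protocol type: IPv4
        6, 4,                         # MAC and IPv4 address lengths
        1,                            # operation: 1 = request, 2 = reply
        sender_mac,
        socket.inet_aton(sender_ip),
        b"\x00" * 6,                  # target MAC: unknown
        socket.inet_aton(target_ip),
    )
```

A reply uses the same layout with the operation field set to 2 and the answering machine's MAC address filled in; nothing in the format authenticates who sent it.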

To avoid the overhead of issuing this query each time the system has to use the IP address, the operating system maintains an ARP cache that stores recently used addresses. To further improve performance, hosts cache any ARP replies they see, even if they did not originate them. This is done on the assumption that many systems use the same set of IP addresses and the overhead of making an ARP query is substantial. Along the same lines, a computer can send an ARP response even if nobody sent a request. This is called a gratuitous ARP and is often sent by computers when they start up as a way to give other systems on the LAN the IP-to-MAC address mapping without their having to ask for it later.

Note that there is no way to authenticate that a response is legitimate. The asking host does not have any idea of what MAC address is associated with the IP address. Hence, it cannot tell whether a host that responds really has that IP address or is an imposter.

An ARP cache poisoning attack is one where an attacker creates fake ARP replies that contain the attacker’s MAC address but the target’s IP address. This will direct any traffic meant for the target over to the attacker. It enables man-in-the-middle or denial of service attacks since the real host will not be receiving that IP traffic. Because other systems pick up ARP replies, the ARP cache poisoning reply will affect all the systems on the LAN.

There are several defenses against ARP cache poisoning. One defense is to ignore replies that are not associated with requests. However, you need to hope that the reply you get is a legitimate one since an attacker may respond more quickly or perhaps launch a denial of service attack against the legitimate host and then respond.

Another defense is to give up on ARP broadcasts and simply use static ARP entries. This works but can be an administrative nightmare, since someone will have to maintain the list of IP-to-MAC address mappings and update it whenever machines are added to the environment.

Finally, one can enable something called Dynamic ARP Inspection. This essentially builds a local ARP table by using DHCP (Dynamic Host Configuration Protocol) Snooping data as well as static ARP entries. Any ARP responses will be validated against DHCP Snooping database information or static ARP entries. The DHCP snooping database is populated whenever systems first get configured onto the network. This assumes that the environment uses DHCP instead of fixed IP address assignments.

DHCP server spoofing

Configure new devices on the LAN with your choice of DNS address, router address, etc.

When a computer joins a network, it needs to be configured to use the Internet Protocol (IP) on that network. This is most often done automatically via DHCP, the Dynamic Host Configuration Protocol. It is used in practically every LAN environment and is particularly useful where computers (including phones) join and leave the network regularly, such as Wi-Fi hotspots. Every access point and home gateway provides DHCP server capabilities.

A computer that joins a new network broadcasts a DHCP Discover message. As with ARP, we have the problem that the computer does not know whom to contact for this information. Since it also does not yet have an IP address, it sends the query as an Ethernet broadcast, hoping that it gets a legitimate response.

A DHCP server on the network picks up this request and sends back a response that contains configuration information for this new computer on the network:

  • IP address – the IP address for the system
  • Subnet mask – which bits of the IP address identify the local area network
  • Default router – gateway to which all non-local datagrams will be routed
  • DNS servers – servers that system can query to find IP addresses for a domain name
  • Lease time – how long this configuration is valid

With DHCP spoofing, any system can pretend to be a DHCP server and spoof responses that would normally be sent by a valid DHCP server. The attacker can provide the new system with a legitimate IP address but with a false address for the gateway (the default router). This will cause the computer to route all non-local datagrams to the attacker.

The attacker can provide a false DNS server in the response. This will cause domain name queries to be sent to a server chosen by the attacker, which can give false IP addresses to redirect traffic for chosen domains.

As with ARP cache poisoning, the attacker may launch a denial of service attack against the legitimate DHCP server to keep it from responding or at least delay its responses. If the legitimate server sends its response after the imposter, the new host will simply ignore the response.
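A toy model of the client's side makes the race obvious: the client simply takes whichever offer arrives first. All names and addresses here are invented for illustration.

```python
def choose_offer(offers_in_arrival_order):
    """A DHCP client accepts the first offer it receives; later
    (possibly legitimate) offers for the same request are ignored."""
    return offers_in_arrival_order[0]

# The attacker answers faster than the real server, handing out a
# legitimate-looking IP address but pointing the router and DNS
# entries at the attacker's own machine (10.0.0.66 here).
spoofed = {"ip": "10.0.0.23", "router": "10.0.0.66",
           "dns": "10.0.0.66", "lease": 86400}
legit   = {"ip": "10.0.0.23", "router": "10.0.0.1",
           "dns": "10.0.0.1", "lease": 86400}

config = choose_offer([spoofed, legit])
```

The victim now forwards all non-local traffic, and all DNS queries, through a machine the attacker controls.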

There aren’t many defenses against DHCP spoofing. Some switches (such as those by Cisco and Juniper) support DHCP snooping. This allows an administrator to configure specific switch ports as “trusted” or “untrusted.” Only specific machines, those on trusted ports, will be permitted to send DHCP responses. Any other DHCP responses will be dropped. The switch will also use DHCP data to track client behavior to ensure that hosts use only the IP address assigned to them and that hosts do not generate fake ARP responses.

Network (IP) layer

The Internet Protocol (IP) layer is responsible for getting datagrams (packets) to their destination. It does not provide any guarantees on message ordering or reliable delivery. Datagrams may take different routes through the network and may be dropped by queue overflows in routers.

Source IP address authentication

Anyone can forge the source address of an IP datagram.

One fundamental problem with IP communication is that there is absolutely no source IP address authentication. Clients are expected to use their own source IP address but anybody can override this if they have administrative privileges on their system by using a raw sockets interface.

This enables one to forge messages so that they appear to come from another system. Any software that authenticates requests based on their IP addresses will be at risk.
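To see why forgery is trivial, here is a sketch that builds a minimal IPv4 header with an arbitrary source address, including the Internet checksum. Constructing the bytes requires no privileges; actually sending them is what requires a raw socket and administrative rights.

```python
import socket
import struct

def ip_checksum(header: bytes) -> int:
    """Internet checksum: one's-complement sum of 16-bit words."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    """A minimal 20-byte IPv4 header. Nothing stops the caller from
    putting any address at all into the source field - which is
    exactly the problem."""
    hdr = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0,                   # version 4, header length 5 words
        20 + payload_len,          # total datagram length
        0, 0,                      # identification, flags/fragment
        64,                        # time-to-live
        socket.IPPROTO_UDP,        # protocol of the payload
        0,                         # checksum placeholder
        socket.inet_aton(src),     # the forged source goes here
        socket.inet_aton(dst),
    )
    # Fill in the header checksum (computed with the field zeroed).
    return hdr[:10] + struct.pack("!H", ip_checksum(hdr)) + hdr[12:]
```

A receiver that verifies the checksum over the finished header gets zero, so the forged datagram looks perfectly well-formed; nothing in the header proves who actually sent it.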

Anonymous denial of service

The ability to set an arbitrary source address in an IP datagram can be used for anonymous denial of service attacks. If a system sends a datagram that generates an error, the error will be sent back to the source address that was forged in the query. For example, a datagram sent with a small time-to-live (TTL) value will cause the router at which the TTL reaches zero to respond with an ICMP (Internet Control Message Protocol) Time to Live exceeded message. Error responses will be sent to the forged source IP address, and it is possible to send a vast number of such messages from many machines (by assembling a botnet) across many networks, causing the errors to all target a single system.


Routers are nothing more than computers with multiple network links, often with special-purpose hardware to facilitate the rapid movement of packets across interfaces. They run operating systems and have user interfaces for administration. As with many other devices that people don’t treat as “real” computers, there is a danger that the routers will have simple or even default passwords. Online databases catalog thousands of default passwords for different devices.

Moreover, owners of routers may not be nearly as diligent in keeping the operating system and other software updated as they are with their computers.

Routers can be subject to some of the same attacks as computers. Denial of service (DoS) attacks can keep the router from doing its job. One way this is done is by sending a flood of ICMP datagrams. The Internet Control Message Protocol is typically used to send routing error messages and updates and a huge volume of these can overwhelm a router. Routers may also have input validation bugs and not handle certain improper datagrams correctly.

Route table poisoning is the modification of the router’s routing table either by breaking into a router or by sending route update datagrams over an unauthenticated protocol.

Transport layer (UDP, TCP)

UDP and TCP are transport layer protocols that allow applications to establish communication channels with each other. Each endpoint of a channel is identified by a port number (a 16-bit integer that has nothing to do with Ethernet switch ports). The port number allows the operating system to direct traffic to the proper socket. Hence, both TCP and UDP segments contain not only source and destination addresses but also source and destination ports.

UDP, the User Datagram Protocol, is stateless, connectionless, and unreliable.

As we saw with IP source address forgery, any system can create and send UDP messages with forged source IP addresses. UDP interactions have no concept of sessions as far as the operating system is concerned and do not use sequence numbers, so attackers can inject messages directly without having to take over some session.

TCP, the Transmission Control Protocol, is stateful, connection-oriented, and reliable. Every packet contains a sequence number (a byte offset) and the operating system assembles received packets into their correct order. The receiver also sends acknowledgements so that any missing packets will be retransmitted.

To handle in-order, reliable communication, TCP needs to establish state at both endpoints. It does this through a connection setup process that comprises a three-way handshake.

  1. SYN: The client sends a SYN segment.
    The client selects a random initial sequence number (client_isn). This is the starting sequence number for the segments it will send.

  2. SYN/ACK: The server responds with a SYN/ACK.
    The server receives the SYN segment and now knows that a client wants to connect to it. It allocates memory to store the connection state and to store received, possibly out-of-sequence segments. The server also generates an initial sequence number (server_isn) for its side of the data stream. This is also a random number. The response also contains an acknowledgement to the client’s SYN request with the value client_isn+1.

  3. ACK: The client sends a final acknowledgement.
    The client acknowledges receipt of the server’s SYN/ACK message by sending a final ACK message that contains an acknowledgement of server_isn+1.
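The sequence-number bookkeeping of the three steps above can be sketched as a toy model that tracks only the initial sequence numbers and acknowledgements (arithmetic is modulo 2^32, as it is for real TCP sequence numbers):

```python
import random

def handshake():
    """Model the three-way handshake's sequence/ack numbers only."""
    # 1. SYN: the client picks a random initial sequence number.
    client_isn = random.getrandbits(32)

    # 2. SYN/ACK: the server picks its own random ISN and
    #    acknowledges the client's SYN with client_isn + 1.
    server_isn = random.getrandbits(32)
    syn_ack = {"seq": server_isn, "ack": (client_isn + 1) % 2**32}

    # 3. ACK: the client acknowledges the server's ISN + 1.
    ack = {"ack": (server_isn + 1) % 2**32}

    return client_isn, server_isn, syn_ack, ack
```

At this point both sides hold matching state, which is exactly the state a SYN flood tricks the server into allocating without ever completing step 3.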

Note that the initial sequence numbers are random rather than starting at zero as one might expect. There are two reasons for this.

The primary reason is that message delivery times on an IP network are unpredictable and it is possible that a recently-closed connection may receive delayed messages, confusing the server on the state of that connection.

The security-sensitive reason is that if sequence numbers were predictable then it would be quite easy to launch a sequence number prediction attack where an attacker would be able to guess at likely sequence numbers on a connection and send masqueraded packets that will appear to be part of the data stream. Random sequence numbers do not make the problem go away but make it more challenging to launch the attack, particularly if the attacker does not have the ability to see traffic on the network.

SYN flooding

In the second step of the three-way handshake, the server is informed that a client would like to connect and allocates memory to manage this new connection. Given that kernel memory is a finite resource, the operating system will allocate only a finite amount of TCP buffers in its TCP queue. After that, it will refuse to accept any new connections.

In the SYN flooding attack, the attacker sends a large number of SYN segments to the target. These SYN messages contain a forged source address of an unreachable host, so the target’s SYN/ACK responses never get delivered anywhere. The handshake is never completed but the operating system has allocated resources for this connection. There is a window of time before the server times out on waiting for a response and cleans up the memory used by these pending connections. Meanwhile, all TCP buffers have been allocated and the operating system refuses to accept any more TCP connections, even if they come from a legitimate source. This window of time can usually be configured. Its default value is 10 seconds on Windows systems.

SYN flooding attacks cannot be prevented completely. One way of lessening the impact of these attacks is the use of SYN cookies. With SYN cookies, the server does not allocate memory for buffers and TCP state when a SYN segment is received. It responds with a SYN/ACK that contains an initial sequence number created as a hash of several known values:

	hash(src_addr, dest_addr, src_port, dest_port, SECRET)

The SECRET is not shared with anyone; it is local to the operating system. When (if) the final ACK comes back from a legitimate client, the server will need to validate the acknowledgement number. Normally this requires comparing the number to the stored server initial sequence number plus 1. We did not allocate space to store this value, but we can recompute the number by regenerating the hash, adding one, and comparing it to the acknowledgement number in the message. If it is valid, the kernel believes it was not the victim of a SYN flooding attack and allocates the resources necessary for managing the connection.
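A minimal sketch of the idea, assuming SHA-256 as the hash (real implementations also fold in a coarse timestamp and an encoding of the client's maximum segment size, which this sketch omits):

```python
import hashlib
import struct

SECRET = b"local kernel secret"   # never leaves this machine

def syn_cookie(src_addr: str, dst_addr: str,
               src_port: int, dst_port: int) -> int:
    """Derive the server's initial sequence number from the
    connection 4-tuple and a local secret instead of storing it."""
    h = hashlib.sha256()
    h.update(src_addr.encode() + dst_addr.encode())
    h.update(struct.pack("!HH", src_port, dst_port))
    h.update(SECRET)
    return int.from_bytes(h.digest()[:4], "big")   # 32-bit ISN

def ack_is_valid(src_addr: str, dst_addr: str,
                 src_port: int, dst_port: int, ack_number: int) -> bool:
    """On the final ACK, recompute the cookie rather than look up
    stored state: the acknowledgement must equal the cookie plus one."""
    expected = (syn_cookie(src_addr, dst_addr, src_port, dst_port) + 1) % 2**32
    return ack_number == expected
```

Because no per-connection state exists until a valid ACK arrives, a flood of SYNs with unreachable source addresses consumes no kernel memory.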

TCP Reset

A somewhat simple attack is to send a RESET (RST) segment to an open TCP socket. If the sequence number in the RST segment falls within the expected range, the connection will close. Hence, the tricky part is getting an acceptable sequence number to make it look like the RESET is part of the genuine message stream.

Sequence numbers are 32-bit values. The chance of successfully picking the correct sequence number is tiny: 1 in 2^32, or approximately one in four billion. However, many systems will accept a large range of sequence numbers that are approximately in the correct range, to account for the fact that packets may arrive out of order and shouldn’t necessarily be rejected just because the sequence number is not exactly correct. This can reduce the search space tremendously, and an attacker can send a flood of RST segments with varying sequence numbers and a forged source address until the connection is broken.
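A sketch of the receiver's acceptance test shows why the window size matters. This assumes, for illustration, that any sequence number inside the current receive window is accepted; modern stacks following RFC 5961 are stricter, requiring an exact match or answering with a challenge ACK.

```python
def rst_accepted(rst_seq: int, rcv_nxt: int, window: int) -> bool:
    """Accept a RST whose sequence number falls anywhere inside the
    receive window [rcv_nxt, rcv_nxt + window). Sequence arithmetic
    is modulo 2^32, so the subtraction handles wraparound."""
    return (rst_seq - rcv_nxt) % 2**32 < window

# With a 64 KB window, a blind attacker needs only about
# 2^32 / 2^16 = 65536 guesses to land one RST inside the window,
# rather than the full 2^32 guesses an exact match would require.
```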

Routing protocols

The Internet was designed to connect multiple independently managed networks, each of which may use different hardware. Routers connect local area networks as well as wide area networks.

A collection of consecutive IP addresses (those sharing a common prefix, the most significant bits of the address), along with the underlying routers and network infrastructure, all managed as one administrative entity, is called an Autonomous System (AS). For example, the part of the Internet managed by Comcast is an autonomous system (Comcast actually has 42 of them in different regions). The networks managed by Verizon constitute a few autonomous systems as well. For purposes of our discussion, think of ASes as ISPs or large data centers such as Google or Amazon. Incidentally, Rutgers is an Autonomous System: AS46, owning the range of IP addresses starting with 128.6. This is usually expressed as 128.6.0.0/16, meaning that the first 16 bits of the address identify the range of addresses in the Rutgers network.

Routers that are connected to routers in other ASes use an Exterior Gateway Protocol (EGP) called the Border Gateway Protocol, or BGP. With BGP, each autonomous system exchanges routing and reachability information with the autonomous systems with which it connects. For example, Comcast can tell Verizon what parts of the Internet it can reach. BGP uses a path vector routing algorithm (a variant of distance vector routing that records the sequence of ASes along each route) to enable the routers to determine the most efficient path to use to send packets destined for other networks. Unless an administrator explicitly configures a route, BGP will generally pick the shortest route.

BGP Hijacking

So what are the security problems with BGP? Edge routers in an autonomous system use BGP to send route advertisements to routers of neighboring autonomous systems. An advertisement is a list of IP address prefixes the AS can reach (shorter prefixes mean a bigger range of addresses) and the distance (number of hops) to each group of systems.

These messages are sent over a TCP connection between the routers with no authentication, integrity checks, or encryption. With BGP hijacking, a malicious party that has access to the network link or a connected router can inject advertisements for arbitrary routes. The information will propagate throughout the Internet and can cause routers throughout the Internet to send IP datagrams to the attacker, in the belief that this is the shortest path to the destination.

A BGP attack can be used for eavesdropping (direct network traffic to a specific network by telling everyone that you’re offering a really short path) or a denial of service (DoS) attack (make parts of the network unreachable by redirecting traffic and then dropping it). There are currently close to 33,000 autonomous systems and most have multiple administrators. We live in the hope that none of them are malicious, cannot be bribed or blackmailed, and that all routers are properly configured and properly secured.

It is difficult to change BGP since tens of thousands of independent entities use it worldwide. Two partial solutions to this problem emerged. The Resource Public Key Infrastructure (RPKI) framework simply has each AS get an X.509 digital certificate from a trusted entity (the Regional Internet Registry). Each AS signs its list of route advertisements with its private key and any other AS can validate that list of advertisements using the AS’s certificate.

An alternate, but related, solution is BGPsec, which is an IETF standard. Instead of signing an individual AS’s routes, every BGP message between ASes is signed.

Both solutions require every AS to participate. If some AS is willing to accept untrusted route advertisements and relays them to other ASes as signed messages, then the integrity guarantee is meaningless. Moreover, most BGP hijacking incidents took place because legitimate system administrators misconfigured route advertisements, whether accidentally or on purpose. They were not the actions of attackers that hacked into a router.

A high profile BGP attack occurred against YouTube in 2008. Pakistan Telecom received a censorship order from the Ministry of Information Technology and Telecom to block YouTube traffic to the country. The company sent spoofed BGP messages claiming to offer the best route for the range of IP addresses used by YouTube. It did this by using a longer address prefix than the one advertised by YouTube (longer prefix = fewer addresses). Because a longer prefix is more specific, BGP gives it a higher priority. This logic makes it easy for an AS to offer different routes to small parts of its address space.

YouTube is its own AS and announced its network of computers with a 22-bit prefix. Pakistan Telecom advertised the same set of IP addresses with a 24-bit prefix. A longer prefix covers fewer addresses and thus refers to fewer computers, so BGP gave Pakistan Telecom’s routes a higher routing priority. This way, Pakistan Telecom hijacked those routes.
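The longest-prefix-match rule that made the hijack work can be demonstrated with Python's ipaddress module, using the commonly reported prefixes from the 2008 incident (208.65.152.0/22 for YouTube's announcement and 208.65.153.0/24 for Pakistan Telecom's):

```python
import ipaddress

def best_route(dest: str, routes):
    """Pick the route with the longest (most specific) matching
    prefix, as routers do when choosing among BGP-learned routes."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    return max(matches, key=lambda net: net.prefixlen)

youtube  = ipaddress.ip_network("208.65.152.0/22")   # the legitimate /22
hijacker = ipaddress.ip_network("208.65.153.0/24")   # the bogus /24

# Any address covered by both prefixes is routed toward the /24,
# because 24 matching bits beat 22.
```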

Within minutes, routers worldwide were directing their YouTube requests to Pakistan Telecom, which would simply drop them. YouTube tried countermeasures, such as advertising even more specific routes, including /26 networks, each covering a block of 64 addresses. The AS to which Pakistan Telecom was connected was also reconfigured to stop relaying the routes advertised by Pakistan Telecom, but it took about two hours before routes were restored.

Domain Name System (DNS)

The Domain Name System (DNS) is a tree-structured hierarchical service that maps Internet domain names to IP addresses. A user’s computer runs the DNS protocol via a program known as a DNS stub resolver. It first checks a local file for specific preconfigured name-to-address mappings. Then it checks its cache of previously-found mappings. Finally, it contacts an external DNS resolver, which is usually located at the ISP or is run as a public service, such as Google Public DNS, Cloudflare DNS, or OpenDNS.

We trust that the name-to-address mapping is legitimate. Web browsers, for instance, rely on this to enforce their same-origin policy, which involves validating content based on the domain name it comes from rather than its IP address.

However, DNS queries and responses are sent using UDP with no authentication or integrity checks. The only validation is that each DNS query contains a Query ID (QID). A DNS response must have a matching QID so that the client can match it with the query it issued. These responses can be intercepted and modified or simply forged. Malicious responses can return a different IP address that will direct IP traffic to different hosts.
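A toy resolver illustrates how thin this validation is: a response is matched to a query by its 16-bit QID alone, and the first match wins. All names here are invented for illustration; the addresses come from documentation ranges.

```python
import random

class ToyResolver:
    """Matches DNS responses to queries by QID only, with no check
    of where the response actually came from."""

    def __init__(self):
        self.pending = {}                 # QID -> name being resolved

    def send_query(self, name: str) -> int:
        qid = random.getrandbits(16)      # 16-bit Query ID
        self.pending[qid] = name
        return qid

    def on_response(self, qid: int, name: str, addr: str):
        """Accept the first response with a matching QID for the
        queried name; anything else is silently dropped."""
        if self.pending.get(qid) == name:
            del self.pending[qid]
            return (name, addr)           # this becomes the cached answer
        return None
```

An attacker who can observe the query (or guess the QID) and answer before the real server wins the race; the legitimate response arrives later and is ignored because the query is no longer pending.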

Pharming attack

A pharming attack is an attack on the configuration information maintained by a DNS server: either modifying the information used by the local DNS resolver or modifying that of a remote DNS server. By changing the name-to-IP-address mapping, an attacker can cause software to send packets to the wrong system.

The most direct form of a pharming attack is to modify the local hosts file. This is the file (/etc/hosts on Linux, BSD, and macOS systems; c:\Windows\System32\Drivers\etc\hosts on Windows) that contains mappings between domain names and IP addresses. If an entry is found here, the system will not bother checking a remote DNS server.
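The hosts file format is simple: an IP address followed by one or more names, with `#` starting a comment. A small sketch of how such a file is parsed, with a hypothetical poisoned entry (the names and addresses are illustrative):

```python
def parse_hosts(text):
    """Parse /etc/hosts-style lines into a name -> IP mapping."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            mapping[name.lower()] = ip
    return mapping

# A poisoned entry silently redirects a well-known name before any
# DNS server is ever consulted:
hosts = parse_hosts("""
127.0.0.1     localhost
198.51.100.7  www.example.com   # attacker-controlled address
""")
print(hosts["www.example.com"])  # 198.51.100.7
```

Because the hosts file is consulted before any network query, a single modified line overrides DNS entirely for that name.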

Alternatively, malware may modify the DNS server settings on a system so that the system would contact an attacker’s DNS server, which can then provide the wrong IP address for certain domain names.

DNS cache poisoning (DNS spoofing attack)

DNS queries first check the local host’s DNS cache to see if the results of a past query have been cached. This yields a huge improvement in performance since a network query can be avoided. If the cached name-to-address mapping is not valid, then the wrong IP address is returned to the program that asked for it.

Modifying this cached mapping is called DNS cache poisoning, also known as DNS spoofing. In the general case, DNS cache poisoning refers to any mechanism where an attacker is able to provide malicious responses to DNS queries.

For instance, if an attacker can install malware that can inspect Ethernet packets on the network, the malware can detect DNS queries and issue forged responses. The response's source address can even be forged to make it appear to come from a legitimate server. The local DNS resolver will accept the data because there is no way to verify whether it is legitimate.

Here is another way a browser-based DNS cache poisoning attack can be performed via JavaScript on a malicious website. The attack takes advantage of the fact that a DNS response for a subdomain, such as a.bank.com, can contain information about a new DNS server for the entire bank.com domain. The goal of the attacker is to redirect requests for bank.com, even if the IP address for the domain is already cached in the system.

The browser requests access to a legitimate site but with an invalid subdomain: for example, a.bank.com. Because the system will not have the address of a.bank.com cached, it sends a DNS query to an external DNS resolver using the DNS protocol.

The DNS query includes a query ID (QID), x1. At the same time that the request for a.bank.com is made, JavaScript launches an attacker thread that sends 256 responses with random QIDs {y1, y2, y3, …}. Each of these DNS responses tells the server that the DNS server for bank.com is at the attacker's IP address.

If one of these responses happens to have a matching QID, the host system will accept it as truth that all future queries for anything at bank.com should be directed to the name server run by the attacker. If none of the responses work, the script can try again with a different subdomain, such as b.bank.com. The attack might take several minutes, but there is a high likelihood that it will eventually succeed.

Summary: An attacker can run a local DNS server that will attempt to provide spoofed DNS responses to legitimate domain name lookup requests. If the query ID numbers of the fake response match those of a legitimate query (trial and error), the victim will get the wrong IP address, which will redirect legitimate requests to an attacker’s service.
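The odds behind this trial-and-error attack are easy to work out. Each burst of 256 forged responses has to hit one 16-bit QID, and failed attempts are independent, so success accumulates quickly over repeated tries (the attempt counts below are illustrative):

```python
# Each forged burst sends 256 responses against a 2^16 QID space,
# so a single attempt succeeds with probability 256/65536 = 1/256.
p_single = 256 / 2**16

def p_success(n, p=p_single):
    """Probability of at least one success in n independent attempts."""
    return 1 - (1 - p) ** n

print(round(p_success(1), 6))     # 0.003906 — one chance in 256 per burst
print(round(p_success(256), 3))   # ~0.633 after 256 bursts
print(round(p_success(1000), 3))  # ~0.98 after 1000 bursts
```

With each burst taking a fraction of a second, an attack that runs for a few minutes is very likely to land a matching QID at least once.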

DNS cache poisoning defenses

Several defenses can prevent this form of attack. The first two we discuss require non-standard actions that will need to be coded into the system.

Randomized source port

We can randomize the source port number of the query. Since the attacker does not get to see the query, it will not know where to send the bogus responses. There are 2^16 (65,536) possible ports to try.
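The effect of port randomization is multiplicative: the attacker must now guess both the QID and the source port. A quick back-of-the-envelope calculation:

```python
# With a fixed, known source port, the attacker only has to match the QID:
qid_space = 2**16                       # 65,536 possibilities

# Randomizing the source port multiplies the search space by another 2^16:
combined_space = 2**16 * 2**16          # 2^32 (QID, port) pairs

# A burst of 256 forged responses now succeeds with probability:
p_burst = 256 / combined_space
print(combined_space)  # 4294967296 — about 4 billion combinations
print(p_burst)         # roughly 6e-08 per burst, versus 1/256 before
```

A per-burst success chance of about one in 16 million makes the brute-force race impractical in any reasonable time.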

Double queries

The second defense is to force all DNS queries to be issued twice. The attacker will have to guess a 16-bit query ID twice in a row and the chances of doing that successfully are infinitesimally small.
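The arithmetic behind "infinitesimally small" is straightforward, since the two queries carry independently chosen QIDs:

```python
# Guessing one 16-bit QID by luck: 1 in 65,536.
p_one = 1 / 2**16

# With duplicated queries, the attacker must guess both QIDs correctly:
p_both = p_one ** 2
print(p_both)  # 1/2^32, roughly 2.3e-10 — about 1 in 4.3 billion
```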

DNS over TCP

We can make these attacks far more difficult by using DNS over TCP rather than UDP. Inserting a message into a TCP session is much more difficult than just sending a UDP packet since you need to get the correct sequence numbers as well as source address and port numbers. You also need to have access to a raw sockets interface to create a masqueraded TCP segment.

DNS servers can be configured to use either or both protocols. TCP is often avoided because it creates much higher latency for processing queries and results in higher overhead at the DNS server.


DNSSEC

The strongest solution is to use a more secure version of the DNS protocol. DNSSEC, which stands for Domain Name System Security Extensions, was created to allow a DNS server to provide authenticated, signed responses to queries.

Every response contains a digital signature created with the domain zone owner's private key. For instance, Rutgers would have a private key, and responses to queries for anything under rutgers.edu would be accompanied by a signature created with Rutgers' private key. This authenticates the origin of the data and ensures its integrity: that the data has not been modified after signing.

The receiver needs to validate the signature with a public key. Public keys are trusted because they are distributed in a manner similar to X.509 certificates: each zone's public key is signed by its parent zone. For example, the public key for rutgers.edu would be signed with the private key of the owner of the .edu domain, EDUCAUSE. Everyone needs the root zone's public key to verify this chain of trust.
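The chain-of-trust validation can be sketched as a toy model. Note the big simplification: HMAC stands in for real public-key signatures here (actual DNSSEC uses asymmetric keys and DNSKEY, DS, and RRSIG records), and the zone names and keys are invented for illustration:

```python
import hmac
import hashlib

def sign(key, data):
    # Stand-in for a public-key signature; real DNSSEC uses asymmetric crypto.
    return hmac.new(key, data, hashlib.sha256).digest()

root_key = b"root-zone-key"        # the trust anchor everyone is assumed to hold
edu_key = b"edu-zone-key"
rutgers_key = b"rutgers-zone-key"

# Each parent zone signs its child's key, forming a chain from the root down:
chain = [
    (b"edu", edu_key, sign(root_key, edu_key)),
    (b"rutgers.edu", rutgers_key, sign(edu_key, rutgers_key)),
]

def validate(chain, trust_anchor):
    """Walk the chain top-down, checking each key against its parent's signature."""
    parent_key = trust_anchor
    for name, key, sig in chain:
        if not hmac.compare_digest(sig, sign(parent_key, key)):
            return False
        parent_key = key
    return True

print(validate(chain, root_key))  # True — every link verifies from the root
```

If any key in the chain is swapped out, or the verifier starts from the wrong trust anchor, validation fails, which is exactly the property that stops a forged response from being accepted.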

DNSSEC has been around since 2008 and is in use, but widespread adoption has been slow. It is difficult to overcome industry inertia and the lack of desire to update well-established protocols. Deployment also requires agreements between various service providers and vendors. Administrators can be reluctant to use it because it is more compute-intensive and results in larger data packets.

DNS Rebinding

Web application security is based on the same-origin policy. This restricts the resources that JavaScript can access. Browser scripts can access cookies and other data on pages only if they share the same origin. The underlying assumption is that resolving a domain name takes you to the correct server.

The DNS rebinding attack allows JavaScript code on a malicious web page to access private IP addresses in the victim’s network.

The attacker configures the DNS entry for a domain name to have a short time-to-live (TTL) value. When the victim's browser visits the page and downloads JavaScript from that site, that JavaScript code is allowed to interact with the domain under the same-origin policy. However, right after the user's browser downloads the script, the attacker reconfigures the DNS server so that future queries will return an address in the user's internal network. The JavaScript code can then try to request resources from that system since, as far as the browser is concerned, the origin is the same because the domain name has not changed. The attacker may have to guess internal addresses, but most private networks use addresses starting with 192.168 or 10. By continuing to use short TTL values, the JavaScript code can keep triggering DNS queries, allowing the attacker to try different addresses.
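The role of the short TTL can be sketched with a toy resolver cache driven by a manual clock, so the rebinding sequence is deterministic. The names, addresses, and the `Cache` class are all illustrative:

```python
class Cache:
    """Toy DNS cache: entries expire after their TTL, then get re-queried."""
    def __init__(self):
        self.entries = {}   # name -> (ip, expires_at)
        self.now = 0        # manual clock, in seconds

    def lookup(self, name, authoritative):
        ip, expires = self.entries.get(name, (None, -1))
        if expires < self.now:                 # expired or absent: re-query
            ip, ttl = authoritative(name)
            self.entries[name] = (ip, self.now + ttl)
        return ip

# The attacker's server first answers with its own public address (TTL 1s),
# then "rebinds" the same name to an internal address on the next query:
answers = iter([("203.0.113.5", 1), ("192.168.1.10", 1)])
def attacker_dns(name):
    return next(answers)

cache = Cache()
first = cache.lookup("evil.example", attacker_dns)
cache.now += 2                                 # the 1-second TTL expires
second = cache.lookup("evil.example", attacker_dns)

print(first)   # 203.0.113.5  — attacker's server; script loads from here
print(second)  # 192.168.1.10 — internal host, same origin as far as the browser knows
```

Both lookups are for the same name, so any script loaded under that origin can now send requests to the internal address without violating the same-origin policy.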

Summary: short time-to-live values in DNS allow an attacker to change the address of a domain name so that scripts from that domain can now access resources inside the private network.

  1. At the data link layer, packets are called frames.  ↩︎

  2. MAC = Media Access Control and refers to the hardware address of the Ethernet device. Bluetooth, Ethernet, and Wi-Fi (802.11) share the same addressing formats.  ↩︎

  3. A trunk is the term for the connection between two switches.  ↩︎

  4. At the network layer, a packet is referred to as a datagram.  ↩︎

  5. At the transport layer, we refer to packets as segments. Don’t blame me. I don’t know why we need different words for each layer of the protocol stack.  ↩︎

Last modified January 17, 2024.