Commit 51038e8b authored by Ruoyao Liu (刘若尧), committed by Maciej Żenczykowski

Add API to set the sll_protocol on PacketSocket

Problem & Root cause:
mInterfaceBroadcastAddr.sll_protocol is not assigned when the
interface initializes, so sll_protocol is left at its default of 0x0000.
This causes packets to be filtered incorrectly in packet capture,
typically with tcpdump. The existing API is used by DhcpClient, causing
DHCP tx messages not to be recognized properly.
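As a rough illustration only (a C sketch of the pattern, not the actual
DhcpClient/SocketUtils code; the function and variable names are made up),
this is the shape of the broken setup: the packet socket is created with
protocol 0 and bound with sll_protocol left at its zero default.

    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <linux/if_packet.h>
    #include <net/if.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Broken pattern: sll_protocol is never assigned, so it stays 0x0000. */
    int make_packet_socket_broken(const char *ifname) {
        int fd = socket(AF_PACKET, SOCK_RAW, 0);   /* protocol 0 at creation */
        if (fd < 0) return -1;

        struct sockaddr_ll addr;
        memset(&addr, 0, sizeof(addr));
        addr.sll_family  = AF_PACKET;
        addr.sll_ifindex = if_nametoindex(ifname);
        /* addr.sll_protocol is left at 0x0000 -- the root cause */
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }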

Background: inside the kernel, packets carry ethertype metadata
(skb->protocol) and may also carry the real ethertype in the MAC header.

Previously, skb->protocol was inherited from the socket, either from
the protocol passed at socket() creation or from bind(). Both were zero,
so skb->protocol ended up 0, even though the DHCP packets we actually
wrote had the right on-the-wire ethertype populated in the bytes
passed to send().
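
For illustration only (a hedged sketch, not code from this change; the
helper name and parameters are assumptions), the frame bytes handed to
send() already contain the ethertype in the Ethernet header, which is why
the packets look fine on the wire even when skb->protocol is 0.

    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Illustrative only: the bytes given to send() carry the real ethertype
     * in the Ethernet header, independent of the socket's skb->protocol. */
    ssize_t send_ipv4_frame(int fd, const unsigned char dst[ETH_ALEN],
                            const unsigned char src[ETH_ALEN],
                            const void *payload, size_t len) {
        unsigned char frame[ETH_FRAME_LEN];
        struct ethhdr *eth = (struct ethhdr *)frame;

        if (sizeof(*eth) + len > sizeof(frame)) return -1;
        memcpy(eth->h_dest, dst, ETH_ALEN);
        memcpy(eth->h_source, src, ETH_ALEN);
        eth->h_proto = htons(ETH_P_IP);            /* on-the-wire ethertype */
        memcpy(frame + sizeof(*eth), payload, len);
        return send(fd, frame, sizeof(*eth) + len, 0);
    }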

As such, DHCP packets looked correct on the wire, but lacked the
skb->protocol metadata to correctly tag them as IPv4.

This results in 'tc' and packet hooks potentially not triggering
correctly, and can thus result in tcpdump 'ipv4' filters discarding
these packets, leading to confusing/erroneous tcpdump output.

In newer kernels (somewhere around 5.3), if the socket protocol is 0,
the kernel actually parses out the right ethertype from the MAC header
during send().

However, for old kernels we can't rely on this kernel magic, and the
right fix is simply to make sure that the socket's bound protocol is
correctly set to IPv4 [htons(ETH_P_IP)] in the bind() system call.
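
A minimal sketch of the corrected bind, assuming a raw AF_PACKET socket fd
and an interface name (the helper name is hypothetical, not the actual
change): the only difference from the broken pattern above is that
sll_protocol is explicitly set to htons(ETH_P_IP).

    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <linux/if_packet.h>
    #include <net/if.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Fixed pattern: bind the packet socket with the protocol set to IPv4,
     * so older kernels tag transmitted skbs with ETH_P_IP. */
    int bind_packet_socket_ipv4(int fd, const char *ifname) {
        struct sockaddr_ll addr;
        memset(&addr, 0, sizeof(addr));
        addr.sll_family   = AF_PACKET;
        addr.sll_ifindex  = if_nametoindex(ifname);
        addr.sll_protocol = htons(ETH_P_IP);       /* the actual fix */
        return bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    }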

Solution:
  Add a new constructor in SocketUtils to set the protocol parameter.
Bug: 133196453
Test: manual test
Change-Id: I07887b82e0e32aadb0cbb9f930f2b2fa3e277ca9