What You Should Know About Virtual Networking


Learning some networking never hurts.

TUN & TAP

TUN/TAP provides packet reception and transmission for user-space programs. It can be seen as a simple point-to-point or Ethernet device which, instead of receiving packets from a physical medium, receives them from a user-space program, and instead of sending packets via a physical medium, writes them to a user-space program.

The difference between the two:

  • TUN: carries L3 IP packets only
  • TAP: carries raw Ethernet (L2) frames
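
To see the difference in practice, iproute2 can create either type directly; a quick sketch (device names are arbitrary):

$ ip tuntap add dev tun-demo mode tun   # L3 device: no link-layer header
$ ip tuntap add dev tap-demo mode tap   # L2 device: has a MAC address
$ ip link show tap-demo                 # note the link/ether line, which the tun device lacks
$ ip tuntap del dev tun-demo mode tun   # clean up
$ ip tuntap del dev tap-demo mode tap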


Try it

  1. Check the current interfaces
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:85:f2:c7 brd ff:ff:ff:ff:ff:ff
altname enp2s1
inet 172.16.140.134/24 brd 172.16.140.255 scope global dynamic ens33
valid_lft 1666sec preferred_lft 1666sec
inet6 fe80::20c:29ff:fe85:f2c7/64 scope link
valid_lft forever preferred_lft forever
  2. Create a TUN device
$ openvpn --mktun --dev tun0
2024-04-02 16:32:01 Note: --mktun does not support DCO. Creating TUN interface.
2024-04-02 16:32:01 TUN/TAP device tun0 opened
2024-04-02 16:32:01 Persist state set to: ON
  3. Bring it up and assign an IP
$ ip link set tun0 up # on kernels < 2.6.36 this could force the link up; on current Linux it has no effect until a program attaches
$ ip addr add 10.0.0.1/24 dev tun0
$ ip a
3: tun0: <NO-CARRIER,POINTOPOINT,MULTICAST,NOARP,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 500
link/none
inet 10.0.0.1/24 scope global tun0
valid_lft forever preferred_lft forever
  4. Handle traffic arriving on the TUN device

Since no program is attached, the device cannot come up (hence NO-CARRIER). Here we use an open-source tool, tuntap-packet-dumper.

$ git clone https://github.com/sjlongland/tuntap-packet-dumper.git
$ cd tuntap-packet-dumper
$ make
$ ./packetdumper
Assuming tun device

As shown above, the program automatically creates a tun device for us (tun1 here), but without an IP assigned. Let's run the earlier steps again:

$ ip a
8: tun1: <POINTOPOINT,MULTICAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 500
link/none

$ ip link delete tun0 # delete the device we created manually earlier
$ ip link set tun1 up
$ ip addr add 10.0.0.1/24 dev tun1
$ ip a
8: tun1: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
link/none
inet 10.0.0.1/24 scope global tun1
valid_lft forever preferred_lft forever
inet6 fe80::bc01:6a91:6eec:4f32/64 scope link stable-privacy
valid_lft forever preferred_lft forever
  5. Capture on the interface
$ tshark -i tun1
Running as user "root" and group "root". This could be dangerous.
Capturing on 'tun1'
** (tshark:4824) 16:59:18.643942 [Main MESSAGE] -- Capture started.
** (tshark:4824) 16:59:18.644005 [Main MESSAGE] -- File: "/tmp/wireshark_tun1JMT7K2.pcapng"

# ping from another terminal
$ ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
^C
--- 10.0.0.2 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4079ms


# back in the tshark terminal, the echo requests show up:
1 0.000000000 10.0.0.1 -> 10.0.0.2 ICMP 84 Echo (ping) request id=0x2dd3, seq=1/256, ttl=64
2 1.007009130 10.0.0.1 -> 10.0.0.2 ICMP 84 Echo (ping) request id=0x2dd3, seq=2/512, ttl=64
3 2.031034293 10.0.0.1 -> 10.0.0.2 ICMP 84 Echo (ping) request id=0x2dd3, seq=3/768, ttl=64
4 3.055078875 10.0.0.1 -> 10.0.0.2 ICMP 84 Echo (ping) request id=0x2dd3, seq=4/1024, ttl=64
5 4.078980807 10.0.0.1 -> 10.0.0.2 ICMP 84 Echo (ping) request id=0x2dd3, seq=5/1280, ttl=64

Use cases

  1. Since we can tap the raw traffic, a TUN device is a great way to study the TCP/IP protocol suite and implement a TCP stack of our own; this is the foundation of most user-space network stacks.
  2. Another use is as a tunnel: create a tun/tap device on both the local and the remote host, let a program on local intercept traffic via its tun device and forward it to remote, where a program writes the data back out of its own tun device. Together this forms a virtual overlay network; see the sketch below.
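
As a minimal sketch of use case 2, socat can open a TUN device on each end and shuttle the raw IP packets over a TCP connection (the host name and addresses here are placeholders):

# on the remote host: listen, and attach one end of the tunnel to a tun device
$ socat TCP-LISTEN:9000,reuseaddr TUN:10.9.0.2/24,up

# on the local host: connect, and attach the other end
$ socat TCP:remote.example.com:9000 TUN:10.9.0.1/24,up

# the two hosts now share a virtual point-to-point link
$ ping 10.9.0.2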


Linux Namespaces

Linux namespaces are a kernel-level isolation mechanism provided by Linux. Here we only consider network namespaces.


Try it

  1. Create a network namespace

    $ ip netns add <namespace name>
    # create a veth pair
    $ ip link add <veth name> type veth peer name <peer name>
  2. Move an interface into the namespace

    $ ip link set <veth name> netns <namespace name>
  3. Run a command inside the namespace

    $ ip netns exec <namespace name> <command>
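
Putting the three steps together with a veth pair, here is a minimal end-to-end sketch (all names and addresses are arbitrary):

$ ip netns add ns-demo
$ ip link add veth-host type veth peer name veth-ns
$ ip link set veth-ns netns ns-demo           # move one end into the namespace
$ ip addr add 10.10.0.1/24 dev veth-host      # configure the host side
$ ip link set veth-host up
$ ip netns exec ns-demo ip addr add 10.10.0.2/24 dev veth-ns
$ ip netns exec ns-demo ip link set veth-ns up
$ ping -c 1 10.10.0.2                         # traffic now crosses the namespace boundary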

Use cases

Namespaces are mainly used here to isolate network devices so they don't interfere with one another.

Bridge

A Linux bridge, just like a physical switch, connects multiple network devices together. Note that a switch is an L2 device: it forwards frames by MAC address, and needs to support features such as STP.

Try it

  1. Create a bridge device

    # ip link add <bridge name> type bridge
    $ ip link add br0 type bridge

  2. Attach interfaces to the bridge

$ ip link set dev ens33 master br0     # attach ens33 to br0
$ ip link set dev ens34 master br0     # attach ens34 to br0
  3. Bring the bridge up
    $ ip link set dev br0 up               # bring br0 up; the network should now be connected

Use cases

A bridge, in essence, connects otherwise unconnected interfaces together.

Why a bridge sometimes needs an IP

The point of assigning an IP is mainly to let the bridge act as a gateway. Devices behind the bridge talk to each other by MAC address, so if we want to route traffic into or out of the bridge's subnet, the bridge itself needs an IP address.
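
A short sketch of that gateway pattern (addresses are arbitrary, matching the demo later in this post):

$ ip addr add 10.1.0.100/24 dev br0           # the bridge becomes the subnet's gateway
# inside each attached namespace, point the default route at the bridge
$ ip route add default via 10.1.0.100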

Linux Veth pair

A veth device is built as a pair of connected virtual Ethernet interfaces; think of it as a virtual patch cable: whatever goes in one end comes out the other.

A veth pair is a virtual network interface that always comes in twos: one end attaches to the network protocol stack, while the two ends are wired to each other.


Try it

  1. Create a veth pair

    # ip link add <veth name> type veth peer name <peer name>
    $ ip link add veth0 type veth peer name veth1

    $ ip a
    9: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 32:c1:1e:0e:38:dc brd ff:ff:ff:ff:ff:ff
    10: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether aa:13:47:de:c7:54 brd ff:ff:ff:ff:ff:ff
  2. Assign IPs to the veth devices

    $ ip addr add 10.0.0.1/32 dev veth0
    $ ip link set veth0 up
    $ ip addr add 10.0.0.2/32 dev veth1
    $ ip link set veth1 up
    $ ip a
    11: veth1@veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:c1:1e:0e:38:dc brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/32 scope global veth1
    valid_lft forever preferred_lft forever
    inet6 fe80::30c1:1eff:fe0e:38dc/64 scope link
    valid_lft forever preferred_lft forever
    12: veth0@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether aa:13:47:de:c7:54 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/32 scope global veth0
    valid_lft forever preferred_lft forever
    inet6 fe80::a813:47ff:fede:c754/64 scope link
    valid_lft forever preferred_lft forever

Use cases

Combining the bridge, namespaces, and veth pairs above, we can build a container network. We will try to demonstrate that here, drawing on 通过实验学习 Linux VETH 和 Bridge.

Container Network Demo

Network topology

+----------------------+
|                      |   iptables   +----------+
|  br01 10.1.0.100/24  +<------------>+  ens160  |
|                      |              |  172.x   |
+----+------------+----+              +----------+
     |            |
+----+------+  +--+---------+
| br-veth01 |  | br-veth02  |
+----+------+  +--+---------+
     |            |
+----+--------+ +-+------------+
| ns01        | | ns02         |
|   veth01    | |    veth02    |
|   10.1.0.1  | |    10.1.0.2  |
+-------------+ +--------------+

Try it

Initial environment

$ ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:85:f2:c7 brd ff:ff:ff:ff:ff:ff
altname enp2s1
inet 172.16.140.134/24 brd 172.16.140.255 scope global dynamic ens33
valid_lft 1248sec preferred_lft 1248sec
inet6 fe80::20c:29ff:fe85:f2c7/64 scope link
valid_lft forever preferred_lft forever

Prepare the bridge

$ brctl addbr br01
$ ip link set dev br01 up
$ ifconfig br01 10.1.0.100/24 up
$ ip a

3: br01: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether b6:9e:a2:c7:5f:76 brd ff:ff:ff:ff:ff:ff
inet 10.1.0.100/24 brd 10.1.0.255 scope global br01
valid_lft forever preferred_lft forever
inet6 fe80::b49e:a2ff:fec7:5f76/64 scope link

Create the namespaces

$ ip netns add ns01
$ ip netns add ns02
$ ip netns
ns02
ns01

Create the veth pairs & assign them to the namespaces

$ ip link add veth01 type veth peer name br-veth01
$ ip link add veth02 type veth peer name br-veth02
$ ip link

4: br-veth01@veth01: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether fe:35:5f:75:ba:cf brd ff:ff:ff:ff:ff:ff
5: veth01@br-veth01: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 42:24:37:16:8e:9a brd ff:ff:ff:ff:ff:ff
6: br-veth02@veth02: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 2a:81:49:42:64:3f brd ff:ff:ff:ff:ff:ff
7: veth02@br-veth02: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether f2:70:e1:1a:2d:d5 brd ff:ff:ff:ff:ff:ff

Attach one end of each pair to the bridge

$ brctl addif br01 br-veth01
$ brctl addif br01 br-veth02
$ brctl show
bridge name     bridge id               STP enabled     interfaces
br01            8000.b69ea2c75f76       no              br-veth01
                                                        br-veth02

Bring the interfaces up

$ ip link set dev br-veth01 up
$ ip link set dev br-veth02 up

Put the other ends into the namespaces

$ ip link set veth01 netns ns01
$ ip link set veth02 netns ns02

$ ip netns exec ns01 ip a
5: veth01@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 42:24:37:16:8e:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0

Assign IP addresses and the gateway

$ ip netns exec ns01 ip link set dev veth01 up
$ ip netns exec ns01 ifconfig veth01 10.1.0.1/24 up
# default route so everything else goes via the bridge; without it, destinations outside 10.1.0.0/24 are unreachable
$ ip netns exec ns01 ip route add default via 10.1.0.100


$ ip netns exec ns02 ip link set dev veth02 up
$ ip netns exec ns02 ifconfig veth02 10.1.0.2/24 up
# default route so everything else goes via the bridge; without it, destinations outside 10.1.0.0/24 are unreachable
$ ip netns exec ns02 ip route add default via 10.1.0.100
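
As a quick sanity check, the routing table inside each namespace should now contain that default route:

$ ip netns exec ns01 ip route    # expect: default via 10.1.0.100 dev veth01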

Ping and capture

# capture on the bridge
$ tcpdump -i br01 -nn

# ping from ns01 to ns02
$ ip netns exec ns01 ping 10.1.0.2
PING 10.1.0.2 (10.1.0.2) 56(84) bytes of data.
64 bytes from 10.1.0.2: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 10.1.0.2: icmp_seq=2 ttl=64 time=0.058 ms
64 bytes from 10.1.0.2: icmp_seq=3 ttl=64 time=0.049 ms
^C
--- 10.1.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2054ms
rtt min/avg/max/mdev = 0.038/0.048/0.058/0.008 ms

# tcpdump shows
$ tcpdump -i br01 -nn
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on br01, link-type EN10MB (Ethernet), snapshot length 262144 bytes
12:22:44.385254 IP 10.1.0.1 > 10.1.0.2: ICMP echo request, id 62059, seq 1, length 64
12:22:44.385268 IP 10.1.0.2 > 10.1.0.1: ICMP echo reply, id 62059, seq 1, length 64
12:22:45.415140 IP 10.1.0.1 > 10.1.0.2: ICMP echo request, id 62059, seq 2, length 64
12:22:45.415162 IP 10.1.0.2 > 10.1.0.1: ICMP echo reply, id 62059, seq 2, length 64
12:22:46.439146 IP 10.1.0.1 > 10.1.0.2: ICMP echo request, id 62059, seq 3, length 64
12:22:46.439169 IP 10.1.0.2 > 10.1.0.1: ICMP echo reply, id 62059, seq 3, length 64

But if we try to ping an external service it fails; the capture shows only outgoing packets, with no replies coming back.

$ ip netns exec ns01 ping 114.114.114.114
PING 114.114.114.114 (114.114.114.114) 56(84) bytes of data.
^C
--- 114.114.114.114 ping statistics ---
12 packets transmitted, 0 received, 100% packet loss, time 11244ms

Capturing on br01 shows the packets have indeed left the bridge:

$ tcpdump -i br01 -nn
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on br01, link-type EN10MB (Ethernet), snapshot length 262144 bytes
12:38:08.071149 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 5533, seq 8, length 64
12:38:09.095145 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 5533, seq 9, length 64
12:38:10.119178 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 5533, seq 10, length 64
12:38:11.143144 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 5533, seq 11, length 64
12:38:12.167171 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 5533, seq 12, length 64

Capturing on the physical NIC, however, shows no traffic at all.

$ tcpdump -i ens33 -nn host 114.114.114.114
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), snapshot length 262144 bytes

So our packets are being dropped somewhere. We need to adjust the network configuration, mainly enabling ip_forward so that Linux is allowed to forward packets:

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0 # 0 means disabled, 1 means enabled


$ sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
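
Note that sysctl -w only lasts until reboot; on most distributions the setting can be persisted via sysctl.conf:

$ echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf    # persist across reboots
$ sysctl -p                                             # reload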

Capturing on the physical NIC again, we now see the traffic:

tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), snapshot length 262144 bytes
12:46:57.247940 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 10450, seq 1, length 64
12:46:58.279151 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 10450, seq 2, length 64
12:46:59.303149 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 10450, seq 3, length 64
12:47:00.327140 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 10450, seq 4, length 64
12:47:01.351192 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 10450, seq 5, length 64
12:47:02.375173 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 10450, seq 6, length 64
12:47:03.399176 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 10450, seq 7, length 64
12:47:04.423141 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 10450, seq 8, length 64
12:47:05.447134 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 10450, seq 9, length 64
12:47:06.471182 IP 10.1.0.1 > 114.114.114.114: ICMP echo request, id 10450, seq 10, length 64

But there are still no reply packets: the source address 10.1.0.1 is private to this host, so the outside world has no route back to it. This is where NAT comes in:

$ iptables -t nat -A POSTROUTING -s 10.1.0.0/24 -j MASQUERADE  # have the kernel SNAT traffic sourced from 10.1.0.0/24 on its way out
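
To confirm the rule is installed:

$ iptables -t nat -S POSTROUTING    # should list: -A POSTROUTING -s 10.1.0.0/24 -j MASQUERADE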

And now the whole path is connected:

$ ip netns exec ns01 ping 114.114.114.114
PING 114.114.114.114 (114.114.114.114) 56(84) bytes of data.
64 bytes from 114.114.114.114: icmp_seq=1 ttl=127 time=20.4 ms
64 bytes from 114.114.114.114: icmp_seq=2 ttl=127 time=20.2 ms
64 bytes from 114.114.114.114: icmp_seq=3 ttl=127 time=19.0 ms

At this point we have effectively built a single-host POD overlay network: PODs talk to each other through the bridge, and PODs reach the outside world via NAT. Next, let's look at how SVC forwarding is done.

IPVS

IPVS (IP Virtual Server) is a virtual-server implementation built into the Linux kernel. It provides load balancing on Linux, distributing client requests across multiple backend servers to improve the system's scalability and reliability. It works on the basis of a virtual IP / virtual server: it inspects client requests and forwards them to the backend servers, thereby achieving load balancing.

Try it

In the experiment above we put the POD CIDR on 10.1.0.0/24 and the host IP is 172.x; here we take a 192.168.0.0/24 range for the SVC.
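
IPVS lives in the kernel and is managed with the ipvsadm userspace tool; both may need to be set up first (the package name below is for Debian-like systems):

$ modprobe ip_vs         # load the kernel module
$ apt install ipvsadm    # install the management tool via your distro's package manager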

  1. Create a virtual service
# add a virtual service 192.168.20.128:80 and set the scheduling algorithm to wrr, i.e. weighted round robin
$ ipvsadm -A -t 192.168.20.128:80 -s wrr
# add the two real servers to the virtual service table; -m means masquerading, i.e. NAT mode, and -w 1 sets the weight to 1
$ ipvsadm -a -t 192.168.20.128:80 -r 10.1.0.1:80 -m -w 1
$ ipvsadm -a -t 192.168.20.128:80 -r 10.1.0.2:80 -m -w 1
$ ipvsadm --list -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port       Forward Weight ActiveConn InActConn
TCP  192.168.20.128:80 wrr
  -> 10.1.0.1:80              Masq    1      0          0
  -> 10.1.0.2:80              Masq    1      0          0
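
Note that the capture below shows TCP resets ([R.]) because nothing is actually listening on port 80 inside the namespaces. To get real responses, start a listener in each first; python3's built-in HTTP server is one quick option:

$ ip netns exec ns01 python3 -m http.server 80 &
$ ip netns exec ns02 python3 -m http.server 80 &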
  2. Capture traffic in the POD
    # curl the virtual address a few times so that some requests land on the .1 backend
    $ curl 192.168.20.128:80

    # capture inside the namespace
    $ ip netns exec ns01 tcpdump -i veth01 -nn -v
    tcpdump: listening on veth01, link-type EN10MB (Ethernet), snapshot length 262144 bytes
    15:06:21.537188 IP (tos 0x0, ttl 63, id 17209, offset 0, flags [DF], proto TCP (6), length 60)
    172.16.140.134.58956 > 10.1.0.1.80: Flags [S], cksum 0x42c7 (incorrect -> 0xe8f4), seq 2924738622, win 64240, options [mss 1460,sackOK,TS val 3067927925 ecr 0,nop,wscale 7], length 0
    15:06:21.537199 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40)
    10.1.0.1.80 > 172.16.140.134.58956: Flags [R.], cksum 0xe007 (correct), seq 0, ack 2924738623, win 0, length 0
    15:06:24.905873 IP (tos 0x0, ttl 63, id 33240, offset 0, flags [DF], proto TCP (6), length 60)
    172.16.140.134.48152 > 10.1.0.2.80: Flags [S], cksum 0x42c8 (incorrect -> 0xb3b0), seq 807127238, win 64240, options [mss 1460,sackOK,TS val 3067931293 ecr 0,nop,wscale 7], length 0
    15:06:25.384265 IP (tos 0x0, ttl 63, id 45310, offset 0, flags [DF], proto TCP (6), length 60)
    172.16.140.134.48166 > 10.1.0.1.80: Flags [S], cksum 0x42c7 (incorrect -> 0x5987), seq 3884018077, win 64240, options [mss 1460,sackOK,TS val 3067931772 ecr 0,nop,wscale 7], length 0
    15:06:25.384275 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40)
    10.1.0.1.80 > 172.16.140.134.48166: Flags [R.], cksum 0x5fa1 (correct), seq 0, ack 3884018078, win 0, length 0
    15:06:26.215791 IP (tos 0x0, ttl 63, id 59300, offset 0, flags [DF], proto TCP (6), length 60)
    172.16.140.134.48178 > 10.1.0.1.80: Flags [S], cksum 0x42c7 (incorrect -> 0xc171), seq 3535541037, win 64240, options [mss 1460,sackOK,TS val 3067932603 ecr 0,nop,wscale 7], length 0
    15:06:26.215803 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40)

That wraps up single-host networking; next, let's see how distributed networking is handled.
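
Before moving on, here is a cleanup sketch to tear down this demo environment (the br-veth ends are destroyed together with their peers):

$ ipvsadm -D -t 192.168.20.128:80                                # remove the virtual service
$ iptables -t nat -D POSTROUTING -s 10.1.0.0/24 -j MASQUERADE    # remove the NAT rule
$ ip netns del ns01                                              # deleting a netns removes the veth inside it
$ ip netns del ns02
$ ip link del br01                                               # remove the bridge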
