How Ambient Mesh Works

In its most recent release, Istio dropped a bombshell: a sidecar-less variant called Ambient Mesh.

Intro

The official docs show that the Sidecar that used to sit in the same Pod as the app is gone; the forwarding logic has instead been pushed down into two brand-new components:

  • ztunnel: a node-level component that handles L4 traffic
  • waypoint: a namespace-level component that handles L7 traffic

You can think of the gateways as now playing four roles:

  • ingressGateway: consumes the Gateway CR
  • eastwestGateway: consumes VirtualService/DestinationRule, but only for cross-cluster traffic
  • Ztunnel: a node-level gateway that carries mTLS + TCP traffic
  • Waypoint: a namespace-level gateway that applies L7 traffic policies.

Ztunnel

The official docs say little about the Ztunnel implementation, but the community's preview branch carries a note:

* ambient-mesh: uproxy l4 implementation with Envoy

So L4 is clearly still built on Envoy; the concrete logic lives in istio proxy: experimental-ambient.

Following the architecture diagram, Ztunnel is easy to reason about: it has two jobs, forwarding L4 traffic and enforcing authorization (including mTLS). Accordingly, Istio gained brand-new generators to build config for the new components:

  • UProxyConfigGenerator: generates the forwarding config
  • PEPGenerator: generates the security policies

These were later merged into one:

  • ZTunnelConfigGenerator: generates all of the above
ztunnelgen.go (github)
func (g *ZTunnelConfigGenerator) Generate(
    proxy *model.Proxy,
    w *model.WatchedResource,
    req *model.PushRequest,
) (model.Resources, model.XdsLogDetails, error) {
    push := req.Push
    switch w.TypeUrl {
    case v3.ListenerType:
        return g.BuildListeners(proxy, push, w.ResourceNames), model.DefaultXdsLogDetails, nil
    case v3.ClusterType:
        return g.BuildClusters(proxy, push, w.ResourceNames), model.DefaultXdsLogDetails, nil
    case v3.EndpointType:
        return g.BuildEndpoints(proxy, push, w.ResourceNames), model.DefaultXdsLogDetails, nil
    }

    return nil, model.DefaultXdsLogDetails, nil
}

Together with the pre-existing cluster-generation logic, this ties the whole flow together.

Waypoint

Waypoint, evidently, is where the L7 logic lives.

Ambient mesh uses HTTP CONNECT over mTLS to implement its secure tunnels and insert waypoint proxies in the path, a pattern we call HBONE (HTTP-Based Overlay Network Environment). HBONE provides for a cleaner encapsulation of traffic than TLS on its own while enabling interoperability with common load-balancer infrastructure. FIPS builds are used by default to meet compliance needs. More details on HBONE, its standards-based approach, and plans for UDP and other non-TCP protocols will be provided in a future blog.

As for creating waypoints: because the waypoint is a per-namespace component, a dedicated controller watches for it and reconciles the deployments.

waypoint_controller.go (github)
func (rc *WaypointProxyController) Reconcile(name types.NamespacedName) error {
    if rc.injectConfig().Values.Struct().GetGlobal().GetHub() == "" {
        // Mostly used to avoid issues with local runs
        return fmt.Errorf("injection config invalid, skipping reconcile")
    }
    log := waypointLog.WithLabels("gateway", name.String())

    // Look up the waypoint's Gateway object
    gw, err := rc.gateways.Gateways(name.Namespace).Get(name.Name)

    // The proxies we have, and the proxies we need
    haveProxies := sets.New()
    wantProxies := sets.New()

    // Create the missing waypoints
    for _, k := range add {
        log.Infof("adding waypoint proxy %v", k+"-waypoint-proxy")
        input := MergedInput{
            Namespace:      gw.Namespace,
            GatewayName:    gw.Name,
            UID:            string(gw.UID),
            ServiceAccount: k,
            Cluster:        rc.cluster.String(),
        }
        proxyDeploy, err := rc.RenderDeploymentMerged(input)

    }
    return nil
}

The waypoint is still an Envoy instance, and its config is generated the same way as Ztunnel's.

waypointgen.go (github)
func (p *WaypointGenerator) Generate(proxy *model.Proxy, w *model.WatchedResource, req *model.PushRequest) (model.Resources, model.XdsLogDetails, error) {
    var out model.Resources
    switch w.TypeUrl {
    case v3.ListenerType:
        sidecarListeners := p.ConfigGenerator.BuildListeners(proxy, req.Push)
        resources := model.Resources{}
        for _, c := range sidecarListeners {
            resources = append(resources, &discovery.Resource{
                Name:     c.Name,
                Resource: protoconv.MessageToAny(c),
            })
        }
        out = append(p.buildWaypointListeners(proxy, req.Push), resources...)
        out = append(out, outboundTunnelListener("tunnel", proxy.Metadata.ServiceAccount))
    case v3.ClusterType:
        sidecarClusters, _ := p.ConfigGenerator.BuildClusters(proxy, req)
        waypointClusters := p.buildClusters(proxy, req.Push)
        out = append(waypointClusters, sidecarClusters...)
    }
    return out, model.DefaultXdsLogDetails, nil
}

Ztunnel talks to the waypoint through a proxy component; this part is fairly simple:

server.go (github)
func handleConnect(w http.ResponseWriter, r *http.Request) bool {
    t0 := time.Now()
    log.WithLabels("host", r.Host, "source", r.RemoteAddr).Info("Received CONNECT")
    // Send headers back immediately so we can start getting the body
    w.(http.Flusher).Flush()
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    dst, err := (&net.Dialer{}).DialContext(ctx, "tcp", r.Host)
    if err != nil {
        w.WriteHeader(http.StatusServiceUnavailable)
        log.Errorf("failed to dial upstream: %v", err)
        return true
    }
    log.Infof("Connected to %v", r.Host)
    w.WriteHeader(http.StatusOK)

    wg := sync.WaitGroup{}
    wg.Add(1)
    go func() {
        // downstream (hbone client) <-- upstream (app)
        copyBuffered(w, dst, log.WithLabels("name", "dst to w"))
        r.Body.Close()
        wg.Done()
    }()
    // downstream (hbone client) --> upstream (app)
    copyBuffered(dst, r.Body, log.WithLabels("name", "body to dst"))
    wg.Wait()
    log.Infof("connection closed in %v", time.Since(t0))
    return false
}

Traffic Path Analysis [Black-Box Edition]

Following the community docs, we prepare an environment with kind; its final state looks like this:

$ k get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
bookinfo-productpage-waypoint-proxy-5c9c4d858b-bcmx5 1/1 Running 0 116m 10.244.1.9 ambient-worker2 <none> <none>
details-v1-5ffd6b64f7-tmxz2 1/1 Running 0 120m 10.244.1.4 ambient-worker2 <none> <none>
notsleep-6d6c8669b5-8t9rk 1/1 Running 0 120m 10.244.2.6 ambient-worker <none> <none>
productpage-v1-979d4d9fc-j84w2 1/1 Running 0 120m 10.244.1.8 ambient-worker2 <none> <none>
ratings-v1-5f9699cfdf-6l4xf 1/1 Running 0 120m 10.244.1.5 ambient-worker2 <none> <none>
reviews-v1-569db879f5-4w4p4 1/1 Running 0 120m 10.244.1.6 ambient-worker2 <none> <none>
reviews-v2-65c4dc6fdc-tnjxt 1/1 Running 0 120m 10.244.1.7 ambient-worker2 <none> <none>
reviews-v3-c9c4fb987-254lm 1/1 Running 0 120m 10.244.2.4 ambient-worker <none> <none>
sleep-7b85956664-nvfkm 1/1 Running 0 11m 10.244.2.7 ambient-worker <none> <none>



$ k get pod -n istio-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
istio-cni-node-2klk6 1/1 Running 0 122m 172.18.0.3 ambient-worker2 <none> <none>
istio-cni-node-d6xpk 1/1 Running 0 122m 172.18.0.4 ambient-control-plane <none> <none>
istio-cni-node-kbbq4 1/1 Running 0 122m 172.18.0.2 ambient-worker <none> <none>
istio-ingressgateway-7879f6cd5c-z5wqs 1/1 Running 0 122m 10.244.1.3 ambient-worker2 <none> <none>
istiod-56dcbdc66b-hfdkg 1/1 Running 0 123m 10.244.2.2 ambient-worker <none> <none>
ztunnel-4gkmd 1/1 Running 0 123m 10.244.2.3 ambient-worker <none> <none>
ztunnel-q55t5 1/1 Running 0 123m 10.244.1.2 ambient-worker2 <none> <none>
ztunnel-wxmvw 1/1 Running 0 123m 10.244.0.5 ambient-control-plane <none> <none>

$ k get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ambient-control-plane Ready control-plane 6h37m v1.25.0 172.18.0.4 <none> Ubuntu 22.04.1 LTS 5.4.0-105-generic containerd://1.6.7
ambient-worker Ready <none> 6h37m v1.25.0 172.18.0.2 <none> Ubuntu 22.04.1 LTS 5.4.0-105-generic containerd://1.6.7
ambient-worker2 Ready <none> 6h37m v1.25.0 172.18.0.3 <none> Ubuntu 22.04.1 LTS 5.4.0-105-generic containerd://1.6.7

We trace the path with an ordinary HTTP request:

kubectl exec deploy/sleep -- curl -s http://productpage:9080/ | head -n1

sleep (on ambient-worker) calls productpage (on ambient-worker2).

Sleep -> zTunnel

The istio-cni-node logs show the changes it applies:

2022-09-16T05:27:57.759272Z        info        cni        Adding pod 'sleep-7b85956664-nvfkm/default' (d7e030b1-3269-4016-8742-e2322bea7fcb) to ipset
2022-09-16T05:27:57.759275Z info cni Adding route for sleep-7b85956664-nvfkm/default: [table 100 10.244.2.7/32 via 192.168.126.2 dev istioin src 10.244.2.1]

Clearly a route has been added that steers traffic toward 10.244.2.1; from inside the container we can see the same logic:

$ kubectl exec sleep-7b85956664-nvfkm -- ip route
default via 10.244.2.1 dev eth0
10.244.2.0/24 via 10.244.2.1 dev eth0 src 10.244.2.7
10.244.2.1 dev eth0 scope link src 10.244.2.7


$ kubectl exec sleep-7b85956664-nvfkm -- nslookup productpage
Server: 10.96.0.10
Address: 10.96.0.10:53

Name: productpage.default.svc.cluster.local
Address: 10.96.4.220

No real Pod runs at that address: it is a veth device.

$ k exec -n istio-system istio-cni-node-kbbq4  -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: veth7de83706@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether fe:88:8c:cf:58:3e brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 10.244.2.1/32 scope global veth7de83706
valid_lft forever preferred_lft forever
3: veth2e666193@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether aa:d1:d4:6d:c1:7b brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet 10.244.2.1/32 scope global veth2e666193
valid_lft forever preferred_lft forever

So the traffic has simply been routed up to the host. A look around the host reveals the trick:

root@ambient-worker:/$ ip route show table 101
default via 192.168.127.2 dev istioout
10.244.2.3 dev veth2e666193 scope link


root@ambient-worker:/$ ip rule
0: from all lookup local
100: from all fwmark 0x200/0x200 goto 32766
101: from all fwmark 0x100/0x100 lookup 101
102: from all fwmark 0x40/0x40 lookup 102
103: from all lookup 100
32766: from all lookup main
32767: from all lookup default

root@ambient-worker:/$ iptables-save
-A ztunnel-POSTROUTING -m mark --mark 0x100/0x100 -j ACCEPT
-A ztunnel-PREROUTING -m mark --mark 0x100/0x100 -j ACCEPT

By marking the packets, traffic is sent through table 101 and into the istioout device:

6: istioout: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 96:99:db:e4:ca:ad brd ff:ff:ff:ff:ff:ff
inet 192.168.127.1/30 brd 192.168.127.3 scope global istioout
valid_lft forever preferred_lft forever

And that device belongs to the ztunnel component.

zTunnel -> Waypoint

From here things match our usual experience: just read the Envoy config.

$ k exec -n istio-system ztunnel-4gkmd    --  iptables

:POSTROUTING ACCEPT [19284:5797390]
-A PREROUTING -j LOG --log-prefix "mangle pre [ztunnel-4gkmd] "
-A PREROUTING -i pistioin -p tcp -m tcp --dport 15008 -j TPROXY --on-port 15008 --on-ip 127.0.0.1 --tproxy-mark 0x400/0xfff
-A PREROUTING -i pistioout -p tcp -j TPROXY --on-port 15001 --on-ip 127.0.0.1 --tproxy-mark 0x400/0xfff
-A PREROUTING -i pistioin -p tcp -j TPROXY --on-port 15006 --on-ip 127.0.0.1 --tproxy-mark 0x400/0xfff
-A PREROUTING ! -d 10.244.2.3/32 -i eth0 -p tcp -j MARK --set-xmark 0x4d3/0xfff

The request is redirected to port 15001, the ztunnel_outbound listener:

{
    "10.96.4.220": {
        "matcher": {
            "matcher_tree": {
                "input": {
                    "name": "port",
                    "typed_config": {
                        "@type": "type.googleapis.com/envoy.extensions.matching.common_inputs.network.v3.DestinationPortInput"
                    }
                },
                "exact_match_map": {
                    "map": {
                        "9080": {
                            "action": {
                                "name": "spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage",
                                "typed_config": {
                                    "@type": "type.googleapis.com/google.protobuf.StringValue",
                                    "value": "spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage"
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

After matching on the original destination IP and target port, the request is handed to this cluster:

spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::observability_name::spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::default_priority::max_connections::1024
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::default_priority::max_pending_requests::1024
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::default_priority::max_requests::1024
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::default_priority::max_retries::3
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::high_priority::max_connections::1024
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::high_priority::max_pending_requests::1024
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::high_priority::max_requests::1024
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::high_priority::max_retries::3
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::added_via_api::true
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::cx_active::0
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::cx_connect_fail::0
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::cx_total::0
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::rq_active::0
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::rq_error::0
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::rq_success::0
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::rq_timeout::0
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::rq_total::0
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::hostname::
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::health_flags::healthy
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::weight::1
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::region::
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::zone::
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::sub_zone::
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::canary::false
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::priority::0
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::success_rate::-1
spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage::10.244.1.9:15006::local_origin_success_rate::-1
prometheus_stats::observability_name::prometheus_stats

10.244.1.9:15006 is the waypoint's inbound port, and at this point we are already inside the mTLS boundary; see the cluster config for details.

Waypoint -> zTunnel

This part is even clearer: one more route lands us on the service's address. Any further L7 policy would also be applied at this level.

{
    "name": "inbound_CONNECT_terminate",
    "active_state": {
        "version_info": "2022-09-16T03:43:55Z/36",
        "listener": {
            "@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
            "name": "inbound_CONNECT_terminate",
            "address": {
                "socket_address": {
                    "address": "0.0.0.0",
                    "port_value": 15006
                }
            },
            "filter_chains": [
                {
                    "filters": [
                        {
                            "name": "capture_tls",
                            "typed_config": {
                                "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
                                "type_url": "type.googleapis.com/istio.tls_passthrough.v1.CaptureTLS"
                            }
                        },
                        {
                            "name": "envoy.filters.network.http_connection_manager",
                            "typed_config": {
                                "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
                                "stat_prefix": "inbound_hcm",
                                "route_config": {
                                    "name": "local_route",
                                    "virtual_hosts": [
                                        {
                                            "name": "connect",
                                            "domains": [
                                                "*"
                                            ],
                                            "routes": [
                                                {
                                                    "match": {
                                                        "headers": [
                                                            {
                                                                "name": ":authority",
                                                                "exact_match": "10.96.4.220:9080"
                                                            }
                                                        ],
                                                        "connect_matcher": {}
                                                    },
                                                    "route": {
                                                        "cluster": "inbound-vip|9080|internal|productpage.default.svc.cluster.local",
                                                        "upgrade_configs": [
                                                            {
                                                                "upgrade_type": "CONNECT",
                                                                "connect_config": {}
                                                            }
                                                        ]
                                                    }
                                                }
                                            ]
                                        }
                                    ],
                                    "validate_clusters": false
                                }
                            }
                        }
                    ],
                    "name": "inbound_CONNECT_terminate"
                }
            ]
        },
        "last_updated": "2022-09-16T09:40:30.193Z"
    }
}
outbound|9080||productpage.default.svc.cluster.local::observability_name::outbound|9080||productpage.default.svc.cluster.local
outbound|9080||productpage.default.svc.cluster.local::default_priority::max_connections::4294967295
outbound|9080||productpage.default.svc.cluster.local::default_priority::max_pending_requests::4294967295
outbound|9080||productpage.default.svc.cluster.local::default_priority::max_requests::4294967295
outbound|9080||productpage.default.svc.cluster.local::default_priority::max_retries::4294967295
outbound|9080||productpage.default.svc.cluster.local::high_priority::max_connections::1024
outbound|9080||productpage.default.svc.cluster.local::high_priority::max_pending_requests::1024
outbound|9080||productpage.default.svc.cluster.local::high_priority::max_requests::1024
outbound|9080||productpage.default.svc.cluster.local::high_priority::max_retries::3
outbound|9080||productpage.default.svc.cluster.local::added_via_api::true
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::cx_active::0
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::cx_connect_fail::0
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::cx_total::0
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::rq_active::0
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::rq_error::0
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::rq_success::0
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::rq_timeout::0
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::rq_total::0
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::hostname::
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::health_flags::healthy
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::weight::1
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::region::
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::zone::
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::sub_zone::
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::canary::false
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::priority::0
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::success_rate::-1
outbound|9080||productpage.default.svc.cluster.local::envoy://tunnel/10.244.1.8:9080::local_origin_success_rate::-1

That is, the request is sent to 10.244.1.8, a Pod of the productpage service.
The link below that is plain Kubernetes networking, so the host's ipvs/iptables rules will carry the packet over to the ambient-worker2 node.

ZTunnel Inbound

When the traffic reaches the destination host, an ip rule takes over:

root@ambient-worker2:/# ip route show table 100
10.244.1.2 dev veth983ae22d scope link
10.244.1.4 via 192.168.126.2 dev istioin src 10.244.1.1
10.244.1.5 via 192.168.126.2 dev istioin src 10.244.1.1
10.244.1.6 via 192.168.126.2 dev istioin src 10.244.1.1
10.244.1.7 via 192.168.126.2 dev istioin src 10.244.1.1
10.244.1.8 via 192.168.126.2 dev istioin src 10.244.1.1

Clearly, traffic for every Pod under mesh management is steered into the istioin device:

$ k get pod -owide | grep ambient-worker2
bookinfo-productpage-waypoint-proxy-5c9c4d858b-bcmx5 1/1 Running 0 46h 10.244.1.9 ambient-worker2 <none> <none>
details-v1-5ffd6b64f7-tmxz2 1/1 Running 0 46h 10.244.1.4 ambient-worker2 <none> <none>
productpage-v1-979d4d9fc-j84w2 1/1 Running 0 46h 10.244.1.8 ambient-worker2 <none> <none>
ratings-v1-5f9699cfdf-6l4xf 1/1 Running 0 46h 10.244.1.5 ambient-worker2 <none> <none>
reviews-v1-569db879f5-4w4p4 1/1 Running 0 46h 10.244.1.6 ambient-worker2 <none> <none>
reviews-v2-65c4dc6fdc-tnjxt 1/1 Running 0 46h 10.244.1.7 ambient-worker2 <none> <none>

And that is exactly ztunnel's device.

Ztunnel -> ProductPage

The last hop is relatively simple: split the traffic by SPIFFE identity, then forward the data straight to the real Pod.

$ ./istioctl pc listener -n istio-system ztunnel-q55t5
ADDRESS PORT MATCH DESTINATION
0 ALL Cluster: outbound_tunnel_clus_spiffe://cluster.local/ns/default/sa/bookinfo-reviews
0 ALL Cluster: outbound_tunnel_clus_spiffe://cluster.local/ns/default/sa/bookinfo-details
0 ALL Cluster: outbound_tunnel_clus_spiffe://cluster.local/ns/default/sa/notsleep
0 ALL Cluster: outbound_tunnel_clus_spiffe://cluster.local/ns/default/sa/bookinfo-ratings
0 ALL Cluster: outbound_tunnel_clus_spiffe://cluster.local/ns/default/sa/bookinfo-productpage
0 ALL Cluster: outbound_tunnel_clus_spiffe://cluster.local/ns/default/sa/sleep
0.0.0.0 15001 ALL PassthroughCluster
0.0.0.0 15001 ALL PassthroughCluster
0.0.0.0 15001 ALL Non-HTTP/Non-TCP
0.0.0.0 15001 ALL Cluster: spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_status-port_istio-ingressgateway.istio-system.svc.cluster.local_outbound_internal
0.0.0.0 15001 ALL Cluster: spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_server_waypoint_proxy_spiffe://cluster.local/ns/default/sa/bookinfo-productpage
0.0.0.0 15001 ALL Cluster: spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_metrics_kube-dns.kube-system.svc.cluster.local_outbound_internal
0.0.0.0 15001 ALL Cluster: spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_https_istio-ingressgateway.istio-system.svc.cluster.local_outbound_internal
0.0.0.0 15001 ALL Cluster: spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_https-webhook_istiod.istio-system.svc.cluster.local_outbound_internal
0.0.0.0 15001 ALL Cluster: spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_https-dns_istiod.istio-system.svc.cluster.local_outbound_internal
0.0.0.0 15001 ALL Cluster: spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_http_sleep.default.svc.cluster.local_outbound_internal
0.0.0.0 15001 ALL Cluster: spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_http_reviews.default.svc.cluster.local_outbound_internal
0.0.0.0 15001 ALL Cluster: spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_http_ratings.default.svc.cluster.local_outbound_internal
0.0.0.0 15001 ALL Cluster: spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_http_notsleep.default.svc.cluster.local_outbound_internal
0.0.0.0 15001 ALL Cluster: spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_http_details.default.svc.cluster.local_outbound_internal
0.0.0.0 15001 ALL Cluster: spiffe://cluster.local/ns/default/sa/bookinfo-reviews_to_http2_istio-ingressgateway.istio-system.svc.cluster.local_outbound_internal