How Higress Works

The article "Alibaba open-sources the next-generation cloud-native gateway Higress: Envoy-based, with zero-cost fast migration from Nginx Ingress" introduced Alibaba's open-source cloud-native gateway. Today, let's explore what this project does and how it works.

Intro

For a brand-new system, starting from the architecture is usually the best way in.

From the architecture diagram, we can see that the system splits into:

  • Control plane: istio + Higress Controller
  • Data plane: Higress Data Plane

From the diagram we can guess that the data plane is built from Envoy plus some custom plugins, while the control plane is less obvious.
The diagram shows istio talking to Envoy, while higress does not talk to Envoy directly. So we can make a bold guess: Higress acts as a remote MCP data source serving configuration to istio. Let's test that hypothesis and see how it works.

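If that guess is right, istiod's MeshConfig should carry an extra configSources entry pointing at higress-controller, roughly like this (a hypothesis we verify against the logs below):

```yaml
configSources:
- address: k8s://
- address: xds://higress-controller.higress-system:15051
```
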
How it works

Setting up the environment

Diving straight into the code is unwise; it is usually better to start from a project's startup path and build a first-hand feel for it. So let's first set the project up following the community docs.

  1. Install istio
kubectl create ns istio-system
helm install istio -n istio-system oci://higress-registry.cn-hangzhou.cr.aliyuncs.com/charts/istio

This needs a reasonably recent helm with OCI registry support. If you hit failed to download "oci://higress-registry.cn-hangzhou.cr.aliyuncs.com/charts/istio" (hint: running helm repo update may help), upgrade helm.

  2. Install higress
kubectl create ns higress-system
helm install higress -n higress-system oci://higress-registry.cn-hangzhou.cr.aliyuncs.com/charts/higress

  3. Create the Ingress configuration

Assuming a demo service is already deployed in the default namespace, serving on port 80, create the following K8s Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-example
spec:
  rules:
  - host: demo.bar.com
    http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service:
            name: demo
            port:
              number: 80

The final environment

$ kubectl get pod -A
NAMESPACE        NAME                                 READY   STATUS    RESTARTS   AGE
default          echo-web-89668cbcc-7jsx7             1/1     Running   0          82s
default          echo-web-89668cbcc-zxjsl             1/1     Running   0          82s
higress-system   higress-controller-857bc7484-znslv   1/1     Running   0          2m54s
higress-system   higress-gateway-74cb5978cc-fw28r     1/1     Running   0          2m54s
istio-system     istiod-5bf74cd79b-9gzm9              1/1     Running   0          6m59s

  4. Send a test request
$ curl "$(kubectl get svc -n higress-system higress-gateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"/demo -H 'host: demo.bar.com'
{"host":{"hostname":"demo.bar.com","ip":"::ffff:172.30.52.234","ips":[]},"http":{"method":"GET","baseUrl":"","originalUrl":"/demo","protocol":"http"},"request":{"params":{"0":"/demo"},"query":{},"cookies":{},"body":{},"headers":{"host":"demo.bar.com","user-agent":"curl/7.79.1","accept":"*/*","DETAILS_SERVICE_PORT":"9080","HELLOWORLD_SERVICE_PORT_HTTP":"5000","KUBERNETES_PORT_443_TCP_PORT":"443","KUBERNETES_PORT_443_TCP":"tcp://172.31.160.1:443","HOME":"/root"}}

Verifying from the logs

As a first step, we can look for clues in the startup logs; checking them early is a good habit.

The istiod logs give us exactly the kind of evidence our hypothesis predicts.

$ kubectl logs -n istio-system istiod-5bf74cd79b-9gzm9

nc: getaddrinfo for host "higress-controller.higress-system" port 15051: Name or service not known
testing higress controller is ready to connect...
nc: getaddrinfo for host "higress-controller.higress-system" port 15051: Name or service not known
testing higress controller is ready to connect...
nc: getaddrinfo for host "higress-controller.higress-system" port 15051: Name or service not known
testing higress controller is ready to connect...
2022-11-07T02:22:11.924444Z info FLAG: --caCertFile=""
2022-11-07T02:22:11.924484Z info FLAG: --clusterAliases="[]"
2022-11-07T02:22:11.938185Z info klog Config not found: /var/run/secrets/remote/config
2022-11-07T02:22:11.939130Z info initializing mesh configuration ./etc/istio/config/mesh
2022-11-07T02:22:11.939963Z warn Using local mesh config file ./etc/istio/config/mesh, in cluster configs ignored
2022-11-07T02:22:11.940311Z info mesh configuration: {
    "proxyListenPort": 15001,
    "connectTimeout": "10s",
    "protocolDetectionTimeout": "0.100s",
    "ingressClass": "istio",
    "ingressService": "istio-ingressgateway",
    "ingressControllerMode": "STRICT",
    "enableTracing": true,
    "accessLogFile": "/dev/stdout",
    "accessLogFormat": "{\"authority\":\"%REQ(:AUTHORITY)%\",\"bytes_received\":\"%BYTES_RECEIVED%\",\"bytes_sent\":\"%BYTES_SENT%\",\"downstream_local_address\":\"%DOWNSTREAM_LOCAL_ADDRESS%\",\"downstream_remote_address\":\"%DOWNSTREAM_REMOTE_ADDRESS%\",\"duration\":\"%DURATION%\",\"istio_policy_status\":\"%DYNAMIC_METADATA(istio.mixer:status)%\",\"method\":\"%REQ(:METHOD)%\",\"path\":\"%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%\",\"protocol\":\"%PROTOCOL%\",\"request_id\":\"%REQ(X-REQUEST-ID)%\",\"requested_server_name\":\"%REQUESTED_SERVER_NAME%\",\"response_code\":\"%RESPONSE_CODE%\",\"response_flags\":\"%RESPONSE_FLAGS%\",\"route_name\":\"%ROUTE_NAME%\",\"start_time\":\"%START_TIME%\",\"trace_id\":\"%REQ(X-B3-TRACEID)%\",\"upstream_cluster\":\"%UPSTREAM_CLUSTER%\",\"upstream_host\":\"%UPSTREAM_HOST%\",\"upstream_local_address\":\"%UPSTREAM_LOCAL_ADDRESS%\",\"upstream_service_time\":\"%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%\",\"upstream_transport_failure_reason\":\"%UPSTREAM_TRANSPORT_FAILURE_REASON%\",\"user_agent\":\"%REQ(USER-AGENT)%\",\"x_forwarded_for\":\"%REQ(X-FORWARDED-FOR)%\"}\n",
    "defaultConfig": {
        "configPath": "./etc/istio/proxy",
        "binaryPath": "/usr/local/bin/envoy",
        "serviceCluster": "istio-proxy",
        "drainDuration": "45s",
        "parentShutdownDuration": "60s",
        "discoveryAddress": "istiod.istio-system.svc:15012",
        "proxyAdminPort": 15000,
        "controlPlaneAuthPolicy": "MUTUAL_TLS",
        "statNameLength": 189,
        "concurrency": 2,
        "tracing": {
            "zipkin": {
                "address": "zipkin.istio-system:9411"
            }
        },
        "statusPort": 15020,
        "terminationDrainDuration": "5s",
        "proxyStatsMatcher": {
            "inclusionRegexps": [
                ".*"
            ]
        }
    },
    "outboundTrafficPolicy": {
        "mode": "ALLOW_ANY"
    },
    "configSources": [
        {
            "address": "k8s://"
        },
        {
            "address": "xds://higress-controller.higress-system:15051"
        }
    ],
    "enableAutoMtls": false,
    "trustDomain": "cluster.local",
    "trustDomainAliases": [
    ],
    "defaultServiceExportTo": [
        "*"
    ],
    "defaultVirtualServiceExportTo": [
        "*"
    ],
    "defaultDestinationRuleExportTo": [
        "*"
    ],
    "rootNamespace": "istio-system",
    "localityLbSetting": {
        "enabled": true
    }
2022-11-07T02:22:11.940338Z info version: 1.12-dev-e9de7ac36deb19dcc4738a397ef3dc03579aa336-Modified
2022-11-07T02:22:11.993517Z info Adding Kubernetes registry adapter
2022-11-07T02:22:11.993539Z info handling remote clusters in *controller.Multicluster
2022-11-07T02:22:11.993557Z info initializing Istiod DNS certificates host: istiod.istio-system.svc, custom host:
2022-11-07T02:22:11.993606Z info adsc Received higress-controller.higress-system:15051 type core/v1alpha1/MeshConfig cnt=0 nonce=51897ff3-8583-4965-ada3-0c58b6e63867
2022-11-07T02:22:11.993672Z info adsc Received higress-controller.higress-system:15051 type extensions.istio.io/v1alpha1/WasmPlugin cnt=0 nonce=807490a6-10ec-4977-b21b-3ceca4eda34c
2022-11-07T02:22:11.993703Z info adsc Received higress-controller.higress-system:15051 type networking.istio.io/v1alpha3/DestinationRule cnt=0 nonce=ab806b12-17e7-42da-9ab2-46ce16fdb362
2022-11-07T02:22:11.993718Z info adsc Received higress-controller.higress-system:15051 type networking.istio.io/v1alpha3/EnvoyFilter cnt=0 nonce=c7ff2b1d-5a33-4c2e-aeac-d37cb1f4a1ec
2022-11-07T02:22:11.993731Z info adsc Received higress-controller.higress-system:15051 type networking.istio.io/v1alpha3/Gateway cnt=0 nonce=f485bfb3-4bd9-4ff0-b069-99cdcd3849d0
2022-11-07T02:22:11.993744Z info adsc Received higress-controller.higress-system:15051 type networking.istio.io/v1alpha3/ServiceEntry cnt=0 nonce=be689b82-6def-4979-9d40-538821e24328
2022-11-07T02:22:11.993760Z info adsc Received higress-controller.higress-system:15051 type networking.istio.io/v1alpha3/ServiceSubscriptionList cnt=0 nonce=49ec0b57-a6e1-46ab-9ee0-99e5dfd1c758
2022-11-07T02:22:11.993773Z info adsc Received higress-controller.higress-system:15051 type networking.istio.io/v1alpha3/Sidecar cnt=0 nonce=5756d43f-c8e5-44aa-bf55-38dd54911646
2022-11-07T02:22:11.993785Z info adsc Received higress-controller.higress-system:15051 type networking.istio.io/v1alpha3/VirtualService cnt=0 nonce=ba88c77d-d062-4d5e-a61c-e2c98f5a4068
2022-11-07T02:22:11.993806Z info adsc Received higress-controller.higress-system:15051 type networking.istio.io/v1alpha3/WorkloadEntry cnt=0 nonce=e6453814-dfff-474f-bf8d-85b190764a36
2022-11-07T02:22:11.993824Z info adsc Received higress-controller.higress-system:15051 type networking.istio.io/v1alpha3/WorkloadGroup cnt=0 nonce=c8b12853-306d-443d-89de-141a02c6f525
2022-11-07T02:22:11.993840Z info adsc Received higress-controller.higress-system:15051 type security.istio.io/v1beta1/AuthorizationPolicy cnt=0 nonce=b7de10db-dc94-4b15-8e02-e79173664372
2022-11-07T02:22:11.993861Z info adsc Received higress-controller.higress-system:15051 type security.istio.io/v1beta1/PeerAuthentication cnt=0 nonce=273eca9d-b148-4b85-bf09-3b2bd7585353
2022-11-07T02:22:11.993879Z info adsc Received higress-controller.higress-system:15051 type security.istio.io/v1beta1/RequestAuthentication cnt=0 nonce=d8b6b803-58f3-4467-a176-db911ab829f8
2022-11-07T02:22:11.993908Z info adsc Received higress-controller.higress-system:15051 type telemetry.istio.io/v1alpha1/Telemetry cnt=0 nonce=5d222fc3-f44a-4a5a-a8d8-38464ec9da90

From these logs we can confirm our earlier guesses fairly clearly:

  1. getaddrinfo for host "higress-controller.higress-system" port 15051: Name or service not known / testing higress controller is ready to connect... — before starting, istiod waits for higress-controller to become reachable (this looks like a patched istio chart).
  2. An extra upstream MCP (xDS) server shows up in the config sources:
    {
      "address": "xds://higress-controller.higress-system:15051"
    }
  3. higress pushes istio configuration resources, which istiod receives:
    2022-11-07T02:22:11.993861Z	info	adsc	Received higress-controller.higress-system:15051 type security.istio.io/v1beta1/PeerAuthentication cnt=0 nonce=273eca9d-b148-4b85-bf09-3b2bd7585353
    2022-11-07T02:22:11.993879Z info adsc Received higress-controller.higress-system:15051 type security.istio.io/v1beta1/RequestAuthentication cnt=0 nonce=d8b6b803-58f3-4467-a176-db911ab829f8
    2022-11-07T02:22:11.993908Z info adsc Received higress-controller.higress-system:15051 type telemetry.istio.io/v1alpha1/Telemetry cnt=0 nonce=5d222fc3-f44a-4a5a-a8d8-38464ec9da90

Verifying from the configuration

With that clear picture in mind, let's now look at the Envoy configuration.

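The dumps below come from the gateway Envoy's admin interface. One way to reproduce them — assuming curl is available inside the gateway container, and using the proxyAdminPort 15000 seen in the mesh config above (istioctl proxy-config listeners/routes gives a friendlier view):

```bash
kubectl exec -n higress-system deploy/higress-gateway -- \
  curl -s localhost:15000/config_dump
```
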
First, a Listener on port 80 is generated:

{
  "name": "0.0.0.0_80",
  "active_state": {
    "version_info": "2022-11-07T02:41:43Z/8",
    "listener": {
      "@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
      "name": "0.0.0.0_80",
      "address": {
        "socket_address": {
          "address": "0.0.0.0",
          "port_value": 80
        }
      },
      "filter_chains": [
        {
          "filters": [
            {
              "name": "envoy.filters.network.http_connection_manager",
              "typed_config": {
                "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
                "stat_prefix": "outbound_0.0.0.0_80",
                "rds": {
                  "config_source": {
                    "ads": {},
                    "initial_fetch_timeout": "0s",
                    "resource_api_version": "V3"
                  },
                  "route_config_name": "http.80"
                },

Then the corresponding route configuration is generated:

{
  "version_info": "2022-11-07T02:41:43Z/9",
  "route_config": {
    "@type": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
    "name": "http.80",
    "virtual_hosts": [
      {
        "name": "demo.bar.com:80",
        "domains": [
          "demo.bar.com",
          "demo.bar.com:*"
        ],
        "routes": [
          {
            "match": {
              "case_sensitive": true,
              "safe_regex": {
                "google_re2": {},
                "regex": "/demo((\\/).*)?"
              }
            },
            "route": {
              "cluster": "outbound|80||echo-web.default.svc.cluster.local",
              "timeout": "0s",
              "retry_policy": {
                "retry_on": "connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes",
                "num_retries": 2,
                "retry_host_predicate": [
                  {
                    "name": "envoy.retry_host_predicates.previous_hosts"
                  }
                ],
                "host_selection_retry_max_attempts": "5",
                "retriable_status_codes": [
                  503
                ],
                "retriable_request_headers": [
                  {
                    "name": ":method",
                    "invert_match": true,
                    "string_match": {
                      "safe_regex": {
                        "google_re2": {},
                        "regex": "POST|PATCH|LOCK"
                      }
                    }
                  }
                ]
              },
              "max_grpc_timeout": "0s"
            },
            "metadata": {
              "filter_metadata": {
                "istio": {
                  "config": "/apis/networking.istio.io/v1alpha3/namespaces/higress-system/virtual-service/default-simple-example-demo-bar-com"
                }
              }
            },
            "decorator": {
              "operation": "echo-web.default.svc.cluster.local:80/demo((\\/).*)?"
            },
            "name": "default-simple-example-34da3948"
          },
        ],
        "validate_clusters": false
      },

This looks quite conclusive. Moreover, the Higress Gateway's discovery address is istiod:

$ kubectl logs -n higress-system higress-gateway-74cb5978cc-2sv8k

defaultConfig:
  discoveryAddress: istiod.istio-system.svc:15012

That matches the architecture we guessed: higress-controller translates Ingress into istio configuration, istiod pulls it over MCP (xDS), and in turn pushes the resulting Envoy configuration to the gateway.

Reading the code

Finally, let's see what the code actually does. From the project layout we already know it splits into a data plane and a control plane. The data plane extends Envoy with custom plugins, all of which live under extensions; we won't expand on that part here.

The focus is the control-plane logic, which lives entirely under ingress.

The control plane starts XDS

For any Operator-style component, the first question is which resources it watches. From the code below, we can see the Ingress Controller registers handlers for the following config types.

initRegistryEventHandlers (github)
var IngressIR = collection.NewSchemasBuilder().
    MustAdd(collections.IstioExtensionsV1Alpha1Wasmplugins).
    MustAdd(collections.IstioNetworkingV1Alpha3Destinationrules).
    MustAdd(collections.IstioNetworkingV1Alpha3Envoyfilters).
    MustAdd(collections.IstioNetworkingV1Alpha3Gateways).
    MustAdd(collections.IstioNetworkingV1Alpha3Serviceentries).
    MustAdd(collections.IstioNetworkingV1Alpha3Virtualservices).
    Build()

// initRegistryEventHandlers sets up event handlers for config updates
func (s *Server) initRegistryEventHandlers() error {
    log.Info("initializing registry event handlers")
    configHandler := func(prev config.Config, curr config.Config, event model.Event) {
        // For update events, trigger push only if spec has changed.
        pushReq := &model.PushRequest{
            Full: true,
            ConfigsUpdated: map[model.ConfigKey]struct{}{{
                Kind:      curr.GroupVersionKind,
                Name:      curr.Name,
                Namespace: curr.Namespace,
            }: {}},
            Reason: []model.TriggerReason{model.ConfigUpdate},
        }
        s.xdsServer.ConfigUpdate(pushReq)
    }
    schemas := IngressIR.All()
    for _, schema := range schemas {
        s.configController.RegisterEventHandler(schema.Resource().GroupVersionKind(), configHandler)
    }
    return nil
}

The handling of these resources happens in

RegisterEventHandler (github)
func (m *IngressConfig) RegisterEventHandler(kind config.GroupVersionKind, f model.EventHandler) {
    IngressLog.Infof("register resource %v", kind)
    if kind != gvk.VirtualService && kind != gvk.Gateway &&
        kind != gvk.DestinationRule && kind != gvk.EnvoyFilter {
        return
    }

    switch kind {
    case gvk.VirtualService:
        m.virtualServiceHandlers = append(m.virtualServiceHandlers, f)

    case gvk.Gateway:
        m.gatewayHandlers = append(m.gatewayHandlers, f)

    case gvk.DestinationRule:
        m.destinationRuleHandlers = append(m.destinationRuleHandlers, f)

    case gvk.EnvoyFilter:
        m.envoyFilterHandlers = append(m.envoyFilterHandlers, f)
    }

    for _, remoteIngressController := range m.remoteIngressControllers {
        remoteIngressController.RegisterEventHandler(kind, f)
    }
}

Executing them just means running the registered handlers one by one:

for _, f := range c.gatewayHandlers {
    f(config.Config{Meta: gatewaymetadata}, config.Config{Meta: gatewaymetadata}, event)
}

In effect this only notifies istiod: it pushes an event and then waits for istiod to come and pull the actual configuration.

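To make this notify-then-pull pattern concrete, here is a minimal, runnable Go sketch with simplified types (not Higress's actual ones): the controller only signals that something changed, and the consumer re-pulls the full state afterwards.

```go
package main

import "fmt"

// EventHandler mirrors the handler shape above: old/new config stubs plus
// an event kind; it carries no spec, only a change signal.
type EventHandler func(prev, curr, event string)

// Store stands in for IngressConfig: it keeps handlers and the data that
// the pull side (istiod's List call) would read.
type Store struct {
    handlers []EventHandler
    configs  []string
}

func (s *Store) RegisterEventHandler(f EventHandler) {
    s.handlers = append(s.handlers, f)
}

// OnIngressChange is the notify side: run every handler with metadata only.
func (s *Store) OnIngressChange(name string) {
    for _, f := range s.handlers {
        f(name, name, "update")
    }
}

func main() {
    s := &Store{configs: []string{"simple-example"}}
    // This handler plays istiod's role: request a full push, then pull.
    s.RegisterEventHandler(func(prev, curr, event string) {
        fmt.Printf("push requested for %s (%s); pulled %d configs\n",
            curr, event, len(s.configs))
    })
    s.OnIngressChange("simple-example")
}
```
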
That covers the XDS side; now let's see what higress itself does with Ingress resources.

Controller

In the controller's onEvent handler, we can see that it watches exactly one resource, the Ingress, and on each event runs all the registered handlers once.

onEvent (github)
func (c *controller) onEvent(namespacedName types.NamespacedName) error {
    event := model.EventUpdate
    ing, err := c.ingressLister.Ingresses(namespacedName.Namespace).Get(namespacedName.Name)
    if err != nil {
        if kerrors.IsNotFound(err) {
            event = model.EventDelete
            c.mutex.Lock()
            ing = c.ingresses[namespacedName.String()]
            delete(c.ingresses, namespacedName.String())
            c.mutex.Unlock()
        } else {
            return err
        }
    }

    // ingress deleted, and it is not processed before
    if ing == nil {
        return nil
    }

    // we should check need process only when event is not delete,
    // if it is delete event, and previously processed, we need to process too.
    if event != model.EventDelete {
        shouldProcess, err := c.shouldProcessIngressUpdate(ing)
        if err != nil {
            return err
        }
        if !shouldProcess {
            IngressLog.Infof("no need process, ingress %s", namespacedName)
            return nil
        }
    }

    drmetadata := config.Meta{
        Name:             ing.Name + "-" + "destinationrule",
        Namespace:        ing.Namespace,
        GroupVersionKind: gvk.DestinationRule,
        // Set this label so that we do not compare configs and just push.
        Labels: map[string]string{constants.AlwaysPushLabel: "true"},
    }
    vsmetadata := config.Meta{
        Name:             ing.Name + "-" + "virtualservice",
        Namespace:        ing.Namespace,
        GroupVersionKind: gvk.VirtualService,
        // Set this label so that we do not compare configs and just push.
        Labels: map[string]string{constants.AlwaysPushLabel: "true"},
    }
    efmetadata := config.Meta{
        Name:             ing.Name + "-" + "envoyfilter",
        Namespace:        ing.Namespace,
        GroupVersionKind: gvk.EnvoyFilter,
        // Set this label so that we do not compare configs and just push.
        Labels: map[string]string{constants.AlwaysPushLabel: "true"},
    }
    gatewaymetadata := config.Meta{
        Name:             ing.Name + "-" + "gateway",
        Namespace:        ing.Namespace,
        GroupVersionKind: gvk.Gateway,
        // Set this label so that we do not compare configs and just push.
        Labels: map[string]string{constants.AlwaysPushLabel: "true"},
    }

    for _, f := range c.destinationRuleHandlers {
        f(config.Config{Meta: drmetadata}, config.Config{Meta: drmetadata}, event)
    }

    for _, f := range c.virtualServiceHandlers {
        f(config.Config{Meta: vsmetadata}, config.Config{Meta: vsmetadata}, event)
    }

    for _, f := range c.envoyFilterHandlers {
        f(config.Config{Meta: efmetadata}, config.Config{Meta: efmetadata}, event)
    }

    for _, f := range c.gatewayHandlers {
        f(config.Config{Meta: gatewaymetadata}, config.Config{Meta: gatewaymetadata}, event)
    }

    return nil
}

Note that onEvent emits only config.Meta stubs labeled with AlwaysPushLabel; the real specs are produced later, when istiod pulls via List() (shown below). Most of the actual code logic lives in annotations, i.e. the individual features currently supported:

NewAnnotationHandlerManager (github)
func NewAnnotationHandlerManager() AnnotationHandler {
    return &AnnotationHandlerManager{
        parsers: []Parser{
            canary{},
            cors{},
            downstreamTLS{},
            redirect{},
            rewrite{},
            upstreamTLS{},
            ipAccessControl{},
            headerControl{},
            timeout{},
            retry{},
            loadBalance{},
            localRateLimit{},
            fallback{},
            auth{},
        },
        gatewayHandlers: []GatewayHandler{
            downstreamTLS{},
        },
        virtualServiceHandlers: []VirtualServiceHandler{
            ipAccessControl{},
        },
        routeHandlers: []RouteHandler{
            cors{},
            redirect{},
            rewrite{},
            ipAccessControl{},
            headerControl{},
            timeout{},
            retry{},
            localRateLimit{},
            fallback{},
        },
        trafficPolicyHandlers: []TrafficPolicyHandler{
            upstreamTLS{},
            loadBalance{},
        },
    }
}

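Each entry in parsers implements a small Parser contract, which we can infer from the cors example below (an inferred shape, not copied from the repo):

```go
// Inferred from the Parse signature shown below.
type Parser interface {
    Parse(annotations Annotations, config *Ingress, globalContext *GlobalContext) error
}
```
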
Let's take cors as an example. Note that it appears both in parsers (reading annotations into a CorsConfig) and in routeHandlers (applying that config to the generated routes).

Parse (github)
func (c cors) Parse(annotations Annotations, config *Ingress, _ *GlobalContext) error {
    // Check whether the annotations ask for CORS at all.
    if !needCorsConfig(annotations) {
        return nil
    }

    // cors enable
    enable, _ := annotations.ParseBoolASAP(enableCors)
    if !enable {
        return nil
    }

    // Build a CORS config pre-filled with defaults.
    corsConfig := &CorsConfig{
        Enabled:          enable,
        AllowOrigin:      []string{defaultAllowOrigin},
        AllowMethods:     splitStringWithSpaceTrim(defaultAllowMethods),
        AllowHeaders:     splitStringWithSpaceTrim(defaultAllowHeaders),
        AllowCredentials: defaultAllowCredentials,
        MaxAge:           defaultMaxAge,
    }

    // Attach it to the config on exit; thanks to defer, all the
    // annotation overrides below run first.
    defer func() {
        config.Cors = corsConfig
    }()

    // allow origin
    if origin, err := annotations.ParseStringASAP(allowOrigin); err == nil {
        corsConfig.AllowOrigin = splitStringWithSpaceTrim(origin)
    }

    // allow methods
    if methods, err := annotations.ParseStringASAP(allowMethods); err == nil {
        corsConfig.AllowMethods = splitStringWithSpaceTrim(methods)
    }

    // allow headers
    if headers, err := annotations.ParseStringASAP(allowHeaders); err == nil {
        corsConfig.AllowHeaders = splitStringWithSpaceTrim(headers)
    }

    // expose headers
    if exposeHeaders, err := annotations.ParseStringASAP(exposeHeaders); err == nil {
        corsConfig.ExposeHeaders = splitStringWithSpaceTrim(exposeHeaders)
    }

    // allow credentials
    if allowCredentials, err := annotations.ParseBoolASAP(allowCredentials); err == nil {
        corsConfig.AllowCredentials = allowCredentials
    }

    // max age
    if age, err := annotations.ParseIntASAP(maxAge); err == nil {
        corsConfig.MaxAge = age
    }

    return nil
}

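As a usage sketch, CORS for our earlier Ingress would be switched on purely via annotations. The annotation keys below are an assumption based on the nginx-ingress-compatible names this parser mirrors; the ParseXXXASAP helpers appear to resolve keys across more than one annotation prefix:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-example
  annotations:
    # Assumed nginx-compatible keys; unset keys fall back to the
    # defaults built in Parse above.
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://demo.bar.com"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST"
spec:
  rules:
  - host: demo.bar.com
    http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service:
            name: demo
            port:
              number: 80
```
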
Wiring the pieces together

func (m *IngressConfig) List(typ config.GroupVersionKind, namespace string) ([]config.Config, error) {
    if typ != gvk.Gateway &&
        typ != gvk.VirtualService &&
        typ != gvk.DestinationRule &&
        typ != gvk.EnvoyFilter {
        return nil, common.ErrUnsupportedOp
    }

    // Currently, only support list all namespaces gateways or virtualservices.
    if namespace != "" {
        IngressLog.Warnf("ingress store only support type %s of all namespace.", typ)
        return nil, common.ErrUnsupportedOp
    }

    if typ == gvk.EnvoyFilter {
        m.mutex.RLock()
        defer m.mutex.RUnlock()
        IngressLog.Infof("resource type %s, configs number %d", typ, len(m.cachedEnvoyFilters))
        return m.cachedEnvoyFilters, nil
    }

    var configs []config.Config
    m.mutex.RLock()
    for _, ingressController := range m.remoteIngressControllers {
        configs = append(configs, ingressController.List()...)
    }
    m.mutex.RUnlock()

    common.SortIngressByCreationTime(configs)
    wrapperConfigs := m.createWrapperConfigs(configs)

    // Return the converted resources matching the type istiod's List request asked for.
    IngressLog.Infof("resource type %s, configs number %d", typ, len(wrapperConfigs))
    switch typ {
    case gvk.Gateway:
        return m.convertGateways(wrapperConfigs), nil
    case gvk.VirtualService:
        return m.convertVirtualService(wrapperConfigs), nil
    case gvk.DestinationRule:
        return m.convertDestinationRule(wrapperConfigs), nil
    }

    return nil, nil
}

Let's piece the whole flow together. Higress's current logic turns out to be fairly simple: it parses the information carried in Ingress resources and their annotations, and converts it into istio configuration (Gateways, VirtualServices, DestinationRules, EnvoyFilters) that istiod pulls on demand.

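As a rough illustration of what convertVirtualService produces for our earlier Ingress, here is a hand-written sketch using the istio API types — the field values are inferred from the route dump above, not generated by Higress itself:

```go
package main

import (
    "fmt"

    networking "istio.io/api/networking/v1alpha3"
)

func main() {
    // Sketch: the /demo prefix rule on demo.bar.com becomes an HTTP route
    // to the backing service, mirroring the generated route dump earlier.
    vs := &networking.VirtualService{
        Hosts: []string{"demo.bar.com"},
        Http: []*networking.HTTPRoute{{
            Match: []*networking.HTTPMatchRequest{{
                Uri: &networking.StringMatch{
                    MatchType: &networking.StringMatch_Prefix{Prefix: "/demo"},
                },
            }},
            Route: []*networking.HTTPRouteDestination{{
                Destination: &networking.Destination{
                    Host: "demo.default.svc.cluster.local",
                    Port: &networking.PortSelector{Number: 80},
                },
            }},
        }},
    }
    fmt.Println(vs.String())
}
```
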
Setting up a development environment

One step is missing from the community README: initializing the git submodules. Just run

make prebuild

and you're done.
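
If you'd rather do it by hand, the rough equivalent (an assumption about what prebuild does, based on the submodule note above) is:

```bash
git submodule update --init --recursive
```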