Version: latest (v5.0.x)

Recommendations

This document contains a set of recommendations for using Fastify.

Use a Reverse Proxy

Node.js was an early adopter of frameworks shipping with an easy-to-use web server in the standard library. Previously, with languages like PHP or Python, one needed either a web server with specific support for the language, or the ability to set up some sort of CGI gateway to use with the language. With Node.js, one can write an application that handles HTTP requests directly. As a result, it is tempting to write applications that handle requests for multiple domains, listen on multiple ports (e.g. HTTP and HTTPS), and then expose those applications directly to the Internet.

The Fastify team strongly considers this an anti-pattern and extremely bad practice:

  1. It adds unnecessary complexity by diluting the application's focus.
  2. It prevents horizontal scalability.

See Why should I use a Reverse Proxy if Node.js is Production-Ready? for a more in-depth discussion of why one should opt to use a reverse proxy.

For a concrete example, consider the situation where:

  1. The app needs multiple instances to handle the load.
  2. The app needs TLS termination.
  3. The app needs to redirect HTTP requests to HTTPS.
  4. The app needs to serve multiple domains.
  5. The app needs to serve static resources, e.g. JPEG files.

There are many reverse proxy solutions available, and your environment may dictate which one to use, e.g. AWS or GCP. Given the above, we could use HAProxy or Nginx to solve these requirements:

HAProxy

# The global section defines base HAProxy (engine) instance configuration.
global
  log /dev/log syslog
  maxconn 4096
  chroot /var/lib/haproxy
  user haproxy
  group haproxy

  # Set some baseline TLS options.
  tune.ssl.default-dh-param 2048
  ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
  ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
  ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11
  ssl-default-server-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS

# Each defaults section defines options that will apply to each subsequent
# subsection until another defaults section is encountered.
defaults
  log global
  mode http
  option httplog
  option dontlognull
  retries 3
  option redispatch
  # The following option makes haproxy close connections to backend servers
  # instead of keeping them open. This can alleviate unexpected connection
  # reset errors in the Node process.
  option http-server-close
  maxconn 2000
  timeout connect 5000
  timeout client 50000
  timeout server 50000

  # Enable content compression for specific content types.
  compression algo gzip
  compression type text/html text/plain text/css application/javascript

# A "frontend" section defines a public listener, i.e. an "http server"
# as far as clients are concerned.
frontend proxy
  # The IP address here would be the _public_ IP address of the server.
  # Here, we use a private address as an example.
  bind 10.0.0.10:80
  # This redirect rule will redirect all traffic that is not TLS traffic
  # to the same incoming request URL on the HTTPS port.
  redirect scheme https code 308 if !{ ssl_fc }
  # Technically this use_backend directive is useless since we are simply
  # redirecting all traffic on this frontend to the HTTPS frontend. It is
  # merely included here for completeness' sake.
  use_backend default-server

# This frontend defines our primary, TLS only, listener. It is here where
# we will define the TLS certificates to expose and how to direct incoming
# requests.
frontend proxy-ssl
  # The `/etc/haproxy/certs` directory in this example contains a set of
  # certificate PEM files that are named for the domains the certificates are
  # issued for. When HAProxy starts, it will read this directory, load all of
  # the certificates it finds here, and use SNI matching to apply the correct
  # certificate to the connection.
  bind 10.0.0.10:443 ssl crt /etc/haproxy/certs

  # Here we define rule pairs to handle static resources. Any incoming request
  # that has a path starting with `/static`, e.g.
  # `https://one.example.com/static/foo.jpeg`, will be routed to the
  # static resources server.
  acl is_static path -i -m beg /static
  use_backend static-backend if is_static

  # Here we define rule pairs to direct requests to the appropriate Node.js
  # servers based on the requested domain. The `acl` line is used to match
  # the incoming hostname and define a boolean indicating if it is a match.
  # The `use_backend` line is used to direct the traffic if the boolean is
  # true.
  acl example1 hdr_sub(Host) one.example.com
  use_backend example1-backend if example1

  acl example2 hdr_sub(Host) two.example.com
  use_backend example2-backend if example2

  # Finally, we have a fallback backend if none of the requested hosts
  # match the above rules.
  default_backend default-server

# A "backend" is used to tell HAProxy where to request information for the
# proxied request. These sections are where we will define where our Node.js
# apps live and any other servers for things like static assets.
backend default-server
  # In this example we are defaulting unmatched domain requests to a single
  # backend server for all requests. Notice that the backend server does not
  # have to be serving TLS requests. This is called "TLS termination": the TLS
  # connection is "terminated" at the reverse proxy.
  # It is possible to also proxy to backend servers that are themselves serving
  # requests over TLS, but that is outside the scope of this example.
  server server1 10.10.10.2:80

# This backend configuration will serve requests for `https://one.example.com`
# by proxying requests to three backend servers in a round-robin manner.
backend example1-backend
  server example1-1 10.10.11.2:80
  server example1-2 10.10.11.3:80
  server example1-3 10.10.11.4:80

# This one serves requests for `https://two.example.com`.
backend example2-backend
  server example2-1 10.10.12.2:80
  server example2-2 10.10.12.3:80
  server example2-3 10.10.12.4:80

# This backend handles the static resources requests.
backend static-backend
  server static-server1 10.10.9.2:80

Nginx

# This upstream block groups 3 servers into one named backend fastify_app,
# with 2 primary servers distributed via round-robin
# and one backup which is used when the first 2 are not reachable.
# This also assumes your fastify servers are listening on port 80.
# more info: https://nginx.ac.cn/en/docs/http/ngx_http_upstream_module.html
upstream fastify_app {
  server 10.10.11.1:80;
  server 10.10.11.2:80;
  server 10.10.11.3:80 backup;
}

# This server block asks NGINX to respond to an incoming request on
# port 80 (typically plain HTTP) with a redirect to the same request
# URL, but with HTTPS as the protocol.
# This block is optional, and usually used if you are handling
# SSL termination in NGINX, like in the example here.
server {
  # default_server is a special parameter that asks NGINX to make
  # this server block the default for this address/port,
  # which in this case is any address and port 80.
  listen 80 default_server;
  listen [::]:80 default_server;

  # With a server_name directive you can also ask NGINX to
  # use this server block only with matching server name(s):
  # listen 80;
  # listen [::]:80;
  # server_name example.tld;

  # This matches all paths from the request and responds with
  # the redirect mentioned above.
  location / {
    return 301 https://$host$request_uri;
  }
}

# This server block asks NGINX to respond to requests on port 443
# with SSL enabled and accept HTTP/2 connections.
# This is where requests are then proxied to the fastify_app
# server group defined above.
server {
  # This listen directive asks NGINX to accept requests
  # coming to any address, port 443, with SSL.
  listen 443 ssl default_server;
  listen [::]:443 ssl default_server;

  # With a server_name directive you can also ask NGINX to
  # use this server block only with matching server name(s):
  # listen 443 ssl;
  # listen [::]:443 ssl;
  # server_name example.tld;

  # Enable HTTP/2 support.
  http2 on;

  # Your SSL/TLS certificate (chain) and secret key in PEM format.
  ssl_certificate /path/to/fullchain.pem;
  ssl_certificate_key /path/to/private.pem;

  # A generic best-practice baseline based on
  # https://ssl-config.mozilla.org/
  ssl_session_timeout 1d;
  ssl_session_cache shared:FastifyApp:10m;
  ssl_session_tickets off;

  # This tells NGINX to only accept TLS 1.3, which is supported by
  # all evergreen browsers. If you need to support older clients
  # (e.g. IE 11, which tops out at TLS 1.2), add additional
  # fallback protocols such as TLSv1.2.
  ssl_protocols TLSv1.3;
  ssl_prefer_server_ciphers off;

  # This adds a header that tells browsers to only ever use HTTPS
  # with this server.
  add_header Strict-Transport-Security "max-age=63072000" always;

  # The following directives are only necessary if you want to
  # enable OCSP Stapling.
  ssl_stapling on;
  ssl_stapling_verify on;
  ssl_trusted_certificate /path/to/chain.pem;

  # Custom nameserver to resolve upstream server names.
  # resolver 127.0.0.1;

  # This section matches all paths and proxies them to the backend server
  # group specified above. Note the additional headers that forward
  # information about the original request. You might want to set
  # trustProxy to the address of your NGINX server so the X-Forwarded
  # fields are used by fastify.
  location / {
    # more info: https://nginx.ac.cn/en/docs/http/ngx_http_proxy_module.html
    proxy_http_version 1.1;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # This is the directive that proxies requests to the specified server.
    # If you are using an upstream group, then you do not need to specify a port.
    # If you are directly proxying to a server, e.g.
    # proxy_pass http://127.0.0.1:3000, then specify a port.
    proxy_pass http://fastify_app;
  }
}
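
The X-Forwarded-* headers set above are only safe to trust when they come from your own proxy, which is why Fastify gates them behind its `trustProxy` option. The underlying idea can be sketched with plain Node.js (the `TRUSTED_PROXIES` allow-list and `clientAddress` helper are hypothetical illustration names, not Fastify APIs):

```javascript
// TRUSTED_PROXIES and clientAddress are hypothetical names; with Fastify
// you would instead pass `trustProxy` to the factory function and read
// `request.ip` / `request.protocol`.
const TRUSTED_PROXIES = new Set(['127.0.0.1', '10.0.0.10'])

function clientAddress(req) {
  const direct = req.socket.remoteAddress
  // Only honor X-Forwarded-For when the directly connected peer is a
  // trusted proxy, since any client can send the header itself.
  if (TRUSTED_PROXIES.has(direct)) {
    const forwarded = req.headers['x-forwarded-for']
    // The left-most entry is the original client; later entries are
    // proxies the request passed through.
    if (forwarded) return forwarded.split(',')[0].trim()
  }
  return direct
}
```

With Fastify itself, passing `trustProxy: true` (or a list of proxy addresses) to the factory enables equivalent handling of the headers that the NGINX `location` block above sets.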

Kubernetes

The readinessProbe uses (by default) the pod IP as the hostname. Fastify listens on 127.0.0.1 by default, so in that case the probe will not be able to reach the application. To make it work, the application must listen on 0.0.0.0 or specify a custom hostname in the readinessProbe.httpGet spec, as in the following example:

readinessProbe:
  httpGet:
    path: /health
    port: 4000
  initialDelaySeconds: 30
  periodSeconds: 30
  timeoutSeconds: 3
  successThreshold: 1
  failureThreshold: 5

Capacity Planning for Production

In order to rightsize the production environment for your Fastify application, it is highly recommended that you perform your own measurements against different configurations of the environment, which may use real CPU cores, virtual CPU cores (vCPUs), or even fractional vCPU cores. We will use the term vCPU throughout this recommendation to represent any CPU type.

Tools such as k6 or autocannon can be used for conducting the necessary performance tests.

That said, you may also consider the following rules of thumb:

  • To have the lowest possible latency, 2 vCPUs are recommended per app instance (e.g., a k8s pod). The second vCPU will mostly be used by the garbage collector (GC) and the libuv threadpool. This minimizes latency for your users, as well as memory usage, since the GC runs more frequently. Additionally, the main thread will not have to pause to let the GC run.

  • To optimize for throughput (handling the largest possible number of requests per second per available vCPU), consider using fewer vCPUs per app instance. It is totally fine to run Node.js applications with 1 vCPU.

  • You may experiment with an even smaller amount of vCPU, which may provide even better throughput in certain use cases. There are reports of API gateway solutions working well with 100m-200m vCPU in Kubernetes.
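
In Kubernetes terms, the low-latency guideline above could translate into a pod spec fragment like the following (the values are illustrative, not a Fastify requirement):

```yaml
resources:
  requests:
    cpu: "2"   # one vCPU for the app, one mostly for the GC and libuv
  limits:
    cpu: "2"
```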

See Node's Event Loop From the Inside Out to understand how Node.js works in greater detail and make a better determination of what your specific application needs.

Running Multiple Instances

There are several use cases where running multiple Fastify apps on the same server might be considered. A common example is exposing metrics endpoints on a separate port, to prevent public access, when a reverse proxy or an ingress firewall is not available.

It is perfectly fine to spin up several Fastify instances within the same Node.js process and run them concurrently, even in high-load systems. Each Fastify instance only generates as much load as the traffic it receives, plus the memory used for that instance.