  • NGINX Unit: a dynamic-language runtime (e.g. Python/Node.js) that can run application code directly alongside NGINX.

Caddy's automatic HTTPS setup comes down to the following steps:

1. Create a Caddyfile

The Caddyfile is Caddy's configuration file, used to define each site's settings. Here is a simple example:

example.com {  
    root * /srv  
    file_server  
}
  • example.com: replace this with your actual domain.
  • root * /srv: sets the site's root directory to /srv.
  • file_server: enables static file serving.

2. Install Caddy

On Linux, Caddy can be installed with the following commands:

sudo curl -o /usr/local/bin/caddy "https://caddyserver.com/api/download?os=linux&arch=amd64"
sudo chmod +x /usr/local/bin/caddy
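
After installing, you can confirm the binary works before going further, for example:

caddy version   # should print the installed Caddy version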

3. Start Caddy

The example below runs Caddy in a Docker container (an alternative to running the binary installed in step 2):

docker run -d --name caddy \
    -p 80:80 \
    -p 443:443 \
    -v $(pwd)/Caddyfile:/etc/caddy/Caddyfile \
    -v $(pwd):/srv \
    caddy:2
  • -p 80:80 and -p 443:443: map the HTTP and HTTPS ports.
  • -v $(pwd)/Caddyfile:/etc/caddy/Caddyfile: mounts the Caddy configuration file.
  • -v $(pwd):/srv: mounts the site's file directory.

4. Automatic HTTPS

Caddy automatically obtains HTTPS certificates for every domain named in the configuration file. When it starts, Caddy will:

  • request certificates from Let's Encrypt via the ACME protocol (an optional tweak is sketched below),
  • renew certificates automatically, with no manual intervention, and
  • redirect HTTP requests to HTTPS.
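
Optionally, you can give the ACME account a contact email by adding a global options block at the top of the Caddyfile. A minimal sketch; the address is a placeholder:

{
    email admin@example.com   # placeholder contact for certificate expiry notices
}

example.com {
    root * /srv
    file_server
}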

5. Verify HTTPS

Make sure your domain's DNS record points to the server's IP address, then open https://example.com in a browser; you should see a secure HTTPS connection.
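
You can also inspect the certificate from the command line; a quick check with curl (replace the domain with yours):

curl -vI https://example.com   # -v prints the TLS handshake and certificate details, -I sends a HEAD request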

6. Advanced configuration

If you need more complex behavior, such as reverse proxying or access logging, add further directives to the Caddyfile. For example:

example.com {
    root * /srv
    file_server
    log {
        output file /var/log/caddy/example.com.access.log
    }
    encode gzip zstd
    reverse_proxy localhost:5001
}

7. Automatic renewal

Caddy renews certificates automatically before they expire; no intervention is needed.

With the steps above, Caddy delivers automatic HTTPS, greatly simplifying deployment and ongoing maintenance.

Below are some high-performance web servers similar to NGINX, or capable of replacing it. Each has its own strengths and suits different scenarios:

1. LiteSpeed

  • Strengths: high-performance, event-driven architecture that handles concurrent connections efficiently with low resource usage; compatible with Apache configuration, which eases migration from Apache; supports HTTP/2, load balancing, SSL acceleration, and more.
  • Best for: sites that need high performance and a small footprint, especially when migrating away from Apache.

2. HAProxy

  • Strengths: a dedicated load balancer and reverse proxy rather than a general-purpose web server; known for high performance, high availability, and scalability.
  • Best for: scenarios that need powerful load balancing, such as large distributed systems and high-traffic sites.

3. Caddy

  • Strengths: simple configuration, best known for automatic HTTPS; written in Go, which provides memory safety; configuration can be changed at runtime through its Admin API.
  • Best for: development environments, small production deployments, and anywhere painless HTTPS matters.

4. Pingora

  • Strengths: built in Rust; fast, reliable, and programmable; at Cloudflare it reportedly serves more than 40 million requests per second; offers highly customizable load-balancing and failover policies. Note that Pingora is a framework for building proxies rather than a ready-to-run server.
  • Best for: extremely performance-sensitive environments, such as large Internet companies or high-traffic cloud providers.

5. Lighttpd

  • Strengths: lightweight design with a low memory footprint; handles requests asynchronously so heavy requests do not slow the site down; integrates with PHP via PHP-FPM; supports reverse proxying and load balancing.
  • Best for: resource-sensitive deployments such as small servers or constrained environments.

6. OpenLiteSpeed

  • Strengths: the open-source edition of LiteSpeed; a high-performance, lightweight, modular HTTP server; handles large numbers of concurrent connections; ships with PHP integration, caching modules, and HTTPS support; includes a web-based admin interface that simplifies server management and configuration.
  • Best for: sites that want fast page loads plus a graphical management interface.

7. Traefik

  • Strengths: a reverse proxy built for microservices and cloud-native applications; provides dynamic configuration and service discovery; easy to use while still offering fine-grained control.
  • Best for: microservice architectures and cloud-native deployments.

8. Tengine

  • Strengths: developed on top of Nginx (originally at Taobao), adding features and optimizations not found upstream; supports Nginx's official load_module directive for better compatibility and stability.
  • Best for: deployments that need extra Nginx features or optimizations.

When choosing a replacement, weigh performance, ease of use, security, and the specific features you need against your actual requirements.

Contents
  I. Why estimate reading time
  II. NGINX learning path and priorities
  III. Master the NGINX core basics in 8 minutes
  IV. Further learning resources and tools

I. Why estimate reading time

In technical learning, estimating reading time is key to pacing your study efficiently:

  • Fits fragmented study time: 8 minutes is enough to grasp one core concept (such as how a reverse proxy works) or complete one basic configuration.
  • Helps prioritize tasks: when triaging an urgent outage, read the 3-minute error-code guide first rather than the 20-minute architecture tutorial.
  • Improves absorption: short, focused sessions on a single topic avoid the information loss that long documents invite.

II. NGINX learning path and priorities

1. Core role: high-performance HTTP server and reverse proxy

NGINX has two core capabilities:

  • Static file serving: returns HTML, CSS, images, and other files directly; commonly cited benchmarks put its static-content throughput 30%+ above Apache's.
  • Reverse proxying and load balancing: acts as a gateway that forwards requests to backend services (such as Node.js or Java applications), with strategies including round-robin and least connections (a sketch follows this list).
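
To make the load-balancing side concrete, here is a minimal sketch of an upstream group using the least-connections strategy; the group name backend and the server addresses are placeholders:

# inside the http block of nginx.conf
upstream backend {
    least_conn;                  # send each request to the server with the fewest active connections
    server 192.168.1.101:3000;   # placeholder backend addresses
    server 192.168.1.102:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}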
2. Three modules to learn first

  • nginx.conf: configuration file structure (global context, events block, http block); suggested study time: 2 minutes
  • server: virtual host configuration (listen port, domain binding); suggested study time: 2 minutes
  • location: URL path matching and request handling (e.g. document root, reverse proxying); suggested study time: 3 minutes

III. Master the NGINX core basics in 8 minutes

1. Install and start (Ubuntu example)

# 1. Install NGINX
sudo apt update && sudo apt install nginx

# 2. Start the service
sudo systemctl start nginx
sudo systemctl enable nginx  # start on boot

# 3. Verify it is running
curl http://localhost  # should return the "Welcome to nginx!" page
2. Anatomy of the configuration file

# Key parts of /etc/nginx/nginx.conf
user www-data;          # user the worker processes run as
worker_processes auto;  # number of worker processes (auto matches the CPU core count)
pid /run/nginx.pid;     # where the master process ID is stored

events {
  worker_connections 1024;  # maximum connections per worker process
}

http {
  include /etc/nginx/mime.types;  # media (MIME) type definitions
  default_type application/octet-stream;

  server {  # first virtual host (the default site)
    listen 80;
    server_name _;  # matches any host name

    location / {
      root /var/www/html;  # document root
      index index.html;    # default index page
    }
  }
}
3. Reverse proxy in practice

Goal: forward requests for http://example.com to the backend server at http://192.168.1.100:3000.

# Add a new server block inside the http block
server {
  listen 80;
  server_name example.com;

  location / {
    proxy_pass http://192.168.1.100:3000;    # backend address to proxy to
    proxy_set_header Host $host;             # pass through the original Host header
    proxy_set_header X-Real-IP $remote_addr; # pass through the client's real IP
  }
}

Steps

  1. After saving the configuration, check the syntax: sudo nginx -t
  2. Reload the configuration: sudo nginx -s reload
  3. Verify: requests to http://example.com should now be forwarded to the backend.
4. Common error codes and troubleshooting

  • 403 Forbidden: check whether the location block contains deny all, and whether the files are readable by the www-data user.
  • 502 Bad Gateway: confirm the backend address is correct and that the backend service is running.
  • 504 Gateway Timeout: increase the proxy timeout, e.g. proxy_read_timeout 300; (the default is 60 seconds); see the sketch below.
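
For the 504 case, the timeout directive belongs in the proxying location (or server) block. A minimal sketch, reusing the placeholder backend address from the example above:

location / {
  proxy_pass http://192.168.1.100:3000;
  proxy_read_timeout 300;  # wait up to 300 seconds for the backend's response (default: 60)
}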

IV. Further learning resources and tools

1. Official cheat sheets
2. Configuration and runtime tools
  • NGINX Unit: a dynamic-language runtime (e.g. Python/Node.js) that runs application code directly alongside NGINX.
  • nginxconfig.io: generates NGINX configuration files online, handy for reverse proxy, SSL, and similar setups.
3. Performance testing tools
  • wrk: an HTTP benchmarking tool; example: wrk -t4 -c100 -d30s http://localhost
  • NGINX Amplify: the official monitoring tool showing request latency, throughput, error rates, and other metrics in real time.

Summary

Eight minutes is enough to take NGINX from installation through basic configuration to a working reverse proxy. The key is to focus on the core configuration blocks (server and location) and let concrete needs drive the learning (deploying a proxy, troubleshooting errors). From there you can move on to HTTPS configuration, load-balancing strategies, dynamic upstreams, and other advanced topics, combining the official documentation with hands-on practice to keep improving both operations and development efficiency.

Use bridge networks
Estimated reading time: 8 minutes
In terms of networking, a bridge network is a Link Layer device which forwards traffic between network segments. A bridge can be a hardware device or a software device running within a host machine’s kernel.

In terms of Docker, a bridge network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network. The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other.

Bridge networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level, or you can use an overlay network.

When you start Docker, a default bridge network (also called bridge) is created automatically, and newly-started containers connect to it unless otherwise specified. You can also create user-defined custom bridge networks. User-defined bridge networks are superior to the default bridge network.
Differences between user-defined bridges and the default bridge

User-defined bridges provide automatic DNS resolution between containers.

Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.

Imagine an application with a web front-end and a database back-end. If you call your containers web and db, the web container can connect to the db container at db, no matter which Docker host the application stack is running on.

If you run the same application stack on the default bridge network, you need to manually create links between the containers (using the legacy --link flag). These links need to be created in both directions, so you can see this gets complex with more than two containers which need to communicate. Alternatively, you can manipulate the /etc/hosts files within the containers, but this creates problems that are difficult to debug.
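
As a concrete sketch of the user-defined case, assuming a hypothetical my-web-image for the front end:

$ docker network create app-net
$ docker run -d --name db --network app-net postgres:16
$ docker run -d --name web --network app-net my-web-image
# inside the web container, the hostname "db" resolves to the db container's address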

User-defined bridges provide better isolation.

All containers without a --network specified are attached to the default bridge network. This can be a risk, as unrelated stacks/services/containers are then able to communicate.

Using a user-defined network provides a scoped network in which only containers attached to that network are able to communicate.

Containers can be attached to and detached from user-defined networks on the fly.

During a container’s lifetime, you can connect or disconnect it from user-defined networks on the fly. To remove a container from the default bridge network, you need to stop the container and recreate it with different network options.

Each user-defined network creates a configurable bridge.

If your containers use the default bridge network, you can configure it, but all the containers use the same settings, such as MTU and iptables rules. In addition, configuring the default bridge network happens outside of Docker itself, and requires a restart of Docker.

User-defined bridge networks are created and configured using docker network create. If different groups of applications have different network requirements, you can configure each user-defined bridge separately, as you create it.

Linked containers on the default bridge network share environment variables.

Originally, the only way to share environment variables between two containers was to link them using the --link flag. This type of variable sharing is not possible with user-defined networks. However, there are superior ways to share environment variables. A few ideas:

    Multiple containers can mount a file or directory containing the shared information, using a Docker volume.

    Multiple containers can be started together using docker-compose and the compose file can define the shared variables (a sketch follows this list).

    You can use swarm services instead of standalone containers, and take advantage of shared secrets and configs.
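
For instance, a minimal docker-compose.yml sketch of the Compose idea above, with placeholder image names and a YAML anchor holding the shared values:

# docker-compose.yml
x-shared-env: &shared-env       # extension field holding the shared variables
  SHARED_SETTING: value
services:
  web:
    image: my-web-image         # placeholder images
    environment: *shared-env
  worker:
    image: my-worker-image
    environment: *shared-env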

Containers connected to the same user-defined bridge network effectively expose all ports to each other. For a port to be accessible to containers or non-Docker hosts on different networks, that port must be published using the -p or --publish flag.
Manage a user-defined bridge

Use the docker network create command to create a user-defined bridge network.

$ docker network create my-net

You can specify the subnet, the IP address range, the gateway, and other options. See the docker network create reference or the output of docker network create --help for details.
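
For example, a sketch that pins the subnet and gateway (the addresses are placeholders):

$ docker network create \
  --subnet 172.25.0.0/16 \
  --gateway 172.25.0.1 \
  my-custom-net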

Use the docker network rm command to remove a user-defined bridge network. If containers are currently connected to the network, disconnect them first.

$ docker network rm my-net

What’s really happening?

When you create or remove a user-defined bridge or connect or disconnect a container from a user-defined bridge, Docker uses tools specific to the operating system to manage the underlying network infrastructure (such as adding or removing bridge devices or configuring iptables rules on Linux). These details should be considered implementation details. Let Docker manage your user-defined networks for you.

Connect a container to a user-defined bridge

When you create a new container, you can specify one or more --network flags. This example connects an Nginx container to the my-net network. It also publishes port 80 in the container to port 8080 on the Docker host, so external clients can access that port. Any other container connected to the my-net network has access to all ports on the my-nginx container, and vice versa.

$ docker create --name my-nginx \
  --network my-net \
  --publish 8080:80 \
  nginx:latest

To connect a running container to an existing user-defined bridge, use the docker network connect command. The following command connects an already-running my-nginx container to an already-existing my-net network:

$ docker network connect my-net my-nginx

Disconnect a container from a user-defined bridge

To disconnect a running container from a user-defined bridge, use the docker network disconnect command. The following command disconnects the my-nginx container from the my-net network.

$ docker network disconnect my-net my-nginx

Use IPv6

If you need IPv6 support for Docker containers, you need to enable the option on the Docker daemon and reload its configuration, before creating any IPv6 networks or assigning containers IPv6 addresses.

When you create your network, you can specify the --ipv6 flag to enable IPv6. You can’t selectively disable IPv6 support on the default bridge network.
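
For example, assuming IPv6 is already enabled on the daemon, a user-defined IPv6-enabled network could be created like this (the documentation-prefix subnet is a placeholder):

$ docker network create --ipv6 --subnet 2001:db8:1::/64 my-v6-net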
Enable forwarding from Docker containers to the outside world

By default, traffic from containers connected to the default bridge network is not forwarded to the outside world. To enable forwarding, you need to change two settings. These are not Docker commands and they affect the Docker host’s kernel.

Configure the Linux kernel to allow IP forwarding.

$ sudo sysctl net.ipv4.conf.all.forwarding=1

Change the default policy of the iptables FORWARD chain from DROP to ACCEPT.

$ sudo iptables -P FORWARD ACCEPT

These settings do not persist across a reboot, so you may need to add them to a start-up script.
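
One common way to persist the kernel setting is a sysctl drop-in file; a sketch, with an arbitrary file name (persisting the iptables policy is distribution-specific, e.g. the iptables-persistent package on Debian/Ubuntu):

$ echo "net.ipv4.conf.all.forwarding=1" | sudo tee /etc/sysctl.d/99-forwarding.conf
$ sudo sysctl --system    # reload settings from all sysctl configuration files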
Use the default bridge network

The default bridge network is considered a legacy detail of Docker and is not recommended for production use. Configuring it is a manual operation, and it has technical shortcomings.
Connect a container to the default bridge network

If you do not specify a network using the --network flag, your container is connected to the default bridge network. Containers connected to the default bridge network can communicate, but only by IP address, unless they are linked using the legacy --link flag.
Configure the default bridge network

To configure the default bridge network, you specify options in daemon.json. Here is an example daemon.json with several options specified. Only specify the settings you need to customize.

{
  "bip": "192.168.1.5/24",
  "fixed-cidr": "192.168.1.5/25",
  "fixed-cidr-v6": "2001:db8::/64",
  "mtu": 1500,
  "default-gateway": "10.20.1.1",
  "default-gateway-v6": "2001:db8:abcd::89",
  "dns": ["10.20.1.2", "10.20.1.3"]
}

Restart Docker for the changes to take effect.
Use IPv6 with the default bridge network

If you configure Docker for IPv6 support (see Use IPv6), the default bridge network is also configured for IPv6 automatically. Unlike user-defined bridges, you can’t selectively disable IPv6 on the default bridge.
Next steps

Go through the standalone networking tutorial
Learn about networking from the container’s point of view
Learn about overlay networks
Learn about Macvlan networks
