Pitfall Record: Deploying a Front-end Project to OSS

Background#

The department runs several business lines in parallel. To make access and management easier, the whole system uses a qiankun micro-frontend architecture: each business develops its own sub-application and mounts it onto the base application. The overall structure looks roughly like this:

➜  Project tree
.
├── ASCM
│   └── index.html
├── OSCM
│   └── index.html
├── PSCM
│   └── index.html
├── TMS
│   └── index.html
├── VSCM
│   └── index.html
├── assets
│   ├── dist.css
│   └── dist.js
└── index.html

7 directories, 8 files
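
For context, the base application registers and mounts these sub-applications with qiankun roughly as follows. This is only a minimal sketch of registerMicroApps; the container selector, entry URLs, and activeRule values are illustrative, not our actual configuration:

// Base application entry (sketch; names, entries and routes are illustrative)
import { registerMicroApps, start } from 'qiankun';

registerMicroApps([
  { name: 'TMS', entry: '/TMS/', container: '#subapp-container', activeRule: '/tms' },
  { name: 'OSCM', entry: '/OSCM/', container: '#subapp-container', activeRule: '/oscm' },
  // ...one entry per sub-application directory in the tree above
]);

start();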

For a long time, all of our applications were built and packaged on a build machine and then uploaded to the server, with Nginx serving the static file directory directly. The configuration was as follows:

server {
  listen 80;
  server_name xxx.test.demo.local;

  client_max_body_size 15M;
  access_log logs/halley.access.log log_access; # Access log path
  gzip on;
  gzip_min_length 1k;

  location ^~ / {
    root /mnt/work/h5/project-dist/;
    index index.html index.htm;
    try_files $uri $uri/ /index.html;
  }
}

This was originally a perfectly ordinary, widely used setup. However, due to problems with the internal ops platform, deploying a new version of the base application to production (testing and pre-release were unaffected) would wipe the entire project folder before publishing the new resources, instead of simply overwriting them. Every update therefore cleared out all of the sub-applications inside, so releasing the base application was effectively the same as re-releasing every application.

Although releases are not frequent (roughly twice a month), this becomes a heavy burden as the number of sub-applications grows, not to mention that some factory-facing applications cannot be interrupted during the day and can only be updated together in the middle of the night. Hence the idea of solving the problem once and for all.

However, after talking with the ops team, we learned that the current ops platform is essentially unmaintained and needs to be migrated to another system anyway.

Migration Process#

How Releases Work#

On the new platform, a container is used as the build environment: it runs the build.sh that the user keeps in the repository, pushes the build artifacts under the dist directory to OSS/S3, and internal routing then resolves the project URL to the OSS/S3 address of index.html.
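
In other words, each release boils down to roughly the following steps. This is only a sketch of the idea, assuming an S3-compatible CLI; the container image, bucket name, and paths are placeholders rather than the platform's real internals:

# Rough sketch of one release (image, bucket and paths are placeholders)
docker run --rm -v "$PWD":/workspace -w /workspace node:16-alpine sh build.sh  # build inside a container
aws s3 sync dist/ s3://static-bucket/xxx/xxx-project/ --delete                 # push the dist artifacts to oss/s3
# Internal routing then resolves the project URL to the uploaded index.html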

Project Update#

Write the Build Script#

Add build.sh to the project, with the following content:

#!/bin/sh
. ~/.profile  # load the build machine's environment
# Install dependencies from the internal registry
yarn install --registry http://npm.xxxx.local:7001 --ignore-engines
# CICD_STATIC_PATH is injected by the build platform; make sure it is exported before building
export CICD_STATIC_PATH && yarn build

The build platform stores the OSS address for this build's artifacts in the CICD_STATIC_PATH environment variable; the script reads it and passes it through to webpack so that resource URLs are generated against that address.

Update Webpack#

Adjust the relevant resource paths in webpack.config.js:

const publicPath = {
  dev: `http://${process.env.HOST}:${process.env.PORT}/`,
  production: process.env.CICD_STATIC_PATH
};
// The output section of webpack.config.js
output: {
  path: __dirname + '/dist/', // Packaged files are written here; in dev mode they only exist in memory
  publicPath: publicPath[process.env.NODE_ENV], // Base path that asset URLs referenced from index.html resolve against
  filename: process.env.NODE_ENV === 'dev' ? 'bundle.js' : 'bundle-[contenthash].js' // Output file name
}

Update Nginx Configuration#

server {
  listen 80;
  server_name xxx.test.demo.local;

  client_max_body_size 15M;
  access_log /var/log/nginx/halley.access.log; # Access log path
  gzip on;
  gzip_min_length 1k;

  location ^~ / {
    proxy_set_header X-Scheme $scheme; # Pass the protocol
    proxy_set_header Host apisix-area.test.demo.com; # Pass the domain name
    proxy_set_header X-Real-IP $remote_addr; # Pass the IP
    proxy_set_header REMOTE-HOST $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    add_header X-Rewrite-Result $uri;
    proxy_intercept_errors on;
    error_page 400 403 404 500 = 200 /index.html;
    proxy_pass http://apisix-area.test.demo.com/xxx/xxx-project/;
  }
}

Here, http://apisix-area.test.demo.com is provided by the ops team, and the static resource path is the one the shell script reads from the CICD_STATIC_PATH variable during the build.

After the change, Nginx no longer serves local static files: all requests are proxied, and any request that cannot be resolved falls back to the index.html hosted on the remote OSS. Since the project is built with React and handles routing on the client side, and static assets such as JS/CSS are automatically uploaded under the CICD_STATIC_PATH domain, proxying all failed requests to index.html on OSS is all that is needed.
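
A quick way to sanity-check this behavior from the command line (using the example host names from the configuration above; the route and the grep pattern are only illustrative):

# A route with no matching file on oss should still come back as index.html with HTTP 200
curl -sI http://xxx.test.demo.local/some/spa/route | head -n 1
# The returned page should reference bundles under the CICD_STATIC_PATH domain
curl -s http://xxx.test.demo.local/ | grep -o 'src="[^"]*bundle-[^"]*\.js"'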

Off-topic#

When working on the Nginx configuration, it is much more convenient to test it locally with a Docker container, which avoids endless back-and-forth about updating the server configuration. Otherwise a lot of time is wasted on finding the right people and going through the process, which is rather unpleasant.
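
For example, something along these lines (the config file name and host port here are just placeholders):

# Validate the local config with the official nginx image, then serve it on http://localhost:8080
docker run --rm -v "$PWD/project.conf":/etc/nginx/conf.d/default.conf:ro nginx:alpine nginx -t
docker run --rm -p 8080:80 -v "$PWD/project.conf":/etc/nginx/conf.d/default.conf:ro nginx:alpine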
