Netease Smart Enterprise Node.js Practice (2) | Smooth release and front-end code

2020/05/20 11:08

Health check

 

As mentioned earlier, we forward traffic to Node applications through the gateway. How does the gateway determine whether a Node application is available?

 

If traffic is forwarded to a Node application while it is being published, the requests will fail. Therefore, our gateway performs health checks on the Node application: before forwarding anything, it first makes sure the application is healthy, that is, able to serve external requests. Specifically, the gateway calls the application's health-check HTTP interface every 30 seconds. If the interface returns status code 200, the application is considered available and user requests are forwarded to it until the next check. If any other code is returned, the application is considered unavailable and no requests are forwarded; the process then repeats 30 seconds later.
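For illustration only, the polling protocol described above amounts to something like the following sketch (the real gateway is a separate component that we did not implement in Node.js, and the URL here is just the example endpoint introduced below):

```js
// Illustrative sketch of the gateway's health-check loop: poll every 30 s and
// forward traffic only while the last check returned 200.
const { setTimeout: sleep } = require('timers/promises');

let healthy = false;

async function healthCheckLoop(url) {
  while (true) {
    try {
      const res = await fetch(url);   // global fetch (Node 18+)
      healthy = res.status === 200;   // 200 means the app may receive traffic
    } catch (err) {
      healthy = false;                // unreachable counts as unhealthy
    }
    await sleep(30 * 1000);           // check again after 30 seconds
  }
}

healthCheckLoop('http://127.0.0.1:7001/health/check');
```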

 

Schematic Diagram

 

This scheme is very simple to implement: just add an HTTP interface to the Node application that responds normally. For example, we use '/health/check', whose controller simply sets ctx.body to 'OK'. If the Node application has started normally and can accept user requests, this interface returns code 200; if the interface cannot be accessed normally and returns some other code, the whole application is considered unavailable.
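As a minimal sketch (the file layout and controller name are illustrative, not necessarily the exact code of our project), such an endpoint in an Egg.js application could look like this:

```js
// app/router.js
module.exports = app => {
  const { router, controller } = app;
  router.get('/health/check', controller.health.check);
};

// app/controller/health.js
const { Controller } = require('egg');

class HealthController extends Controller {
  async check() {
    // Setting a body leaves the status code at 200, which the gateway treats as healthy.
    this.ctx.body = 'OK';
  }
}

module.exports = HealthController;
```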

 

Is this scheme free of problems? Certainly not. For example, when we publish, we first have to take the Node application offline. If the application goes offline right after passing a health check, the traffic forwarded to it during the next 30 seconds will fail. Therefore, we have an upgraded scheme: smooth publishing.

 

Smooth publishing

 

Smooth publishing requires cooperation from the publishing system. When we publish an application, the publishing system first calls the Node application's offline interface, and after publishing finishes it calls the online interface. In this way we control the application's status through a global variable that is independent of its actual state: after the offline interface is called, the status is set to offline, and we then wait for a period of time before actually taking the application offline, so any traffic that arrives during this window can still be served normally.
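Roughly, the calls the publishing system makes around a deploy look like the following sketch (the URLs, port, and wait time are assumptions made for illustration; our actual publishing system differs in detail):

```js
// Hypothetical publish flow: take the app offline, wait out one health-check
// interval (30 s in this article), deploy, then bring it back online.
const { setTimeout: sleep } = require('timers/promises');

async function deploy(host) {
  await fetch(`${host}/offline`); // the status interface now returns 500
  await sleep(35 * 1000);         // wait longer than one 30-second check interval
  // ... stop the old process, release the new code, start the new process ...
  await fetch(`${host}/online`);  // the status interface returns 200 again
}

deploy('http://127.0.0.1:7001');
```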

 

Schematic Diagram

 

The logic is very simple, but the implementation has to take Egg.js's multi-process model into account. Egg.js usually starts as many worker processes as the server has CPU cores, so that multi-core resources are fully used. Every process runs the same source code, and they all listen on the same port, so when the publishing system calls the offline interface, only one of the processes receives the request. If only that process's global variable were set to offline, the other processes would still report online when the health check reaches them, which is wrong. Therefore, inter-process communication is needed to tell all processes to go offline.
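A minimal sketch of that broadcast using Egg.js's built-in messenger (the event names and the ISONLINE symbol are illustrative; the real plug-in may name them differently):

```js
// app.js: every worker registers these listeners, so a broadcast from any
// worker via app.messenger.sendToApp updates the flag in all processes.
const ONLINE = 'ndp-online';
const OFFLINE = 'ndp-offline';
const ISONLINE = Symbol.for('ndp#isOnline');

module.exports = app => {
  app[ISONLINE] = false;

  app.messenger.on(ONLINE, () => {
    app[ISONLINE] = true;
  });
  app.messenger.on(OFFLINE, () => {
    app[ISONLINE] = false;
  });
};
```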

 

Based on this analysis, we implemented the Egg.js plug-in 'pp-ndp'. Since routes cannot be defined inside an Egg.js plug-in, we implemented it as middleware. The main code is as follows:

 

```
// Simplified middleware of the pp-ndp plug-in; the surrounding factory function
// is reconstructed here for context, and ONLINE / OFFLINE / ISONLINE are the
// shared constants used to broadcast and store the online status.
module.exports = (options, app) => {
  // online, offline, check and status are the (configurable) URLs of the four interfaces
  const { online, offline, check, status } = options;

  return async function ndp(ctx, next) {
    const { request } = ctx;
    const { path } = request;

    if (path === online) {
      // broadcast "online" to every worker process
      app.messenger.sendToApp(ONLINE, '');
      ctx.body = 'NDP: Nodejs Is Online';
    } else if (path === offline) {
      // broadcast "offline" to every worker process
      app.messenger.sendToApp(OFFLINE, '');
      ctx.body = 'NDP: Nodejs Is Offline';
    } else if (path === check) {
      // plain health check: reachable means the process has started
      ctx.body = 'NDP: Nodejs Start Success';
    } else if (path === status) {
      // smooth-publish status: reflects the global online flag
      if (app[ISONLINE]) {
        ctx.body = 'NDP: Nodejs Is Online';
      } else {
        ctx.status = 500;
      }
    } else {
      await next();
    }
  };
};
```

 

Of course, this scheme presupposes that there are multiple Node machines and that they are published in batches. If there is only one machine, none of this is necessary, since the release will inevitably interrupt the service anyway.

 

To meet different business needs, the 'pp-ndp' plug-in lets the online, offline, check, and status URLs be configured.
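Its configuration might look something like the following sketch (the config key and the default URLs shown here are hypothetical, used only to illustrate the idea):

```js
// config/config.default.js
exports.ndp = {
  online: '/online',         // interface the publishing system calls after a deploy
  offline: '/offline',       // interface the publishing system calls before a deploy
  check: '/health/check',    // plain "process is up" check
  status: '/health/status',  // smooth-publish status polled by the gateway
};
```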

 

This solution not only makes releases smooth and therefore less scary, it can also be used to bring the application online only when it is ready to serve well. For example, you can set the application to online only after it has fetched its configuration, or only after it has successfully registered with or connected to some service, so that the application always serves external traffic in its healthiest state.
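A sketch of that idea using Egg.js's beforeStart hook (loadRemoteConfig and connectToSomeService are hypothetical placeholders, and the event name matches the earlier sketch rather than any real API):

```js
// app.js
module.exports = app => {
  app.beforeStart(async () => {
    await app.loadRemoteConfig();      // hypothetical: fetch configuration first
    await app.connectToSomeService();  // hypothetical: establish required connections
    // only now broadcast "online", so the gateway starts forwarding traffic
    app.messenger.sendToApp('ndp-online', '');
  });
};
```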

 

Code on the CDN and code discovery

 

It may seem strange to see CDN mentioned here. The reason our Node applications need a CDN is that we keep the front-end code and the Node code in one application to make isomorphic rendering easier. This solves the problem of the server-side rendering code accessing the front-end build, but the client-side code should still, reasonably, be served from a CDN. There are already many articles about using a CDN with webpack, so here I mainly introduce how the front-end code is discovered, including uploading the code to the CDN and inserting the front-end code URLs into templates.

 

We mainly use 'webpack-manifest-plugin' to generate a file such as 'manifest.json', which maps each front-end resource name to its corresponding path, similar to:

 

```
{
  "vendor.js": "/static/f5e0281b/js/vendor.chunk.js",
  "vendor.js.map": "/static/f5e0281b/js/vendor.chunk.js.map",
  "Page.css": "/static/f2065164/css/Page.chunk.css",
  "Page.js": "/static/f2065164/js/Page.chunk.js",
  "Page.js.map": "/static/f2065164/js/Page.chunk.js.map"
}
```
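A minimal webpack configuration sketch that produces such a manifest (the output layout here is illustrative and does not exactly match the paths above; in webpack-manifest-plugin v3+ the plugin is a named export instead of the default export):

```js
// webpack.config.js
const ManifestPlugin = require('webpack-manifest-plugin'); // v2-style default export

module.exports = {
  output: {
    publicPath: '/static/',
    filename: 'js/[name].[contenthash:8].chunk.js',
  },
  plugins: [
    // writes manifest.json mapping e.g. "Page.js" -> "/static/js/Page.xxxxxxxx.chunk.js"
    new ManifestPlugin({ fileName: 'manifest.json' }),
  ],
};
```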

 

You only need to upload the files listed in this manifest to the CDN, instead of packing directories by hand, keeping each file's path the same when uploading. In our publishing process, the tool we implemented, 'pp-cdn', uploads them once compilation has finished. To reference the code in Node templates, we use 'pp-just', an Egg.js plug-in we developed, like this:

 

```html
<script src='{{ctx.just.use("Page.js")}}'></script>
```

 

Internally, this plug-in also reads the 'manifest.json' file and outputs the URL with the CDN domain name prepended. For example, the code above is rendered as:

 

```html
<script src='https://qiyukf.nosdn.127.net/huke/static/f2065164/js/Page.chunk.js'></script>
```
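Conceptually, what the template helper does boils down to something like this sketch (the real pp-just plug-in certainly differs in implementation; the domain is the example one shown above):

```js
// Bare-bones manifest lookup: read manifest.json once and prepend the CDN domain.
const manifest = require('./manifest.json');
const CDN_DOMAIN = 'https://qiyukf.nosdn.127.net/huke';

function use(name) {
  const file = manifest[name];
  if (!file) throw new Error(`Unknown front-end asset: ${name}`);
  return CDN_DOMAIN + file;
}

console.log(use('Page.js'));
// -> https://qiyukf.nosdn.127.net/huke/static/f2065164/js/Page.chunk.js
```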

 

The real point of doing this is to take advantage of having multiple versions of the front-end code. We use file hashes as part of the file path for version control, so after every release and compilation the newly generated file paths are written into 'manifest.json', and the latest version of the code is then picked up through the method above.

 

Of course, keeping the Node code and the front-end code together is not ideal, since it can lead to unnecessary releases; later they should be completely separated. Even then, 'manifest.json' can remain one of our solutions after the code is split.

 

Summary

 

In general, the choice of technical solution should be based on the team's existing technology and the needs of the business. The smooth-release scheme introduced in this article did solve our release problems in the early stage of the business and made releases more reliable.

 

But as the business grows, we need a grayscale environment to better guarantee application health and to discover problems in advance. We also need to know how our applications are running, so in the next article we will share content related to grayscale publishing and application monitoring.
