Let Docker use the npm cache to speed up installation when building Node.js applications

Under the original deployment script, every deployment of my Node.js application took more than ten minutes. My project is not a good fit for multi-stage builds, so I tried various ways to use image layers to cache part of the installation process, but none of them worked well.

It seems that if you want to avoid those problems, you should run the installation completely at deployment time; in that case the only thing left to shorten deployment is the npm cache itself.

Step 1: Mount an external cache

I used to think there was no way to bind a volume during a Dockerfile build, the way you can with a running container, to persist files produced during the build. But yesterday I found that this just requires a newer Docker: it is part of the BuildKit feature set, available since Docker 18.09. In short, upgrade Docker to a recent version and you can use it.
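
As a quick sanity check (a minimal sketch; the version string and image tag are just examples, not from my actual setup), you can confirm the daemon is new enough and force a BuildKit build like this:

 docker version --format '{{.Server.Version}}'   # needs to be 18.09 or newer
 DOCKER_BUILDKIT=1 docker build -t myapp .       # build the current directory with BuildKit
 # On older Docker versions you may also need a syntax directive such as
 # "# syntax=docker/dockerfile:1" at the top of the Dockerfile before --mount is recognized.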

The method is simple, as shown in the Dockerfile:

 FROM node:18.18.1-buster-slim
 RUN apt update && \
     apt install -y git openssh-client python3 curl
 COPY ./deploy /deploy
 COPY ./app/ /app
 # Then execute your installation script with the npm cache mounted
 RUN --mount=type=cache,target=/root/.npm sh /deploy/install.sh
 WORKDIR "/deploy"
 ENTRYPOINT ["/bin/sh"]
 CMD [ "./start.sh" ]
 EXPOSE 80

In this way, '/root/.npm' inside the build container is mapped to a shared Docker build cache while the Dockerfile is being built. I tested attaching the same cache to other images and it can be used during their builds as well, but those images share the same FROM image; I don't know whether the cache mount is affected if the base image changes.

Note: ~/.npm is npm's default cache directory on Linux, but that default can be changed. If your project changes npm's cache directory, change the mount target here as well. Likewise, if your platform is unusual and npm's default cache is somewhere else, adjust it accordingly.
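
If you are not sure where the cache actually lives, a minimal sketch (assuming a stock npm install, run as root inside the build image):

 npm config get cache    # usually prints /root/.npm for root on Linux
 # If it prints something else, use that path as the mount target, e.g.
 #   RUN --mount=type=cache,target=<path printed above> sh /deploy/install.sh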

Besides mounting a cache (type=cache), you can also bind external files or directories (type=bind). The relevant documentation is here: https://docs.docker.com/build/guide/mounts/ However, the document is a bit confusing: it doesn't clearly say whether source and target refer to the inside or the outside of the build. I put the external path in source and it told me the file did not exist, so I just used the cache type and didn't try again.
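
For what it's worth, my understanding (a hedged sketch I have not verified in my own build, since I stuck with the cache type) is that for type=bind the source is a path inside the build context and the target is where it appears inside the build container:

 # Hypothetical bind-mount example: package.json from the build context is made
 # visible (read-only by default) at /app/package.json only for this RUN step.
 RUN --mount=type=bind,source=package.json,target=/app/package.json \
     cat /app/package.json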

If you think the files cached during the build are causing problems, or you simply want to clear these caches, you can follow the method here and run the following command:

 docker builder prune --filter type=exec.cachemount

If that has no effect, try removing the --filter option and its argument.
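
A small sketch of how I would check and then clear it (note that pruning without the filter clears all unused build cache, so use it deliberately):

 docker system df                                   # see how much build cache Docker is holding
 docker builder prune --filter type=exec.cachemount # clear only the cache mounts
 docker builder prune                               # fallback: clear all unused build cache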

Step 2: Make npm prefer the cache (optional)

When installing dependencies, npm sends a request to the server for every package to check whether the local cache has expired, even when a local cache exists, and this also takes a long time. In practice an expired cache is usually not a problem: the contents of a package with the same version number normally do not change. So you can tell npm to prefer the local cache and skip checking its status online, which greatly reduces installation time.

The method is very simple: just add the '--prefer-offline' flag to the install command:

 npm i --prefer-offline
 # You can also add a --verbose flag to confirm that the cache is really being used
 npm i --prefer-offline --verbose
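
If you would rather not edit the install command itself, a possible alternative (an assumption on my part, not something I use in this setup) is to put the preference into the project's .npmrc so every install picks it up:

 echo "prefer-offline=true" >> .npmrc   # hypothetical: persist the preference per project
 npm i --verbose                        # subsequent installs read it from .npmrc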

After this, deployments of the application dropped from more than ten minutes to a little over one minute without any other changes, which counts as rocket-grade acceleration. It finally solved a problem that had bothered me for more than a year.

[docker] GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.secrets was not provided by any .service files

This is a strange problem. I was deploying several images here, and when it got to one of them, it suddenly reported this error:

 failed to solve: node:18.18.1-buster-slim: error getting credentials - err: exit status 1, out: `GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.secrets was not provided by any .service files`

After some searching, I found that installing the missing dependency with 'apt install gnome-keyring' solved it, but why it suddenly started failing remains a mystery.
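
For reference, the fix plus a workaround I have seen suggested elsewhere (the credsStore part is hearsay on my side, not something I needed here):

 sudo apt install -y gnome-keyring
 # Alternatively, some people report that removing the "credsStore" entry from
 # ~/.docker/config.json makes Docker store credentials in the file itself,
 # which also avoids the org.freedesktop.secrets lookup.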

[docker] ERROR: Service '***' failed to build: the --mount option requires BuildKit.

I used the --mount flag in a Dockerfile. At first it kept reporting 'ERROR: Dockerfile parse error line 8: Unknown flag: mount'; then I found that my Docker version was too old. After upgrading Docker, I got a different error:

 ERROR: Service '***' failed to build: the --mount option requires BuildKit. Refer to https://docs.docker.com/go/buildkit/ to learn how to build images with BuildKit enabled

I had already set the two environment variables:

 export DOCKER_BUILDKIT=1
 export COMPOSE_DOCKER_CLI_BUILD=1

After some research, I found the cause was the program I was invoking: I was running docker-compose, which was still version 1.17.1, while Docker also ships a built-in Compose at 2.18.1. Simply changing the command to 'docker compose' fixed it.
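
A quick way to tell which Compose you are actually running (a small sketch; the version numbers will differ on your machine):

 docker-compose version                   # the standalone v1 binary (1.x in my case)
 docker compose version                   # the Compose plugin bundled with newer Docker (2.x)
 DOCKER_BUILDKIT=1 docker compose build   # BuildKit-enabled build via the plugin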

[nodejs] Use devDependencies and dependencies at the same time to handle local dependency links

I made a toolkit for my own use, but I wanted to use parts of it separately in other projects, so I put those parts under a namespace and published them as packages.

That created a problem. These packages originally belonged to one project, and the modules in it depend on each other. Since they are now released separately, the dependency targets in package.json can no longer be written as local paths; they have to be changed to published version numbers or project addresses.

During local development, if a dependency target is a local path, running npm i creates a directory link to that target, so changes in the dependency can be tested directly in the current project without publishing or copying anything. But once the dependency target is changed to a published version number, even if you manually delete the installed module from node_modules and manually create the directory link, running an npm command will in some cases delete the link you created and re-download the published version, so during development you can end up unknowingly working against the wrong dependency.

I just tested this and found that a module can appear in both devDependencies and dependencies in package.json. By default, npm looks at devDependencies first, and if the module target there is valid, that is what gets installed. So to solve the problem above, write the package you need to develop locally in both fields: put the local path in devDependencies and the published version in dependencies. When the target path exists locally, npm will create the link for you instead of downloading the published version. For example:

 { "devDependencies": { "@jialibs/utils": "file:../Utils" }, "dependencies": { "@jialibs/utils": "^1" } }

There are two things to note. First, if the version of the dependent module changes, remember to update the version in dependencies as well, otherwise installing from npm elsewhere will pull in the old version. Alternatively, do what I did and pin only the major (or minor) version, so that as long as the major or minor version doesn't change, other machines will always download the latest release (if it has already been installed, you need to clear package-lock.json, otherwise npm will keep installing the version recorded in the lock file).
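
A small sketch of refreshing the lock after bumping the published version (this assumes you are fine regenerating the whole lock file):

 rm -rf node_modules package-lock.json
 npm i    # re-resolves "@jialibs/utils": "^1" to the newest matching release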

Second, if the npm i command in the production environment is explicitly run with --omit=dev, the packages in devDependencies will not be installed. In that case you can try moving the local-path entries into 'peerDependencies'.

[Jiajia Teardown] Yamaha YAS-107 Soundbar

Today I took it apart to check whether I had actually plugged in the subwoofer connector on the back of the soundbar, and also to see whether any strange creatures had moved in over the past few years.

After removing all the screws I found it still wouldn't open. A video showed that there is a ring of rubber or shock-absorbing foam around the edge and the whole thing snaps together very tightly; don't expect to pull it apart by hand. I worked a rounded pry tool around it bit by bit before it finally opened.

Read on [Jiajia Teardown] Yamaha YAS-107 Soundbar

QQNT creates duplicate resource files every month

QQNT is the new-architecture QQ client Tencent released last year. I updated to it last December, so it has been in use for three months, and today I noticed what looks like a problem: duplicate files are generated for received picture and video resources.

The original PC QQ put all received pictures in one directory and all videos in another, named according to QQ's own hash rules. When the client receives a message and needs to load a picture or video, it first looks in the resource directory by the resource hash, and only downloads a copy from the server if it isn't there. This is a very common practice.

QQNT handles received resource files in basically the same way, with one difference: inside each resource type's directory it adds a layer of directories named by year and month. This "year-month" is the year and month the message containing the attachment was sent (I even have a 2019-06 resource directory). As a result, message attachments (pictures, emoticons, videos) are not shared across months: an emoticon received in last month's messages effectively no longer exists for QQ this month, so a new copy is downloaded. The same goes for thumbnails, because each month's directory also has its own thumbnail subdirectory.

QQNT stores these resources under "C:\Users\<user name>\Documents\Tencent Files\<QQ number>\nt_qq\nt_data", where "Emoji" holds emoticons, "Pic" holds pictures other than emoticons, and "Video" holds any received videos. All of them are split into year-month subdirectories. I used Everything to search the Emoji directory directly and sorted by file size, and found a large number of identical emoticon files in the directories of different months.

The following screenshot shows the same emoticon file found in both the January and February directories in Everything.




There are many more duplicate emoticon and image files. Over time, this behavior of not sharing attachment resources across months can make disk usage grow quickly, depending on how many messages you receive, with a pile of duplicate files created every month (I have confirmed that the identical files appearing each month are separate files, not hard links).

At the moment I don't understand why it is done this way. If the goal is to avoid having so many files in one directory that file system operations slow down, it could simply follow the approach used in its "avatar" directory and split subdirectories by the first characters of the file hash.
If QQNT keeps storing files this way, users should clean up local duplicate files regularly. As for the server resources wasted on repeated downloads, well, that's not the user's problem 🤔.

Read on QQNT creates duplicate resource files every month

TypeError: Descriptors cannot not be created directly. Downgrade the protobuf package to 3.20.x or lower

This error appeared when starting sd-webui today. Since I had updated all the extensions before starting, it is likely that some extension's dependency version conflicts. In that case, downgrade the module as the error message suggests.

First, open cmd and activate the project's venv; if the venv is not activated, the package will be installed into the global environment. Then run:

 pip install protobuf==3.20.2

After it finishes, you may still be told that the version is incompatible with some other module, but in my case the webui started normally anyway.
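
If you want to double-check the result, a couple of optional sanity checks (nothing specific to sd-webui, just standard pip commands):

 pip show protobuf    # confirm the installed version is now 3.20.2
 pip check            # list any packages whose requirements now conflict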

Game frame rate plummets after upgrading to Windows 11

After upgrading from Win10 to Win11, I first noticed that VR frame rates were extremely low. Then I found it wasn't just VR: World of Warships also ran at a very low frame rate, even though GPU usage in Task Manager was close to 90%. I looked for solutions online: GPU hardware-accelerated scheduling settings, disabling optimizations for windowed games, turning off the Game Bar; I tried them all and none of it helped.

Until I happened to open Afterburner and saw that the graphics card was running at a very low clock. I simply reset its frequency configuration, and performance instantly returned to normal.

I suspect some Win11 power-saving feature modified the graphics card's settings.

[TrueNAS] Solving the problem of a USB disk enclosure making multiple hard disks show up as one

This article will not help you make the missing disks show up in the TrueNAS GUI; instead, it creates the pool with commands.

Most USB disk enclosures and docks hard-code a single serial number and report it to the system for every connected disk (I have no idea what brain-dead logic is behind that), and since TrueNAS itself distinguishes disks by serial number, only one of the disks you plug in will appear in its disk list.

Although TrueNAS does not display the other disks with the same serial number, they still have device names in the system, so you can actually create a pool with them directly from the command line.

The following steps require shell commands. Enable the SSH service and connect over SSH; do not use the GUI terminal, because its session timeout is too short.

First, find the disks that are missing from the list. Run the following command and you will see the device names Linux assigned to the disks, such as sd[abcdefg...], in alphabetical order:

 lsblk

If you have a lot of disks plugged in and can't tell which one is missing, go through the device names in the GUI disk list. For example, my list was missing sdc; then check the size of the sdc device in the lsblk output, and if it matches the disk you just added, that's the one.
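
To make the duplicate serial numbers obvious, a small sketch using lsblk's column selection:

 lsblk -o NAME,SIZE,SERIAL,MODEL   # disks behind the enclosure will show the same SERIAL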

Once you know the device names, you can follow any ZFS reference to create the pool manually, for example: https://docs.oracle.com/cd/E19253-01/819-7065/gaynr/index.html

For reference, I made a RAID1 here. Run the following command to create a mirrored pool, with the sdc and sdd disks as the mirror:

 zpool create <pool name> mirror sdc sdd

Your pool is now created, but you will find that it does not appear in the GUI's storage tab. It does show up under datasets, yet after creating a dataset it cannot be shared. This is because a pool created directly with commands skips some steps: not only is its mount point wrong, it also isn't recorded in the system's list of pools, so most operations on it are not allowed.

To fix this, first export the pool:

 zpool export <pool name>

Then use the GUI's import pool function to import it, and everything about the pool will work normally.

Finally, since TrueNAS distinguishes disks by serial number, do not perform disk replacement or similar operations in the GUI on a pool created this way; it may cause problems! Use the zfs and zpool commands for those operations instead.

 

Another hacky workaround:

If, like me, you run TrueNAS in a virtual machine hosted on Windows, you can format the new disk as NTFS on the host, create a VHD on it, and attach that VHD to the VM as a virtual disk. Each VHD then gets its own serial number. However, this adds a layer of processing overhead and increases the risk of unexpected errors, so personally I suggest avoiding it unless you get other benefits from using VHDs. Hosts on other platforms can likewise use other virtual disk formats to obtain distinct serial numbers.

[TrueNAS] Replacing a failed disk in the system (boot) pool

The system (boot) pool in TrueNAS cannot be managed directly from the GUI, so replacing a failed disk has to be done with commands. Here is a record.

Check the ZFS status first:

 # zpool status
   pool: boot-pool
  state: DEGRADED
 status: One or more devices are faulted in response to persistent errors.
         Sufficient replicas exist for the pool to continue functioning in a
         degraded state.
 action: Replace the faulted device, or use 'zpool clear' to mark the device
         repaired.
   scan: scrub repaired 0B in 00:00:39 with 0 errors on Sat Nov  4 03:45:40 2023
 config:

         NAME        STATE     READ WRITE CKSUM
         boot-pool   DEGRADED     0     0     0
           mirror-0  DEGRADED     0     0     0
             sdc3    FAULTED      2    17     0  too many errors
             sdd3    ONLINE       0     0     0

We can see that the sdc disk has failed, so it needs to be replaced.

First, offline it:

 zpool offline boot-pool sdc3

Then unplug that disk, insert a new one, and check the disks page of the web GUI to see what device name the new disk received. In my case it was sde, so I replace sdc3 with sde:

 zpool replace -f boot-pool sdc3 sde

Since the boot pool holds very little data, resilvering only takes a short while. In the meantime you can keep running 'zpool status' to check the progress.
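
A small sketch for watching the resilver finish (assuming the pool name boot-pool as above):

 watch -n 10 zpool status boot-pool   # done when both mirror members show ONLINE with 0 errors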

Preventing the automatic upgrade of the Google Play framework from making the "Play Protect certification" error reappear

Recently I unexpectedly got a Huawei Mate 60 Pro from my father. Although I would normally never use a phone that can be neither rooted nor flashed, I couldn't just leave it sitting around. My Xiaomi Mix 2S, which runs LineageOS, has long had an annoying problem where the audio of Bluetooth WeChat calls doesn't come out of the headset, so this time the Mate 60 Pro became the main phone and the Mix 2S became the backup. The Pixel XL that used to be the backup was officially retired, because its flash memory started failing and some photos or videos were getting corrupted.

After switching, the first task was to install the Google framework; otherwise even Chrome won't work. Although the phone isn't rooted, there is fortunately a way, described online, to restore the Google framework using the system's backup-and-restore mechanism. To keep it short: after installing everything that should be installed with the Huagu suite app, the remaining problem of failing Play Protect verification was finally solved. The last step in Huagu is paid, but free tutorials for it are easy to find online, such as this one. After solving the problem, however, I found the error message reappeared a day later, and fiddling with these apps back and forth tormented me for several days.

The core problem is that the do-it-yourself method of registering the GSF ID passes verification under version 20 (maybe 21) of "Google Play services", but that app automatically upgrades itself to the latest version (currently 23), and the GSF ID changes at the same time. I don't know whether it is the ID or the verification method that changed, but registering a new ID and clearing app data following the usual steps is useless under version 23; you have to go back to the old Play services to register before it will verify.

Searching around, I found someone else who hit the same problem. He tried to keep Play services on the old version by using various settings to block the relevant apps from accessing the network. That has its own problems: sometimes the new version of Play services is needed to complete certain operations, and there is no guarantee that some accident won't update it again, which is a pain. I tried it too; no matter whether I blocked network access for Play services or the Play framework, they still updated themselves, and I obviously can't keep the Play Store offline forever.

Then I found this method, and I haven't run into any problems in the days since. In short: after the "Play Protect certification" problem is solved, directly disable the "Google Services Framework" with the following adb command:

 adb shell pm disable-user 'com.google.android.gsf'

After that, Play services can no longer obtain the GSF ID; you can confirm in the "Device IDs" app that the GSF ID can no longer be read. My guess at the principle: if Play services can't get the ID, it can't enter the next round of verification, so the existing certification status is kept, and Play services itself can still be updated.
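
In case you ever need to undo this (say, some app misbehaves without the framework), my understanding is that the matching commands are as follows (a sketch, with the same caveats as any adb tinkering):

 adb shell pm list packages -d | grep gsf     # confirm com.google.android.gsf is disabled
 adb shell pm enable 'com.google.android.gsf' # re-enable it if something breaks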

Read on Preventing the automatic upgrade of the Google Play framework from making the "Play Protect certification" error reappear

[nginx] the “listen … http2” directive is deprecated, use the “http2” directive instead

New versions of nginx emit the warning in the title. The http2 setting has been split out of the listen directive. The fix is simple: remove http2 from the listen line and add an 'http2 on;' directive instead.

Original:

 server {
     listen [::]:443 http2 ssl;
     .....
 }

Change to:

 server {
     listen [::]:443 ssl;
     http2 on;
     .....
 }
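
After editing, the usual check-and-reload applies (standard nginx commands, nothing specific to this change):

 nginx -t            # verify the new configuration parses
 nginx -s reload     # then reload to pick it up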

It's that simple, so why write this post at all? Because when I searched for this warning, every result was a copy of the same wrong article, a garbage post that inexplicably tells you to delete http2 and stops there, never mentioning the new directive you need to add. That made me angry.

Solving the problem of hard disks not showing up in CrystalDiskInfo after they become SCSI devices

A few days ago I bought a 4 TB Acer Predator GM7 and installed it in the second M.2 slot on the motherboard, then found that CrystalDiskInfo (CDI from here on) would not show it no matter how I changed the settings or rescanned. DiskGenius showed the disk's interface as SATA, which I knew had to be the software guessing wrong, but I didn't know exactly where the problem was.

Searching online suggested it was related to AHCI and RAID mode, but in the BIOS the AHCI setting only affects SATA disks, so the question of why an NVMe SSD was being recognized as a SCSI device remained. I then wondered whether it was something about the second M.2 slot's connection, so I swapped the system disk from the primary M.2 slot with it; great, now both disks showed up as SCSI devices. Then I desperately tried deleting the driver so they would be recognized as NVMe devices again, and as a result spent the afternoon plus three hours of the evening repairing the system, because the machine would restart right after the loading screen. The startup log showed that no other drivers could load after disk.sys; clearly, once the disk driver loaded, disk reads and writes were failing, which of course was related to the driver I had deleted.

Later I managed to install a system onto this disk by putting it in a virtual machine on another computer. After getting it to boot on the original host, I opened CDI and found that no disks could be scanned at all. I remembered that I had switched AHCI to RAID in the BIOS, but switching back to AHCI made the system unbootable again, so I started to suspect there really was a driver that lets the disks work properly under the SCSI controller. I checked the driver list for my motherboard's chipset on AMD's official site and there were indeed RAID-related drivers; after installing them, all the disks showed up in CDI. I do think MSI deserves criticism here for listing only some of the motherboard's drivers: if their page had included this driver, I would have found it at the start and saved myself all the trouble that followed.

Although all my disks have accidentally been turned into SCSI devices, they work as usual once the driver is installed, and SMART information is read correctly too. I noticed many people online asking the opposite question, how to change a disk back from a SCSI device, because without the relevant driver CDI and similar tools cannot read the disk's SMART data, leaving you blind to its health. I still don't know how to change it back, but installing the RAID driver is also a solution: even though we can't use its RAID features, it provides the interface needed to read that information.

Daily life of older single dogs