Typecho's native search is really inefficient. Then I came across Panda A's Typecho search enhancement plugin, ExSearch, and it works really well!

So I did some research and prepared to use it in my own theme.

The implementation principle of Panda A's plugin is as follows:

  1. The plugin generates JSON data for every article in the database (and uses Typecho's hooks to keep the data in sync when articles are published or modified). The data can be stored in two ways: as a file or in the database (a rough sketch of this step follows the list).
  2. When the blog is opened, the JSON data is loaded on the front end.
  3. When a search is performed, the JSON data is matched directly on the front end.
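
As a rough sketch of step 1 (my own illustration, not the plugin's actual code), the cache file could be built on the PHP side roughly like this; `get_all_posts()` and the `title`/`text`/`permalink` fields are hypothetical placeholders for however the articles are actually fetched:

```php
<?php
// Hypothetical sketch: build a search.json cache from all published articles.
// get_all_posts() stands in for whatever query the plugin uses to fetch posts.
function build_search_cache(string $cacheFile): void
{
    $posts = get_all_posts(); // assumed to return an array of associative arrays

    $data = array();
    foreach ($posts as $post) {
        $data[] = array(
            'title'     => $post['title'],
            // strip HTML so the search only has to match plain text
            'text'      => strip_tags($post['text']),
            'permalink' => $post['permalink'],
        );
    }

    // JSON_UNESCAPED_UNICODE keeps Chinese text readable in the cache file
    file_put_contents($cacheFile, json_encode($data, JSON_UNESCAPED_UNICODE));
}

// In the real plugin this step is triggered by Typecho's publish/modify hooks
// so the cache stays in sync with the articles.
```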

But when I used the code on my blog, there were some problems.


When there are a large number of articles (my blog currently has 217), the generated JSON is very large; mine came out at 1.9 MB. Fetching it with jQuery's getJSON is time-consuming (19 s in my test, which also depends on the server, although it does not block page rendering).

Two problems arise:

  1. While the JSON data is still being requested, the search function is unavailable.
  2. The oversized JSON data causes unnecessary traffic.

(Panda A's version requests the JSON file as soon as the blog is opened. Personally, I think search is not a frequently used feature, so I only request the JSON data when the search box is clicked.)


Of course, the above two problems are not particularly serious.

But if your blog has many articles, the JSON data will be very large, which can lead to very long request times and unnecessary traffic.


My solution: instead of loading all of the articles' JSON data on the front end at once, PHP uses a SESSION variable to cache the JSON data after it has been read once:

```php
if (empty($_SESSION['search_cache'])) {
    $object['status'] = false;
    // If the data were stored in the database, it would be read from the database.
    // Here it is read from the cache file instead.
    $filePath = __TYPECHO_ROOT_DIR__ . __TYPECHO_PLUGIN_DIR__ . DIRECTORY_SEPARATOR
        . 'Handsome' . DIRECTORY_SEPARATOR . 'cache' . DIRECTORY_SEPARATOR . 'search.json';
    $file = file_get_contents($filePath);
    if ($file === false) {
        // The cache file could not be read: cache an empty result and return it
        $fail = "{}";
        $_SESSION['search_cache'] = $fail;
        echo $fail;
    } else {
        $_SESSION['search_cache'] = $file;
    }
} else {
    $object['status'] = true;
}
// Process the search data and return the results ...
```

By monitoring the value of the input box, the front end can keep issuing AJAX requests, and the back end can answer them directly from the SESSION variable.
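
As a minimal sketch of that back end (again my own illustration under assumptions: the request parameter is called `q`, and the cached JSON uses the `title`/`text`/`permalink` fields from the sketch above), each AJAX request could be answered like this:

```php
<?php
// Answer one search request using only the JSON cached in the session.
session_start();

// Keyword typed into the search box, sent with every AJAX request
$keyword = isset($_GET['q']) ? trim($_GET['q']) : '';

// Filled from search.json on the first request (see the snippet above)
$cache = isset($_SESSION['search_cache']) ? $_SESSION['search_cache'] : '[]';
$posts = json_decode($cache, true);
if (!is_array($posts)) {
    $posts = array();
}

$results = array();
if ($keyword !== '') {
    foreach ($posts as $post) {
        // Case-insensitive, multibyte-safe match against title and body (needs mbstring)
        if (mb_stripos($post['title'], $keyword) !== false
            || mb_stripos($post['text'], $keyword) !== false) {
            $results[] = array(
                'title'     => $post['title'],
                'permalink' => $post['permalink'],
            );
        }
    }
}

header('Content-Type: application/json; charset=utf-8');
echo json_encode($results, JSON_UNESCAPED_UNICODE);
```

No database query and no file read happens here, which is why repeated searches stay cheap.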

Advantages: however many times you search, the file is only read once (after that, the SESSION variable is used).
Disadvantages: multiple searches mean multiple AJAX requests (but these requests read neither the database nor the file, so the overall cost is very small and they are fast).

(The previous implementation also monitored the input value to trigger AJAX requests, but each request queried the database, which was very slow.)

Of course, this method is just a trade-off. No matter how fast the AJAX request is, it is still affected by server load and network conditions, whereas loading all the JSON data up front really does deliver "instant search".


You can see the effect live on my blog right now.

If you have a better idea, please discuss it in the comment area~

Last modification: July 30, 2019