
Notes on a Large ZBLOG Database Conversion


Per a customer's requirements, four ZBLOG ASP sites were to be merged into a single ZBLOGPHP site, with every article URL required to stay unchanged.

A brief overview of the four sites' data:

1. The main site runs on MSSQL; the three sub-sites use Access.

2. The articles add up to more than 6,000.

3. Just under a million comments.

4. Probably fewer than a thousand tags.

5. And a dozen or so users in total, it seems.

The first plan is always the simple one

When I took the job the plan seemed straightforward. What I hadn't expected was that once the data volume gets this large, the official conversion tools all fall over:

1. The A2P Z-BlogPHP conversion plug-in randomly hangs and produces garbled text while exporting. Unusable.

2. MovableType-format export. Fine for exporting a hundred or so articles at a time, but with this much data it is unusable too.

And it seems those are the only two plug-ins there are?

The problems

More troubling still, nobody seems to have merged this many ASP sites into one PHP site before, and the operation raises several problems:

1. Article IDs collide: each site numbers its IDs from 1, so they overlap across sites. Categories, comments, tags, and users have the same problem.

2. Blindly renumbering the article IDs scrambles the comments: article A's comments can end up under article B. Categories and tags suffer the same way (made concrete in the sketch after this list).

3. When no article alias is set, a ZBLOG article's URL is derived from its article ID, so changing the ID changes the URL.
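
To make problem 2 concrete, here is a minimal sketch in the same SQL style as the rest of this post (the offset is illustrative; the real values appear in step 3 below). Comments reference their article through log_ID, so any offset applied to the articles has to be applied to the comments' article reference in the same pass, or every comment comes loose from its article.

-- Renumbering articles alone mismatches the comments;
-- the article reference in blog_Comment must move in lockstep:
UPDATE blog_Article SET log_ID = log_ID + 3333;
UPDATE blog_Comment SET log_ID = log_ID + 3333;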

Solving the problems

After some thought and testing, the problems can be handled by combining the following steps.

1. Back up the primary IDs of articles, categories, comments, and tags in all four databases, and record which site each row came from. Because Access may not execute multiple SQL statements at once, the statements have to be run one by one:

-- Back up the article ID
ALTER TABLE blog_Article ADD COLUMN log_ID_backup CHAR(255);
UPDATE blog_Article SET log_ID_backup = log_ID;

ALTER TABLE blog_Article ADD COLUMN log_from CHAR(255);
UPDATE blog_Article SET log_from = 'lusongsong';

-- Back up the category ID
ALTER TABLE blog_Category ADD COLUMN cate_ID_backup CHAR(255);
UPDATE blog_Category SET cate_ID_backup = cate_ID;

ALTER TABLE blog_Category ADD COLUMN cate_from CHAR(255);
UPDATE blog_Category SET cate_from = 'lusongsong';

-- Back up the comment ID (and its article ID)
ALTER TABLE blog_Comment ADD COLUMN comm_ID_backup CHAR(255);
UPDATE blog_Comment SET comm_ID_backup = comm_ID;

ALTER TABLE blog_Comment ADD COLUMN log_ID_backup CHAR(255);
UPDATE blog_Comment SET log_ID_backup = log_ID;

ALTER TABLE blog_Comment ADD COLUMN comm_from CHAR(255);
UPDATE blog_Comment SET comm_from = 'lusongsong';

-- Back up the tag ID
ALTER TABLE blog_Tag ADD COLUMN tag_ID_backup CHAR(255);
UPDATE blog_Tag SET tag_ID_backup = tag_ID;

ALTER TABLE blog_Tag ADD COLUMN tag_from CHAR(255);
UPDATE blog_Tag SET tag_from = 'lusongsong';

2. The categories and publishing users are few, so they are handled manually: add the sub-sites' categories to the main site as new categories, then remap the old IDs in each sub-site's data:

-- Map each sub-site's categories to their new IDs on the main site
-- sub-site "blog":
UPDATE blog_Article SET log_CateID = 8  WHERE log_CateID = 1;
UPDATE blog_Article SET log_CateID = 9  WHERE log_CateID = 2;
UPDATE blog_Article SET log_CateID = 10 WHERE log_CateID = 3;
-- sub-site "info":
UPDATE blog_Article SET log_CateID = 11 WHERE log_CateID = 1;
-- sub-site "yulu":
UPDATE blog_Article SET log_CateID = 12 WHERE log_CateID = 1;
UPDATE blog_Article SET log_CateID = 13 WHERE log_CateID = 2;
UPDATE blog_Article SET log_CateID = 14 WHERE log_CateID = 3;
UPDATE blog_Article SET log_CateID = 15 WHERE log_CateID = 4;
-- Remap the sub-sites' article authors to the merged user IDs
UPDATE blog_Article SET log_AuthorID = 15 WHERE log_AuthorID = 2;
UPDATE blog_Article SET log_AuthorID = 19 WHERE log_AuthorID = 3;
UPDATE blog_Article SET log_AuthorID = 18 WHERE log_AuthorID = 4;
UPDATE blog_Article SET log_AuthorID = 20 WHERE log_AuthorID = 5;
UPDATE blog_Article SET log_AuthorID = 21 WHERE log_AuthorID = 6;
UPDATE blog_Article SET log_AuthorID = 22 WHERE log_AuthorID = 7;
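
A quick sanity check after remapping, with illustrative bounds (in the examples above the old category IDs ran from 1 to 4 and the new ones start at 8):

-- Expect zero rows still carrying an old category ID:
SELECT COUNT(*) FROM blog_Article WHERE log_CateID BETWEEN 1 AND 4;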

3. In the three sub-site databases, offset the primary IDs of articles, comments, and tags so nothing is overwritten on import (each sub-site needs its own offset, or the sub-sites would still collide with one another):

-- Remove the primary-key attribute and AutoNumber from log_ID first
UPDATE blog_Article SET log_ID = log_ID + 3333;
-- Too many comments to change the column properties in the local test,
-- so write the offset ID into a new field instead
UPDATE blog_Comment SET comm_ID2 = comm_ID + 555555;
UPDATE blog_Tag SET tag_ID = tag_ID + 222;

4. Use Navicat to import the main site's database into MySQL first.

5. Use a plug-in to migrate the imported data into the ZBLOGPHP tables. (I won't go over how to write the plug-in; what matters is the approach to writing the database.) The crucial point is that an operation over hundreds of thousands of comments must be paginated rather than done in one pass. In a local test I converted everything in one go: it wasn't memory or the CPU that gave out, php-cgi simply crashed.
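
As a sketch of the pagination idea (MySQL syntax; the column list is illustrative, only comm_ID and log_ID appear earlier in this post), the plug-in reads one bounded batch per iteration instead of the whole table:

-- One batch per iteration; advance OFFSET by the batch size:
SELECT comm_ID, log_ID
FROM blog_Comment
ORDER BY comm_ID
LIMIT 1000 OFFSET 0;  -- next batches: OFFSET 1000, 2000, ...

With hundreds of thousands of rows, seeking on the last seen ID (WHERE comm_ID > the previous batch's maximum) scales better than an ever-growing OFFSET.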

6. Repeat steps 4 and 5 for the three sub-sites' data to finish the conversion. A check of the converted data found no anomalies.
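
For the check, simple row counts against the converted tables are enough to catch gross losses; a sketch, assuming ZBLOGPHP's default table names and the totals from the overview at the top:

SELECT COUNT(*) FROM zbp_post;     -- expect a bit over 6,000 articles
SELECT COUNT(*) FROM zbp_comment;  -- expect just under 1,000,000 comments
SELECT COUNT(*) FROM zbp_tag;      -- expect under 1,000 tags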

7. Handling the article URLs has two parts:

7.1 First, intercept the interface that outputs article URLs (the current ZBLOGPHP 1.4 doesn't provide one, so you first have to write one yourself) and modify the source that builds the output format.

7.2 Intercept the system's ViewAuto stage: when an incoming URL fails to match the system's URL rules, rewrite the input and hand it to the ViewPost function.
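
A sketch of the lookup behind 7.2, assuming the *_backup and *_from columns from step 1 were carried over into the converted table (the literal values stand in for the article ID and source site parsed out of the unmatched old URL):

-- Resolve an old URL's article ID to the merged article's new ID,
-- then hand the new ID to ViewPost:
SELECT log_ID
FROM zbp_post
WHERE log_ID_backup = '123' AND log_from = 'lusongsong';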

With that, the data conversion is complete, and the URLs remain identical to those of the original four sites.

Summary

It's rare to get the chance to work on a site with this much data; ZBLOG installations are generally small, and a site with close to a million comments especially so. But it also exposes a problem: the official ZBLOG tools, and the themes and plug-ins in the official app center, have rarely been optimized for large data sets, so the moment they meet a large volume of data they stop working. Analyze each problem on its own terms, grasp the main issues, and deal with them one by one. As for the database conversion and the URL adjustment, everyone will have their own method; the technology is just a sheet of window paper waiting to be poked through.

For reposts please credit: Bird Blog » Notes on a Large ZBLOG Database Conversion
