Data management

A term in the computer field
Data management is the process of effectively collecting, storing, processing, and applying data using computer hardware and software technology. Its purpose is to bring the role of data into full play. The key to effective data management is data organization.
Chinese name: data management
Foreign name: data management
Definition: the process of collecting, storing, processing, and applying data
Purpose: to give full play to the role of data

Definition

Data management is [1] the process of using computer hardware and software technology to effectively collect, store, process, and apply data. Its purpose is to bring the role of data into full play. The key to effective data management is data organization.
With the development of computer technology, data management has gone through three stages: manual management, file systems [2], and database systems. The data structures established in a database system describe the intrinsic relationships between data more fully and facilitate data modification, updating, and expansion, while guaranteeing data independence, reliability, security, and integrity. They also reduce data redundancy, thereby improving the degree of data sharing and the efficiency of data management.

Management stages

Manual management stage
Before the mid-1950s, computers were mainly used for scientific computing. The main characteristics of data management at this stage were:
(1) Data could not be saved for a long time. Before the mid-1950s, computers were generally owned by research institutions, and the capacity of storage media (paper tape, magnetic tape) was limited. Experimental data was stored only temporarily while an experiment ran; afterwards, the results were printed on paper tape or taken away on magnetic tape, so there was generally no need to keep data for long.
(2) Data was not managed by dedicated application software; each application managed its own data. When writing software, the programmer had to design not only the logical structure of the program but also the [3] physical structure and access methods of the data.
(3) Data could not be shared. In the manual management stage, data was application-oriented. Since each application was independent, a group of data corresponded to only one program. Even if the data an application needed already existed in another program, it could not be shared between programs, which resulted in a large amount of data redundancy.
(4) Data was not independent. Whenever the [4] logical structure or physical structure of the data changed, the application had to change accordingly, so programmers had to modify their programs, which placed a heavy burden on their work.
File system stage
From the late 1950s to the mid-1960s, computers began to be used for data management. The storage devices of this era were no longer just tapes or cards: in hardware, directly accessible devices such as disks and magnetic drums appeared; in software, the operating system gained dedicated data management software, generally called the file system. A file system typically consists of three parts: the software for managing files, the managed files themselves, and the data structures needed to implement file management. In this stage, data is stored in the form of files and managed by the operating system. The file system stage is also the primary stage of database development. Using a file system to store and manage data has the following four characteristics:
(1) Data could be stored for a long time. With large-capacity disks as storage devices, computers began to be used to process and store large amounts of data.
(2) It provided simple data management functions. The logical structure of a file was decoupled from its physical structure, and programs were separated from data, which made data and programs independent to a certain extent and reduced programmers' workload.
(3) Data sharing was poor. Since each file was independent, the same data had to be created in every file that needed it; data could not be shared, which again resulted in a large amount of data redundancy.
(4) Data was still not independent. At this stage, when the data structure changed, the application program and the file structure definition had to be modified as well; conversely, changes to the application also changed the data structure.
Database system stage
Since the late 1960s, the scale of objects under computer management has kept growing, the range of applications has widened, data volumes have grown rapidly, and the demand for multiple applications and languages to share data sets has become increasingly strong. Database technology thus came into being, along with specialized software systems for unified data management: database management systems.
Using a database system to manage data has obvious advantages over a file system. The move from file systems to database systems marks a leap in data management technology.

Data management for data applications

As mentioned earlier, data management has gone through three stages of manual management, file management, and database management, and is essentially the process of effectively collecting, storing, processing, and applying data using computer hardware and software technology. With the progress of information technology, management information systems now provide business support to large-scale organizations, covering not only every type of business of the organization but also the whole organization itself (worldwide or nationwide). As the core function of a management information system, data management will therefore enter a new stage: data management oriented to data applications.
The concept of data management for data applications
Data management refers to the management of data resources. According to DAMA, "data resource management is committed to developing appropriate architectures, policies, practices, and procedures for handling the enterprise data life cycle." This is a high-level, broad definition that does not necessarily involve the concrete operations of data management (from Wikipedia). The Baidu Encyclopedia definition, by contrast, targets data management within the data application process, that is, traditional data management. The Wikipedia definition sits at a higher level: it targets the management of application-process data across the whole enterprise data life cycle, that is, the management of data changes, or the management of the data that describes data (metadata). Here we call this application-oriented data management.
According to management theory, a team of a few people can rely on self-awareness and self-discipline; dozens of people need a manager; hundreds need a management team; and thousands or tens of thousands must rely on computer-assisted team management. Enterprises and institutions generally cover the whole country, with the organization divided into headquarters, provincial, municipal, grass-roots, and other institutions at each level. Each level contains both management and functional departments directly engaged in the corresponding business and those not directly engaged in it (such as personnel, office administration, logistics, and audit), and each department is composed of several employees as management objects. At the same time, a series of rules and regulations are formulated to regulate and constrain the activities and behavior of institutions, departments, personnel, and other management objects.
Similarly, as the managed objects, data, increase, the management method (stage) also improves. The whole project of a large management information system is generally divided into management levels such as overall integration, sub-projects, and sub-sub-projects, with several internal project teams under each sub-project. Each management level involves business functions (such as business transactions, accounting treatment, administration, and result presentation) and non-business functions (such as definition, configuration, monitoring, analysis, recording, and scheduling). Each business or non-business function consists of several data sets (such as processes, forms, data items, algorithms, metadata, and logs). At the same time, a series of rules, regulations, and standards must be formulated to constrain the activities and changes of projects, functions, data, and other management objects.
Figure 1 Schematic diagram of data space
It can be seen that traditional data management focuses on data objects that directly serve specific business requirements: processes, forms, data items, algorithms, and so on. Application-oriented data management adds to these the application object data that describes processes, forms, data items, algorithms, etc. in a standardized way (their corresponding metadata), as well as data not directly facing the business, such as files recording the results of data changes and logs recording running status. In this way it manages the loading, change, recording, and reuse of all kinds of application business requirements. See Figure 1 below.
The objects of data management for data applications
The data objects managed by data management for data applications are mainly the attribute metadata describing the components that make up the application system. These application system components include processes, files, archives, data elements (items), codes, algorithms (rules, scripts), models, indicators, physical tables, ETL processes, running-status records, and so on.
Metadata, in general, is data about data: it mainly describes the attribute (property) information of data. This information includes identification attributes, such as name, identifier, synonyms, and context; technical attributes, such as data type, data format, threshold, and unit of measure (UoM); management attributes, such as version, registration authority, submission authority, and status; and relationship attributes, such as classification, relation, constraint, rule, standard, specification, and process. The metadata involved in data management for data applications mainly describes the attributes of those application system components. Besides the traditional metadata attributes, each component has its own unique attributes: a process has participant and phase attributes, a physical table has deployment attributes, an ETL process has source and target attributes, and an indicator has algorithm and factor attributes.
Each component must correspond to one or more metamodels (one for each classification of the component). A metamodel is the standard for metadata; each piece of metadata should follow the definition of its corresponding metamodel. For example, each data item (element) has its own name, identifier, data type, data format, publication status, registration authority, and other attributes; the collection of these attributes is the metadata of that data item. The rules that constrain which attributes the metadata of a data item describes and how each attribute should be described constitute its metamodel. The e-government data element standard (GB/T 19488.1-2004) is the metamodel for e-government data items (elements).
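As an illustration, a metamodel can be sketched as a schema that every metadata record must satisfy. The attribute names below (`identifier`, `data_type`, `status`, and so on) are hypothetical, chosen to mirror the data-element attributes listed above; they are not taken from GB/T 19488.1-2004.

```python
from dataclasses import dataclass

# Hypothetical metamodel: the attributes every data-element metadata
# record must provide, each with the type it is required to have.
METAMODEL = {
    "name": str,
    "identifier": str,
    "data_type": str,
    "data_format": str,
    "status": str,                  # e.g. "draft", "published"
    "registration_authority": str,
}

@dataclass
class DataElementMetadata:
    """Metadata for one data item (element), constrained by METAMODEL."""
    name: str
    identifier: str
    data_type: str
    data_format: str
    status: str
    registration_authority: str

    def conforms(self) -> bool:
        # Check that each attribute exists and has the type the metamodel demands.
        return all(
            isinstance(getattr(self, attr), expected)
            for attr, expected in METAMODEL.items()
        )

taxpayer_id = DataElementMetadata(
    name="Taxpayer ID",
    identifier="DE-0001",
    data_type="string",
    data_format="18 characters",
    status="published",
    registration_authority="Head Office",
)
print(taxpayer_id.conforms())
```

In a real registry the metamodel would also constrain value domains and describe relationships between elements; this sketch only shows the "metadata must follow its metamodel" relationship.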
Traditional metadata management usually loads metadata through the extraction function of a dedicated metadata management system after the relevant business function has been implemented. Because loading or maintaining metadata (adding business attributes after the fact) has to be started manually, it is often difficult to capture metadata changes in time and keep the metadata consistent with reality. Data management for data applications instead adopts an active metadata management mode: following the metamodel standard, metadata (local metadata) is loaded through a human-computer interaction process, and where possible the configuration or executable script of the data object (application system component) is generated at the same time (where this is not possible, the metadata produced by the interaction should still serve as the basis for other tools to generate the executable script). Whenever the configuration needs to be changed or the script modified, the same interactive process is used, so new metadata is generated synchronously and the metadata stays consistent with reality. See Figure 2 below:
Figure 2 Active Metadata Management Mode
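The core idea, generating the deployable artifact and its metadata in the same interactive step so the two can never drift apart, can be sketched as follows. Everything here (the function name, the shape of the config, the DDL) is an illustrative assumption, not part of any real metadata product.

```python
def define_physical_table(name, columns, deployment):
    """One interactive 'define a table' step: produce the deployable
    artifact (a DDL script) and its local metadata at the same time."""
    ddl = "CREATE TABLE {} ({});".format(
        name, ", ".join(f"{c} VARCHAR(255)" for c in columns)
    )
    metadata = {
        "component": "physical_table",
        "name": name,
        "columns": list(columns),
        "deployment": deployment,  # component-specific attribute (see above)
        "version": 1,
    }
    return ddl, metadata

ddl, meta = define_physical_table("taxpayer", ["id", "name"], "province_db")
# A later change goes through the same function, so the new script and
# the new metadata version are always produced together.
```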
The significance and methods of data management for data applications
Traditional application systems often target specific applications, need to freeze requirements, and struggle to support a changing management information system. The Golden Tax Phase III project establishes a management information system for a national organization, covering all management businesses and all users of the entire organization. In such a system, "change" in business requirements is the norm and "no change" is transient; across the whole organization, the businesses of different departments and levels are "different", while "unity" is achieved gradually and then keeps expanding (creating new differences). This calls for a new generation of enterprise application system products (AS2.0) that can not only implement business requirements but also support their change, track and manage those changes, and support continuous optimization of the user experience. AS2.0 must control, record, and manage the change process and its results for the business requirements of the entire organization. Data management for data applications is a key infrastructure product of AS2.0 and the basis of its feasibility.
The data management of traditional application systems focuses on the value-adding process of data; its functions center on loading business requirement content, content ETL, content organization, content processing, and content presentation, all implemented and solidified as software code. The data management of AS2.0 additionally collects metadata, historical data, and status data. It uses active metadata management tools to configure and load software code while collecting the corresponding local metadata into a metadata collection, enabling the loading, capture, recording, and tracking of changes to all kinds of business requirements (change management). It packages the historical records of content and changes in a standardized way to form archives, realizing the organization, reuse, and unloading of historical data (history management). And it captures, records, comprehensively analyzes, and promptly presents the running-status information of all AS2.0 components in real time, realizing comprehensive management of the whole system's runtime status.
To sum up, the expansion of data objects to include change records, history records, and status records marks data management's entry into a new stage, data management for data applications, and also marks the application system's entry into the AS2.0 era.

Telemarketing

In telemarketing, the sales team, the products, and the marketing database constitute the three essential elements: "who sells", "what is sold", and "who is sold to". Marketing data, as the pool of target sales objects, plays a vital role in telemarketing. How to manage and use these precious data resources scientifically and in a standard way should be a question that every telemarketing manager considers and acts on seriously. Now let's start from theory and look at the links that "data management" needs to focus on in telemarketing.
First concern: data import
Data needs to be processed before import to ensure that it can be maintained, counted, and analyzed during use.
First, the attributes of the raw data need to be analyzed and defined. Telemarketing usually draws on all kinds of data from different channels, each with its own characteristics. We therefore need to first identify attributes such as geography (local vs. non-local), gender (male vs. female), age (different age groups), income (high- vs. low-income groups), and industry (e.g., finance vs. IT). Then, according to these characteristics, the data attributes are classified and coded, and the data is further refined through telemarketing itself. On that basis we can analyze and find the user groups best suited to the product, completing the prioritized acquisition and selection of data and maximizing the use of data resources.
Second, there is a seemingly simple but very meaningful job: processing the data before import and deleting invalid records, such as those with too few or missing contact numbers, or whose attributes clearly deviate from the target customer group. Since these tasks happen before import, they can be processed in batch, yielding data that better matches the calling specification in the most efficient way and ensuring that the data allocated to front-line TSRs is accurate and effective, saving their time and improving their efficiency.
Finally, before the data is officially put into use, it is also recommended to number and back up the original data. Once data is allocated to TSRs, its information will be maintained and updated continuously as the sales work progresses. When the original information of a record is needed, it can be retrieved from the backup of the original database: since the data was numbered beforehand, a simple lookup by data number is enough. After the above processing, we can import the data resources and wait for telemarketing to bring us rich profits.
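The pre-import steps above (classify and code the attributes, drop invalid records in batch, then number and back up the originals) can be sketched like this; the record fields and the coding scheme are made up for illustration.

```python
raw_records = [
    {"name": "Zhang", "phone": "13800000000", "city": "local", "age": 34},
    {"name": "Li",    "phone": "",            "city": "local", "age": 51},  # missing number: invalid
    {"name": "Wang",  "phone": "13900000000", "city": "other", "age": 27},
]

def attribute_code(rec):
    """Classify and code a record by its attributes (hypothetical scheme):
    geography (L = local, N = non-local) plus age band (Y = under 35, M = 35+)."""
    geo = "L" if rec["city"] == "local" else "N"
    band = "Y" if rec["age"] < 35 else "M"
    return geo + band

# 1. Drop invalid records (e.g. missing contact number) in one batch pass.
valid = [r for r in raw_records if r["phone"]]

# 2. Number each record and attach its attribute code.
for i, rec in enumerate(valid, start=1):
    rec["data_no"] = i
    rec["code"] = attribute_code(rec)

# 3. Keep a backup of the numbered originals before allocating to TSRs,
#    so the original information can later be looked up by data number.
backup = {rec["data_no"]: dict(rec) for rec in valid}
```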
The second concern: the use of data
The processed data appears uniform and orderly after import, which is a good start.
Next, let's understand the process of using the data. When marketing data is used by a TSR, a series of maintenance operations are carried out on it, including recording and updating the dialing status and the sales status. Let's look at the main dialing and sales statuses and what they mean for us.
Dialing status: the connection result of the phone numbers and other contact methods in the marketing data. Usually we can label the data according to the statuses shown in the figure below.
Data marked with a dialing status gains a further meaning: the vitality of the data. Records that can never be connected should be withdrawn from the TSR's queue so they do not occupy the TSR's time. Records marked "busy tone/call in progress" should be prioritized for staggered-time dialing, because this status indicates the number is still in use and the chance of connecting is greatest. Records that need "continued contact" should likewise be dialed at staggered times. Staggered dialing mainly means alternating working days with non-working days, or daytime hours with evening hours. Only by staggering "weekday", "non-weekday", "daytime", and "evening" dialing can data resources be applied effectively.
Now let's look at "sales status". Sales status refers to three statuses that are identified only when the call is connected and the contact person is reached:
  • Success: the telemarketing sale succeeded
  • To be followed up: the contact needs to think it over, or the sale is not yet complete and further follow-up is required
  • Rejected: the contact does not accept the product or service being sold, and the telemarketing failed
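In a call-center database these statuses are typically stored as enumerated codes. The sketch below models the dialing and sales statuses and derives the re-dial decision described above; the specific names and values are assumptions, not a standard.

```python
from enum import Enum

class DialStatus(Enum):
    CONNECTED = "connected"
    BUSY_OR_IN_CALL = "busy tone/call in progress"  # number still in use
    NO_ANSWER = "no answer"
    DEAD_NUMBER = "dead number"                     # can never be connected

class SaleStatus(Enum):
    SUCCESS = "success"
    FOLLOW_UP = "to be followed up"
    REJECTED = "rejected"

def redial_priority(status: DialStatus) -> int:
    """Lower number = dial sooner (at a staggered time); 99 = withdraw
    from the TSR's queue and never call again."""
    order = {
        DialStatus.BUSY_OR_IN_CALL: 1,  # best chance of connecting
        DialStatus.NO_ANSWER: 2,        # retry on a different day/time
        DialStatus.DEAD_NUMBER: 99,     # drop from the queue
    }
    return order.get(status, 99)        # CONNECTED moves on to a sales status
```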
The above three statuses are easy to identify during telemarketing. What deserves attention here are the "to be followed up" and "rejected" states. Looking at the follow-up data, we want to understand the main factors that make users hesitate: is it product quality? Price? After-sales service? Once we grasp this information, we become more familiar with the data attributes and can design targeted sales scripts for this kind of "to be followed up" user.
Similarly, for users who rejected the offer, we also need to find the main reasons for rejection and, by correlating them with data attributes, take effective measures to improve the sales success rate.
The third concern: data allocation
Experience tells us that data should not be distributed evenly to every TSR, because different TSRs use data differently. When allocating data, we should exercise effective control in real time according to each TSR's use of the data.
Two parameters can help us regulate marketing data here: the "successful contact rate" and the "to-be-followed-up rate". They are introduced below.
Successful contact rate = total data where the contact person was reached / total connected data × 100%. The successful contact rate is an indicator for judging data validity: among the dialed data, it tells us how many records reached the contact person and sales target. It is a changing value; with second, third, and further rounds of calls, it improves. To keep data effectively applied at a certain level, a "minimum successful contact rate" can be set. When the successful contact rate of a TSR's allocated data falls below the target value, the allocation of new data is reduced, and the TSR is asked to re-dial the "busy tone/call in progress" and "no answer" records among the unconnected data, raising the successful contact rate and using the data more effectively.
To-be-followed-up rate = total data to be followed up / total data where the contact person was reached × 100%. From the formula it is easy to see that this rate focuses on how many of the records that reached a contact still need follow-up. When controlling data allocation, a "maximum to-be-followed-up rate" should be set for this indicator.
Why set a "maximum to-be-followed-up rate"? To make good use of data resources, conduct second sales attempts with contacts who are still considering, and seize the best follow-up opportunity, TSRs should review their follow-up data regularly and make those calls. When the maximum to-be-followed-up rate is exceeded, too much of the data called by that TSR is sitting in the to-be-followed-up state; the allocation of new data should then be reduced so the TSR can focus on following up the sales targets who are interested but still hesitating.
By controlling the "successful contact rate" indicator we can reach more contacts, and by controlling the "to-be-followed-up rate" indicator we can find more opportunities to close sales. Attention to these two indicators is an important part of "data management" in telemarketing.
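The two control indicators can be computed directly from call records, as sketched below. The record layout and the threshold values (70% minimum contact rate, 40% maximum follow-up rate) are illustrative assumptions.

```python
def contact_and_followup_rates(records):
    """records: dicts with 'connected' (bool), 'reached' (bool, contact
    person found) and 'sale' ('success' / 'follow_up' / 'rejected' / None)."""
    connected = [r for r in records if r["connected"]]
    reached   = [r for r in connected if r["reached"]]
    follow_up = [r for r in reached if r["sale"] == "follow_up"]
    # Successful contact rate = reached / connected
    contact_rate = len(reached) / len(connected) if connected else 0.0
    # To-be-followed-up rate = follow-up / reached
    followup_rate = len(follow_up) / len(reached) if reached else 0.0
    return contact_rate, followup_rate

records = [
    {"connected": True,  "reached": True,  "sale": "follow_up"},
    {"connected": True,  "reached": True,  "sale": "rejected"},
    {"connected": True,  "reached": False, "sale": None},
    {"connected": False, "reached": False, "sale": None},
]
cr, fr = contact_and_followup_rates(records)
# cr = 2/3, fr = 1/2; compare against the hypothetical thresholds:
needs_redial   = cr < 0.70   # below the minimum successful contact rate
pause_new_data = fr > 0.40   # above the maximum to-be-followed-up rate
```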

Related works

Cover of Data Management
Title: Data Management
Also known as: Data Management: Insight into Retail and E-commerce Operations
Author: Huang Chengming
Category: e-commerce, data, management
Pages: 306
Price: 59.90 yuan
Publisher: Electronic Industry Press
Published: July 2014
Binding: paperback
Format: 16mo
ISBN: 9787121234064
Data Management: Insight into Retail and E-commerce Operations tells the story of two young people working in the sales, merchandise, e-commerce, and data departments of large companies, and explains data awareness and retail thinking in accessible terms through a large number of cases. The author integrates various data analysis methods into concrete business scenarios, finally forming a data management model that helps enterprises improve their operations management capabilities.
All cases in Data Management: Insight into Retail and E-commerce Operations are based on Excel, so anyone can quickly get started and apply them.