sagacity-sqltoy

Detailed document in WORD version (complete)

Please refer to: docs/Smart Platform SqlToy5.6 User Manual.doc

Complete SQL configuration in xml

Github address

sqltoy Lambda

Sqltoy management system scaffold

Sqltoy idea plug-in

Example demonstration project

Quick integration demonstration project

Quick start project

POJO and DTO strictly layered demonstration project

Sharding sub database and sub table demonstration

Dynamic datasource example

Nosql demonstration (mongodb and elastic search)

Demonstration of sqltoy configuration based on xml

QQ communication group: 531812227

Latest version

  • 5.6.10 LTS (jdk17+ / springboot 3.x) / 5.6.10.jre8 (decoupled from spring, compatible with 5.1.x, 5.2.x, 5.3.x), release date: May 27, 2024
 <dependency>
     <groupId>com.sagframe</groupId>
     <artifactId>sagacity-sqltoy-spring-starter</artifactId>
     <!-- solon adaptation: <artifactId>sagacity-sqltoy-solon-plugin</artifactId> -->
     <!-- for jdk8 use version 5.6.10.jre8 -->
     <version>5.6.10</version>
 </dependency>
  • 5.2.105 LTS (jdk1.8+) Release date: June 6, 2024

1. Preface

1.1 What is sqltoy orm

sqltoy-orm is an ORM framework designed to be more practical in real projects than JPA + MyBatis: it offers JPA-style object CRUD together with query capabilities that are more intuitive, concise and powerful than MyBatis (Plus).

JPA part

  • JPA-like object CRUD, with cascading load, save and update of associated objects
  • Supports generating DDL from POJOs and creating the tables directly in the database
  • Strengthened update: flexible per-field modification completed in a single database interaction (unlike hibernate's load-then-modify), which keeps data accurate under high concurrency
  • Improved cascading update, with the option to delete first (or mark invalid first) and then overwrite
  • Added updateFetch and updateSaveFetch for strongly transactional, high-concurrency scenarios (e.g. inventory or fund accounts): lock-query, insert-if-absent, update-if-present and return of the modified result, all in one database interaction
  • Added tree-structure encapsulation to simplify recursive queries of tree data across different databases
  • Supports sharding across databases and tables, multiple primary-key strategies (including Redis-based business keys following specific rules), encrypted storage, and data-version validation
  • Provides unified assignment of common fields (creator, modifier, create time, modify time, tenant), extended type handling, etc.
  • Provides unified multi-tenant filtering and assignment, data-permission parameter injection and unauthorized-access validation

Query section

  • A very intuitive way of writing SQL that allows quick two-way copying between client tools and code, and makes later changes and maintenance easy
  • Supports cache translation and reverse cache matching of keys to replace like-based fuzzy queries
  • Cross-database capability: functions of different databases are converted and adapted automatically, multi-dialect sql is matched against the actual environment, and synchronized testing against multiple databases greatly improves productization
  • Query support for special scenarios such as top-N records and random records
  • The most powerful paging mechanism: 1) automatic count-statement optimization; 2) cache-based paging optimization that avoids running the count query every time; 3) unique fast paging; 4) parallel paging
  • Sharding across databases and tables
  • Natural integration of algorithms that are extremely valuable in management systems: group summary, row/column pivoting (rows to columns, columns to rows), year-on-year and month-on-month comparison, tree sorting and tree summary
  • Hierarchical data-structure encapsulation on top of query results
  • Many auxiliary features: data masking, formatting, condition-parameter pre-processing, etc.

Support multiple databases

  • Conventional databases: mysql, oracle, db2, postgresql, sqlserver, dm, kingbase, sqlite, h2, oceanbase, polardb, gaussdb, tidb, oscar, hangao
  • Distributed OLAP databases: clickhouse, StarRocks, greenplum, impala (kudu)
  • elasticsearch and mongodb
  • In general, any database that can be queried through sql over jdbc

2. Quick feature description

2.1 JPA-like object operations (including cascading)

  • Generate the corresponding POJOs from the database with the quickvo tool, then inject the LightDao provided by sqltoy to complete all operations
    // three steps: 1. quickvo generates the POJOs; 2. complete the yml configuration; 3. inject the dao into the service (no need to hand-write dao classes)
    @Autowired
    LightDao lightDao;

    StaffInfoVO staffInfo = new StaffInfoVO();
    // save
    lightDao.save(staffInfo);
    // delete
    lightDao.delete(new StaffInfoVO("S2007"));

    // public Long update(Serializable entity, String... forceUpdateProps);
    // photo is forced to be updated even when it is null; other null properties are skipped automatically
    lightDao.update(staffInfo, "photo");

    // deep update: all fields are updated regardless of whether they are null
    lightDao.updateDeeply(staffInfo);

    List<StaffInfoVO> staffList = new ArrayList<StaffInfoVO>();
    StaffInfoVO staffInfo1 = new StaffInfoVO();
    StaffInfoVO staffInfo2 = new StaffInfoVO();
    staffList.add(staffInfo1);
    staffList.add(staffInfo2);
    // batch save or update
    lightDao.saveOrUpdateAll(staffList);
    // batch save
    lightDao.saveAll(staffList);
    ...............
    lightDao.loadByIds(StaffInfoVO.class, "S2007");
    // uniqueness check
    lightDao.isUnique(staffInfo, "staffCode");

2.2 Support Object Query in Code

  • The unified convention in sqltoy: what is passed in from code can be either a concrete sql or the sqlId defined in the corresponding xml file (a usage sketch follows at the end of this section)
 /**
  * @TODO simplifies the paramName[]/paramValue[] style of parameter passing by using an object
  * @param <T>
  * @param sqlOrNamedSql a concrete sql or the sqlId defined in the corresponding xml
  * @param entity passes parameters through the object; the result type follows the object type
  */
 public <T extends Serializable> List<T> find(final String sqlOrNamedSql, final T entity);
  • Single-table query based on an object, with cached translation
 public Page<StaffInfoVO> findStaff(Page<StaffInfoVO> pageModel, StaffInfoVO staffInfoVO) {
     // sql can be written directly in code; complex sql is recommended to be defined in xml
     // in single-table entity queries, sql fields can be written as the java property names
     return findPageEntity(pageModel, StaffInfoVO.class, EntityQuery.create()
             .where("#[staffName like :staffName]#[and createTime>=:beginDate]#[and createTime<=:endDate]")
             .values(staffInfoVO));
 }
  • Object-style update or delete by query conditions
 // update records by setting conditions through the indirect sql mode in code
 public Long updateByQuery() {
     return lightDao.updateByQuery(StaffInfoVO.class,
             EntityUpdate.create().set("createBy", "S0001")
                     .where("staffName like ?").values("Zhang"));
 }

 // delete records by setting conditions through the indirect sql mode in code
 lightDao.deleteByQuery(StaffInfoVO.class, EntityQuery.create().where("status=?").values(0));
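
A minimal usage sketch of the object-based find(...) shown above, assuming a sqlId named sqltoy_staff_search defined in an xml file (the id and condition values are illustrative):

```java
// assumption: "sqltoy_staff_search" is a sqlId defined in a *.sql.xml file under sqlResourcesDir
StaffInfoVO queryVO = new StaffInfoVO();
queryVO.setStaffName("Chen");   // becomes the :staffName condition value
queryVO.setOrganId("C0001");    // becomes the :organId condition value
// null properties simply drop the corresponding #[...] fragments from the sql
List<StaffInfoVO> staffs = lightDao.find("sqltoy_staff_search", queryVO);
```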

2.3 Extremely simple SQL writing method

  • The sqltoy way of writing sql keeps the original sql readable at a glance, makes later changes easy, and lets you copy the sql into a database client and run it with only minor adjustments (a Java call example follows the xml below)
  • The conditional organization rule is simple: #[order_id=:orderId] is equivalent to if (orderId != null) sql.append("order_id=:orderId"); whenever any parameter inside #[] is null, the whole fragment is dropped
  • Multi-level nesting is supported, e.g. #[and t.order_id=:orderId #[and t.order_type=:orderType]]
  • For special, complex scenarios the highly flexible @if() mode is retained: #[@if(:param>=xx || :param<=xx1) sql fragment]
 <!-- 1. condition values are processed separately from the sql itself -->
 <!-- 2. condition values are pre-processed by the generic methods defined in filters (most need no extra handling) -->
 <sql id="show_case">
 <filters>
    <!-- whenever statusAry contains -1 (meaning "all"), statusAry is set to null and no longer participates in the conditions -->
    <eq params="statusAry" value="-1" />
 </filters>
 <value><![CDATA[
 select *
 from sqltoy_device_order_info t
 where #[t.status in (:statusAry)]
 #[and t.ORDER_ID=:orderId]
 -- if an in(...) in sqltoy exceeds 1000 values it is automatically split into (t.field in (1~1000) or t.field in (1001~2000))
 #[and t.ORGAN_ID in (:authedOrganIds)]
 #[and t.STAFF_ID in (:staffIds)]
 #[and t.TRANS_DATE>=:beginAndEndDate[0]]
 #[and t.TRANS_DATE<:beginAndEndDate[1]]
 #[and (t.TECH_GROUP,t.PROD_GROUP) in (:techGroups,:prodGroups)]
 #[order by @value(:orderField) @value(:orderWay)]
 ]]></value>
 </sql>
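
A hedged sketch of invoking show_case from Java, using the Map-style find(...) shown later in section 2.4; the parameter values and the authedOrganIds array are illustrative, and any parameter left null simply drops its #[...] fragment:

```java
String[] authedOrganIds = { "C0001", "C0002", "C0003" };   // illustrative values
// only non-null parameters take part in the final sql
List<DeviceOrderInfoVO> orders = lightDao.find("show_case",
        MapKit.keys("statusAry", "orderId", "authedOrganIds", "staffIds",
                    "beginAndEndDate", "techGroups", "prodGroups", "orderField", "orderWay")
              .values(new Integer[] { 1, 2 }, null, authedOrganIds, null,
                      null, null, null, "ORDER_ID", "desc"),
        DeviceOrderInfoVO.class);
```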


  • The equivalent mybatis version of the same query
 <select id="show_case" resultMap="BaseResultMap">
 select *
 from sqltoy_device_order_info t
 <where>
     <if test="statusAry!=null">
     and t.status in
     <foreach collection="statusAry" item="status" separator="," open="(" close=")">
         #{status}
     </foreach>
     </if>
     <if test="orderId!=null">
     and t.ORDER_ID=#{orderId}
     </if>
     <if test="authedOrganIds!=null">
     and t.ORGAN_ID in
     <foreach collection="authedOrganIds" item="organ_id" separator="," open="(" close=")">
         #{organ_id}
     </foreach>
     </if>
     <if test="staffIds!=null">
     and t.STAFF_ID in
     <foreach collection="staffIds" item="staff_id" separator="," open="(" close=")">
         #{staff_id}
     </foreach>
     </if>
     <if test="beginDate!=null">
     and t.TRANS_DATE>=#{beginDate}
     </if>
     <if test="endDate!=null">
     and t.TRANS_DATE &lt; #{endDate}
     </if>
 </where>
 </select>

2.4 SQL injection is naturally prevented. The execution process is as follows:

  • Assume that the SQL statement is as follows
 select *
 from sqltoy_device_order_info t
 where #[t.ORGAN_ID in (:authedOrganIds)]
       #[and t.TRANS_DATE >= :beginDate]
       #[and t.TRANS_DATE < :endDate]
  • Java call procedure
 lightDao.find(sql, MapKit.keys("authedOrganIds", "beginDate", "endDate").values(authedOrganIdAry, beginDate, null),
         DeviceOrderInfoVO.class);
  • The final SQL executed is as follows:
 select *
 from sqltoy_device_order_info t
 where t.ORGAN_ID in (?,?,?)
       and t.TRANS_DATE >= ?
 -- endDate is null, so its #[...] fragment is dropped
  • The condition values are then bound through the PreparedStatement: pst.setXxx(index, value)

2.5 The most complete paging support

2.5.1 Description of paging features

  • 1. Fast paging: @fast() first fetches the single page of rows and only then performs the joins, greatly improving speed.
  • 2. Paging optimizer: page-optimize caches the total record count for identical query conditions for a period of time, so a paging query drops from two statements to an average of 1.3~1.5.
  • 3. Fetching the total record count is not a naive select count(1) from (original sql); sqltoy intelligently rewrites it as select count(1) plus everything after the original from, and automatically strips the outermost order by.
  • 4. Parallel querying is supported: with parallel="true" the total record count and the single page of data are queried at the same time, greatly improving performance.
  • 5. Even very special paging cases are optimized: for with t1 as (...), t2 as @fast(select * from table1) select * from xxx the count query becomes with t1 as (...) select count(1) from table1; for with t1 as @fast(select * from table1) select * from t1 the count sql is select count(1) from table1.

2.5.2 Paging SQL Example

 <!-- fast paging and paging optimization demo -->
 <sql id="sqltoy_fastPage">
     <!-- the paging optimizer caches the total record count for identical query conditions for a period of time, so the count query does not have to run every time -->
     <!-- parallel: whether to query the total count and the single page of data in parallel; set alive-max=1 to effectively turn off cache optimization -->
     <!-- alive-max: the maximum number of total-count entries kept for different query conditions; alive-seconds: how long a count entry stays alive (e.g. 120 seconds; beyond that the count is queried again) -->
     <page-optimize parallel="true" alive-max="100" alive-seconds="120" />
     <value><![CDATA[
 select t1.*,t2.ORGAN_NAME
 -- @fast() first fetches one page of rows (the number is determined by pageSize), then the join is applied
 from @fast(select t.*
            from sqltoy_staff_info t
            where t.STATUS=1
            #[and t.STAFF_NAME like :staffName]
            order by t.ENTRY_DATE desc) t1
 left join sqltoy_organ_info t2 on t1.organ_id=t2.ORGAN_ID
 ]]>
     </value>
     <!-- for extremely special cases a custom count sql can be provided here for extreme performance optimization -->
     <!-- <count-sql></count-sql> -->
 </sql>

2.5.3 Paging Java Code Call

 /**
  * object-based parameter passing
  */
 public void findPageByEntity() {
     StaffInfoVO staffVO = new StaffInfoVO();
     // the entity properties are used as query conditions
     staffVO.setStaffName("Chen");
     // the paging optimizer is used:
     // first call: runs both the count query and the page-fetch query
     // second call: within the configured lifetime the count comes from the cache, only the page-fetch query runs
     Page result = lightDao.findPage(new Page(), "sqltoy_fastPage", staffVO);
 }

2.6 Ingenious cache translation: turn multi-table joins into single-table queries wherever possible

  • 1. Translate through the cache: convert codes to names, avoiding join queries, greatly simplifying sql and improving query efficiency
  • 2. Fuzzy matching against cached names: obtain the exact codes to use as conditions, avoiding join-based like fuzzy queries (a Java query sketch follows the xml below)
 // cache translation can also be driven by annotations on object properties
 @Translate(cacheName = "dictKeyName", cacheType = "DEVICE_TYPE", keyField = "deviceType")
 private String deviceTypeName;

 @Translate(cacheName = "staffIdName", keyField = "staffId")
 private String staffName;
 <sql id="sqltoy_order_search">
     <!-- cache translation of the device type
          cache: the name of the cache definition
          cache-type: generally a classification filter, e.g. for data-dictionary caches
          columns: the query field names in the sql to translate; several fields can be translated, separated by commas
          cache-indexs: the column of the cached row holding the name; defaults to the second column if omitted (0-based, 1 means the second column);
          e.g. if the cached row structure is key,name,fullName, the third column holds the full name
     -->
     <translate cache="dictKeyName" cache-type="DEVICE_TYPE" columns="deviceTypeName" cache-indexs="1" />
     <!-- staff name translation; when the same cache is used, several fields can be translated at once -->
     <translate cache="staffIdName" columns="staffName,createName" />
     <filters>
         <!-- reverse use of the cache: match ids by name so the query stays exact -->
         <cache-arg cache-name="staffIdNameCache" param="staffName" alias-name="staffIds" />
     </filters>
     <value>
     <![CDATA[
 select ORDER_ID,
        DEVICE_TYPE,
        DEVICE_TYPE deviceTypeName, -- device type name
        STAFF_ID,
        STAFF_ID staffName, -- staff name
        ORGAN_ID,
        CREATE_BY,
        CREATE_BY createName -- creator name
 from sqltoy_device_order_info t
 where #[t.ORDER_ID=:orderId]
 #[and t.STAFF_ID in (:staffIds)]
     ]]>
     </value>
 </sql>
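
A minimal sketch of calling the sql above; the result class DeviceOrderVO (carrying the @Translate-annotated fields) is assumed for illustration:

```java
// deviceTypeName, staffName and createName come back filled from the caches with no join in the sql;
// the staffName value is converted into staffIds by the <cache-arg> filter before the query runs
List<DeviceOrderVO> orders = lightDao.find("sqltoy_order_search",
        MapKit.keys("orderId", "staffName").values(null, "Chen"),
        DeviceOrderVO.class);
```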

2.7 Parallel Query

  • interface specification
 // parallQuery is query-oriented (do not use it inside transactional operations); sqltoy provides the capability, whether it is used appropriately is the developer's judgment
 /**
  * @TODO parallel query returning a one-dimensional list: submitting N queries yields N result objects;
  *       paramNames/paramValues are the union of all the sql condition parameters
  * @param parallQueryList
  * @param paramNames
  * @param paramValues
  */
 public <T> List<QueryResult<T>> parallQuery(List<ParallQuery> parallQueryList, String[] paramNames,
         Object[] paramValues);
  • Example of use
 // define the parameters
 String[] paramNames = new String[] { "userId", "defaultRoles", "deployId", "authObjType" };
 Object[] paramValues = new Object[] { userId, defaultRoles, GlobalConstants.DEPLOY_ID,
         SagacityConstants.TempAuthObjType.GROUP };
 // run two sql statements in parallel; the condition parameters are the union of both queries' parameters
 List<QueryResult<TreeModel>> list = super.parallQuery(
         Arrays.asList(
                 ParallQuery.create().sql("webframe_searchAllModuleMenus").resultType(TreeModel.class),
                 ParallQuery.create().sql("webframe_searchAllUserReports").resultType(TreeModel.class)),
         paramNames, paramValues);

2.8 Cross database support

  • 1. Hibernate-like object operations with the dialect of the target database generated automatically.
  • 2. Common queries such as paging, top-N and random records are provided, so no database-specific syntax is needed.
  • 3. A standard drill-through query mode for tree-structured tables replaces the earlier recursive queries; one mode fits all databases.
  • 4. sqltoy provides a large number of algorithm-based helpers that replace what used to be done in sql, which in itself is cross-database.
  • 5. sqltoy provides function replacement: for example, oracle-style statements can run on mysql or sqlserver (functions are replaced with mysql equivalents when the sql is loaded), which goes a long way towards productized code. Default converters: SubStr, Trim, Instr, Concat, Nvl; see the implementation of org.sagacity.sqltoy.plugins.function.Nvl.
 # enable sqltoy's default function adaptation/conversion
 spring.sqltoy.functionConverts=default
 # e.g. while developing against mysql, test other database types at the same time to verify the sql fits them (mainly for productized software)
 spring.sqltoy.redoDataSources[0]=pgdb
 # a custom function converter can also replace Nvl
 # spring.sqltoy.functionConverts=default,com.yourpackage.Nvl
 # enable only the framework's Nvl and Instr
 # spring.sqltoy.functionConverts=Nvl,Instr
 # enable custom Nvl and Instr
 # spring.sqltoy.functionConverts=com.yourpackage.Nvl,com.yourpackage.Instr
  • 6. Through the sqlId + dialect mode you can write sql for a specific database; sqltoy picks the actually executed sql according to the database type, in the order dialect_sqlId -> sqlId_dialect -> sqlId. If the database is mysql and sqlId sqltoy_showcase is called, sqltoy_showcase_mysql is actually executed (see the xml below and the Java sketch after it).
 <sql id="sqltoy_showcase">
     <value>
         <![CDATA[
 select * from sqltoy_user_log t
 where t.user_id=:userId
         ]]>
     </value>
 </sql>
 <!-- sqlId_databaseDialect (lower case) -->
 <sql id="sqltoy_showcase_mysql">
     <value>
         <![CDATA[
 select * from sqltoy_user_log t
 where t.user_id=:userId
         ]]>
     </value>
 </sql>
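
A minimal call-side sketch, assuming UserLogVO maps the sqltoy_user_log table and "U001" is an illustrative user id; the caller always references the base sqlId:

```java
// on a mysql datasource this resolves to sqltoy_showcase_mysql; on other databases the base sql is used
List<UserLogVO> logs = lightDao.find("sqltoy_showcase",
        MapKit.keys("userId").values("U001"), UserLogVO.class);
```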

2.9 Row/column pivoting, group summary, year-on-year/month-on-month comparison, tree sorting and summary, etc.

  • Fruit Sales Record
| category | sales month | sales count | sales quantity (ton) | sales amount (10,000 yuan) |
| --- | --- | --- | --- | --- |
| Apple | May 2019 | 12 | 2000 | 2400 |
| Apple | April 2019 | 11 | 1900 | 2600 |
| Apple | March 2019 | 13 | 2000 | 2500 |
| Banana | May 2019 | 10 | 2000 | 2000 |
| Banana | April 2019 | 12 | 2400 | 2700 |
| Banana | March 2019 | 13 | 2300 | 2700 |

2.9.1 Row-to-column pivoting (column-to-row is also supported)

 <!-- row to column -->
 <sql id="pivot_case">
     <value>
     <![CDATA[
 select t.fruit_name,t.order_month,t.sale_count,t.sale_quantity,t.total_amt
 from sqltoy_fruit_order t
 order by t.fruit_name,t.order_month
     ]]>
     </value>
     <!-- row to column: order_month becomes the horizontal category title, and the three indicator columns from sale_count to total_amt are pivoted -->
     <pivot start-column="sale_count" end-column="total_amt" group-columns="fruit_name" category-columns="order_month" />
 </sql>
  • effect
| category | March 2019 (sale_count / sale_quantity / total_amt) | April 2019 (sale_count / sale_quantity / total_amt) | May 2019 (sale_count / sale_quantity / total_amt) |
| --- | --- | --- | --- |
| Banana | 13 / 2300 / 2700 | 12 / 2400 / 2700 | 10 / 2000 / 2000 |
| Apple | 13 / 2000 / 2500 | 11 / 1900 / 2600 | 12 / 2000 / 2400 |

2.9.2 Group summary and average (at any level)

 <sql id="group_summary_case">
     <value>
         <![CDATA[
 select t.fruit_name,t.order_month,t.sale_count,t.sale_quantity,t.total_amt
 from sqltoy_fruit_order t
 order by t.fruit_name,t.order_month
         ]]>
     </value>
     <!-- reverse: put the summary rows above the detail rows -->
     <summary columns="sale_count,sale_quantity,total_amt" reverse="true">
         <!-- keep the levels ordered from high to low -->
         <global sum-label="Total" label-column="fruit_name" />
         <!-- order-column: group sort column (sorts within the same group); order-with-sum: defaults to true; order-way: desc/asc -->
         <group group-column="fruit_name" sum-label="Subtotal" label-column="fruit_name" />
     </summary>
 </sql>
  • effect
| category | sales month | sales count | sales quantity (ton) | sales amount (10,000 yuan) |
| --- | --- | --- | --- | --- |
| Total | | 71 | 12600 | 14900 |
| Subtotal | | 36 | 5900 | 7500 |
| Apple | May 2019 | 12 | 2000 | 2400 |
| Apple | April 2019 | 11 | 1900 | 2600 |
| Apple | March 2019 | 13 | 2000 | 2500 |
| Subtotal | | 35 | 6700 | 7400 |
| Banana | May 2019 | 10 | 2000 | 2000 |
| Banana | April 2019 | 12 | 2400 | 2700 |
| Banana | March 2019 | 13 | 2300 | 2700 |

2.9.3 Pivot first, then month-on-month comparison

 <!-- column-to-column month-on-month comparison demo -->
 <sql id="cols_relative_case">
     <value>
     <![CDATA[
 select t.fruit_name,t.order_month,t.sale_count,t.sale_amt,t.total_amt
 from sqltoy_fruit_order t
 order by t.fruit_name,t.order_month
     ]]>
     </value>
     <!-- pivot the data (row to column): order_month becomes the column categories, each month holding three indicators -->
     <pivot start-column="sale_count" end-column="total_amt" group-columns="fruit_name" category-columns="order_month" />
     <!-- month-on-month ratio calculation between the columns -->
     <cols-chain-relative group-size="3" relative-indexs="1,2" start-column="1" format="#.00%" />
 </sql>
  • effect
| category | March 2019 (sale_count / sale_amt / total_amt) | April 2019 (sale_count / sale_amt ±MoM / total_amt ±MoM) | May 2019 (sale_count / sale_amt ±MoM / total_amt ±MoM) |
| --- | --- | --- | --- |
| Banana | 13 / 2300 / 2700 | 12 / 2400 +4.30% / 2700 0% | 10 / 2000 -16.70% / 2000 -26.00% |
| Apple | 13 / 2000 / 2500 | 11 / 1900 -5.10% / 2600 +4% | 12 / 2000 +5.20% / 2400 -7.70% |

2.9.4 Tree Sorting Summary

 <!-- tree sorting and summary -->
 <sql id="treeTable_sort_sum">
     <value>
     <![CDATA[
 select t.area_code,t.pid_area,sale_cnt from sqltoy_area_sales t
     ]]>
     </value>
     <!-- organize the tree's parent/child structure, roll the leaf-node values up to the parent nodes level by level, and sort siblings in descending order -->
     <tree-sort id-column="area_code" pid-column="pid_area" sum-columns="sale_cnt" level-order-column="sale_cnt" order-way="desc" />
 </sql>
  • effect
| region | parent region | sales volume |
| --- | --- | --- |
| Shanghai | China | 300 |
| Songjiang | Shanghai | 120 |
| Yangpu | Shanghai | 116 |
| Pudong | Shanghai | 64 |
| Jiangsu | China | 270 |
| Nanjing | Jiangsu | 110 |
| Suzhou | Jiangsu | 90 |
| Wuxi | Jiangsu | 70 |

2.10 Sharding: splitting databases and tables

2.10.1 Sharded queries (database-sharding and table-sharding strategies can be used at the same time)

 For sql, see the quickstart project: com/sqltoy/quickstart/sqltoy-quickstart.sql.xml file
    <!-- database sharding demo -->
    <sql id="qstart_db_sharding_case">
        <sharding-datasource strategy="hashDataSource" params="userId" />
        <value>
            <![CDATA[
 select * from sqltoy_user_log t
 -- userId is mandatory, since it is the sharding key field
 where t.user_id=:userId
 #[and t.log_date>=:beginDate]
 #[and t.log_date<=:endDate]
            ]]>
        </value>
    </sql>

    <!-- table sharding demo -->
    <sql id="qstart_sharding_table_case">
        <sharding-table tables="sqltoy_trans_info_15d" strategy="realHisTable" params="beginDate" />
        <value>
            <![CDATA[
 select * from sqltoy_trans_info_15d t
 where t.trans_date>=:beginDate
 #[and t.trans_date<=:endDate]
            ]]>
        </value>
    </sql>
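
A minimal sketch of calling the sharded query above (the user id and date range are illustrative); the hashDataSource strategy routes the query by the userId value:

```java
// the userId value both filters rows and selects the target datasource
List<UserLogVO> logs = lightDao.find("qstart_db_sharding_case",
        MapKit.keys("userId", "beginDate", "endDate")
              .values("U001", LocalDate.now().minusDays(7), LocalDate.now()),
        UserLogVO.class);
```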
        

2.10.2 Sharded object operations (the VO is generated from the database by the quickvo tool; annotations you add by hand are not overwritten on regeneration)

@Sharding configures the database-sharding and table-sharding strategies through annotations on the object.

See com.sqltoy.quickstart.ShardingSearchTest for a demonstration.

 package com.sqltoy.showcase.vo;

 import java.time.LocalDate;
 import java.time.LocalDateTime;

 import org.sagacity.sqltoy.config.annotation.Sharding;
 import org.sagacity.sqltoy.config.annotation.SqlToyEntity;
 import org.sagacity.sqltoy.config.annotation.Strategy;

 import com.sagframe.sqltoy.showcase.vo.base.AbstractUserLogVO;

 /*
  * db is the database-sharding strategy and table the table-sharding strategy; they can be configured together or separately.
  * The strategy name must match the bean name defined in spring; fields declares which property values of the object are used as the sharding basis (one or several fields).
  * maxConcurrents: optional, the maximum parallelism; maxWaitSeconds: optional, the maximum number of seconds to wait.
  */
 @Sharding(db = @Strategy(name = "hashBalanceDBSharding", fields = { "userId" }),
         // table = @Strategy(name = "hashBalanceSharding", fields = { "userId" }),
         maxConcurrents = 10, maxWaitSeconds = 1800)
 @SqlToyEntity
 public class UserLogVO extends AbstractUserLogVO {

     private static final long serialVersionUID = 1296922598783858512L;

     /** default constructor */
     public UserLogVO() {
         super();
     }
 }
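
A short sketch of using the entity above; since the strategy is declared on the class, plain save/query calls are routed automatically (the setUserId/setLogDate setters are assumed to exist on the generated AbstractUserLogVO):

```java
UserLogVO userLog = new UserLogVO();
userLog.setUserId("U001");            // sharding field: decides which database the row is written to
userLog.setLogDate(LocalDate.now());  // assumed property of the generated VO
lightDao.save(userLog);               // routed by the hashBalanceDBSharding strategy
```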

3. Integration description

3.1 See the quickstart project under trunk and read its readme.md to get started

 package com.sqltoy.quickstart;

 import org.springframework.boot.SpringApplication;
 import org.springframework.boot.autoconfigure.SpringBootApplication;
 import org.springframework.context.annotation.ComponentScan;
 import org.springframework.transaction.annotation.EnableTransactionManagement;

 /**
  * @project sqltoy-quickstart
  * @description quickstart main program entry
  * @author zhongxuchen
  * @version v1.0, Date: July 17, 2020
  * @modify July 17, 2020, modification notes
  */
 @SpringBootApplication
 @ComponentScan(basePackages = { "com.sqltoy.config", "com.sqltoy.quickstart" })
 @EnableTransactionManagement
 public class SqlToyApplication {
     /**
      * @param args
      */
     public static void main(String[] args) {
         SpringApplication.run(SqlToyApplication.class, args);
     }
 }

3.2 sqltoy configuration in application.properties

 # sqltoy config
 spring.sqltoy.sqlResourcesDir=classpath:com/sqltoy/quickstart
 spring.sqltoy.translateConfig=classpath:sqltoy-translate.xml
 spring.sqltoy.debug=true
 #spring.sqltoy.reservedWords=status,sex_type
 #dataSourceSelector: org.sagacity.sqltoy.plugins.datasource.impl.DefaultDataSourceSelector
 #spring.sqltoy.defaultDataSource=dataSource
 # unified assignment of common fields (see the quickstart source)
 spring.sqltoy.unifyFieldsHandler=com.sqltoy.plugins.SqlToyUnifyFieldsHandler
 #spring.sqltoy.printSqlTimeoutMillis=200000

3.3 Cache translation configuration file sqltoy-translate.xml

 <?xml version="1.0" encoding="UTF-8"?>
 <sagacity
     xmlns="http://www.sagframe.com/schema/sqltoy-translate"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://www.sagframe.com/schema/sqltoy-translate http://www.sagframe.com/schema/sqltoy/sqltoy-translate.xsd">
     <!-- caches expire after 1 hour by default, so only frequently changing caches need timely change detection -->
     <cache-translates>
         <!-- cache loaded directly by sql query -->
         <sql-translate cache="dictKeyName" datasource="dataSource">
             <sql>
             <![CDATA[
 select t.DICT_KEY,t.DICT_NAME,t.STATUS
 from SQLTOY_DICT_DETAIL t
 where t.DICT_TYPE=:dictType
 order by t.SHOW_INDEX
             ]]>
             </sql>
         </sql-translate>

         <!-- staff id/name cache -->
         <sql-translate cache="staffIdName" datasource="dataSource">
             <sql>
             <![CDATA[
 select STAFF_ID,STAFF_NAME,STATUS
 from SQLTOY_STAFF_INFO
             ]]>
             </sql>
         </sql-translate>

         <!-- organization id/name cache -->
         <sql-translate cache="organIdName" datasource="dataSource">
             <sql>
             <![CDATA[
 select ORGAN_ID,ORGAN_NAME from SQLTOY_ORGAN_INFO order by SHOW_INDEX
             ]]>
             </sql>
         </sql-translate>
     </cache-translates>

     <!-- cache refresh detection: several checkers can be provided, based on sql, service or rest calls -->
     <cache-update-checkers>
         <!-- sql-based cache update detection -->
         <sql-increment-checker cache="organIdName" check-frequency="60" datasource="dataSource">
             <sql><![CDATA[
 --#not_debug#--
 select ORGAN_ID,ORGAN_NAME
 from SQLTOY_ORGAN_INFO
 where UPDATE_TIME>=:lastUpdateTime
             ]]></sql>
         </sql-increment-checker>

         <!-- incremental update: when changes are detected the cache is updated directly -->
         <sql-increment-checker cache="staffIdName" check-frequency="30" datasource="dataSource">
             <sql><![CDATA[
 --#not_debug#--
 select STAFF_ID,STAFF_NAME,STATUS
 from SQLTOY_STAFF_INFO
 where UPDATE_TIME>=:lastUpdateTime
             ]]></sql>
         </sql-increment-checker>

         <!-- incremental update with internal grouping: the first column of the query result is the group (cache type) -->
         <sql-increment-checker cache="dictKeyName" check-frequency="15" has-inside-group="true" datasource="dataSource">
             <sql><![CDATA[
 --#not_debug#--
 select t.DICT_TYPE,t.DICT_KEY,t.DICT_NAME,t.STATUS
 from SQLTOY_DICT_DETAIL t
 where t.UPDATE_TIME>=:lastUpdateTime
             ]]></sql>
         </sql-increment-checker>
     </cache-update-checkers>
 </sagacity>
  • In actual business development, SqlToyCRUDService can be used directly for simple single-object operations, avoiding trivial service classes; for complex logic, write your own service and call the LightDao provided by sqltoy directly to interact with the database (a service sketch follows the test case below).
 @RunWith(SpringRunner.class)
 @SpringBootTest(classes = SqlToyApplication.class)
 public class CrudCaseServiceTest {
     @Autowired
     private SqlToyCRUDService sqlToyCRUDService;

     /**
      * create a staff record
      */
     @Test
     public void saveStaffInfo() {
         StaffInfoVO staffInfo = new StaffInfoVO();
         staffInfo.setStaffId("S190715005");
         staffInfo.setStaffCode("S190715005");
         staffInfo.setStaffName("Test Employee 4");
         staffInfo.setSexType("M");
         staffInfo.setEmail("test3@aliyun.com");
         staffInfo.setEntryDate(LocalDate.now());
         staffInfo.setStatus(1);
         staffInfo.setOrganId("C0001");
         staffInfo.setPhoto(FileUtil.readAsBytes("classpath:/mock/staff_photo.jpg"));
         staffInfo.setCountry("86");
         sqlToyCRUDService.save(staffInfo);
     }
 }
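
A minimal sketch of the service pattern recommended above (class, method and sqlId names are illustrative): complex business logic lives in your own service and calls LightDao directly.

```java
@Service
public class StaffInfoService {
    @Autowired
    private LightDao lightDao;

    // complex business rules live here; persistence goes through LightDao
    public void entryStaff(StaffInfoVO staffInfo) {
        // validation / business logic would go here
        lightDao.save(staffInfo);
    }

    public List<StaffInfoVO> searchStaff(StaffInfoVO queryVO) {
        // "sqltoy_staff_search" is an assumed sqlId defined in a *.sql.xml file
        return lightDao.find("sqltoy_staff_search", queryVO);
    }
}
```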

4. Description of the key sqltoy classes

4.1 sqltoy-orm is mainly composed of the following parts:

  • SqlToyDaoSupport: the base Dao for developers to extend, integrating all database-operation methods.
  • LightDao: the ready-to-use Dao; developers can focus on service business logic and call lightDao directly in their services.
  • DialectFactory: the database dialect factory; sqltoy dispatches to different database implementations according to the dialect of the current connection.
  • SqlToyContext: the sqltoy context, the core configuration and exchange area of the whole framework; the spring configuration is mainly about configuring the sqltoyContext.
  • EntityManager: encapsulated in SqlToyContext; hosts the POJOs and establishes the mapping between objects and database tables. sqltoy scans and loads objects through the SqlToyEntity annotation.
  • ScriptLoader: the sql configuration file loader and parser, encapsulated in SqlToyContext; sql files must be named strictly following the *.sql.xml rule.
  • TranslateManager: the cache-translation manager; loads the cache-translation xml configuration and the cache implementation class. sqltoy defines an interface and ships a default local-cache implementation based on ehcache, which is the most efficient choice (a redis-style distributed cache has too much IO overhead for such a high-frequency call). Cache translation typically holds relatively stable data such as staff, organizations, data dictionaries, product categories and regions.
  • ShardingStrategy: the sharding strategy manager; since version 4.x it no longer needs to be explicitly registered with sqltoy. Defining the strategy as a spring bean is enough, and sqltoy manages it dynamically when it is used.

4.2 How to quickly read and understand sqltoy:

  • Start from LightDao to learn all the capabilities sqltoy provides.
  • SqlToyDaoSupport is the concrete implementation behind LightDao.
  • DialectFactory is the entry point to the different database-dialect implementations; trace into it to see the logic for a specific database, including how paging, random record fetching and fast paging are encapsulated for oracle, mysql and others.
  • EntityManager shows how POJOs are scanned and modeled, and how operating the database through POJOs is actually turned into the corresponding sql.
  • ParallelUtils: the parallel executor for sharded object operations; it shows how, in sharded batch operations, a collection is grouped by target database and table and scheduled in parallel.
  • SqlToyContext: the sqltoy configuration context; this class shows the full picture of sqltoy.
  • PageOptimizeUtils: shows the default implementation of paging optimization.
