Using aiop to implement event waiting mode

2014/08/20

The event waiting mode, also known as the reactor pattern, simply listens for pending IO events and, once they become ready, dispatches them to the corresponding IO handlers. This mode is not itself asynchronous IO, but on unix-like systems the asio library in TBOX is built on top of it, so before explaining real asynchronous IO, we will first introduce it briefly to give a general picture of the underlying mechanism of asio.

On unix-like systems, the reactor pattern can be implemented with epoll on linux, kqueue on mac, or select, poll, /dev/poll, and so on.

Although these interfaces are functionally much the same, epoll and kqueue are more efficient, because unlike select and poll they are not implemented by polling every descriptor.

Judged purely by interface design, kqueue is the more efficient design, because it can submit multiple events in a single batched call and therefore interacts with the kernel less often. In practice, though, it is hard to say which of epoll and kqueue is actually faster.
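To make the batching difference concrete, here is a sketch using the raw system APIs (plain epoll/kqueue, independent of TBOX): registering interest in n descriptors costs n epoll_ctl syscalls, while kqueue can submit the whole change list in a single kevent call.

    #ifdef __linux__
    #include <sys/epoll.h>
    // epoll: one epoll_ctl syscall per descriptor to register interest
    static void register_all(int epfd, int* fds, int n)
    {
        for (int i = 0; i < n; i++)
        {
            struct epoll_event ev;
            ev.events  = EPOLLIN;
            ev.data.fd = fds[i];
            epoll_ctl(epfd, EPOLL_CTL_ADD, fds[i], &ev);
        }
    }
    #else
    #include <sys/event.h>
    // kqueue: all registrations submitted to the kernel in one kevent syscall
    static void register_all(int kq, int* fds, int n)
    {
        struct kevent changes[64];
        if (n > 64) n = 64;
        for (int i = 0; i < n; i++)
            EV_SET(&changes[i], fds[i], EVFILT_READ, EV_ADD, 0, 0, NULL);
        kevent(kq, changes, n, NULL, 0, NULL);   // one kernel round trip
    }
    #endif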

The underlying asio implementation in TBOX can be roughly divided into two types:

  1. Based on the reactor model: the aiop interface (the subject of this chapter) wraps epoll, kqueue, poll, select, and other APIs, monitors events in a separate thread, and dispatches the various asynchronous IO events, thereby also simulating the proactor mode.
  2. Based on a proactor model natively supported by the system, such as iocp on windows, implemented through some encapsulation.

In this way, upper-layer applications use one and the same set of asio asynchronous callback interfaces and need no changes, while the underlying mechanism differs by platform: iocp is used on windows, epoll on linux and android, kqueue on mac, and poll on ios.
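As a rough illustration (the macro names below are invented for this sketch and are not TBOX's actual configuration macros), the per-platform selection amounts to a compile-time dispatch like this:

    /* A schematic of the platform dispatch described above; TBOX makes this
     * choice inside its own configuration/ports layer, not like this.
     */
    #if defined(_WIN32)
    #   define DEMO_ASIO_BACKEND "iocp"     // native proactor
    #elif defined(__linux__) || defined(__ANDROID__)
    #   define DEMO_ASIO_BACKEND "epoll"    // reactor, wrapped by aiop
    #elif defined(__APPLE__)
    #   include <TargetConditionals.h>
    #   if TARGET_OS_IPHONE
    #       define DEMO_ASIO_BACKEND "poll"
    #   else
    #       define DEMO_ASIO_BACKEND "kqueue"
    #   endif
    #endif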

To get to the point: the aiop interface covered in this chapter is the upper-layer encapsulation of the various reactor poll interfaces. For applications that do not demand extreme performance, using aiop directly is simpler and easier to maintain. The callback-based proactor mode will be explained in detail in the next chapter.

First, let me describe the following object types:

  1. aiop: the event waiting object pool
  2. aioo: the waiting object, which associates and maintains the socket handle and the user private data
  3. aioe: the event object; one aioo object can wait for several aioe event types at a time, such as send, recv, acpt, conn, ... (see the sketch below)
  4. code: the event code
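Judging from how the event list is accessed in the example that follows, an aioe event roughly bundles three pieces of information. This is a hedged sketch for orientation only, not the library's exact definition:

    // Inferred from the usage in the example below; the real tb_aioe_t
    // definition in TBOX may differ in field names and layout.
    typedef struct demo_aioe_t
    {
        tb_size_t       code;   // the event code(s) that fired: acpt, conn, recv, send, ...
        tb_cpointer_t   priv;   // user private data bound via tb_aiop_addo()/tb_aiop_sete()
        tb_aioo_ref_t   aioo;   // the waiting object; the socket is recovered via tb_aioo_sock()
    } demo_aioe_t;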

Next, let's take a look at a simple server that uses aiop directly (for brevity, resource management and release are omitted here):

    tb_int_t main(tb_int_t argc, tb_char_t** argv)
    {
        // initialize a tcp socket for listening
        tb_socket_ref_t listen_sock = tb_socket_init(TB_SOCKET_TYPE_TCP);
        tb_assert_and_check_return_val(listen_sock, 0);

        // initialize the aiop pool with room for 16 sockets (types can be mixed);
        // passing 0 uses the default size
        tb_aiop_ref_t aiop = tb_aiop_init(16);
        tb_assert_and_check_return_val(aiop, 0);

        // bind the ip and port; the ip is not bound here (tb_null is passed),
        // the listening port is 9090
        if (!tb_socket_bind(listen_sock, tb_null, 9090)) return 0;

        // listen on the socket
        if (!tb_socket_listen(listen_sock, 5)) return 0;

        // add the listening socket to aiop and attach the accept wait event
        if (!tb_aiop_addo(aiop, listen_sock, TB_AIOE_CODE_ACPT, tb_null)) return 0;

        // the aioe event list used to receive the events returned by the wait
        tb_aioe_t list[16];

        // the wait loop
        while (1)
        {
            /* wait for events to arrive; similar to epoll and select
             *
             * 16: the maximum number of events to wait for
             * -1: the wait timeout, here wait forever
             *
             * objn is the number of valid events returned; -1 on failure, 0 on timeout
             */
            tb_long_t objn = tb_aiop_wait(aiop, list, 16, -1);
            tb_assert_and_check_break(objn >= 0);

            // enumerate the returned events
            tb_size_t i = 0;
            for (i = 0; i < objn; i++)
            {
                // the aioo object handle for this event; it maintains the socket handle,
                // the event type, and the associated private data
                tb_aioo_ref_t aioo = list[i].aioo;

                // the private data pointer associated with the aioe event;
                // it can also be obtained via tb_aioo_priv(aioo)
                tb_cpointer_t priv = list[i].priv;

                // the socket handle corresponding to the aioo
                tb_socket_ref_t sock = tb_aioo_sock(aioo);

                // an accept event?
                if (list[i].code & TB_AIOE_CODE_ACPT)
                {
                    // accept the peer connection, returning the corresponding client socket
                    tb_socket_ref_t client_sock = tb_socket_accept(sock, tb_null, tb_null);
                    tb_assert_and_check_break(client_sock);

                    /* add the client socket to the aiop pool and wait for its recv event
                     *
                     * the last parameter may pass a private data pointer to associate with
                     * the sock, which makes it easy to maintain per-connection session data;
                     * a string is passed here just as an example
                     *
                     * the returned aioo object can be saved, and the waited events
                     * can be modified flexibly later
                     *
                     * note: client_sock must be released by the application itself;
                     * aiop will not release it automatically, because it was created
                     * by the outside code. the example here omits this for brevity.
                     */
                    tb_aioo_ref_t client_aioo = tb_aiop_addo(aiop, client_sock, TB_AIOE_CODE_RECV, "private data");
                    tb_assert_and_check_break(client_aioo);
                }
                // a recv event?
                else if (list[i].code & TB_AIOE_CODE_RECV)
                {
                    // receive a chunk of data without blocking
                    tb_byte_t data[8192];
                    tb_long_t real = tb_socket_recv(sock, data, sizeof(data));

                    // the code handling the received data is omitted
                    // ...

                    // switch this socket to waiting for the send event
                    if (!tb_aiop_sete(aiop, aioo, TB_AIOE_CODE_SEND, tb_null)) break;

                    // try to send some data; it may not all be sent in one call
                    tb_socket_send(sock, (tb_byte_t const*)"hello", sizeof("hello"));
                }
                // a send event?
                else if (list[i].code & TB_AIOE_CODE_SEND)
                {
                    // continue sending whatever was left unsent last time
                    // tb_socket_send(..);

                    // delete the corresponding aioo object and stop listening
                    // for events on this sock
                    tb_aiop_delo(aiop, aioo);
                }
                // a connect event?
                else if (list[i].code & TB_AIOE_CODE_CONN)
                {
                    // never reached here: this event only occurs after tb_socket_connect
                    // and registering the aioo for it
                    // ...
                }
                // unknown event code
                else
                {
                    tb_trace_e("unknown code: %lu", list[i].code);
                    break;
                }
            }
        }
        return 0;
    }

The code above only demonstrates the aiop interface calling flow; it contains no real server business logic and is for reference only, so please do not copy it as-is.

The aioo object mentioned in the code also has a separate wait interface for waiting directly on a single socket object; it is typically used in the wait of basic_stream:

    /*! wait for events on a single socket handle
     *
     * @param socket    the socket handle
     * @param code      the aioe event code to wait for
     * @param timeout   the timeout, -1: wait forever
     *
     * @return          >0: the arrived event code, 0: timeout, -1: failure
     */
    tb_long_t           tb_aioo_wait(tb_socket_ref_t socket, tb_size_t code, tb_long_t timeout);
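For instance, a single-socket receive with a timeout might look like the following minimal sketch (assuming the timeout is in milliseconds, and using only the interfaces shown in this article):

    // a hedged sketch: wait up to 5 seconds for data on one already-connected socket
    static tb_long_t demo_recv_with_timeout(tb_socket_ref_t sock, tb_byte_t* data, tb_size_t size)
    {
        // wait for the recv event only; 5000 assumes a millisecond timeout
        tb_long_t ok = tb_aioo_wait(sock, TB_AIOE_CODE_RECV, 5000);
        if (ok > 0) return tb_socket_recv(sock, data, size);  // readable now, recv will not spin
        else if (!ok) return 0;                               // timeout: no data arrived
        else return -1;                                       // failure: the socket is likely dead
    }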

There are not many aiop interfaces, so rather than describing each one, I will simply list them here:

    /*! initialize the aiop event waiting pool
     *
     * @param maxn      the maximum number of waiting objects
     *
     * @return          the aiop pool
     */
    tb_aiop_ref_t       tb_aiop_init(tb_size_t maxn);

    /*! exit the aiop
     *
     * @param aiop      the aiop pool
     */
    tb_void_t           tb_aiop_exit(tb_aiop_ref_t aiop);

    /*! clear all aioo waiting objects in the aiop
     *
     * @param aiop      the aiop pool
     */
    tb_void_t           tb_aiop_cler(tb_aiop_ref_t aiop);

    /*! forcibly abort the aiop wait: tb_aiop_wait will return -1 and exit the loop,
     *  and waiting cannot be resumed
     *
     * @param aiop      the aiop pool
     */
    tb_void_t           tb_aiop_kill(tb_aiop_ref_t aiop);

    /*! interrupt the aiop wait once: tb_aiop_wait will return 0, but the wait loop
     *  is not abandoned and waiting can continue
     *
     * @param aiop      the aiop pool
     */
    tb_void_t           tb_aiop_spak(tb_aiop_ref_t aiop);

    /*! add a socket waiting object aioo, associating the waited event code and the private data priv
     *
     * @param aiop      the aiop pool
     * @param socket    the socket handle
     * @param code      the event code to wait for
     * @param priv      the associated user private data
     *
     * @return          the aioo object
     */
    tb_aioo_ref_t       tb_aiop_addo(tb_aiop_ref_t aiop, tb_socket_ref_t socket, tb_size_t code, tb_cpointer_t priv);

    /*! delete an aioo waiting object so that it is never waited on again
     *
     * @param aiop      the aiop pool
     * @param aioo      the aioo object handle
     */
    tb_void_t           tb_aiop_delo(tb_aiop_ref_t aiop, tb_aioo_ref_t aioo);

    /*! post an aioe wait event object
     *
     * @param aiop      the aiop pool
     * @param aioe      the aioe event object
     *
     * @return          tb_true on success, tb_false on failure
     */
    tb_bool_t           tb_aiop_post(tb_aiop_ref_t aiop, tb_aioe_t const* aioe);

    /*! set and modify the waited events of an aioo object
     *
     * @param aiop      the aiop pool
     * @param aioo      the aioo object
     * @param code      the event code to wait for
     * @param priv      the user private data
     *
     * @return          tb_true on success, tb_false on failure
     */
    tb_bool_t           tb_aiop_sete(tb_aiop_ref_t aiop, tb_aioo_ref_t aioo, tb_size_t code, tb_cpointer_t priv);

    /*! wait for a batch of aioe events
     *
     * @param aiop      the aiop pool
     * @param list      the aioe event list used to save the returned event objects
     * @param maxn      the maximum number of events to wait for
     * @param timeout   the timeout, -1: wait forever
     *
     * @return          >0: the number of aioe event objects actually returned, 0: timeout, -1: failure
     */
    tb_long_t           tb_aiop_wait(tb_aiop_ref_t aiop, tb_aioe_t* list, tb_size_t maxn, tb_long_t timeout);
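The difference between tb_aiop_kill and tb_aiop_spak is easiest to see from the wait loop's point of view. A hedged sketch based only on the return-value semantics listed above:

    // in some control thread:
    tb_aiop_spak(aiop);     // wake tb_aiop_wait once: it returns 0, the loop may keep waiting
    // or:
    tb_aiop_kill(aiop);     // abort waiting for good: tb_aiop_wait returns -1, the loop must exit

    // in the wait loop:
    while (1)
    {
        tb_long_t objn = tb_aiop_wait(aiop, list, 16, -1);
        if (objn < 0) break;        // killed or failed: leave the loop
        else if (!objn) continue;   // timeout or spak: nothing to handle, wait again
        // ... dispatch the objn returned events
    }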

Although the reactor mode of aiop is quite convenient for handling concurrent IO, the underlying systems support it to different degrees; on windows, for example, it can only be implemented with select, so it remains inadequate for truly high-performance concurrent IO. To make better use of the IO facilities each system provides and achieve better concurrency, while still keeping the upper calling interface simple, unified, cross-platform, and highly portable, something more is needed.

At that point, wrapping a higher-level interface in the proactor mode and adopting a fully asynchronous callback notification model is the better solution.

