- As easy to use as pytorch-transformers
- As powerful and concise as Keras
- High performance on NLU and NLG tasks
- Low barrier to entry for educators and practitioners
- Low compute costs: researchers can share trained models instead of always training from scratch
- Eight built-in architectures with more than 30 pretrained models, some covering over 100 languages
- Train state-of-the-art models in three lines of code
- Deep interoperability between TensorFlow 2.0 and PyTorch models: move a single model between the two frameworks at will
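The "three lines of code" point can be illustrated with the library's high-level `pipeline` API. This is a minimal sketch, assuming the `transformers` package is installed (`pip install transformers`) and that a network connection is available, since the first call downloads a default pretrained model; the input sentence is just an example.

```python
def run_sentiment_example() -> list:
    """Sketch of the three-line usage: import, build a pipeline, run inference.

    Assumes `transformers` is installed; the pipeline call downloads a
    default sentiment model on first use.
    """
    from transformers import pipeline                   # 1. import the high-level API
    classifier = pipeline("sentiment-analysis")         # 2. load a default pretrained model
    return classifier("Transformers makes NLP easy.")   # 3. run inference


if __name__ == "__main__":
    # The result is a list of dicts with "label" and "score" keys.
    print(run_sentiment_example())
```

The same `pipeline` entry point covers other built-in tasks (for example question answering or text generation) by changing the task name, which is what keeps the entry barrier low for practitioners.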