open source

XLNet

Provided by: Google AI & CMU | Framework: TensorFlow

XLNet is a powerful Apache-2.0 open-source language model from Google AI and CMU that combines permutation-based pretraining with a Transformer-XL backbone. It outperforms BERT on 20 NLP tasks, including question answering, natural language inference, and sentiment analysis, and is fully supported in Hugging Face Transformers for easy integration.
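
Because XLNet ships in the Hugging Face Transformers library, trying it out takes only a few lines. The sketch below is illustrative rather than official: it assumes PyTorch plus the public xlnet-base-cased checkpoint, and the sequence-classification head it attaches is randomly initialized, so it must be fine-tuned before its predictions are meaningful.

```python
# Minimal sketch (assumption: transformers + torch installed, public
# "xlnet-base-cased" checkpoint). The classification head is untrained.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlnet-base-cased", num_labels=2
)

inputs = tokenizer(
    "XLNet pairs permutation pretraining with Transformer-XL.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)
print(logits.softmax(dim=-1))
```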

Model Performance Statistics

Released: June 19, 2019
Last Checked: Jul 20, 2025
Version: Large

Capabilities
  • Text Classification
  • Question Answering
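
A minimal sketch of the Question Answering capability listed above, again assuming Hugging Face Transformers, PyTorch, and the xlnet-base-cased checkpoint; the span-prediction head here is randomly initialized, so in practice you would load a checkpoint fine-tuned on a QA dataset such as SQuAD.

```python
# Illustrative only: the QA head is untrained here, so the extracted span is
# arbitrary until the model is fine-tuned for question answering.
import torch
from transformers import AutoTokenizer, XLNetForQuestionAnsweringSimple

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForQuestionAnsweringSimple.from_pretrained("xlnet-base-cased")

question = "Who developed XLNet?"
context = "XLNet was developed by researchers at CMU and Google AI."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode that span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```
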
Performance Benchmarks
GLUE: 88.4
Technical Specifications
Parameter Count: N/A
Training & Dataset

Dataset Used: BooksCorpus, Wikipedia, Giga5, ClueWeb, Common Crawl

Related AI Models

Discover similar AI models that might interest you

Model · open source

BERT

Google

BERT is a groundbreaking open-source transformer model developed by Google that enables bidirectional understanding of text, improving many NLP tasks like question answering and sentiment analysis.

Natural Language Processing · nlp
Model · open source

T5

Google

T5 (Text-to-Text Transfer Transformer) is Google’s powerful open-source model that converts all NLP problems into a text-to-text format, enabling flexible language understanding and generation.

Natural Language Processing · nlp
Model · open source

GPT-Neo

EleutherAI

GPT-Neo is an open-source large language model developed by EleutherAI as an alternative to OpenAI’s GPT-3. It uses the Transformer architecture to generate coherent, human-like text from a given prompt. GPT-Neo is trained on the Pile, a large and diverse text corpus, which makes it capable of many NLP tasks such as text generation, summarization, translation, and question answering. The model comes in several sizes, the most popular being the 1.3B and 2.7B parameter versions.

Natural Language Processing · nlp