CogAgent
Tsinghua University
• Framework: PyTorch

CogAgent is an open-source AI agent framework developed by Tsinghua University. It supports multimodal understanding, integrating text, images, and other data types for comprehensive AI reasoning and interaction. Built with PyTorch and licensed under Apache 2.0, CogAgent enables researchers and developers to build intelligent systems that combine multiple data modalities.
- Screen Understanding
- Workflow Automation
- Parameter Count: N/A
- Dataset Used: Web screenshots, app UIs
Related AI Models
Granite 3.3
IBM
Granite 3.3 is IBM’s latest open-source multimodal AI model, offering advanced reasoning, speech-to-text, and document understanding capabilities. Trained on diverse datasets, it excels in enterprise applications requiring high accuracy and efficiency. Available under Apache 2.0 license.
CLIP
OpenAI
CLIP (Contrastive Language–Image Pretraining) is an open-source multimodal model developed by OpenAI that learns visual concepts from natural language supervision. Built with PyTorch and released under the MIT license, it enables powerful image and text embeddings for applications such as zero-shot classification, semantic search, and cross-modal retrieval. It remains actively used in research and AI product development.
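As a sketch of the zero-shot classification workflow described above, the snippet below uses the Hugging Face `transformers` CLIP API. The checkpoint name `openai/clip-vit-base-patch32` is one published CLIP variant, and the blank synthetic image is a stand-in for a real photo; in practice you would load your own image and label set.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load one published CLIP checkpoint (an assumption; any CLIP variant works).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate labels for zero-shot classification -- no task-specific training.
labels = ["a photo of a cat", "a photo of a dog"]

# Placeholder image; replace with Image.open("your_photo.jpg") in real use.
image = Image.new("RGB", (224, 224), color="white")

# The processor tokenizes the text and preprocesses the image together.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax gives
# a probability over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```

The same image and text embeddings (`outputs.image_embeds`, `outputs.text_embeds`) can be indexed directly for the semantic search and cross-modal retrieval use cases mentioned above.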
DeepSeek-VL
DeepSeek AI
DeepSeek-VL is a cutting-edge open-source multimodal AI model that integrates vision and language processing to enable tasks like image captioning, semantic search, and cross-modal retrieval. Developed using PyTorch under the MIT license, it is suitable for building advanced AI systems requiring deep understanding across visual and textual data.