CogVLM
Tsinghua University
• Framework: PyTorch

CogVLM is an advanced open-source vision-language model developed by Tsinghua University. Built with PyTorch and released under the Apache 2.0 license, it supports tasks such as image captioning, visual question answering (VQA), cross-modal retrieval, and semantic understanding. Designed for efficiency and accuracy, CogVLM enables developers to build multimodal AI applications with ease.
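As a rough illustration of running CogVLM for captioning or VQA, here is a minimal sketch. It assumes the Hugging Face checkpoint THUDM/cogvlm-chat-hf, the lmsys/vicuna-7b-v1.5 tokenizer, and the checkpoint's remote-code helper build_conversation_input_ids; these names come from a community release and may differ in the version you use, so treat them as placeholders rather than a definitive recipe.

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, LlamaTokenizer

# Assumed checkpoint IDs; check the official CogVLM release for the exact names.
TOKENIZER_ID = "lmsys/vicuna-7b-v1.5"
MODEL_ID = "THUDM/cogvlm-chat-hf"

tokenizer = LlamaTokenizer.from_pretrained(TOKENIZER_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # CogVLM ships its modeling code with the checkpoint
).to("cuda").eval()

image = Image.open("example.jpg").convert("RGB")  # illustrative path
query = "Describe this image."

# build_conversation_input_ids is provided by the checkpoint's remote code (assumption).
inputs = model.build_conversation_input_ids(
    tokenizer, query=query, history=[], images=[image]
)
inputs = {
    "input_ids": inputs["input_ids"].unsqueeze(0).to("cuda"),
    "token_type_ids": inputs["token_type_ids"].unsqueeze(0).to("cuda"),
    "attention_mask": inputs["attention_mask"].unsqueeze(0).to("cuda"),
    "images": [[inputs["images"][0].to("cuda", dtype=torch.bfloat16)]],
}

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Strip the prompt tokens and decode only the generated answer.
    answer = tokenizer.decode(
        outputs[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
print(answer)
```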

Model Performance Statistics
- Visual Reasoning
- Image QA
- Parameter Count: N/A
- Dataset Used: LAION, COCO, VGQA
Related AI Models
CLIP
OpenAI
CLIP (Contrastive Language–Image Pretraining) is an open-source multimodal model developed by OpenAI that learns visual concepts from natural language supervision. Built with PyTorch and released under the MIT license, it enables powerful image and text embeddings for applications such as zero-shot classification, semantic search, and cross-modal retrieval. It remains actively used in research and AI product development.
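Since CLIP's main use is matching images against free-form text in a shared embedding space, a short zero-shot classification sketch makes this concrete. The example below uses the Hugging Face transformers wrappers for the released ViT-B/32 checkpoint; the image path and candidate labels are illustrative.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the publicly released ViT-B/32 CLIP checkpoint.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # illustrative path
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# Encode the image and candidate captions, then compare them in the shared space.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds temperature-scaled image-text similarities,
# so a softmax over them yields zero-shot label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

The same image and text embeddings can also be cached separately (via get_image_features and get_text_features) to power semantic search or cross-modal retrieval at scale.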
DeepSeek-VL
DeepSeek AI
DeepSeek-VL is a cutting-edge open-source multimodal AI model that integrates vision and language processing to enable tasks like image captioning, semantic search, and cross-modal retrieval. Developed using PyTorch under the MIT license, it is suitable for building advanced AI systems requiring deep understanding across visual and textual data.
Emu2-Chat
Beijing Academy of Artificial Intelligence (BAAI)
Emu2-Chat is the conversational variant of BAAI's Emu2 multimodal model, designed for engaging, context-aware chat grounded in both images and text. It is optimized for natural language understanding and human-like response generation across various domains, making it well suited to chatbots, virtual assistants, and customer support automation.