
Supported Models



Crynux theoretically supports any model that can be executed by the Hugging Face transformers library. To use a specific model, you simply need to specify its Hugging Face model ID in the task configuration. The Crynux Nodes will then automatically fetch the model from Hugging Face and execute the task.
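
As a small illustration of this mapping (the payload keys below are hypothetical assumptions for the sketch, not the exact Crynux task schema; see the task pages under "Execute Tasks" for the real format), a Hugging Face model ID resolves directly to its repository URL, which is what a node fetches:

```python
# Sketch: referencing a model by its Hugging Face model ID.
# NOTE: the task payload keys below are illustrative assumptions,
# not the actual Crynux task schema.

def hf_repo_url(model_id: str) -> str:
    """A Hugging Face model ID maps directly to its repository URL."""
    return f"https://huggingface.co/{model_id}"

task_payload = {
    "model": "Qwen/Qwen2.5-7B-Instruct",  # any transformers-compatible model ID
    "prompt": "Explain what a distilled model is in one sentence.",
}

print(hf_repo_url(task_payload["model"]))
# https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
```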

The primary practical limitation on the number and size of models Crynux can support is the maximum available VRAM on the nodes within the Crynux Network. Currently, the nodes with the largest VRAM capacity offer 24GB.
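
As a rough rule of thumb (an approximation that counts only the weights and ignores activations and the KV cache), a model loaded in 16-bit precision needs about two bytes per parameter, so a 24GB card accommodates models up to roughly 8B parameters:

```python
def min_vram_gb(num_params_billions: float, bytes_per_param: int = 2) -> float:
    """Rough lower bound on VRAM (in GB) needed just to hold the weights
    in 16-bit precision; real usage is higher (activations, KV cache)."""
    return num_params_billions * 1e9 * bytes_per_param / 1024**3

for name, size_b in [
    ("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", 1.5),
    ("Qwen/Qwen2.5-7B", 7.0),
    ("NousResearch/Hermes-3-Llama-3.1-8B", 8.0),
]:
    print(f"{name}: ~{min_vram_gb(size_b):.1f} GB for weights alone")
```

All three examples stay under 24GB for the weights alone, which is why the models listed below top out around 8B parameters.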

Below is a list of popular models that can be used on the Crynux Network. Each entry includes the model ID and a direct link to its Hugging Face repository.

If a model isn't on this list, feel free to try it out as long as you're confident it's compatible with the transformers library and there's sufficient VRAM available on the network.

DeepSeek Models

  • deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
  • deepseek-ai/DeepSeek-R1-Distill-Qwen-7B: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
  • deepseek-ai/DeepSeek-R1-Distill-Llama-8B: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B

Qwen Models

  • Qwen/Qwen3-4B: https://huggingface.co/Qwen/Qwen3-4B
  • Qwen/Qwen3-8B: https://huggingface.co/Qwen/Qwen3-8B
  • Qwen/Qwen2.5-7B: https://huggingface.co/Qwen/Qwen2.5-7B
  • Qwen/Qwen2.5-7B-Instruct: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct

NousResearch Models

  • NousResearch/Hermes-3-Llama-3.1-8B: https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B
  • NousResearch/Hermes-3-Llama-3.2-3B: https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B
