LobeChat
Hunyuan

Hunyuan Lite

hunyuan-lite
Upgraded to an MoE structure with a 256k context window, it leads many open-source models on NLP, coding, mathematics, and industry benchmarks.
256K

Providers Supporting This Model

Hunyuan
hunyuan-lite
Maximum Context Length
256K
Maximum Output Length
6K
Input Price
--
Output Price
--

Model Parameters

Randomness
temperature

This setting affects the diversity of the model's responses. Lower values lead to more predictable and typical responses, while higher values encourage more diverse and less common responses. When set to 0, the model always gives the same response to a given input.

Type
FLOAT
Default Value
1.00
Range
0.00 ~ 2.00
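
As a concrete illustration of how this parameter is typically set, here is a minimal sketch that calls hunyuan-lite through an OpenAI-compatible client and contrasts a deterministic request with a more exploratory one. The endpoint URL and the HUNYUAN_API_KEY environment variable are placeholders, not values documented on this page.

```python
# A minimal sketch, assuming Hunyuan is reachable through an
# OpenAI-compatible endpoint; the base_url and HUNYUAN_API_KEY
# environment variable are placeholders, not documented values.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["HUNYUAN_API_KEY"],          # hypothetical variable name
    base_url="https://api.hunyuan.example.com/v1",  # placeholder endpoint
)

def ask(prompt: str, temperature: float) -> str:
    response = client.chat.completions.create(
        model="hunyuan-lite",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0.00 ~ 2.00, default 1.00
    )
    return response.choices[0].message.content

# temperature=0 should return (nearly) identical answers on repeated calls;
# temperature=1.5 trades that predictability for more varied wording.
print(ask("Name three uses for a paperclip.", temperature=0))
print(ask("Name three uses for a paperclip.", temperature=1.5))
```
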
Nucleus Sampling
top_p

This setting limits the model's token choices to the smallest set of most likely words whose cumulative probability reaches P. Lower values make the model's responses more predictable, while the default setting allows the model to choose from the full vocabulary.

Type
FLOAT
Default Value
1.00
Range
0.00 ~ 1.00
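
The cutoff itself is easy to picture outside any API: sort the candidate tokens by probability, keep the smallest prefix whose cumulative probability reaches P, and sample only from that prefix. The toy probabilities below are invented purely for illustration.

```python
# A toy sketch of nucleus (top-p) sampling; the token probabilities
# are made up purely to illustrate the cutoff behaviour.
import random

def top_p_filter(probs: dict[str, float], top_p: float) -> dict[str, float]:
    """Keep the smallest set of tokens whose cumulative probability >= top_p."""
    kept, cumulative = {}, 0.0
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())
    return {token: p / total for token, p in kept.items()}  # renormalize

candidates = {"cat": 0.5, "dog": 0.3, "ferret": 0.15, "axolotl": 0.05}

# top_p=1.0 (the default) keeps the whole vocabulary;
# top_p=0.8 drops the unlikely tail ("ferret", "axolotl").
for top_p in (1.0, 0.8):
    nucleus = top_p_filter(candidates, top_p)
    choice = random.choices(list(nucleus), weights=nucleus.values())[0]
    print(top_p, sorted(nucleus), "->", choice)
```
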
Topic Freshness
presence_penalty

This setting controls the reuse of vocabulary that has already appeared in the text generated so far. It applies a fixed, one-time penalty to any word that has appeared at least once, regardless of how often, which encourages the model to move on to new topics. The penalty does not grow with the number of occurrences. Negative values encourage vocabulary reuse.

Type
FLOAT
Default Value
0.00
Range
-2.00 ~ 2.00
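
One way to picture this parameter: a fixed amount is subtracted from the score of every token that has already appeared at least once, no matter how often. The sketch below illustrates that idea on made-up logits and is not a description of Hunyuan's internal implementation.

```python
# A conceptual sketch of a presence penalty on made-up logits;
# this illustrates the idea, not Hunyuan's implementation.
def apply_presence_penalty(logits: dict[str, float],
                           generated: list[str],
                           presence_penalty: float) -> dict[str, float]:
    seen = set(generated)
    # Each previously used token gets the same one-time penalty,
    # no matter how many times it has appeared.
    return {tok: score - (presence_penalty if tok in seen else 0.0)
            for tok, score in logits.items()}

logits = {"ocean": 2.0, "sea": 1.5, "mountain": 1.0}
history = ["ocean", "ocean", "sea"]  # "ocean" appeared twice, "sea" once

print(apply_presence_penalty(logits, history, presence_penalty=0.6))
# "ocean" and "sea" are penalized equally (by 0.6); "mountain" is untouched.
# A negative presence_penalty would instead boost already-used tokens.
```
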
Frequency Penalty
frequency_penalty

This setting adjusts how often the model reuses words that have already appeared in the text generated so far. The penalty grows with the number of times a word has occurred, so higher values increasingly discourage repetition, while negative values encourage vocabulary reuse.

Type
FLOAT
Default Value
0.00
Range
-2.00 ~ 2.00
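
Unlike the one-time presence penalty above, this penalty grows with the occurrence count. A common formulation in OpenAI-style APIs subtracts count × frequency_penalty (plus the one-time presence term) from each token's logit; whether Hunyuan applies exactly this formula is an assumption.

```python
# A sketch of the OpenAI-style penalty formula on made-up logits;
# treating it as a description of Hunyuan's internals is an assumption.
from collections import Counter

def penalize(logits: dict[str, float],
             generated: list[str],
             presence_penalty: float = 0.0,
             frequency_penalty: float = 0.0) -> dict[str, float]:
    counts = Counter(generated)
    return {
        tok: score
        - frequency_penalty * counts[tok]               # grows with each repeat
        - presence_penalty * (1 if counts[tok] else 0)  # applied once if seen
        for tok, score in logits.items()
    }

logits = {"ocean": 2.0, "sea": 1.5, "mountain": 1.0}
history = ["ocean", "ocean", "sea"]

# With frequency_penalty=0.4: "ocean" loses 0.8 (two occurrences),
# "sea" loses 0.4, and "mountain" is unchanged.
print(penalize(logits, history, frequency_penalty=0.4))
```
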
Single Response Limit
max_tokens

This setting defines the maximum length that the model can generate in a single response. Setting a higher value allows the model to produce longer replies, while a lower value restricts the length of the response, making it more concise. Adjusting this value appropriately based on different application scenarios can help achieve the desired response length and level of detail.

Type
INT
Default Value
--
Range
0 ~ 6K
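
To see how the cap interacts with the 6K output limit listed above, the sketch below deliberately sets a small max_tokens and checks whether the reply was truncated. It reuses the same assumed OpenAI-compatible setup as the temperature example (placeholder endpoint and key variable).

```python
# A minimal sketch reusing the assumed OpenAI-compatible setup
# (placeholder endpoint and hypothetical key variable, as above).
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["HUNYUAN_API_KEY"],
                base_url="https://api.hunyuan.example.com/v1")

response = client.chat.completions.create(
    model="hunyuan-lite",
    messages=[{"role": "user",
               "content": "Explain mixture-of-experts models in detail."}],
    max_tokens=256,  # must stay within the model's 0 ~ 6K output limit
)

choice = response.choices[0]
print(choice.message.content)
if choice.finish_reason == "length":
    # The reply hit the max_tokens cap before the model finished;
    # raise the limit (up to 6K) for a fuller answer.
    print("Response was truncated by max_tokens.")
```
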

Related Models

Hunyuan

Hunyuan Standard

hunyuan-standard
Utilizes a superior routing strategy while alleviating issues of load balancing and expert convergence. For long texts, the needle-in-a-haystack metric reaches 99.9%. MOE-32K offers a relatively higher cost-performance ratio, balancing effectiveness and price while enabling processing of long text inputs.
32K
Hunyuan

Hunyuan Standard 256K

hunyuan-standard-256K
Utilizes a superior routing strategy while alleviating issues of load balancing and expert convergence. For long texts, the needle-in-a-haystack metric reaches 99.9%. MOE-256K further breaks through in length and effectiveness, greatly expanding the input length capacity.
256K
Hunyuan

Hunyuan Turbo

hunyuan-turbo
The preview version of the next-generation Hunyuan large language model, featuring a brand-new mixture-of-experts (MoE) structure that delivers faster inference and stronger performance than Hunyuan Pro.
32K
Hunyuan

Hunyuan Pro

hunyuan-pro
A trillion-parameter MOE-32K long-text model. It achieves leading performance across a wide range of benchmarks, handles complex instructions and reasoning, offers advanced mathematical abilities, supports function calling, and is optimized for applications in multilingual translation, finance, law, and healthcare.
32K
Hunyuan

Hunyuan Code

hunyuan-code
Hunyuan's latest code generation model, built by further training a base model on 200B of high-quality code data and then fine-tuning for six months on high-quality SFT data, with the context window extended to 8K. It ranks among the top models on automatic code generation evaluations for five major programming languages, and in the first tier on comprehensive human evaluations across ten aspects of coding tasks.
8K