Released Dec 10, 2023 · Knowledge cutoff Dec 31, 2023 · 32,768-token context · $0.54/M input tokens · $0.54/M output tokens
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts model from Mistral AI, intended for chat and instruction use. It incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters.
Instruct model fine-tuned by Mistral. #moe
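To make the "Sparse Mixture of Experts" idea concrete, here is a minimal routing sketch. The expert count of 8 matches the description above; the top-2 routing choice, toy dimensions, and random weights are illustrative assumptions for the sketch, not Mixtral's actual configuration or parameters.

```python
# Minimal sketch of sparse Mixture-of-Experts routing: each token is sent to a
# small subset of expert feed-forward networks chosen by a learned router.
# Dimensions and weights here are toy values for illustration only.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # experts per MoE layer, as stated in the description
TOP_K = 2         # assumed number of active experts per token
D_MODEL = 16      # toy hidden size

# Toy "expert" feed-forward networks: one weight matrix each.
experts = [rng.normal(size=(D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
# Router that scores each token against every expert.
router_w = rng.normal(size=(D_MODEL, NUM_EXPERTS))

def moe_layer(tokens: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and mix their outputs."""
    logits = tokens @ router_w                       # (n_tokens, NUM_EXPERTS)
    top_k = np.argsort(logits, axis=-1)[:, -TOP_K:]  # indices of chosen experts
    out = np.zeros_like(tokens)
    for i, token in enumerate(tokens):
        chosen = logits[i, top_k[i]]
        weights = np.exp(chosen - chosen.max())
        weights /= weights.sum()                     # softmax over chosen experts only
        for w, e in zip(weights, top_k[i]):
            out[i] += w * (token @ experts[e])       # weighted sum of expert outputs
    return out

tokens = rng.normal(size=(4, D_MODEL))               # 4 toy token embeddings
print(moe_layer(tokens).shape)                       # (4, 16)
```

Because only the routed experts run for each token, a sparse MoE model carries far more total parameters than it activates per token, which is how 8 experts can add up to 47 billion parameters without a proportional per-token compute cost.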
Recent activity on Mixtral 8x7B Instruct
Total usage per day on OpenRouter
Prompt: 16.1M tokens · Completion: 2.86M tokens
Prompt tokens measure input size. Reasoning tokens show internal thinking before a response. Completion tokens reflect total output length.
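As a rough illustration of how the per-token pricing above combines with these usage counts, here is a back-of-the-envelope daily cost estimate. The token totals are the chart figures; treating the sum at a flat $0.54/M as the bill is an assumption for the sketch, not an official calculation.

```python
# Back-of-the-envelope daily spend, assuming $0.54 per million tokens for
# both input and output, and the recent daily totals shown above.
PRICE_PER_M = 0.54              # USD per million tokens, input and output alike

prompt_tokens = 16_100_000      # prompt (input) tokens per day
completion_tokens = 2_860_000   # completion (output) tokens per day

cost = (prompt_tokens + completion_tokens) / 1_000_000 * PRICE_PER_M
print(f"Estimated daily spend: ${cost:.2f}")   # ~$10.24
```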