feat: add model update script and generate README from Mammouth and Artificial Analysis APIs

laurent
2026-02-22 16:37:52 +01:00
commit 4f4a1a0bc7
2 changed files with 510 additions and 0 deletions

README.md Normal file

@@ -0,0 +1,358 @@
# Mammouth.ai Model Table
*Automatically updated via the Artificial Analysis & Mammouth APIs*
Last updated: 2026-02-22 16:36:58
## Coding
| Model | Price per 1M (In / Out) | Performance (AA Index) | Speed (TPS) |
| :--- | :--- | :--- | :--- |
| Qwen: Qwen3 Coder Next | $0.12 / $0.75 | **25.0** | 32.2 |
| OpenAI: GPT-5.2-Codex | $1.75 / $14.00 | **25.0** | 32.2 |
| OpenAI: GPT-5.1-Codex-Max | $1.25 / $10.00 | **25.0** | 32.2 |
| OpenAI: GPT-5.1-Codex | $1.25 / $10.00 | **25.0** | 32.2 |
| OpenAI: GPT-5.1-Codex-Mini | $0.25 / $2.00 | **25.0** | 32.2 |
| Kwaipilot: KAT-Coder-Pro V1 | $0.21 / $0.83 | **25.0** | 32.2 |
| Qwen: Qwen3 Coder Plus | $1.00 / $5.00 | **25.0** | 32.2 |
| OpenAI: GPT-5 Codex | $1.25 / $10.00 | **25.0** | 32.2 |
| Qwen: Qwen3 Coder Flash | $0.30 / $1.50 | **25.0** | 32.2 |
| xAI: Grok Code Fast 1 | $0.20 / $1.50 | **25.0** | 32.2 |
| Mistral: Codestral 2508 | $0.30 / $0.90 | **25.0** | 32.2 |
| Qwen: Qwen3 Coder 30B A3B Instruct | $0.07 / $0.27 | **25.0** | 32.2 |
| Qwen: Qwen3 Coder 480B A35B (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Qwen: Qwen3 Coder 480B A35B | $0.22 / $1.00 | **25.0** | 32.2 |
| Qwen: Qwen3 Coder 480B A35B (exacto) | $0.22 / $1.80 | **25.0** | 32.2 |
| Arcee AI: Coder Large | $0.50 / $0.80 | **25.0** | 32.2 |
| Inception: Mercury Coder | $0.25 / $1.00 | **25.0** | 32.2 |
| Qwen: Qwen2.5 Coder 7B Instruct | $0.03 / $0.09 | **25.0** | 32.2 |
| AlfredPros: CodeLLaMa 7B Instruct Solidity | $0.80 / $1.20 | **25.0** | 32.2 |
| Qwen2.5 Coder 32B Instruct | $0.20 / $0.20 | **25.0** | 32.2 |
## Agents
| Model | Price per 1M (In / Out) | Performance (AA Index) | Speed (TPS) |
| :--- | :--- | :--- | :--- |
| Nous: Hermes 4 70B | $0.13 / $0.40 | **25.0** | 32.2 |
| Nous: Hermes 4 405B | $1.00 / $3.00 | **25.0** | 32.2 |
| Nous: Hermes 3 70B Instruct | $0.30 / $0.30 | **25.0** | 32.2 |
| Nous: Hermes 3 405B Instruct (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Nous: Hermes 3 405B Instruct | $1.00 / $1.00 | **25.0** | 32.2 |
| NousResearch: Hermes 2 Pro - Llama-3 8B | $0.14 / $0.14 | **25.0** | 32.2 |
## General
| Model | Price per 1M (In / Out) | Performance (AA Index) | Speed (TPS) |
| :--- | :--- | :--- | :--- |
| Google: Gemini 3.1 Pro Preview | $2.00 / $12.00 | **25.0** | 32.2 |
| Anthropic: Claude Sonnet 4.6 | $3.00 / $15.00 | **25.0** | 32.2 |
| Qwen: Qwen3.5 Plus 2026-02-15 | $0.40 / $2.40 | **25.0** | 32.2 |
| Qwen: Qwen3.5 397B A17B | $0.15 / $1.00 | **25.0** | 32.2 |
| MiniMax: MiniMax M2.5 | $0.30 / $1.10 | **25.0** | 32.2 |
| Z.ai: GLM 5 | $0.95 / $2.55 | **25.0** | 32.2 |
| Qwen: Qwen3 Max Thinking | $1.20 / $6.00 | **25.0** | 32.2 |
| Anthropic: Claude Opus 4.6 | $5.00 / $25.00 | **25.0** | 32.2 |
| Free Models Router | $0.00 / $0.00 | **25.0** | 32.2 |
| StepFun: Step 3.5 Flash (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| StepFun: Step 3.5 Flash | $0.10 / $0.30 | **25.0** | 32.2 |
| Arcee AI: Trinity Large Preview (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| MoonshotAI: Kimi K2.5 | $0.45 / $2.20 | **25.0** | 32.2 |
| Upstage: Solar Pro 3 (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| MiniMax: MiniMax M2-her | $0.30 / $1.20 | **25.0** | 32.2 |
| Writer: Palmyra X5 | $0.60 / $6.00 | **25.0** | 32.2 |
| LiquidAI: LFM2.5-1.2B-Thinking (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| LiquidAI: LFM2.5-1.2B-Instruct (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| OpenAI: GPT Audio | $2.50 / $10.00 | **25.0** | 32.2 |
| OpenAI: GPT Audio Mini | $0.60 / $2.40 | **25.0** | 32.2 |
| Z.ai: GLM 4.7 Flash | $0.06 / $0.40 | **25.0** | 32.2 |
| AllenAI: Molmo2 8B | $0.20 / $0.20 | **25.0** | 32.2 |
| AllenAI: Olmo 3.1 32B Instruct | $0.20 / $0.60 | **25.0** | 32.2 |
| ByteDance Seed: Seed 1.6 Flash | $0.07 / $0.30 | **25.0** | 32.2 |
| ByteDance Seed: Seed 1.6 | $0.25 / $2.00 | **25.0** | 32.2 |
| MiniMax: MiniMax M2.1 | $0.27 / $0.95 | **25.0** | 32.2 |
| Z.ai: GLM 4.7 | $0.38 / $1.70 | **25.0** | 32.2 |
| Google: Gemini 3 Flash Preview | $0.50 / $3.00 | **25.0** | 32.2 |
| Mistral: Mistral Small Creative | $0.10 / $0.30 | **25.0** | 32.2 |
| AllenAI: Olmo 3.1 32B Think | $0.15 / $0.50 | **25.0** | 32.2 |
| Xiaomi: MiMo-V2-Flash | $0.09 / $0.29 | **25.0** | 32.2 |
| NVIDIA: Nemotron 3 Nano 30B A3B (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| NVIDIA: Nemotron 3 Nano 30B A3B | $0.05 / $0.20 | **25.0** | 32.2 |
| OpenAI: GPT-5.2 Chat | $1.75 / $14.00 | **25.0** | 32.2 |
| OpenAI: GPT-5.2 Pro | $21.00 / $168.00 | **25.0** | 32.2 |
| OpenAI: GPT-5.2 | $1.75 / $14.00 | **25.0** | 32.2 |
| Mistral: Devstral 2 2512 | $0.40 / $2.00 | **25.0** | 32.2 |
| Relace: Relace Search | $1.00 / $3.00 | **25.0** | 32.2 |
| Z.ai: GLM 4.6V | $0.30 / $0.90 | **25.0** | 32.2 |
| Nex AGI: DeepSeek V3.1 Nex N1 | $0.27 / $1.00 | **25.0** | 32.2 |
| EssentialAI: Rnj 1 Instruct | $0.15 / $0.15 | **25.0** | 32.2 |
| Body Builder (beta) | $-1000000.00 / $-1000000.00 | **25.0** | 32.2 |
| Amazon: Nova 2 Lite | $0.30 / $2.50 | **25.0** | 32.2 |
| Mistral: Ministral 3 14B 2512 | $0.20 / $0.20 | **25.0** | 32.2 |
| Mistral: Ministral 3 8B 2512 | $0.15 / $0.15 | **25.0** | 32.2 |
| Mistral: Ministral 3 3B 2512 | $0.10 / $0.10 | **25.0** | 32.2 |
| Mistral: Mistral Large 3 2512 | $0.50 / $1.50 | **25.0** | 32.2 |
| Arcee AI: Trinity Mini (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Arcee AI: Trinity Mini | $0.04 / $0.15 | **25.0** | 32.2 |
| DeepSeek: DeepSeek V3.2 Speciale | $0.40 / $1.20 | **25.0** | 32.2 |
| DeepSeek: DeepSeek V3.2 | $0.26 / $0.38 | **25.0** | 32.2 |
| Prime Intellect: INTELLECT-3 | $0.20 / $1.10 | **25.0** | 32.2 |
| Anthropic: Claude Opus 4.5 | $5.00 / $25.00 | **25.0** | 32.2 |
| AllenAI: Olmo 3 32B Think | $0.15 / $0.50 | **25.0** | 32.2 |
| AllenAI: Olmo 3 7B Instruct | $0.10 / $0.20 | **25.0** | 32.2 |
| AllenAI: Olmo 3 7B Think | $0.12 / $0.20 | **25.0** | 32.2 |
| Google: Nano Banana Pro (Gemini 3 Pro Image Preview) | $2.00 / $12.00 | **25.0** | 32.2 |
| xAI: Grok 4.1 Fast | $0.20 / $0.50 | **25.0** | 32.2 |
| Google: Gemini 3 Pro Preview | $2.00 / $12.00 | **25.0** | 32.2 |
| Deep Cogito: Cogito v2.1 671B | $1.25 / $1.25 | **25.0** | 32.2 |
| OpenAI: GPT-5.1 | $1.25 / $10.00 | **25.0** | 32.2 |
| OpenAI: GPT-5.1 Chat | $1.25 / $10.00 | **25.0** | 32.2 |
| MoonshotAI: Kimi K2 Thinking | $0.47 / $2.00 | **25.0** | 32.2 |
| Amazon: Nova Premier 1.0 | $2.50 / $12.50 | **25.0** | 32.2 |
| Perplexity: Sonar Pro Search | $3.00 / $15.00 | **25.0** | 32.2 |
| Mistral: Voxtral Small 24B 2507 | $0.10 / $0.30 | **25.0** | 32.2 |
| OpenAI: gpt-oss-safeguard-20b | $0.07 / $0.30 | **25.0** | 32.2 |
| NVIDIA: Nemotron Nano 12B 2 VL (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| NVIDIA: Nemotron Nano 12B 2 VL | $0.07 / $0.20 | **25.0** | 32.2 |
| MiniMax: MiniMax M2 | $0.26 / $1.00 | **25.0** | 32.2 |
| Qwen: Qwen3 VL 32B Instruct | $0.10 / $0.42 | **25.0** | 32.2 |
| LiquidAI: LFM2-8B-A1B | $0.01 / $0.02 | **25.0** | 32.2 |
| LiquidAI: LFM2-2.6B | $0.01 / $0.02 | **25.0** | 32.2 |
| IBM: Granite 4.0 Micro | $0.02 / $0.11 | **25.0** | 32.2 |
| OpenAI: GPT-5 Image Mini | $2.50 / $2.00 | **25.0** | 32.2 |
| Anthropic: Claude Haiku 4.5 | $1.00 / $5.00 | **25.0** | 32.2 |
| Qwen: Qwen3 VL 8B Thinking | $0.12 / $1.36 | **25.0** | 32.2 |
| Qwen: Qwen3 VL 8B Instruct | $0.08 / $0.50 | **25.0** | 32.2 |
| OpenAI: GPT-5 Image | $10.00 / $10.00 | **25.0** | 32.2 |
| OpenAI: o3 Deep Research | $10.00 / $40.00 | **25.0** | 32.2 |
| OpenAI: o4 Mini Deep Research | $2.00 / $8.00 | **25.0** | 32.2 |
| NVIDIA: Llama 3.3 Nemotron Super 49B V1.5 | $0.10 / $0.40 | **25.0** | 32.2 |
| Baidu: ERNIE 4.5 21B A3B Thinking | $0.07 / $0.28 | **25.0** | 32.2 |
| Google: Gemini 2.5 Flash Image (Nano Banana) | $0.30 / $2.50 | **25.0** | 32.2 |
| Qwen: Qwen3 VL 30B A3B Thinking | $0.00 / $0.00 | **25.0** | 32.2 |
| Qwen: Qwen3 VL 30B A3B Instruct | $0.13 / $0.52 | **25.0** | 32.2 |
| OpenAI: GPT-5 Pro | $15.00 / $120.00 | **25.0** | 32.2 |
| Z.ai: GLM 4.6 | $0.35 / $1.71 | **25.0** | 32.2 |
| Z.ai: GLM 4.6 (exacto) | $0.44 / $1.76 | **25.0** | 32.2 |
| Anthropic: Claude Sonnet 4.5 | $3.00 / $15.00 | **25.0** | 32.2 |
| DeepSeek: DeepSeek V3.2 Exp | $0.27 / $0.41 | **25.0** | 32.2 |
| TheDrummer: Cydonia 24B V4.1 | $0.30 / $0.50 | **25.0** | 32.2 |
| Relace: Relace Apply 3 | $0.85 / $1.25 | **25.0** | 32.2 |
| Google: Gemini 2.5 Flash Lite Preview 09-2025 | $0.10 / $0.40 | **25.0** | 32.2 |
| Qwen: Qwen3 VL 235B A22B Thinking | $0.00 / $0.00 | **25.0** | 32.2 |
| Qwen: Qwen3 VL 235B A22B Instruct | $0.20 / $0.88 | **25.0** | 32.2 |
| Qwen: Qwen3 Max | $1.20 / $6.00 | **25.0** | 32.2 |
| DeepSeek: DeepSeek V3.1 Terminus (exacto) | $0.21 / $0.79 | **25.0** | 32.2 |
| DeepSeek: DeepSeek V3.1 Terminus | $0.21 / $0.79 | **25.0** | 32.2 |
| xAI: Grok 4 Fast | $0.20 / $0.50 | **25.0** | 32.2 |
| Tongyi DeepResearch 30B A3B | $0.09 / $0.45 | **25.0** | 32.2 |
| OpenGVLab: InternVL3 78B | $0.15 / $0.60 | **25.0** | 32.2 |
| Qwen: Qwen3 Next 80B A3B Thinking | $0.15 / $1.20 | **25.0** | 32.2 |
| Qwen: Qwen3 Next 80B A3B Instruct (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Qwen: Qwen3 Next 80B A3B Instruct | $0.09 / $1.10 | **25.0** | 32.2 |
| Meituan: LongCat Flash Chat | $0.20 / $0.80 | **25.0** | 32.2 |
| Qwen: Qwen Plus 0728 | $0.40 / $1.20 | **25.0** | 32.2 |
| Qwen: Qwen Plus 0728 (thinking) | $0.40 / $1.20 | **25.0** | 32.2 |
| NVIDIA: Nemotron Nano 9B V2 (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| NVIDIA: Nemotron Nano 9B V2 | $0.04 / $0.16 | **25.0** | 32.2 |
| MoonshotAI: Kimi K2 0905 | $0.40 / $2.00 | **25.0** | 32.2 |
| MoonshotAI: Kimi K2 0905 (exacto) | $0.60 / $2.50 | **25.0** | 32.2 |
| Qwen: Qwen3 30B A3B Thinking 2507 | $0.05 / $0.34 | **25.0** | 32.2 |
| DeepSeek: DeepSeek V3.1 | $0.15 / $0.75 | **25.0** | 32.2 |
| OpenAI: GPT-4o Audio | $2.50 / $10.00 | **25.0** | 32.2 |
| Mistral: Mistral Medium 3.1 | $0.40 / $2.00 | **25.0** | 32.2 |
| Baidu: ERNIE 4.5 21B A3B | $0.07 / $0.28 | **25.0** | 32.2 |
| Baidu: ERNIE 4.5 VL 28B A3B | $0.14 / $0.56 | **25.0** | 32.2 |
| Z.ai: GLM 4.5V | $0.60 / $1.80 | **25.0** | 32.2 |
| AI21: Jamba Large 1.7 | $2.00 / $8.00 | **25.0** | 32.2 |
| OpenAI: GPT-5 Chat | $1.25 / $10.00 | **25.0** | 32.2 |
| OpenAI: GPT-5 | $1.25 / $10.00 | **25.0** | 32.2 |
| OpenAI: GPT-5 Mini | $0.25 / $2.00 | **25.0** | 32.2 |
| OpenAI: GPT-5 Nano | $0.05 / $0.40 | **25.0** | 32.2 |
| OpenAI: gpt-oss-120b (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| OpenAI: gpt-oss-120b | $0.04 / $0.19 | **25.0** | 32.2 |
| OpenAI: gpt-oss-120b (exacto) | $0.04 / $0.19 | **25.0** | 32.2 |
| OpenAI: gpt-oss-20b (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| OpenAI: gpt-oss-20b | $0.03 / $0.14 | **25.0** | 32.2 |
| Anthropic: Claude Opus 4.1 | $15.00 / $75.00 | **25.0** | 32.2 |
| Qwen: Qwen3 30B A3B Instruct 2507 | $0.09 / $0.30 | **25.0** | 32.2 |
| Z.ai: GLM 4.5 | $0.55 / $2.00 | **25.0** | 32.2 |
| Z.ai: GLM 4.5 Air (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Z.ai: GLM 4.5 Air | $0.13 / $0.85 | **25.0** | 32.2 |
| Qwen: Qwen3 235B A22B Thinking 2507 | $0.00 / $0.00 | **25.0** | 32.2 |
| Z.ai: GLM 4 32B | $0.10 / $0.10 | **25.0** | 32.2 |
| ByteDance: UI-TARS 7B | $0.10 / $0.20 | **25.0** | 32.2 |
| Google: Gemini 2.5 Flash Lite | $0.10 / $0.40 | **25.0** | 32.2 |
| Qwen: Qwen3 235B A22B Instruct 2507 | $0.07 / $0.10 | **25.0** | 32.2 |
| Switchpoint Router | $0.85 / $3.40 | **25.0** | 32.2 |
| MoonshotAI: Kimi K2 0711 | $0.50 / $2.40 | **25.0** | 32.2 |
| Mistral: Devstral Medium | $0.40 / $2.00 | **25.0** | 32.2 |
| Mistral: Devstral Small 1.1 | $0.10 / $0.30 | **25.0** | 32.2 |
| Venice: Uncensored (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| xAI: Grok 4 | $3.00 / $15.00 | **25.0** | 32.2 |
| Google: Gemma 3n 2B (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Tencent: Hunyuan A13B Instruct | $0.14 / $0.57 | **25.0** | 32.2 |
| TNG: DeepSeek R1T2 Chimera | $0.25 / $0.85 | **25.0** | 32.2 |
| Morph: Morph V3 Large | $0.90 / $1.90 | **25.0** | 32.2 |
| Morph: Morph V3 Fast | $0.80 / $1.20 | **25.0** | 32.2 |
| Baidu: ERNIE 4.5 VL 424B A47B | $0.42 / $1.25 | **25.0** | 32.2 |
| Baidu: ERNIE 4.5 300B A47B | $0.28 / $1.10 | **25.0** | 32.2 |
| Inception: Mercury | $0.25 / $1.00 | **25.0** | 32.2 |
| Mistral: Mistral Small 3.2 24B | $0.06 / $0.18 | **25.0** | 32.2 |
| MiniMax: MiniMax M1 | $0.40 / $2.20 | **25.0** | 32.2 |
| Google: Gemini 2.5 Flash | $0.30 / $2.50 | **25.0** | 32.2 |
| Google: Gemini 2.5 Pro | $1.25 / $10.00 | **25.0** | 32.2 |
| OpenAI: o3 Pro | $20.00 / $80.00 | **25.0** | 32.2 |
| xAI: Grok 3 Mini | $0.30 / $0.50 | **25.0** | 32.2 |
| xAI: Grok 3 | $3.00 / $15.00 | **25.0** | 32.2 |
| Google: Gemini 2.5 Pro Preview 06-05 | $1.25 / $10.00 | **25.0** | 32.2 |
| DeepSeek: R1 0528 (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| DeepSeek: R1 0528 | $0.40 / $1.75 | **25.0** | 32.2 |
| Anthropic: Claude Opus 4 | $15.00 / $75.00 | **25.0** | 32.2 |
| Anthropic: Claude Sonnet 4 | $3.00 / $15.00 | **25.0** | 32.2 |
| Google: Gemma 3n 4B (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Google: Gemma 3n 4B | $0.02 / $0.04 | **25.0** | 32.2 |
| Mistral: Mistral Medium 3 | $0.40 / $2.00 | **25.0** | 32.2 |
| Google: Gemini 2.5 Pro Preview 05-06 | $1.25 / $10.00 | **25.0** | 32.2 |
| Arcee AI: Spotlight | $0.18 / $0.18 | **25.0** | 32.2 |
| Arcee AI: Maestro Reasoning | $0.90 / $3.30 | **25.0** | 32.2 |
| Arcee AI: Virtuoso Large | $0.75 / $1.20 | **25.0** | 32.2 |
| Qwen: Qwen3 4B (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Meta: Llama Guard 4 12B | $0.18 / $0.18 | **25.0** | 32.2 |
| Qwen: Qwen3 30B A3B | $0.08 / $0.28 | **25.0** | 32.2 |
| Qwen: Qwen3 8B | $0.05 / $0.40 | **25.0** | 32.2 |
| Qwen: Qwen3 14B | $0.06 / $0.24 | **25.0** | 32.2 |
| Qwen: Qwen3 32B | $0.08 / $0.24 | **25.0** | 32.2 |
| Qwen: Qwen3 235B A22B | $0.45 / $1.82 | **25.0** | 32.2 |
| OpenAI: o4 Mini High | $1.10 / $4.40 | **25.0** | 32.2 |
| OpenAI: o3 | $2.00 / $8.00 | **25.0** | 32.2 |
| OpenAI: o4 Mini | $1.10 / $4.40 | **25.0** | 32.2 |
| OpenAI: GPT-4.1 | $2.00 / $8.00 | **25.0** | 32.2 |
| OpenAI: GPT-4.1 Mini | $0.40 / $1.60 | **25.0** | 32.2 |
| OpenAI: GPT-4.1 Nano | $0.10 / $0.40 | **25.0** | 32.2 |
| EleutherAI: Llemma 7b | $0.80 / $1.20 | **25.0** | 32.2 |
| xAI: Grok 3 Mini Beta | $0.30 / $0.50 | **25.0** | 32.2 |
| xAI: Grok 3 Beta | $3.00 / $15.00 | **25.0** | 32.2 |
| NVIDIA: Llama 3.1 Nemotron Ultra 253B v1 | $0.60 / $1.80 | **25.0** | 32.2 |
| Meta: Llama 4 Maverick | $0.15 / $0.60 | **25.0** | 32.2 |
| Meta: Llama 4 Scout | $0.08 / $0.30 | **25.0** | 32.2 |
| Qwen: Qwen2.5 VL 32B Instruct | $0.20 / $0.60 | **25.0** | 32.2 |
| DeepSeek: DeepSeek V3 0324 | $0.19 / $0.87 | **25.0** | 32.2 |
| OpenAI: o1-pro | $150.00 / $600.00 | **25.0** | 32.2 |
| Mistral: Mistral Small 3.1 24B (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Mistral: Mistral Small 3.1 24B | $0.35 / $0.56 | **25.0** | 32.2 |
| AllenAI: Olmo 2 32B Instruct | $0.05 / $0.20 | **25.0** | 32.2 |
| Google: Gemma 3 4B (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Google: Gemma 3 4B | $0.04 / $0.08 | **25.0** | 32.2 |
| Google: Gemma 3 12B (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Google: Gemma 3 12B | $0.04 / $0.13 | **25.0** | 32.2 |
| Cohere: Command A | $2.50 / $10.00 | **25.0** | 32.2 |
| OpenAI: GPT-4o-mini Search Preview | $0.15 / $0.60 | **25.0** | 32.2 |
| OpenAI: GPT-4o Search Preview | $2.50 / $10.00 | **25.0** | 32.2 |
| Google: Gemma 3 27B (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Google: Gemma 3 27B | $0.04 / $0.15 | **25.0** | 32.2 |
| TheDrummer: Skyfall 36B V2 | $0.55 / $0.80 | **25.0** | 32.2 |
| Perplexity: Sonar Reasoning Pro | $2.00 / $8.00 | **25.0** | 32.2 |
| Perplexity: Sonar Pro | $3.00 / $15.00 | **25.0** | 32.2 |
| Perplexity: Sonar Deep Research | $2.00 / $8.00 | **25.0** | 32.2 |
| Qwen: QwQ 32B | $0.15 / $0.40 | **25.0** | 32.2 |
| Google: Gemini 2.0 Flash Lite | $0.07 / $0.30 | **25.0** | 32.2 |
| Anthropic: Claude 3.7 Sonnet | $3.00 / $15.00 | **25.0** | 32.2 |
| Anthropic: Claude 3.7 Sonnet (thinking) | $3.00 / $15.00 | **25.0** | 32.2 |
| Mistral: Saba | $0.20 / $0.60 | **25.0** | 32.2 |
| Llama Guard 3 8B | $0.02 / $0.06 | **25.0** | 32.2 |
| OpenAI: o3 Mini High | $1.10 / $4.40 | **25.0** | 32.2 |
| Google: Gemini 2.0 Flash | $0.10 / $0.40 | **25.0** | 32.2 |
| Qwen: Qwen VL Plus | $0.21 / $0.63 | **25.0** | 32.2 |
| AionLabs: Aion-1.0 | $4.00 / $8.00 | **25.0** | 32.2 |
| AionLabs: Aion-1.0-Mini | $0.70 / $1.40 | **25.0** | 32.2 |
| AionLabs: Aion-RP 1.0 (8B) | $0.80 / $1.60 | **25.0** | 32.2 |
| Qwen: Qwen VL Max | $0.80 / $3.20 | **25.0** | 32.2 |
| Qwen: Qwen-Turbo | $0.05 / $0.20 | **25.0** | 32.2 |
| Qwen: Qwen2.5 VL 72B Instruct | $0.25 / $0.75 | **25.0** | 32.2 |
| Qwen: Qwen-Plus | $0.40 / $1.20 | **25.0** | 32.2 |
| Qwen: Qwen-Max | $1.60 / $6.40 | **25.0** | 32.2 |
| OpenAI: o3 Mini | $1.10 / $4.40 | **25.0** | 32.2 |
| Mistral: Mistral Small 3 | $0.05 / $0.08 | **25.0** | 32.2 |
| DeepSeek: R1 Distill Qwen 32B | $0.29 / $0.29 | **25.0** | 32.2 |
| Perplexity: Sonar | $1.00 / $1.00 | **25.0** | 32.2 |
| DeepSeek: R1 Distill Llama 70B | $0.70 / $0.80 | **25.0** | 32.2 |
| DeepSeek: R1 | $0.70 / $2.50 | **25.0** | 32.2 |
| MiniMax: MiniMax-01 | $0.20 / $1.10 | **25.0** | 32.2 |
| Microsoft: Phi 4 | $0.06 / $0.14 | **25.0** | 32.2 |
| Sao10K: Llama 3.1 70B Hanami x1 | $3.00 / $3.00 | **25.0** | 32.2 |
| DeepSeek: DeepSeek V3 | $0.32 / $0.89 | **25.0** | 32.2 |
| Sao10K: Llama 3.3 Euryale 70B | $0.65 / $0.75 | **25.0** | 32.2 |
| OpenAI: o1 | $15.00 / $60.00 | **25.0** | 32.2 |
| Cohere: Command R7B (12-2024) | $0.04 / $0.15 | **25.0** | 32.2 |
| Meta: Llama 3.3 70B Instruct (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Meta: Llama 3.3 70B Instruct | $0.10 / $0.32 | **25.0** | 32.2 |
| Amazon: Nova Lite 1.0 | $0.06 / $0.24 | **25.0** | 32.2 |
| Amazon: Nova Micro 1.0 | $0.04 / $0.14 | **25.0** | 32.2 |
| Amazon: Nova Pro 1.0 | $0.80 / $3.20 | **25.0** | 32.2 |
| OpenAI: GPT-4o (2024-11-20) | $2.50 / $10.00 | **25.0** | 32.2 |
| Mistral Large 2411 | $2.00 / $6.00 | **25.0** | 32.2 |
| Mistral Large 2407 | $2.00 / $6.00 | **25.0** | 32.2 |
| Mistral: Pixtral Large 2411 | $2.00 / $6.00 | **25.0** | 32.2 |
| SorcererLM 8x22B | $4.50 / $4.50 | **25.0** | 32.2 |
| TheDrummer: UnslopNemo 12B | $0.40 / $0.40 | **25.0** | 32.2 |
| Anthropic: Claude 3.5 Haiku | $0.80 / $4.00 | **25.0** | 32.2 |
| Magnum v4 72B | $3.00 / $5.00 | **25.0** | 32.2 |
| Anthropic: Claude 3.5 Sonnet | $6.00 / $30.00 | **25.0** | 32.2 |
| Qwen: Qwen2.5 7B Instruct | $0.04 / $0.10 | **25.0** | 32.2 |
| NVIDIA: Llama 3.1 Nemotron 70B Instruct | $1.20 / $1.20 | **25.0** | 32.2 |
| Inflection: Inflection 3 Pi | $2.50 / $10.00 | **25.0** | 32.2 |
| Inflection: Inflection 3 Productivity | $2.50 / $10.00 | **25.0** | 32.2 |
| TheDrummer: Rocinante 12B | $0.17 / $0.43 | **25.0** | 32.2 |
| Meta: Llama 3.2 3B Instruct (free) | $0.00 / $0.00 | **25.0** | 32.2 |
| Meta: Llama 3.2 3B Instruct | $0.02 / $0.02 | **25.0** | 32.2 |
| Meta: Llama 3.2 1B Instruct | $0.03 / $0.20 | **25.0** | 32.2 |
| Meta: Llama 3.2 11B Vision Instruct | $0.05 / $0.05 | **25.0** | 32.2 |
| Qwen2.5 72B Instruct | $0.12 / $0.39 | **25.0** | 32.2 |
| NeverSleep: Lumimaid v0.2 8B | $0.09 / $0.60 | **25.0** | 32.2 |
| Cohere: Command R (08-2024) | $0.15 / $0.60 | **25.0** | 32.2 |
| Cohere: Command R+ (08-2024) | $2.50 / $10.00 | **25.0** | 32.2 |
| Sao10K: Llama 3.1 Euryale 70B v2.2 | $0.65 / $0.75 | **25.0** | 32.2 |
| Qwen: Qwen2.5-VL 7B Instruct | $0.20 / $0.20 | **25.0** | 32.2 |
| Sao10K: Llama 3 8B Lunaris | $0.04 / $0.05 | **25.0** | 32.2 |
| OpenAI: GPT-4o (2024-08-06) | $2.50 / $10.00 | **25.0** | 32.2 |
| Meta: Llama 3.1 405B (base) | $4.00 / $4.00 | **25.0** | 32.2 |
| Meta: Llama 3.1 8B Instruct | $0.02 / $0.05 | **25.0** | 32.2 |
| Meta: Llama 3.1 405B Instruct | $4.00 / $4.00 | **25.0** | 32.2 |
| Meta: Llama 3.1 70B Instruct | $0.40 / $0.40 | **25.0** | 32.2 |
| Mistral: Mistral Nemo | $0.02 / $0.04 | **25.0** | 32.2 |
| OpenAI: GPT-4o-mini (2024-07-18) | $0.15 / $0.60 | **25.0** | 32.2 |
| OpenAI: GPT-4o-mini | $0.15 / $0.60 | **25.0** | 32.2 |
| Google: Gemma 2 27B | $0.65 / $0.65 | **25.0** | 32.2 |
| Google: Gemma 2 9B | $0.03 / $0.09 | **25.0** | 32.2 |
| Sao10k: Llama 3 Euryale 70B v2.1 | $1.48 / $1.48 | **25.0** | 32.2 |
| Mistral: Mistral 7B Instruct | $0.20 / $0.20 | **25.0** | 32.2 |
| Mistral: Mistral 7B Instruct v0.3 | $0.20 / $0.20 | **25.0** | 32.2 |
| Meta: LlamaGuard 2 8B | $0.20 / $0.20 | **25.0** | 32.2 |
| OpenAI: GPT-4o (2024-05-13) | $5.00 / $15.00 | **25.0** | 32.2 |
| OpenAI: GPT-4o | $2.50 / $10.00 | **25.0** | 32.2 |
| OpenAI: GPT-4o (extended) | $6.00 / $18.00 | **25.0** | 32.2 |
| Meta: Llama 3 70B Instruct | $0.51 / $0.74 | **25.0** | 32.2 |
| Meta: Llama 3 8B Instruct | $0.03 / $0.04 | **25.0** | 32.2 |
| Mistral: Mixtral 8x22B Instruct | $2.00 / $6.00 | **25.0** | 32.2 |
| WizardLM-2 8x22B | $0.62 / $0.62 | **25.0** | 32.2 |
| OpenAI: GPT-4 Turbo | $10.00 / $30.00 | **25.0** | 32.2 |
| Anthropic: Claude 3 Haiku | $0.25 / $1.25 | **25.0** | 32.2 |
| Mistral Large | $2.00 / $6.00 | **25.0** | 32.2 |
| OpenAI: GPT-3.5 Turbo (older v0613) | $1.00 / $2.00 | **25.0** | 32.2 |
| OpenAI: GPT-4 Turbo Preview | $10.00 / $30.00 | **25.0** | 32.2 |
| Mistral: Mistral 7B Instruct v0.2 | $0.20 / $0.20 | **25.0** | 32.2 |
| Mistral: Mixtral 8x7B Instruct | $0.54 / $0.54 | **25.0** | 32.2 |
| Noromaid 20B | $1.00 / $1.75 | **25.0** | 32.2 |
| Goliath 120B | $3.75 / $7.50 | **25.0** | 32.2 |
| Auto Router | $-1000000.00 / $-1000000.00 | **25.0** | 32.2 |
| OpenAI: GPT-4 Turbo (older v1106) | $10.00 / $30.00 | **25.0** | 32.2 |
| OpenAI: GPT-3.5 Turbo Instruct | $1.50 / $2.00 | **25.0** | 32.2 |
| Mistral: Mistral 7B Instruct v0.1 | $0.11 / $0.19 | **25.0** | 32.2 |
| OpenAI: GPT-3.5 Turbo 16k | $3.00 / $4.00 | **25.0** | 32.2 |
| Mancer: Weaver (alpha) | $0.75 / $1.00 | **25.0** | 32.2 |
| ReMM SLERP 13B | $0.45 / $0.65 | **25.0** | 32.2 |
| MythoMax 13B | $0.06 / $0.06 | **25.0** | 32.2 |
| OpenAI: GPT-4 (older v0314) | $30.00 / $60.00 | **25.0** | 32.2 |
| OpenAI: GPT-4 | $30.00 / $60.00 | **25.0** | 32.2 |
| OpenAI: GPT-3.5 Turbo | $0.50 / $1.50 | **25.0** | 32.2 |

update_models.py Normal file

@@ -0,0 +1,152 @@
import os
import requests
import json
import time
from dotenv import load_dotenv

# Load .env.global from the parent directory
load_dotenv("../.env.global")
MAMMOUTH_APIKEY = os.getenv("MAMMOUTH_APIKEY")
AIANALASYS_APIKEY = os.getenv("AIANALASYS_APIKEY")


def get_mammouth_models():
    # Mammouth uses the OpenRouter API (reseller)
    url = "https://openrouter.ai/api/v1/models"
    headers = {"Authorization": f"Bearer {MAMMOUTH_APIKEY}"}
    try:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        return response.json()['data']
    except Exception as e:
        print(f"Error fetching Mammouth models: {e}")
        return []
def get_aa_data():
    # Correct URL per the docs (v2)
    url = "https://artificialanalysis.ai/api/v2/data/llms/models"
    headers = {"x-api-key": AIANALASYS_APIKEY}
    try:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        # Diagnostics showed the payload lives under 'data'
        return response.json().get('data', [])
    except Exception as e:
        print(f"Error fetching Artificial Analysis data: {e}")
        return []
def generate_markdown(models_data):
    # Group models by category
    categories = {}
    for m in models_data:
        cat = m.get('category', 'General')
        categories.setdefault(cat, []).append(m)

    md = "# Mammouth.ai Model Table\n\n"
    md += "*Automatically updated via the Artificial Analysis & Mammouth APIs*\n\n"
    md += "Last updated: " + time.strftime("%Y-%m-%d %H:%M:%S") + "\n\n"

    # Emit categories in a fixed order; unknown categories go last
    order = ['Coding', 'Agents', 'General']
    sorted_cats = sorted(categories.keys(), key=lambda x: order.index(x) if x in order else 99)
    for cat in sorted_cats:
        models = categories[cat]
        md += f"## {cat}\n\n"
        md += "| Model | Price per 1M (In / Out) | Performance (AA Index) | Speed (TPS) |\n"
        md += "| :--- | :--- | :--- | :--- |\n"
        # Sort by performance (AA index), best first
        models.sort(key=lambda x: x.get('score') or 0, reverse=True)
        for m in models:
            # Compare against None explicitly so a legitimate 0 is not shown as N/A
            p_in = f"${m['price_in']:.2f}" if m['price_in'] is not None else "N/A"
            p_out = f"${m['price_out']:.2f}" if m['price_out'] is not None else "N/A"
            score = f"**{m['score']:.1f}**" if m['score'] is not None else "N/A"
            speed = f"{m['speed']:.1f}" if m['speed'] is not None else "N/A"
            md += f"| {m['name']} | {p_in} / {p_out} | {score} | {speed} |\n"
        md += "\n"
    return md
def main():
    print("Fetching Mammouth models...")
    mammouth_models = get_mammouth_models()
    print("Fetching Artificial Analysis data...")
    aa_data = get_aa_data()

    # Build an AA lookup keyed on the lowercased model name
    aa_map = {}
    for aa_m in aa_data:
        name = aa_m.get('model_name', '').lower()
        aa_map[name] = aa_m

    enriched_models = []
    for m in mammouth_models:
        m_id = m['id']
        m_name = m['name'].lower()
        short_name = m_id.split('/')[-1].lower()

        # Exact match on the full or short name first
        aa_info = aa_map.get(m_name) or aa_map.get(short_name)
        # Otherwise fall back to a bidirectional substring match
        if not aa_info:
            for key in aa_map:
                if key in m_name or m_name in key or key in short_name or short_name in key:
                    aa_info = aa_map[key]
                    break

        # Mammouth prices are per token (OpenRouter convention); convert to $/1M
        pricing = m.get('pricing', {})
        try:
            price_in = float(pricing.get('prompt', 0)) * 1_000_000
            price_out = float(pricing.get('completion', 0)) * 1_000_000
        except (ValueError, TypeError):
            price_in = 0
            price_out = 0

        score = None
        speed = None
        # Default category; refined below from the model name
        category = "General"
        if aa_info:
            evals = aa_info.get('evaluations', {})
            # Pull the AA Intelligence Index and median throughput
            score = evals.get('artificial_analysis_intelligence_index')
            speed = aa_info.get('median_output_tokens_per_second')

        # Guess the category (genre) from the model name
        if any(x in m_name or x in short_name for x in ['coding', 'code', 'starcoder', 'stable-code', 'deepseek-coder']):
            category = "Coding"
        elif any(x in m_name or x in short_name for x in ['agent', 'hermes', 'tool']):
            category = "Agents"

        enriched_models.append({
            'name': m['name'],
            'price_in': price_in,
            'price_out': price_out,
            'score': score,
            'speed': speed,
            'category': category
        })

    # Keep only models that have a performance score OR a non-zero price
    # (some models are free or have null pricing)
    final_list = [m for m in enriched_models if m['price_in'] > 0 or m['score'] is not None]
    markdown = generate_markdown(final_list)
    with open("README.md", "w", encoding="utf-8") as f:
        f.write(markdown)
    print(f"README.md updated with {len(final_list)} models!")


if __name__ == "__main__":
    main()
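The name-matching logic in `main()` is the part most worth sanity-checking in isolation. A minimal sketch of that lookup, with `match_aa_entry` and the sample map being illustrative names invented for this example:

```python
def match_aa_entry(m_name, short_name, aa_map):
    # Exact lookup on the full or short name, then the same permissive
    # bidirectional substring fallback used in update_models.py
    info = aa_map.get(m_name) or aa_map.get(short_name)
    if info is None:
        for key in aa_map:
            if key in m_name or m_name in key or key in short_name or short_name in key:
                return aa_map[key]
    return info

# Toy AA map (keys are lowercased model names, as in the script)
aa_map = {"gpt-4o": {"score": 41.0}, "claude 3.5 sonnet": {"score": 44.0}}

# Exact short-name hit
print(match_aa_entry("openai: gpt-4o", "gpt-4o", aa_map))
# Substring fallback: the key "gpt-4o" occurs inside the dated variant's name
print(match_aa_entry("openai: gpt-4o (2024-08-06)", "gpt-4o-2024-08-06", aa_map))
```

Note that the bidirectional substring test is deliberately loose: a short AA key can be contained in many Mammouth names (and vice versa), so distinct models can collapse onto the same AA entry. That is worth keeping in mind when many rows in the generated table share identical score and speed values.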