Revert "fix max_tokens for reasoning models (#1819)" (#1839)

* Revert "fix max_tokens for reasoning models (#1819)"

This reverts commit 1e2e233ff5.

* Revert "fix: revert max_completion_tokens to max_tokens (#1741)"

This reverts commit cd13eeb7d9.

* fix: nvim_version
Author: yetone
Date: 2025-04-09 16:58:54 +08:00
Committed by: GitHub
Parent: 1fc57ab1ae
Commit: 04336913b3
6 changed files with 8 additions and 12 deletions


@@ -35,7 +35,7 @@ Then enable it in avante.nvim:
       api_key_name = 'GROQ_API_KEY',
       endpoint = 'https://api.groq.com/openai/v1/',
       model = 'llama-3.3-70b-versatile',
-      max_tokens = 32768, -- remember to increase this value, otherwise it will stop generating halfway
+      max_completion_tokens = 32768, -- remember to increase this value, otherwise it will stop generating halfway
     },
   },
   --- ... existing configurations
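
For context, the changed lines sit inside a provider entry of an avante.nvim configuration. A minimal sketch of where `max_completion_tokens` lands after this commit; only the four fields shown in the diff come from this change, while the surrounding `setup` call and the `vendors`/`groq` table keys are assumptions based on typical avante.nvim configs, not part of this diff:

```lua
-- Hypothetical surrounding structure; only api_key_name, endpoint,
-- model, and max_completion_tokens appear in the commit itself.
require('avante').setup({
  vendors = {
    groq = {
      api_key_name = 'GROQ_API_KEY',
      endpoint = 'https://api.groq.com/openai/v1/',
      model = 'llama-3.3-70b-versatile',
      -- remember to increase this value, otherwise it will stop generating halfway
      max_completion_tokens = 32768,
    },
  },
})
```

The rename matters because newer OpenAI-compatible reasoning endpoints reject the legacy `max_tokens` parameter in favor of `max_completion_tokens`, which is what the reverted reverts restore here.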