nvim_del_autocmd was being called in a callback from acp_client, which is a
fast event context where it's not allowed. Wrapping it with vim.schedule()
defers the execution to a safe context.
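A minimal sketch of the pattern, with autocmd_id standing in for whatever id the callback actually holds:

  -- Deleting the autocmd directly inside the fast event callback is not
  -- allowed, so defer the API call to the main loop.
  vim.schedule(function()
    vim.api.nvim_del_autocmd(autocmd_id)
  end)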
Signed-off-by: Nayab Sayed <nayabbasha.sayed@microchip.com>
In response to the issue raised in #281, add 'nbconvert' to the Python
dependency list of the avante rag server, which should fix the problem.
When tools have no parameters, llm_tool_param_fields_to_json_schema
returns an empty Lua table {}. vim.json.encode serializes empty tables
as JSON arrays [] instead of objects {}, causing the Anthropic API to
reject requests with the error "tools.N.custom.input_schema.properties:
Input should be a valid dictionary".
Solution: Use vim.empty_dict() for empty properties to ensure correct
JSON object {} serialization instead of array [].
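A minimal sketch of the difference:

  -- A plain empty table is indistinguishable from an empty list:
  print(vim.json.encode({}))                --> []
  -- vim.empty_dict() marks the table as a dictionary:
  print(vim.json.encode(vim.empty_dict()))  --> {}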
Tested with Claude Sonnet 4.5 - tools now work correctly without
schema validation errors.
Ollama is disabled by default, as it normally does not require API keys
to be defined. Users are supposed to override the is_env_set() method in
their configs to enable Ollama.
Provide a check_endpoint_alive() helper in the Ollama provider module
that can be used directly in place of is_env_set(); it checks whether
the server replies to the "get list of models" query:
  ...
  ollama = {
    is_env_set = require("avante.providers.ollama").check_endpoint_alive,
  },
  ...
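A rough sketch of what such a helper might look like; the /api/tags
route is Ollama's "list models" endpoint, while the endpoint field and
the use of plenary.curl here are illustrative rather than the exact
implementation:

  function M.check_endpoint_alive()
    local curl = require("plenary.curl")
    -- Any successful reply to the "list models" query means the server is up.
    local ok, response = pcall(curl.get, M.endpoint .. "/api/tags")
    return ok and response ~= nil and response.status == 200
  end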
Without an on_error() handler, curl.get() raises an error that
interrupts execution of the plenary job. This interruption triggers the
timeout-handling code, introducing an unneeded delay and ugly stack
traces.
Fix the problem by defining on_error() and callback() handlers and
explicitly waiting for job completion with pcall() and Job:wait(). This
allows proper error handling and control over error messages.
Because plenary's curl implementation mangles stderr data, use a local
table to map select error codes to descriptive text messages.
Also, do not emit a hard error when the endpoint is not configured; use
Utils.error() instead.
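A rough sketch of the resulting pattern; url, timeout_ms, the specific
exit codes, and the exact shape of the error object passed to
on_error() are assumptions, not the verbatim implementation:

  local curl = require("plenary.curl")

  -- plenary mangles stderr, so map the interesting curl exit codes to
  -- readable messages instead of parsing stderr.
  local curl_errors = {
    [6] = "could not resolve host",
    [7] = "failed to connect to the endpoint",
    [28] = "request timed out",
  }

  local result
  local job = curl.get(url, {
    callback = function(response)
      result = response
    end,
    on_error = function(err)
      -- err.exit is assumed to carry the curl exit code.
      Utils.error(curl_errors[err.exit]
        or ("curl failed with exit code " .. tostring(err.exit)))
    end,
  })

  -- Wait for the job ourselves; pcall keeps a Job:wait() timeout from
  -- surfacing as a stack trace.
  local ok = pcall(function()
    job:wait(timeout_ms)
  end)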
With Ollama, the majority of people use their own models, and the
Ollama provider by default queries the server for the list of models,
so there is no need to inherit anything.