I'm building a chatbot that uses an LLM. I've been using together.ai. Due to the nature of LLMs, requests to the API sometimes take more than 10 seconds.
Hi @Baptiste, an option to read the response headers returned by the webhook request would be appreciated, because some tokens (a JWT bearer token, for example) and other important information can only be read from them.
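To illustrate why header access matters, here is a minimal sketch of pulling a JWT bearer token out of a response-header mapping (the helper name and the assumption that headers arrive as a plain dict are mine, not anything the tool currently exposes):

```python
def extract_bearer_token(headers):
    """Return the bearer token from an Authorization response header, or None."""
    # headers: mapping of header names to values, e.g. from a webhook response.
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return auth[len("Bearer "):]
    return None

# Usage sketch:
# token = extract_bearer_token(response_headers)
```

Without access to the raw headers, a token delivered this way simply can't be captured by the workflow.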