
Inconsistency in Displaying the Last Assistant Message When Listing Messages with Typebot via HTTP

Hey community,

I've run into an issue while using the OpenAI Assistant via HTTP requests to list the messages of a specific thread and extract the most recent one. The goal is to display the Assistant's latest response to the user. However, when listing the messages, the last message shown is the user's last message, not the Assistant's response.

Despite this, when I perform a test call passing the conversation thread, the Assistant's response shows as completed and available. This indicates that the response was correctly generated by the Assistant, but it is not being displayed as expected in the message listing.
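For reference, a minimal sketch of how the latest Assistant reply can be pulled out of the `GET /v1/threads/{thread_id}/messages` response. The payload shape follows the Assistants API (messages are returned newest-first by default); the helper name and the sample payload are illustrative, not part of the original thread:

```python
def latest_assistant_text(messages_payload):
    """Return the text of the newest assistant message, or None.

    The /v1/threads/{thread_id}/messages endpoint lists messages
    newest-first by default, so the first entry with role "assistant"
    is the Assistant's latest reply. If the run has not finished yet,
    the newest entry is still the user's message and this returns the
    previous assistant reply (or None on a fresh thread).
    """
    for message in messages_payload.get("data", []):
        if message.get("role") != "assistant":
            continue
        for part in message.get("content", []):
            if part.get("type") == "text":
                return part["text"]["value"]
    return None


# Sample payload shaped like an Assistants API list response (newest first):
payload = {
    "data": [
        {"role": "user", "content": [{"type": "text", "text": {"value": "Hi"}}]},
        {"role": "assistant", "content": [{"type": "text", "text": {"value": "Hello!"}}]},
    ]
}
print(latest_assistant_text(payload))  # → Hello!
```

If the newest message is still the user's own question, that usually means the thread was listed before the run finished, which matches the behavior described above.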

Could you help me with this issue?
Attachments
4.png
2.png
3.png
1.png
I really suggest that you use the OpenAI block with the Ask Assistant action
You will struggle otherwise
@Baptiste Thank you for suggesting using the OpenAI block with the Ask Assistant action. However, this approach does not resolve my specific situation due to the current limitation of not allowing dynamic variables in the Assistant ID and Thread ID fields.

At the beginning of the flow, I use the Ask Assistant, but for a specific point, I need these fields to accept dynamic values defined during the execution of the flow. This flexibility is essential to adapt the assistant's behavior according to different contexts and real-time user inputs.
Only the assistant ID is not a variable
Thread ID is always a variable
Indeed we could make the assistant ID dynamic
I appreciate knowing that the Thread ID is already a dynamic variable. Making the Assistant ID dynamic would significantly enhance the flexibility and adaptability of my workflows.

Being able to dynamically assign the Assistant ID during the flow execution would allow the assistant to respond more appropriately to different contexts and user inputs in real-time.
I should work on that fairly soon
Will deploy that in a couple hours 🙂
I am eager to test this improvement; it will be wonderful. Baptiste, I don't want to take advantage of your goodwill, but I need help understanding an issue. When the flow contains an 'HTTP Request' using the GET method:

During tests, everything seems to work correctly.
Attachment
image.png
However, when the lead uses it, it seems like the JSON is incomplete, and part of the response is missing.
Attachment
image.png
When we check it in Postman, the response is correct.
Attachment
image.png
Indeed, the saved body is automatically truncated to avoid storing big objects
I understand, but this creates a significant problem. The content being truncated is exactly what we need when using the GET request, which in this case is the AI's response. Would it be possible to adjust it so that this part is shown again? If saving space is the concern, all of the currently displayed content could be removed instead, since another endpoint already provides it.
The response on your screenshot is fully displayed here though
It is only displayed when we test with the node, but for the user, only their own question is shown.
Attachment
Captura_de_tela_2024-05-21_124559.png
Look at assistant_id: it is null for some reason
That’s why it fails
You should wait for the ask assistant improvement (deploying that tomorrow)
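Until that improvement lands, one way to avoid reading the thread before the reply exists is to poll the run's status and only list messages once the run reaches a terminal state. A minimal sketch under that assumption; the `fetch_status` callable is illustrative (a real one would wrap `GET /v1/threads/{thread_id}/runs/{run_id}` and return the run's `status` field):

```python
import time

# Terminal run statuses per the Assistants API run object.
TERMINAL_STATUSES = {"completed", "failed", "cancelled", "expired"}


def wait_for_run(fetch_status, poll_interval=1.0, timeout=60.0):
    """Poll fetch_status() until the run reaches a terminal status.

    fetch_status is any zero-argument callable returning the run's
    current status string. Raises TimeoutError if no terminal status
    is observed within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_interval)
    raise TimeoutError("run did not finish in time")


# Simulated run that completes on the third poll:
statuses = iter(["queued", "in_progress", "completed"])
print(wait_for_run(lambda: next(statuses), poll_interval=0.0))  # → completed
```

Listing the thread's messages only after `wait_for_run` returns "completed" guarantees the Assistant's reply is present in the listing.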
Thank you in advance for your attention.
🤝
Good morning Master!
@Baptiste I performed about 10 updates throughout the day without seeing the new variable. It was only in the late afternoon that I realized the update was available only in the .IO version, while I am using the version on the VPS. Is there any timeline for updating the image with this implementation on the VPS?