OpenAI’s Assistants API Uncovered: Building a Real-Time Train Assistant Integrated with Multiple APIs

David Sharma
Published in GoPenAI · 15 min read · Nov 22, 2023

Overview of the Use Case

Sam Altman presented the new Assistants API during the OpenAI keynote two weeks ago. This API enables you to create your own assistant or agent capable of executing various tasks, utilizing different APIs, code interpreters, or files as a knowledge base, similar to what LangChain (Agents) or LlamaIndex (as discussed in my previous blog post) already accomplish. The travel-assistant use case presented during the keynote was really impressive.

I experimented with the new Assistants API to explore its potential. I’ve long had the idea of creating a chat assistant capable of providing users with information on train journeys (routes, best prices ...). While there are APIs available in Germany, such as hafas-client, for obtaining train journey details and prices, I found implementing them and extracting responses from these APIs rather tedious. Fortunately, with the help of the OpenAI Assistant, we now have a solution to handle these tasks for us! 🙃 This provided a great opportunity to show the capabilities of the new Assistants API.

You can check my demo application here: deactivated

Let’s Start: Creating Our First Assistant

I invested a few hours into building an assistant that provides users with train journey information for different locations. This involved using APIs to check the Deutsche Bahn ticket system, making multiple external API calls, referring to a knowledge base (in this case, a CSV file), and creating a chat message at the end. It works really well for a prototype. The key advantage is that you no longer need to figure out how to extract parameters from the user’s query, manage JSON responses, or create a user response message. The assistant takes care of all these details for you.

Outcome of this Prototype

We will build a small Python application that uses the Assistants API, with Streamlit providing the chat interface for the user. In the end, you will also be able to deploy your assistant to the Streamlit Cloud.

Setting Up the Project

You can also find the complete code here on Github:

https://github.com/sharmaD91/LlamaIndex-Tutorial

Step 1: Create an OpenAI-Key

Whenever our assistant processes a user prompt, runs the model, or calls its tools, an API call is made to OpenAI. Therefore we need an OpenAI API key.

Create a new account on the OpenAI platform. After logging in, go to your profile and click on Create new secret key under API keys.

Copy that key to a text file; we will need it later.

OpenAI-API Key

Step 2: Set up the Project

Create a new folder “openai-assistant” and open this folder with Visual Studio Code.

Project structure

Create a new folder “.streamlit” containing a new file “secrets.toml”. We will add our secrets there (the OpenAI key and the assistant ID).

Add your OPENAI_API_KEY to the secrets.toml file:

OPENAI_API_KEY="PASTE YOUR KEY HERE"

Create a requirements.txt file and add the following libraries:

streamlit
openai

We will create a new virtual environment (venv) for this tutorial.

Start a new terminal tab in VS Code, create a new virtual environment, and save it into the folder venv.

python -m venv venv

A new folder named venv has been created. Activate your virtual env in the terminal with:

Windows: venv\Scripts\activate
Mac/Linux: source venv/bin/activate

To install all the required libraries, run pip install inside your terminal:

pip install -r requirements.txt

Step 3: Create a new assistant

Go to https://platform.openai.com/assistants and click on “Assistants” in the left sidebar. Name the bot and choose the newest GPT-4 preview model (gpt-4-1106-preview at the time of writing). Activate the Retrieval and Code Interpreter tools. You can also define the functions directly within the code by calling the OpenAI endpoint; for now, however, we will use the interface.

Let’s add the following three functions (we will implement them later in our Python project). All these functions must adhere to a specific JSON format: for each function you define a description and its parameters, specifying which parameters are required. (Note: the misspelled parameter name depature is kept as-is because the Python implementations below use the same name.)

  • “get_journey”

{
  "name": "get_journey",
  "description": "Find train journeys from station A (from_location) to station B (to_location) for a specific time.",
  "parameters": {
    "type": "object",
    "properties": {
      "from_location": {
        "type": "string",
        "description": "The departure station as a stop/station ID (e.g. 8010159 for Halle (Saale) Hbf). You will get the ID for from_location from the CSV file; the delimiter is the semicolon."
      },
      "to_location": {
        "type": "string",
        "description": "The arrival station as a stop/station ID (e.g. 8010159 for Halle (Saale) Hbf). You will get the ID for to_location from the CSV file; the delimiter is the semicolon."
      },
      "depature": {
        "type": "string",
        "description": "Compute journeys departing at this date/time. Mutually exclusive with arrival."
      },
      "arrival": {
        "type": "string",
        "description": "Compute journeys arriving at this date/time. Mutually exclusive with depature."
      }
    },
    "required": [
      "from_location",
      "to_location"
    ]
  }
}
  • “get_actual_time_and_date”

{
  "name": "get_actual_time_and_date",
  "description": "Get the current time and date for Germany, which is needed to get the train schedules.",
  "parameters": {
    "type": "object",
    "properties": {
      "n": {
        "type": "string",
        "description": "the letter n (a dummy parameter; the function ignores it)"
      }
    },
    "required": [
      "n"
    ]
  }
}
  • “get_best_prices”

{
  "name": "get_best_prices",
  "description": "Find the best prices for train journeys from station A (from_location) to station B (to_location) for a specific time. The output is an array; every tuple contains the time (e.g. 070000) and the price (e.g. 4660).",
  "parameters": {
    "type": "object",
    "properties": {
      "from_location": {
        "type": "string",
        "description": "The departure station as a stop/station ID (e.g. 8010159 for Halle (Saale) Hbf). You will get the ID for from_location from the CSV file; the delimiter is the semicolon."
      },
      "to_location": {
        "type": "string",
        "description": "The arrival station as a stop/station ID (e.g. 8010159 for Halle (Saale) Hbf). You will get the ID for to_location from the CSV file; the delimiter is the semicolon."
      },
      "depature": {
        "type": "string",
        "description": "Compute journeys departing at this date, format: 20231116. Mutually exclusive with arrival. Take tomorrow as the default depature."
      },
      "journey_time": {
        "type": "string",
        "description": "The time around which you want to depart, in this format: 120000 for 12 pm. Default is 120000."
      }
    },
    "required": [
      "from_location",
      "to_location",
      "depature"
    ]
  }
}
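If you prefer defining the assistant in code rather than in the web UI, the same schemas can be passed to the create endpoint. Here is a minimal sketch; the schema is abridged (paste the full JSON definitions from above), and the assistant name and instructions string are placeholders:

```python
# Sketch: registering the tool schemas programmatically instead of via the web UI.
# The schema below is abridged; use the full definitions from above.
get_journey_schema = {
    "name": "get_journey",
    "description": "Find train journeys from station A to station B for a specific time.",
    "parameters": {
        "type": "object",
        "properties": {
            "from_location": {"type": "string", "description": "departure station ID"},
            "to_location": {"type": "string", "description": "arrival station ID"},
        },
        "required": ["from_location", "to_location"],
    },
}

# Wrap each schema in the tool envelope the Assistants API expects,
# alongside the built-in code_interpreter and retrieval tools.
tools = [
    {"type": "code_interpreter"},
    {"type": "retrieval"},
    {"type": "function", "function": get_journey_schema},
]

def create_assistant(client):
    """Create the assistant via the API (requires a configured OpenAI client)."""
    return client.beta.assistants.create(
        name="DB Train Assistant",          # placeholder name
        instructions="You are a train employee ...",  # use the full instructions from this post
        model="gpt-4-1106-preview",         # assumption: newest preview model at the time
        tools=tools,
    )
```

Calling `create_assistant(OpenAI())` once returns the assistant object whose ID you would then store in secrets.toml, just as with the UI-created assistant.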

We will also include a CSV file that contains the IDs of every train station in Germany. We need this because the API requires IDs rather than names. Under Files, click on “add” and upload the train_ids.CSV, which you can find here:

TRAIN_IDS.CSV
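To get a feeling for what the assistant does with this file, here is a rough sketch of the lookup it performs: parse the semicolon-delimited CSV and map a station name to its stop ID. The column names (`EVA_NR`, `NAME`) are assumptions based on the Deutsche Bahn open-data station list; check the actual file you upload.

```python
import csv
import io

# Tiny in-memory sample in the assumed semicolon-delimited format.
sample = """EVA_NR;NAME
8002549;Hamburg Hbf
8000105;Frankfurt(Main)Hbf
8010159;Halle (Saale) Hbf
"""

def station_id(name, csv_text):
    """Return the stop ID for a station name, or None if not found."""
    reader = csv.DictReader(io.StringIO(csv_text), delimiter=";")
    for row in reader:
        if row["NAME"] == name:
            return row["EVA_NR"]
    return None

print(station_id("Hamburg Hbf", sample))  # → 8002549
```

The assistant does this lookup itself via the Code Interpreter tool, so we never have to implement it; the sketch only illustrates the work it performs behind the scenes.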

Define the Instructions for the Assistant

We also need to add some instructions for the assistant. You can play around with the instructions, adding or removing parts; this will influence the assistant’s behavior:

You are a train employee and you give information about departures of trains from x to y. If the user doesn't specify the departure or arrival station, ask for it.
You have access to train information via the function get_journey.
Give the user some options for when they can take the train. If they write a city name, you first have to find the ID in the CSV.
You can find the IDs for the train stations in the D_Bahnhof_2020_alle.CSV; the delimiter is the semicolon and the city is within the column "station"!
You are able to make function calls.
You can also get the current time and date using the function get_actual_time_and_date, which you need for predicting train schedules.
You can also get the best prices for one specific day, but only if the user expressly asks for best/cheap prices.
Don't respond to questions outside the context of the train advisor!

At the end your Assistant should look like this :

Step 4: Implement the functions

Let’s continue with train_informations.py. Here we will define all the functions our assistant can call. There are three in total:

  • get_actual_time_and_date: Required because the GPT-4 language model cannot return the actual date, and we need the exact time/date for train information. Taken from a public date/time API.
  • get_journey: Finds journeys from A (from) to B (to) for a specific date/time. The departure and arrival stations have to be specified as IDs, which are located in the CSV (see above). Taken from the public hafas REST API.
  • get_best_prices: Finds the cheapest prices for a journey from A (from) to B (to) for a specific date/time. Taken directly from Deutsche Bahn.

Here are the three implemented functions:

import requests
import json
from hashlib import md5


def get_journey(from_location, to_location, depature, arrival=None):
    """Uses the hafas /journeys endpoint to find journeys from A (from) to B (to).

    Args:
        from_location: the departure station as ID
        to_location: the arrival station as ID
        depature: departure date/time
        arrival (str, optional): journeys arriving at this date/time. Defaults to None.

    Returns:
        JSON containing information about the trip
    """
    BASE_URL = 'https://v6.db.transport.rest'
    endpoint = '/journeys'
    params = {
        "from": from_location,
        "to": to_location,
        "departure": depature,  # note: the REST API expects the key "departure"
        "arrival": arrival,
    }
    response = requests.get(BASE_URL + endpoint, params=params)

    # Check if the response was successful
    if response.status_code == 200:
        return response.json()
    else:
        return "Error during calling the get_journey function"


def get_actual_time_and_date(n):
    # Europe/Berlin would match Germany exactly; London is close enough for the date
    endpoint = 'http://worldtimeapi.org/api/timezone/Europe/London'
    response = requests.get(endpoint)
    # Check if the response was successful
    if response.status_code == 200:
        return response.json()
    else:
        return "Error during calling the get_actual_time_and_date function"


def _checksum(data):
    SALT = 'bdI8UVj40K5fvxwf'
    salted_data = (data + SALT).encode('utf-8')
    return md5(salted_data).hexdigest()


def get_best_prices(depature, from_location, to_location, journey_time=120000):
    """Returns the cheapest prices (Sparpreis) for a journey.

    Args:
        depature: date, e.g. 20231128
        from_location: the departure station as ID
        to_location: the arrival station as ID
        journey_time (int, optional): preferred departure time. Defaults to 120000.

    Returns:
        list of (time, price) tuples
    """
    url = "https://reiseauskunft.bahn.de/bin/mgate.exe"

    headers = {
        "User-Agent": "Dalvik/2.1.0 (Linux; U; Android 9; Pixel 3 Build/PI)",
        "Content-Type": "application/json;charset=UTF-8"
    }

    bestPriceSearchRequest = {
        "auth": {"aid": "n91dB8Z77MLdoR0K", "type": "AID"},
        "client": {"id": "DB", "name": "DB Navigator", "os": "Android 9", "res": "1080x2028", "type": "AND", "ua": "Dalvik/2.1.0 (Linux; U; Android 9; Pixel 3 Build/PI)", "v": 22080000},
        "ext": "DB.R22.04.a",
        "formatted": False,
        "lang": "eng",
        "svcReqL": [{
            "cfg": {"polyEnc": "GPA", "rtMode": "HYBRID"},
            "meth": "BestPriceSearch",
            "req": {
                "outDate": f"{depature}",
                "outTime": f"{journey_time}",
                "depLocL": [{"extId": f"{from_location}", "type": "S"}],
                "arrLocL": [{"extId": f"{to_location}", "type": "S"}],
                "getPasslist": True,
                "getPolyline": True,
                "jnyFltrL": [{"mode": "BIT", "type": "PROD", "value": "11111111111111"}],
                "trfReq": {"cType": "PK", "jnyCl": 2, "tvlrProf": [{"type": "E"}]}
            }
        }],
        "ver": "1.15"
    }

    bestPriceSearchRequestStr = json.dumps(bestPriceSearchRequest, ensure_ascii=False, separators=(',', ':'))
    bestPriceSearchRequestEncoded = bestPriceSearchRequestStr.encode('utf-8')

    params = {
        'checksum': _checksum(bestPriceSearchRequestStr),
    }

    response = requests.post(url, params=params, headers=headers, data=bestPriceSearchRequestEncoded)
    bestPrices = [(seg['toTime'], seg['bestPrice']['amount'])
                  for seg in response.json()['svcResL'][0]['res']['outDaySegL']]

    return bestPrices

As you can see, it’s a quick implementation and I haven’t included any error handling, so feel free to make some improvements ;)
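One easy improvement would be a small retry wrapper around the HTTP calls, so that a transient failure does not immediately bubble an error string back to the assistant. A sketch (the retry count and backoff values are arbitrary; the `getter` parameter only exists to make the wrapper testable):

```python
import time
import requests

def get_with_retry(url, params=None, retries=3, backoff=1.0, getter=requests.get):
    """GET with simple exponential backoff; returns parsed JSON or re-raises the last error."""
    last_error = None
    for attempt in range(retries):
        try:
            response = getter(url, params=params, timeout=10)
            response.raise_for_status()  # treat HTTP errors as failures too
            return response.json()
        except requests.RequestException as exc:
            last_error = exc
            time.sleep(backoff * (2 ** attempt))  # wait 1s, 2s, 4s, ...
    raise last_error
```

get_journey could then call `get_with_retry(BASE_URL + endpoint, params=params)` instead of calling requests.get directly.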

Step 5: Implement the Streamlit UI and Assistant Handling

Add your assistant_id (copy it from the OpenAI dashboard) to the secrets.toml:

assistant_id="asst_....."

Let’s implement the flow of the assistant and build the streamlit UI in our streamlit.py file. Copy the following code:


# Importing required packages
import streamlit as st
import openai
import uuid
import json
import os
import time
from train_informations import get_journey, get_actual_time_and_date, get_best_prices
from openai import OpenAI

openai.api_key = os.getenv('OPENAI_API_KEY')
client = OpenAI()
assistant_id = os.getenv('assistant_id')

st.title('DB GPT Train Assistant 🚉')

with st.sidebar:
    st.title('DB GPT Train Assistant 🚉')
    st.divider()
    st.subheader('Example queries:')
    st.write('Give me a train connection for tomorrow 16:00 from Hamburg Hbf to Frankfurt')
    st.write('Give me the cheapest connections for next week Tuesday from Berlin to Hannover')
    st.divider()
    st.write('📝 Showcase for OpenAI Assistant, read more under: [blogpost](http://medium.com)')
    st.markdown('**Created by [David Sharma](http://david-sharma.de)**')

# map function names from the assistant to your Python functions
functions = {
    'get_journey': get_journey,
    'get_actual_time_and_date': get_actual_time_and_date,
    'get_best_prices': get_best_prices
}

# Function to call the assistant's required functions and return their outputs as JSON strings
def execute_required_functions(required_actions):
    tool_outputs = []
    try:
        for tool_call in required_actions.submit_tool_outputs.tool_calls:
            func_name = tool_call.function.name
            args = json.loads(tool_call.function.arguments)

            # Call the corresponding Python function
            if func_name in functions:
                function = functions[func_name]
                result = function(**args)

                # Serialize the function's output to JSON
                result_str = json.dumps(result)
                print(f'Result from {func_name} : {result}')
                # Add the result to the list of tool outputs
                tool_outputs.append({
                    "tool_call_id": tool_call.id,
                    "output": result_str,
                })
    except Exception:
        st.error("Sorry, I'm confused. Please refresh the page (F5)")
    return tool_outputs


if "session_id" not in st.session_state:
    st.session_state.session_id = str(uuid.uuid4())
if "run" not in st.session_state:
    st.session_state.run = {"status": None}
if "messages" not in st.session_state:
    st.session_state.messages = []
if "retry_error" not in st.session_state:
    st.session_state.retry_error = 0

if "assistant" not in st.session_state:
    # Load the previously created assistant
    st.session_state.assistant = openai.beta.assistants.retrieve(assistant_id)

    # Create a new thread for this session
    st.session_state.thread = client.beta.threads.create(
        metadata={'session_id': st.session_state.session_id}
    )

# If the run is completed, display the messages
elif hasattr(st.session_state.run, 'status') and st.session_state.run.status == "completed":
    print(st.session_state.run.status)

    # Retrieve the list of messages
    st.session_state.messages = client.beta.threads.messages.list(
        thread_id=st.session_state.thread.id
    )

    # Display messages (oldest first)
    for message in reversed(st.session_state.messages.data):
        if message.role in ["user", "assistant"]:
            with st.chat_message(message.role, avatar="👩‍🎨" if message.role == "user" else "🤖"):
                for content_part in message.content:
                    st.markdown(content_part.text.value)

if prompt := st.chat_input("How can I help you?"):
    with st.chat_message('user', avatar="👩‍🎨"):
        st.write(prompt)

    # Add the message to the thread
    st.session_state.messages = client.beta.threads.messages.create(
        thread_id=st.session_state.thread.id,
        role="user",
        content=prompt
    )

    # Start a run to process the messages in the thread
    st.session_state.run = client.beta.threads.runs.create(
        thread_id=st.session_state.thread.id,
        assistant_id=st.session_state.assistant.id,
    )
    if st.session_state.retry_error < 3:
        st.rerun()

if hasattr(st.session_state.run, 'status'):
    print(st.session_state.run.status)

    if st.session_state.run.status == "requires_action":
        print('required action', st.session_state.run.required_action)
        with st.chat_message('assistant', avatar="🤖"):
            st.write('Executing Action ...')

        # Get the tool outputs by executing the required functions
        tool_outputs = execute_required_functions(st.session_state.run.required_action)

        # Submit the tool outputs back to the Assistant
        st.session_state.run = client.beta.threads.runs.submit_tool_outputs(
            thread_id=st.session_state.thread.id,
            run_id=st.session_state.run.id,
            tool_outputs=tool_outputs
        )
        if st.session_state.retry_error < 3:
            st.rerun()

    # Handle the 'failed' status
    elif st.session_state.run.status == "failed":
        st.session_state.retry_error += 1
        with st.chat_message('assistant'):
            if st.session_state.retry_error < 3:
                st.write("Run failed, retrying ......")
                time.sleep(3)  # longer delay before retrying
                st.rerun()
            else:
                st.error("FAILED: The OpenAI API is currently processing too many requests. Please try again later ......")

    # Handle any status that is not 'completed'
    elif st.session_state.run.status != "completed":
        with st.chat_message('assistant', avatar="🤖"):
            st.write('Thinking ......')
        # Retrieve the run again to refresh its status
        st.session_state.run = client.beta.threads.runs.retrieve(
            thread_id=st.session_state.thread.id,
            run_id=st.session_state.run.id,
        )
        if st.session_state.retry_error < 3:
            st.rerun()

I don’t want to explain the whole code, but I do want to highlight some important points. Below, we create a dict with our three functions, which we previously implemented in train_informations.py and added to our assistant.

# map function names from the assistant to your Python functions
functions = {
    'get_journey': get_journey,
    'get_actual_time_and_date': get_actual_time_and_date,
    'get_best_prices': get_best_prices
}

These functions are used when the assistant requires an action (requires_action). It looks for a match between the required functions and the implemented ones.

# Function to call the assistant's required functions and return their outputs as JSON strings
def execute_required_functions(required_actions):
    tool_outputs = []
    for tool_call in required_actions.submit_tool_outputs.tool_calls:
        func_name = tool_call.function.name
        args = json.loads(tool_call.function.arguments)

        # Call the corresponding Python function
        if func_name in functions:
            function = functions[func_name]
            result = function(**args)

            # Serialize the function's output to JSON
            result_str = json.dumps(result)

            # Add the result to the list of tool outputs
            tool_outputs.append({
                "tool_call_id": tool_call.id,
                "output": result_str,
            })
    return tool_outputs
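To get a feeling for the objects involved, the same loop can be exercised locally with a mocked tool call. SimpleNamespace stands in for the SDK's objects here; the attribute shape matches what the API returns, but this is a sketch with a toy function registry, not the real SDK:

```python
import json
from types import SimpleNamespace

# A toy function registry, mirroring the article's `functions` dict.
def fake_get_journey(from_location, to_location):
    return {"from": from_location, "to": to_location, "trains": ["ICE 1671"]}

registry = {"get_journey": fake_get_journey}

def run_tool_calls(required_actions):
    """Same loop as execute_required_functions, against the toy registry above."""
    tool_outputs = []
    for tool_call in required_actions.submit_tool_outputs.tool_calls:
        func = registry.get(tool_call.function.name)
        if func:
            args = json.loads(tool_call.function.arguments)
            tool_outputs.append({
                "tool_call_id": tool_call.id,
                "output": json.dumps(func(**args)),
            })
    return tool_outputs

# Build a mock RequiredAction object with the same attribute shape.
mock_action = SimpleNamespace(
    submit_tool_outputs=SimpleNamespace(tool_calls=[
        SimpleNamespace(
            id="call_123",
            function=SimpleNamespace(
                name="get_journey",
                arguments='{"from_location": "8002549", "to_location": "8000105"}',
            ),
        ),
    ])
)

outputs = run_tool_calls(mock_action)
print(outputs[0]["output"])
```

Note that the function arguments arrive as a JSON string chosen by the model, which is why json.loads is needed before the keyword-argument call.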

Once a user has sent their prompt, the message is added to the thread associated with the conversation. We then create a new run on the thread to process that message.

if prompt := st.chat_input("How can I help you?"):
    with st.chat_message('user', avatar="👩‍🎨"):
        st.write(prompt)

    # Add the message to the thread
    st.session_state.messages = client.beta.threads.messages.create(
        thread_id=st.session_state.thread.id,
        role="user",
        content=prompt
    )

    # Start a run to process the messages in the thread
    st.session_state.run = client.beta.threads.runs.create(
        thread_id=st.session_state.thread.id,
        assistant_id=st.session_state.assistant.id,
    )
    if st.session_state.retry_error < 3:
        st.rerun()

Let’s take a closer look at the run lifecycle in our assistant:

Run steps inside a run object (Source: OpenAI documentation)
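The lifecycle boils down to a small state machine: each status the run reports maps to one client-side action. A sketch (the status strings come from the Assistants API; the cancel-related statuses are omitted because this app never cancels runs):

```python
# Map each run status to what the client should do next.
def next_action(status):
    if status == "requires_action":
        return "execute functions and submit_tool_outputs"
    if status == "completed":
        return "fetch and display messages"
    if status in ("queued", "in_progress"):
        return "poll the run again"
    if status in ("failed", "expired"):
        return "retry or show an error"
    return "unknown status"
```

The if/elif chain in our Streamlit script implements exactly this mapping, one branch per status.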

We handle these run steps in our streamlit application by constantly checking the run status (via the if-statements). For example:

    if st.session_state.run.status == "requires_action":
        print('required action', st.session_state.run.required_action)
        with st.chat_message('assistant', avatar="🤖"):
            st.write("Executing Action ......")

        # Get the tool outputs by executing the required functions
        tool_outputs = execute_required_functions(st.session_state.run.required_action)
...
...

We have to call st.rerun() after every run.status check to force Streamlit to refresh the state. It’s a quick fix, but there might be a better solution.

...
    # Handle any status that is not 'completed'
    elif st.session_state.run.status != "completed":
        with st.chat_message('assistant', avatar="🤖"):
            st.write("Thinking ......")
        # Retrieve the run again to refresh its status
        st.session_state.run = client.beta.threads.runs.retrieve(
            thread_id=st.session_state.thread.id,
            run_id=st.session_state.run.id,
        )
        if st.session_state.retry_error < 3:
            st.rerun()
...

Run and Evaluate Our Assistant

Let’s take a look under the hood and see what the Assistants API does during the queries. We have added some print statements for logging.

Run the app with:

streamlit run streamlit.py

Now let’s ask our assistant:

The assistant’s initial step is an API call to retrieve the current timestamp, since the GPT-4 model cannot provide real-time data and our query referred to “tomorrow”.

in_progress
requires_action
required action RequiredAction(submit_tool_outputs=RequiredActionSubmitToolOutputs(tool_calls=[RequiredActionFunctionToolCall(id='call_IG4gD305RXLiNwD1j87udfdJ', function=Function(arguments='{"n": "n"}', name='get_actual_time_and_date'), type='function')]), type='submit_tool_outputs')
Result from get_actual_time_and_date : {'abbreviation': 'GMT', 'client_ip': '178.13.14.116', 'datetime': '2023-11-21T11:28:35.796231+00:00', 'day_of_week': 2, 'day_of_year': 325, 'dst': False, 'dst_from': None, 'dst_offset': 0, 'dst_until': None, 'raw_offset': 0, 'timezone': 'Europe/London', 'unixtime': 1700566115, 'utc_datetime': '2023-11-21T11:28:35.796231+00:00', 'utc_offset': '+00:00', 'week_number': 47}

Subsequently, the assistant examines the CSV file to identify the correct IDs for each station, namely Hamburg Hbf and Frankfurt Hbf.

Quering get_journey with depature: 20231122T160000, from 2514, to 1866
Result from get_journey : Error during calling the get_journey function
queued
in_progress

Uh-oh, an error occurred ☹️ Let’s observe how our assistant manages this situation. (Keep in mind that we didn’t explicitly incorporate any error handling.)

The assistant told the user about the error and mentioned it would attempt the task again. Now let’s investigate what is happening in the console.

requires_action
required action RequiredAction(submit_tool_outputs=RequiredActionSubmitToolOutputs
(tool_calls=[RequiredActionFunctionToolCall(id='call_fGAMsckZskBMnXss3a0LYyIO',
function=Function(arguments='{"from_location":"8002549","to_location":"8098105","depature":"20231122T160000"}',
name='get_journey'), type='function')]), type='submit_tool_outputs')
Quering get_journey with depature: 20231122T160000, from 8002549, to 8098105

It made another attempt at the API request, this time with the correct IDs, and received an exceptionally large JSON response.

Result from get_journey : {'earlierRef': None, 
'laterRef': '3|OF|MT#14#500484#500484#500744#500744#0#0#485#500459#3#0#1050#0#0#-2147483648#1#2|PDH#b3942021c633eb9bf06178f2caf96ee8|RD#21112023|RT#122449|US#0|RS#INIT', 'journeys':
[{'type': 'journey', 'legs': [{'origin': {'type': 'stop', 'id': '8002549', 'name': 'Hamburg Hbf', 'location': {'type': 'location', 'id': '8002549', 'latitude': 53.553533, 'longitude': 10.00636}, 'products':
{'nationalExpress': True, 'national': True, 'regionalExpress': True, 'regional': True, 'suburban': True, 'bus': True, 'ferry': False, 'subway': True, 'tram': False, 'taxi': False}}, 'destination':
{'type': 'stop', 'id': '8000105', 'name': 'Frankfurt(Main)Hbf', 'location': {'type': 'location', 'id': '8000105', 'latitude': 50.106817, 'longitude': 8.663003}, 'products': {'nationalExpress': True, 'national': True, 'regionalExpress': True,
'regional': True, 'suburban': True, 'bus': True, 'ferry': False, 'subway': True, 'tram': True, 'taxi': False}}, 'departure': '2023-11-21T12:28:00+01:00', 'plannedDeparture': '2023-11-21T12:28:00+01:00', 'departureDelay': 0, 'arrival': '2023-11-21T17:09:00+01:00',
'plannedArrival': '2023-11-21T17:09:00+01:00', 'arrivalDelay': 0, 'reachable': True, 'tripId': '1|190542|0|80|21112023', 'line': {'type': 'line', 'id': 'ice-1671', 'fahrtNr': '1671', 'name': 'ICE 1671', 'public': True, 'adminCode': '80____', 'productName': 'ICE', 'mode':
'train', 'product': 'nationalExpress', 'operator': {'type': 'operator', 'id': 'db-fernverkehr-ag', 'name': 'DB Fernverkehr AG'}}, 'direction': 'Karlsruhe Hbf', 'currentLocation': {'type': 'location', 'latitude': 53.545371, 'longitude': 10.006981}, 'arrivalPlatform': '13',
'plannedArrivalPlatform': '13', 'arrivalPrognosisType': 'prognosed', 'departurePlatform': '11A-C', 'plannedDeparturePlatform': '11A-C', 'departurePrognosisType': 'prognosed', 'remarks': [{'text': 'Komfort Check-in possible (visit bahn.de/kci for more information)', 'type': 'hint', 'code': 'komfort-checkin', 'summary': 'Komfort-Checkin availabl
...
...
...

Following that, the assistant processes this information and presents a nicely formatted response.

Okay, not bad. Let’s ask if I can bring my bicycle on the train.

Let’s ask if I need to change trains.

The assistant asked if we would like to proceed with booking this train, but we haven’t implemented any booking mechanism.

Okay, so we either need to implement a booking mechanism, or we can simply instruct the assistant that it cannot book any trains and should only provide information.

Another feature we implemented is showing the cheapest connection from one stop to another.

So, the assistant makes an API call through the get_best_prices function and returns the cheapest prices for us.

required action RequiredAction(submit_tool_outputs=RequiredActionSubmitToolOutputs(tool_calls=[RequiredActionFunctionToolCall(id='call_RUM3xBAVPUOXSwvkiQVFs0XI', function=Function(arguments='{"from_location":"8089066","to_location":"8000152","depature":"20231128"}', name='get_best_prices'), type='function')]), type='submit_tool_outputs')
Quering best_prices with depature: 20231128, from 8089066, to 8000152
Result from get_best_prices : [('070000', 1990), ('100000', 2790), ('130000', 2790), ('160000', 2790), ('190000', 2790), ('01000000', 3390)]
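The raw tuples are compact: the first element is a departure time in HHMMSS format and the second is the price in euro cents. A small helper to turn them into readable lines (a sketch; the eight-digit '01000000' entry appears to be an overnight slot and is passed through unchanged):

```python
def format_best_prices(best_prices):
    """Turn [('070000', 1990), ...] into readable 'HH:MM  EUR 19.90' lines."""
    lines = []
    for time_block, cents in best_prices:
        # Six-digit blocks are HHMMSS; anything else is passed through as-is.
        hhmm = f"{time_block[:2]}:{time_block[2:4]}" if len(time_block) == 6 else time_block
        lines.append(f"{hhmm}  EUR {cents / 100:.2f}")
    return lines

prices = [('070000', 1990), ('100000', 2790)]
print(format_best_prices(prices))  # → ['07:00  EUR 19.90', '10:00  EUR 27.90']
```

In practice the assistant does this formatting itself when composing its reply, which is exactly the kind of glue work the Assistants API saves us.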

Let’s ask the assistant something outside the context of its role as a train advisor.

I’m sure you can get it to answer your question with some prompt injection ;)

Conclusion

We built a personalized chatbot that can answer our questions and call different APIs (in the right order!) on its own. If an error occurs, the assistant automatically retries the API call. We just give it a few instructions, and it does most of the tedious development work for us. I think that in the future, routine developer tasks will decrease, as is already happening today with tools like Copilot.

The OpenAI Assistants API is a really cool extension and it’s easy to use. However, you also have to keep the costs in mind: every call to the Assistants API incurs expenses (about $0.01 per request to OpenAI). Overall, there are a lot of different use cases you can build, also in a corporate context. I hope this guide helps you try them out and build your own use cases.
