Spring AI: Your Ultimate LLM-Powered City Tour Guide

Ashish Kumar
5 min read · May 15, 2024


Introduction

In this post, we’ll explore how to leverage LLMs to build a city information retrieval system: how to prompt an LLM efficiently and how to parse its responses for your application. We will build a REST API using Spring AI that returns information about any city from an LLM, and then use this API in a sample React webpage to visualize the results.

Prerequisites

  1. You will need an OpenAI API key, or you can run an open-source LLM locally on your computer using Ollama. For installing an LLM locally, refer to this post
  2. Java 17 or above

Maven Dependencies

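The original dependency listing did not survive here. A minimal `pom.xml` sketch, assuming the Spring AI Ollama starter at a 1.0 milestone version (artifact IDs and versions have changed across Spring AI releases, so verify against the BOM you use):

```xml
<!-- Sketch: verify artifact IDs and version against your Spring AI release -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>1.0.0-M1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <!-- Ollama chat model support -->
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
    </dependency>
    <!-- REST controllers -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>
```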

Configuration Properties

spring.application.name=ai-tour-guide
spring.ai.ollama.chat.options.model=llama3.1
spring.ai.ollama.chat.options.temperature=0.3
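If Ollama is not running at its default address, you can point Spring AI at it explicitly. The line below shows the default value, so it is optional:

```properties
# Default Ollama endpoint; change this if Ollama runs elsewhere
spring.ai.ollama.base-url=http://localhost:11434
```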

Implementation

  1. Getting Started

Let’s start with a very simple chat API. The API takes a message and returns the LLM’s response to it. Below is the code for it:

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Flux;

@RestController
@RequestMapping("/chat")
public class ChatController {

    private final ChatClient chatClient;

    public ChatController(ChatClient.Builder chatClient) {
        this.chatClient = chatClient.build();
    }

    // Synchronous: returns the full response once the LLM has finished
    @GetMapping
    public String generate(@RequestParam(defaultValue = "Tell me a joke.") String message) {
        return chatClient.prompt().user(message).call().content();
    }

    // Streaming: emits the response chunk by chunk as a reactive Flux
    @GetMapping("/stream")
    public Flux<String> generateStream(@RequestParam(value = "message", defaultValue = "Tell me about city London") String message) {
        return chatClient.prompt()
                .system("""
                        You are a helpful AI tour guide. You can provide the historical
                        information about cities under 500 characters.
                        """)
                .user(message)
                .stream()
                .content();
    }
}

You can see that there are two APIs. ‘/chat’ returns the response synchronously, once the complete response is available from the LLM. As you may know from using ChatGPT, responses can also arrive in a streaming fashion; you get the same experience with the Spring AI chat client. The ‘/chat/stream’ API returns the response in streaming mode.

The chat client we are using here is ChatClient. The ChatClient.Builder.build() method provides its implementation, DefaultChatClient. DefaultChatClient encapsulates the ChatModel, and the only model we have added in the dependencies is OllamaChatModel.

Similarly, there are implementations available for other LLM APIs, e.g. OpenAI and Gemini. If you want to use those, you will have to add the corresponding dependencies. For more information, see the Spring AI documentation.
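For example, to swap Ollama for OpenAI, a sketch of the change (verify the artifact ID against your Spring AI version; the `api-key` property is the standard one read from an environment variable here):

```xml
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
</dependency>
```

with the matching configuration:

```properties
spring.ai.openai.api-key=${OPENAI_API_KEY}
```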

2. First Experience

Run the application and try it out in the browser.
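Alternatively, you can try both endpoints from the command line. This assumes the app runs on Spring Boot’s default port 8080:

```shell
# Synchronous endpoint: waits for the full completion
curl "http://localhost:8080/chat?message=Tell%20me%20a%20joke."

# Streaming endpoint: -N disables buffering so chunks appear as they arrive
curl -N "http://localhost:8080/chat/stream?message=Tell%20me%20about%20city%20London"
```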

3. Advanced Use Case

Now let’s build our main API. Here is the code for it:

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import com.aarash.llm.model.CityInfo;

@RestController
@RequestMapping("/citi-info")
public class CityController {

    private final ChatClient llmClient;

    public CityController(ChatClient.Builder chatClientBuilder) {
        String defaultSystemPrompt = """
                You are a helpful AI tour guide. You can provide information about cities to the users
                who want to travel to those cities. Your response should include the following information:
                1. Small description about the city
                2. History of the city
                3. Attractions in the city
                4. Architecture of the city
                5. Cuisines of the city
                6. Location of the city
                7. Weather of the city
                8. Getting around in the city
                9. More information about the city, which includes web links for more information
                """;
        this.llmClient = chatClientBuilder.defaultSystem(defaultSystemPrompt).build();
    }

    @GetMapping("/{city}")
    public CityInfo cityInfo(@PathVariable String city) {
        String userMessagePrompt = """
                Tell me about {city}. If you do not know the city then you must not provide any information.
                """;
        // entity(CityInfo.class) asks Spring AI for a structured response parsed into CityInfo
        return llmClient.prompt()
                .user(u -> u.text(userMessagePrompt).param("city", city))
                .call()
                .entity(CityInfo.class);
    }
}
package com.aarash.llm.model;

import java.util.List;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonInclude;

import lombok.Data;

@Data
@JsonIgnoreProperties(ignoreUnknown = true)
@JsonInclude(JsonInclude.Include.NON_NULL)
public class CityInfo {
    String city;
    String about;
    String history;
    List<Attraction> attraction;
    List<String> architecture;
    List<Cuisine> cuisine; // was List<Attraction>; Cuisine is the intended element type
    Location location;
    String weather;
    String gettingAround;
    List<String> moreInformation;

    public record Location(String latitude, String longitude) {
    }

    public record Attraction(String name, String information) {
    }

    public record Cuisine(String name, String information) {
    }
}

Explanation:
a. The chat client is the same one, backed by the Ollama chat model; however, this time I provided the system message while configuring it.

b. We are providing two types of messages when interacting with the LLM client: a system message and a user message, depicting the system and user roles.
Commonly there are three roles when it comes to interacting with an LLM: “system,” “user,” and “assistant.” The “system” role provides high-level instructions that shape how the LLM matches queries with answers and generates its responses, the “user” role presents the user’s queries or prompts, and the “assistant” role represents the model’s responses.
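Under the hood, these roles travel to the model as an ordered list of messages. An illustrative request payload in the shape used by OpenAI-style and Ollama chat APIs (the content values here are made up):

```json
{
  "model": "llama3.1",
  "messages": [
    { "role": "system", "content": "You are a helpful AI tour guide." },
    { "role": "user", "content": "Tell me about city London" },
    { "role": "assistant", "content": "London is the capital of England..." }
  ]
}
```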

c. There is one input parameter in the user prompt: city, the name of the input city. Output parsing is taken care of by Spring. Since we are mapping to a bean here, a bean output converter is used: it appends format instructions to the prompt and parses the LLM response, so the LLM generates a response matching the CityInfo type. For more, refer to https://spring.io/blog/2024/05/09/spring-ai-structured-output
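For illustration, a successful response mapped into CityInfo might look like the following (all field values are invented; real output depends on the model):

```json
{
  "city": "London",
  "about": "Capital of England on the River Thames...",
  "history": "Founded by the Romans as Londinium...",
  "attraction": [ { "name": "Tower of London", "information": "Historic castle..." } ],
  "architecture": [ "Gothic", "Victorian", "Modern" ],
  "cuisine": [ { "name": "Fish and chips", "information": "Classic British dish..." } ],
  "location": { "latitude": "51.5074", "longitude": "-0.1278" },
  "weather": "Temperate, frequently overcast...",
  "gettingAround": "Extensive Underground and bus network...",
  "moreInformation": [ "https://en.wikipedia.org/wiki/London" ]
}
```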

d. The call() method in the chain finally invokes the LLM API, and entity() converts the generated response into a CityInfo object.

Testing:

Run the application and test the API using any city name and wait for the response (since it won’t be a streaming response).
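From the command line, assuming the default port 8080 and the controller’s “/citi-info” mapping:

```shell
# Fetch structured city information as JSON
curl "http://localhost:8080/citi-info/London"
```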

Testing from UI:
I quickly generated a React UI to test the API against the same locally running Ollama model. Here is how the response looks in the UI.

Conclusion:

LLMs offer exciting possibilities. By combining their capabilities with thoughtful design and efficient parsing techniques, you can create a robust system that enhances user experiences.
For the complete source code, refer to the GitHub repo

Ashish Kumar

A hands-on solution architect with 13+ years of experience in building scalable and reliable software solutions