
Smarter Log Analysis with Apache Camel and LLMs

Ajanthan Eliyathamby, Integration Expert, Yenlo

When working on system integrations, we often rely on log files to monitor operational activity and troubleshoot issues. But locating the right logs and interpreting them can be time-consuming and repetitive. In this blog, you’ll learn how to build an AI-powered Ops Assistant using Apache Camel, Langchain4j, Ollama, and Qdrant to streamline this process.

By combining these tools into a working Retrieval-Augmented Generation (RAG) pipeline, we can turn raw logs into searchable insights, enabling faster diagnostics, less manual work, and a smarter operations workflow.

  • Apache Camel: Apache Camel is an open-source integration framework that helps to route, transform, and process data between different systems using a wide variety of protocols and formats.
  • Langchain4j: Langchain4j is a Java-based framework designed to simplify the integration of Large Language Models (LLMs) into applications.
  • Ollama: Ollama is a tool that allows you to run and manage large language models locally on your machine.
  • Qdrant: Qdrant (Vector Database) is an open-source vector similarity search engine optimized for storing and querying high-dimensional vector data.
  • LLM (Gemma 3): Large Language Model; in this case we are using Gemma 3, a lightweight, open-source LLM developed by Google, built on Gemini technology.

Since we are building a solution to analyze data from log files, this implementation follows the Retrieval-Augmented Generation (RAG) approach. Apache Camel and LangChain4j work together to support the ingestion, retrieval, and AI-driven processing of the data.

To demonstrate the usage and capabilities of Apache Camel, we will primarily build two routes:

  • Data Ingestion Route: This route reads log files and stores the extracted data into a vector database (Qdrant in this case).
  • Data Retrieval Route: This route performs a similarity search on Qdrant to fetch relevant data, sends the retrieved content to Ollama, and leverages the configured LLM to summarize and generate a refined response.

1. Data Ingestion Route

a. High Level Flow Diagram

image

Figure 01: High Level Flow Diagram of the Data Ingestion Route    

b. Setting up Qdrant

To run Qdrant locally for POC or testing purposes, we can pull and start the official Docker image provided by Qdrant.

docker pull qdrant/qdrant

This command pulls the latest official Qdrant vector database image from Docker Hub.

docker run -p 6333:6333 -p 6334:6334 \
    -v "$(pwd)/qdrant_storage:/qdrant/storage:z" \
    qdrant/qdrant

This command runs a Qdrant container using Docker with the following configuration:

  • -p 6333:6333: Maps port 6333 of the container (Qdrant’s REST API) to port 6333 on your host.
  • -p 6334:6334: Maps port 6334 of the container (Qdrant’s gRPC API) to port 6334 on your host.
  • -v "$(pwd)/qdrant_storage:/qdrant/storage:z": Mounts the local directory qdrant_storage (in your current working directory) to the container’s internal data storage path.
  • qdrant/qdrant: Specifies the Docker image to run (latest Qdrant image from Docker Hub).

Once the Qdrant container is up and running, execute the following command to create the collection that will hold the text segments. The vector size of 768 matches the dimensionality of the embeddings produced by the nomic-embed-text model we configure later.

curl -X PUT http://localhost:6333/collections/log-data -H "Content-Type: application/json" -d '{ "vectors": { "size": 768, "distance": "Cosine" } }'

image

Figure 02: Qdrant Console

c. Setting up Ollama

Download and install Ollama based on your operating system. For this article, we demonstrate the setup on macOS. Then, we need to install two models in Ollama.

  1. Gemma 3: Run "ollama run gemma3" to install the LLM that we are going to use for chat.
  2. nomic-embed-text: A lightweight model designed specifically for generating text embeddings. Run "ollama pull nomic-embed-text".

ajanthan@YENLO-R7RDCJ626R ~ % ollama list
NAME                        ID              SIZE      MODIFIED
nomic-embed-text:latest     0a109f422b47    274 MB    10 days ago
gemma3:latest               a2af6cc3eb7f    3.3 GB    3 weeks ago
ajanthan@YENLO-R7RDCJ626R ~ %

d. Apache Camel Project Preparation

In this article, we will build the solution using Spring Boot with Apache Camel integrated. Here’s the folder structure we’ll use for this project.

├── OpsAssistant
│   ├── HELP.md
│   ├── README.md
│   ├── mvnw
│   ├── mvnw.cmd
│   ├── pom.xml
│   ├── src
│   │   └── main
│   │       ├── java
│   │       │   └── com
│   │       │       └── ai
│   │       │           └── agent
│   │       │               └── OpsAssistant
│   │       │                   ├── OpsAssistantApplication.java
│   │       │                   ├── config
│   │       │                   │   └── BeanConfig.java
│   │       │                   ├── routes
│   │       │                   │   ├── DataIngestionRoute.java
│   │       │                   │   └── DataRetreivalRoute.java
│   │       │                   ├── service
│   │       │                   │   └── DataIngestionService.java
│   │       │                   └── utils
│   │       │                       ├── CamelConstants.java
│   │       │                       └── TextSegmenter.java
│   │       └── resources
│   │           └── application.yml

i. Understanding the pom.xml

  1. camel-spring-boot-starter: Enables running Camel routes within Spring Boot applications.
  2. camel-platform-http-starter: Allows Camel routes to be exposed as RESTful services.
  3. lombok: Reduces boilerplate such as Slf4j loggers, getters, and setters through simple annotations.
  4. camel-langchain4j-chat-starter: Allows integration with any Large Language Model (LLM) supported by LangChain4j.
  5. camel-langchain4j-embeddings-starter: Provides support for computing embeddings using LangChain4j embedding models.
  6. langchain4j-ollama: A LangChain4j integration for connecting to and interacting with local LLMs managed by Ollama, enabling fast, private AI operations directly from Java applications.
  7. spring-boot-starter-web: A Spring Boot starter that provides the dependencies needed to build web applications and RESTful APIs using Spring MVC, with an embedded Tomcat server by default.
  8. langchain4j-qdrant: A LangChain4j integration that enables storing and retrieving vector embeddings from a Qdrant vector database, supporting tasks like semantic search and Retrieval-Augmented Generation (RAG).
  9. camel-stream-starter: A Camel Spring Boot starter for reading from and writing to standard input/output (such as console streams) within Camel routes.
  10. camel-jackson-starter: A Camel Spring Boot starter that integrates the Jackson library, enabling JSON serialization and deserialization in Camel routes.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-starter-parent</artifactId>
       <version>3.4.4</version>
       <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.ai.agent</groupId>
    <artifactId>OpsAssistant</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>OpsAssistant</name>
    <description>This is an Operations Assistant AI Agent</description>
    <url/>
    <licenses>
       <license/>
    </licenses>
    <developers>
       <developer/>
    </developers>
    <scm>
       <connection/>
       <developerConnection/>
       <tag/>
       <url/>
    </scm>
    <properties>
       <java.version>17</java.version>
    </properties>
    <dependencies>
       <dependency>
          <groupId>org.apache.camel.springboot</groupId>
          <artifactId>camel-spring-boot-starter</artifactId>
          <version>4.10.2</version>
       </dependency>
       <dependency>
          <groupId>org.projectlombok</groupId>
          <artifactId>lombok</artifactId>
          <optional>true</optional>
       </dependency>
       <dependency>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-test</artifactId>
          <scope>test</scope>
       </dependency>
       <dependency>
          <groupId>org.apache.camel.springboot</groupId>
          <artifactId>camel-langchain4j-chat-starter</artifactId>
          <version>4.10.2</version>
       </dependency>
       <dependency>
          <groupId>org.apache.camel.springboot</groupId>
          <artifactId>camel-langchain4j-embeddings-starter</artifactId>
          <version>4.10.2</version>
       </dependency>
       <dependency>
          <groupId>dev.langchain4j</groupId>
          <artifactId>langchain4j-ollama</artifactId>
          <version>0.36.2</version>
       </dependency>
       <dependency>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-web</artifactId>
       </dependency>
       <dependency>
          <groupId>dev.langchain4j</groupId>
          <artifactId>langchain4j-qdrant</artifactId>
          <version>0.36.2</version>
       </dependency>
       <dependency>
          <groupId>org.apache.camel.springboot</groupId>
          <artifactId>camel-platform-http-starter</artifactId>
          <version>4.10.2</version>
       </dependency>
       <dependency>
          <groupId>org.apache.camel.springboot</groupId>
          <artifactId>camel-stream-starter</artifactId>
          <version>4.10.2</version>
       </dependency>
       <dependency>
          <groupId>org.apache.camel.springboot</groupId>
          <artifactId>camel-jackson-starter</artifactId>
          <version>4.10.2</version>
       </dependency>
    </dependencies>
    <build>
       <plugins>
          <plugin>
             <groupId>org.apache.maven.plugins</groupId>
             <artifactId>maven-compiler-plugin</artifactId>
             <configuration>
                <annotationProcessorPaths>
                   <path>
                      <groupId>org.projectlombok</groupId>
                      <artifactId>lombok</artifactId>
                   </path>
                </annotationProcessorPaths>
             </configuration>
          </plugin>
          <plugin>
             <groupId>org.springframework.boot</groupId>
             <artifactId>spring-boot-maven-plugin</artifactId>
             <configuration>
                <excludes>
                   <exclude>
                      <groupId>org.projectlombok</groupId>
                      <artifactId>lombok</artifactId>
                   </exclude>
                </excludes>
             </configuration>
          </plugin>
       </plugins>
    </build>

</project>

ii. Spring Boot Main Application Class Implementation

package com.ai.agent.OpsAssistant;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class OpsAssistantApplication {

    public static void main(String[] args) {
       SpringApplication.run(OpsAssistantApplication.class, args);
    }

}

iii. BeanConfig class implementation

This class configures the components needed to use Ollama for chat and embeddings, store data in Qdrant, and manage conversation memory, all wired automatically by Spring Boot.

Note: For a clearer understanding of each bean, please refer to the comments provided.

package com.ai.agent.OpsAssistant.config;

import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.memory.chat.ChatMemoryProvider;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.model.ollama.OllamaEmbeddingModel;
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.qdrant.QdrantEmbeddingStore;
import dev.langchain4j.store.memory.chat.ChatMemoryStore;
import dev.langchain4j.store.memory.chat.InMemoryChatMemoryStore;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BeanConfig {
    @Value("${ollama.url}")
    private String ollamaUrl;

    @Value("${ollama.model}")
    private String ollamaModel;

    @Value("${ollama.embed.model}")
    private String ollamaEmbedModel;

    @Value("${ollama.temperature}")
    private double temperature;

    @Value("${qdrant.host}")
    private String qdrantHost;

    @Value("${qdrant.port}")
    private int qdrantPort;

    /*
    * Creates a ChatLanguageModel bean using Ollama for LLM-based chat, configured with base URL, model name, and temperature.
    * baseUrl -> Where to talk to Ollama.
    * modelName -> Which model to use for chats.
    * temperature -> How creative the model should be. A low value makes answers more predictable and a higher value makes answers more random and creative. Range 0-1.
    */
    @Bean(name = "ollama")
    public ChatLanguageModel chatLanguageModel() {
        return OllamaChatModel.builder()
                .baseUrl(ollamaUrl)
                .modelName(ollamaModel)
                .temperature(temperature)
                .build();
    }


    /*
    * This bean creates an EmbeddingModel that connects to the Ollama server to generate vector embeddings for text.
    * baseUrl -> Where the embedding model API is running.
    * modelName -> Which embedding model to use to turn text into vectors.
    */
    @Bean
    public EmbeddingModel embeddingModel() {
        return OllamaEmbeddingModel.builder()
                .baseUrl(ollamaUrl)
                .modelName(ollamaEmbedModel)
                .build();
    }

    /*
    * This bean sets up an EmbeddingStore that connects to a Qdrant vector database. It allows to store, search, and retrieve
    * vector embeddings which is useful for similarity search or Retrieval-Augmented Generation (RAG).
    * host -> Where Qdrant is running.
    * port -> How to reach Qdrant.
    * collectionName → Where the vectors will be saved inside Qdrant.
    */
    @Bean
    public EmbeddingStore<TextSegment> embeddingStore() {
        return QdrantEmbeddingStore.builder()
                .host(qdrantHost)
                .port(qdrantPort)
                .collectionName("log-data")
                .build();
    }

    /*
    * This bean creates an EmbeddingStoreContentRetriever, which searches the EmbeddingStore, here Qdrant to find text segments that are semantically similar to a given input text.
    * It uses the EmbeddingModel to turn input text into a vector, then finds the best matching stored vectors from the database.
    * embeddingModel -> Convert your input text into a vector.
    * embeddingStore -> Where the vectors are stored and searched.
    * maxResults -> Limit the number of results returned.
    * minScore -> Filter out weak matches. Only results with a similarity score above this value are returned, ensuring that the matches are relevant and strong.
    */
    @Bean
    public EmbeddingStoreContentRetriever retriever(EmbeddingModel embeddingModel,
                                                    EmbeddingStore<TextSegment> embeddingStore) {
        return EmbeddingStoreContentRetriever.builder()
                .embeddingModel(embeddingModel)
                .embeddingStore(embeddingStore)
                .maxResults(30)
                .minScore(0.8)
                .build();
    }


    /*
    * Creates a simple in-memory store to hold the chat memory. It doesn't persist across application restarts, making it suitable for temporary use during a session.
    */
    @Bean
    public ChatMemoryStore chatMemoryStore() {
        return new InMemoryChatMemoryStore();
    }

    /*
    * This bean creates a ChatMemoryProvider, which is responsible for providing access to the chat history of a conversation.
    * memoryId -> Identifies a specific conversation.
    * maxMessages -> Here for example Stores up to 10 messages per conversation.
    * chatMemoryStore -> The place where the messages are stored, here in-memory.
    */
    @Bean
    public ChatMemoryProvider memoryProvider(ChatMemoryStore store) {
        return memoryId -> MessageWindowChatMemory.builder()
                .id(memoryId)
                .maxMessages(10)
                .chatMemoryStore(store)
                .build();
    }

}
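The windowing behavior configured by `maxMessages(10)` can be illustrated with a small standalone sketch. Note the simplifications: plain Strings stand in for LangChain4j's `ChatMessage`, the class name is illustrative, and unlike the real `MessageWindowChatMemory` this sketch does not give the system message special treatment.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/* Simplified sketch of a message-window chat memory: keep only the
 * most recent N messages of a conversation. */
public class WindowMemorySketch {

    private final Deque<String> messages = new ArrayDeque<>();
    private final int maxMessages;

    public WindowMemorySketch(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    public void add(String message) {
        messages.addLast(message);
        // Evict the oldest messages once the window is full.
        while (messages.size() > maxMessages) {
            messages.removeFirst();
        }
    }

    public List<String> messages() {
        return new ArrayList<>(messages);
    }

    public static void main(String[] args) {
        WindowMemorySketch memory = new WindowMemorySketch(3);
        for (int i = 1; i <= 5; i++) {
            memory.add("msg-" + i);
        }
        // Only the most recent three messages survive.
        System.out.println(memory.messages());
    }
}
```

Keeping the window small bounds both memory usage and the prompt size sent to the LLM on each turn.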

iv. TextSegmenter class implementation

package com.ai.agent.OpsAssistant.utils;

import dev.langchain4j.data.segment.TextSegment;

import java.util.ArrayList;
import java.util.List;

/*
* This takes an input string containing multiple lines of text and splits it into a list of TextSegment objects, which represent individual lines.
* Empty lines are ignored, and each non-empty line is converted into a TextSegment.
*/
public class TextSegmenter {
    public static List<TextSegment> segment(String input) {
        List<TextSegment> segments = new ArrayList<>();

        String[] lines = input.split("\\r?\\n");
        for (String line : lines) {
            if (!line.trim().isEmpty()) {
                segments.add(TextSegment.from(line.trim()));
            }
        }

        return segments;
    }
}
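To see what the segmenter produces, here is a quick standalone sketch of the same splitting logic, with plain `String`s standing in for `TextSegment` so it runs without the LangChain4j dependency (class name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

/* Standalone sketch of the TextSegmenter splitting logic: split on
 * line breaks, drop empty lines, trim the rest. */
public class SegmenterSketch {

    public static List<String> segment(String input) {
        List<String> segments = new ArrayList<>();
        for (String line : input.split("\\r?\\n")) {
            // Skip empty lines; trim whitespace from the rest.
            if (!line.trim().isEmpty()) {
                segments.add(line.trim());
            }
        }
        return segments;
    }

    public static void main(String[] args) {
        String log = "[2025-05-01 10:00:00,123] INFO Server started\n"
                + "\n"
                + "  [2025-05-01 10:00:01,456] ERROR Connection refused";
        // The empty line is dropped; the other two lines become segments.
        System.out.println(segment(log));
    }
}
```

Each resulting line becomes one vector in Qdrant, so segmenting per line keeps retrieval granular at the level of individual log entries.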

v. DataIngestionService class implementation

package com.ai.agent.OpsAssistant.service;

import com.ai.agent.OpsAssistant.utils.TextSegmenter;
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingStore;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;

@Service
public class DataIngestionService {

    @Autowired
    private EmbeddingStore<TextSegment> vectorStore;
    @Autowired
    private EmbeddingModel embeddingModel;

    /*
    * The method ingestLogData is responsible for processing a log text, splitting it into segments, generating embeddings for each segment, and storing the embeddings in a vector store.
    */
    public void ingestLogData(String logText) {
        List<TextSegment> segments = TextSegmenter.segment(logText);

        for (TextSegment segment : segments) {
            Embedding embedding = embeddingModel.embed(segment.text()).content();
            vectorStore.add(embedding, segment);
        }
    }
}

vi. DataIngestionRoute implementation

In the data ingestion logic, we read the server.log file and insert its lines into Qdrant as text segments. For demonstration purposes, this article mainly loads lines that begin with a timestamp; the logic can be extended for more advanced use cases.

package com.ai.agent.OpsAssistant.routes;

import com.ai.agent.OpsAssistant.service.DataIngestionService;
import org.apache.camel.builder.RouteBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

import java.io.*;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

@Component
public class DataIngestionRoute extends RouteBuilder {
    @Autowired
    private DataIngestionService dataIngestionService;

    @Value("${custom.data.ingestion.log.file}")
    private String logFilePath;

    @Value("${custom.data.ingestion.log.tracker}")
    private String lastTimestampFile;

    private volatile LocalDateTime lastTimestamp = LocalDateTime.MIN;
    private final DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss,SSS");

    @Override
    public void configure() throws Exception {

        /* Read the last-timestamp tracker file, which records the timestamp of the last log line that was ingested */
        File tsFile = new File(lastTimestampFile);
        if (tsFile.exists()) {
            try (BufferedReader reader = new BufferedReader(new FileReader(tsFile))) {
                String ts = reader.readLine();
                if (ts != null) {
                    lastTimestamp = LocalDateTime.parse(ts, formatter);
                }
            }
        }

        /* Apache Camel route that reads new log lines and pushes them to the vector store */
        from("stream:file?fileName=" + logFilePath + "&scanStream=true&scanStreamDelay=1000")
            .routeId("DataIngestionRoute")
            .process(exchange -> {
                String line = exchange.getIn().getBody(String.class);
                Pattern timestampPattern = Pattern.compile("^\\[(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2},\\d{3})\\]");
                Matcher matcher = timestampPattern.matcher(line);

                if (matcher.find()) {
                    String timestampStr = matcher.group(1);
                    try {
                        LocalDateTime logTime = LocalDateTime.parse(timestampStr, formatter);
                        if (logTime.isAfter(lastTimestamp)) {
                            lastTimestamp = logTime;
                            try (BufferedWriter writer = new BufferedWriter(new FileWriter(lastTimestampFile))) {
                                writer.write(logTime.format(formatter));
                            }
                            log.info("Processing line with a newer timestamp than the last ingested one.");
                            dataIngestionService.ingestLogData(line);
                        }
                    } catch (DateTimeParseException e) {
                        log.warn("Skipping line due to invalid timestamp: {}", timestampStr);
                    }
                } else {
                    log.warn("Skipping line without valid timestamp: {}", line);
                }
            })
            .log("Processed: ${body}");
    }


}
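The timestamp-gating step in the route above can be checked in isolation. The following is a standalone sketch (class and method names are illustrative) of the extract-and-compare logic, using the same regex and formatter as the route:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/* Standalone sketch of the timestamp gating used in DataIngestionRoute:
 * extract the leading [yyyy-MM-dd HH:mm:ss,SSS] timestamp and decide
 * whether the line is newer than the last ingested one. */
public class TimestampGateSketch {

    private static final DateTimeFormatter FORMATTER =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss,SSS");
    private static final Pattern TIMESTAMP_PATTERN =
            Pattern.compile("^\\[(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2},\\d{3})\\]");

    /* Returns the parsed timestamp, or empty if the line has none. */
    public static Optional<LocalDateTime> extractTimestamp(String line) {
        Matcher m = TIMESTAMP_PATTERN.matcher(line);
        if (m.find()) {
            return Optional.of(LocalDateTime.parse(m.group(1), FORMATTER));
        }
        return Optional.empty();
    }

    /* A line is ingested only when its timestamp is strictly newer. */
    public static boolean shouldIngest(String line, LocalDateTime lastTimestamp) {
        return extractTimestamp(line).map(t -> t.isAfter(lastTimestamp)).orElse(false);
    }

    public static void main(String[] args) {
        LocalDateTime last = LocalDateTime.parse("2025-05-01 10:00:00,000", FORMATTER);
        System.out.println(shouldIngest("[2025-05-01 10:00:01,500] ERROR Connection refused", last));
        System.out.println(shouldIngest("[2025-04-30 09:00:00,000] INFO Old entry", last));
        System.out.println(shouldIngest("stack trace continuation line", last));
    }
}
```

One consequence of this gating worth noting: lines without a leading timestamp, such as stack trace continuation lines, are skipped entirely, which is why the route logs a warning for them.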

2. Data Retrieval Route

a. High Level Flow Diagram

image

Figure 03: High Level Flow Diagram of the Data Retrieval Route    

b. Camel Constants Class Implementation

package com.ai.agent.OpsAssistant.utils;

import lombok.experimental.UtilityClass;

@UtilityClass
public class CamelConstants {
    public static final String MEMORY_ID = "memoryId";

}

c. Data Retrieval Route Implementation

package com.ai.agent.OpsAssistant.routes;

import com.ai.agent.OpsAssistant.utils.CamelConstants;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.ChatMemoryProvider;
import dev.langchain4j.rag.content.Content;
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever;
import dev.langchain4j.rag.query.Query;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.langchain4j.chat.LangChain4jChat;
import org.apache.camel.model.dataformat.JsonLibrary;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.*;

@Component
public class DataRetreivalRoute extends RouteBuilder {

    @Autowired
    private EmbeddingStoreContentRetriever retriever;

    @Autowired
    private ProducerTemplate producerTemplate;

    @Autowired
    private ChatMemoryProvider memoryProvider;

    @Override
    public void configure() throws Exception {
        /*
         * Use the platform-http component, Camel's built-in lightweight HTTP server.
         * */
        restConfiguration()
                .component("platform-http")
                .port(8080);

        /* API Configuration */
        from("rest:post:chat:/assistant/{userId}")
            .routeId("DataRetreivalRouteApi")
            .to("direct:initiate-rag");

        /*
         * Apache Camel route to fetch the needed information userId and the question from the API request and
         * initiate the call to LLM through langchain4j
         */
        from("direct:initiate-rag")
            .routeId("DataRetreivalRouteRag")
            .unmarshal()
            .json(JsonLibrary.Jackson)
            .setHeader(CamelConstants.MEMORY_ID, header("userId"))
            .setBody(simple("${body[message]}"))
            .process(exchange -> {
                    String question = exchange.getIn().getBody(String.class);
                    String memoryId = exchange.getIn().getHeader(CamelConstants.MEMORY_ID, String.class);
                    /* Retrieve previous chat memory messages for this particular userId */
                    ChatMemory chatMemory = memoryProvider.get(memoryId);

                    /* Add SystemMessage only if it's a new conversation */
                    if (chatMemory.messages().isEmpty()) {
                        chatMemory.add(SystemMessage.from("You are a log analyzer assistant. "
                                + "Your job is to carefully analyze logs retrieved from Qdrant embeddings and answer based on that. "
                                + "If the user asks a common, casual question such as a greeting, respond politely without using log data."));
                    }

                    /* Add the new UserMessage */
                    chatMemory.add(UserMessage.from(question));
                    Map<String, Object> headers = new HashMap<>();

                    /* Load the contents from Qdrant */
                    List<Content> contents = retriever.retrieve(Query.from(question));
                    log.info("Contents retrieved: {}", contents.size());
                    headers.put(LangChain4jChat.Headers.AUGMENTED_DATA, contents);

                    /* Set memoryId for conversation continuity */
                    headers.put(CamelConstants.MEMORY_ID, memoryId);
                    /* Send to LangChain4j */
                    String result = producerTemplate.requestBodyAndHeaders("direct:send-langchain4j", chatMemory.messages(), headers, String.class);
                    exchange.getIn().setBody(result);
                });

        from("direct:send-langchain4j")
            .routeId("DataRetreivalLangchainRoute")
            .to("langchain4j-chat:ollama?chatModel=#ollama&chatOperation=CHAT_MULTIPLE_MESSAGES")
            .process(exchange -> {
                    String memoryId = exchange.getIn().getHeader("memoryId", String.class);
                    ChatMemory chatMemory = memoryProvider.get(memoryId);
                    String response = exchange.getMessage().getBody(String.class);
                    /* Store the LLM response in the conversation memory as an AI message */
                    chatMemory.add(dev.langchain4j.data.message.AiMessage.from(response));
                    /* Setting a proper json back to API call initiator */
                    Map<String, Object> responseJson = new HashMap<>();
                    responseJson.put("response", response);
                    exchange.getIn().setBody(responseJson);
                })
            .marshal()
            .json(JsonLibrary.Jackson);

    }

}

3. Testing and Verification of the Implementation

  1. Configure the application.yml

spring:
  application:
    name: OpsAssistant

ollama:
  url: 'http://localhost:11434'
  model: gemma3
  embed:
    model: nomic-embed-text
  temperature: 0.2

qdrant:
  host: localhost
  port: 6334

camel:
  component:
    platform-http:
      enabled: true

custom:
  data:
    ingestion:
      log:
        file: "/Users/ajanthan/yenlo/blogs/2025/logs/server.log"
        tracker: "/Users/ajanthan/yenlo/blogs/2025/logs/last-log-timestamp.txt"

  • Execute the following commands to build and start the service:
    • mvn clean install
    • java -jar target/OpsAssistant-0.0.1-SNAPSHOT.jar
  • Once the process has started, the server.log file will be read and its lines inserted into Qdrant as text segments.
image

4. Initiating the Chat for the Data Retrieval
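Based on the route definition `rest:post:chat:/assistant/{userId}` and the `${body[message]}` expression, a chat request takes roughly the following shape. The resolved path is an inference from the Camel rest component's `method:path:uriTemplate` URI syntax, and the user ID and question are placeholder values:

```
POST http://localhost:8080/chat/assistant/user1
Content-Type: application/json

{ "message": "Are there any errors in today's logs?" }
```

The route replies with a JSON object of the form { "response": "..." }, as produced by the final marshalling step in DataRetreivalRoute.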

image

Figure 05: Example of initial chat output using Gemma 3 and retrieved log segments.
Check whether memory persistence is functioning correctly

image

Figure 06: Test Result – Memory Check

image

Figure 07: Test Result – Memory Check

5. Ask Questions About the Logs

image

Figure 08: Test Result – Log Analysis

This blog demonstrated how to build a basic Ops Assistant capable of reading and understanding log files using open-source tools and RAG principles. By combining Apache Camel for routing, Langchain4j for orchestration, Ollama for running local LLMs, and Qdrant for vector storage, you’ve laid the foundation for intelligent log analysis.

In a follow-up blog, we’ll explore how to improve the assistant’s precision by incorporating structured log data and optimizing prompt handling.

Stay tuned; smarter, faster Ops workflows are just getting started.

Contact our experts if you have any further questions regarding this blog.
