LangChain4j Integration
Secure your LangChain4j agents and tools with AIM's Java SDK. Full support for @Tool annotations and AI services.
LangChain4j + AIM Features
@Tool Annotation Support
Combine @Tool with @SecureAction seamlessly
AI Service Monitoring
Track all LLM calls and responses
Memory Tracking
Monitor chat memory and context usage
RAG Security
Secure retrieval augmented generation pipelines
Prerequisites
- Java 17+ and Maven or Gradle
- LangChain4j 0.27.0+ dependency
- AIM SDK installed
Basic Integration
Add both LangChain4j and AIM SDK dependencies to your project:
<dependencies>
    <!-- LangChain4j -->
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j</artifactId>
        <version>0.27.0</version>
    </dependency>

    <!-- AIM SDK -->
    <dependency>
        <groupId>org.opena2a</groupId>
        <artifactId>aim-sdk</artifactId>
        <version>1.0.0</version>
    </dependency>
</dependencies>

Securing LangChain4j Tools
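Under the hood, annotation-driven security depends on reading metadata off methods at runtime. As background, here is a minimal, self-contained sketch of that mechanism; it uses a stand-in @SecureAction defined locally, not the real AIM annotation:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class CapabilityScan {

    // Stand-in for AIM's @SecureAction, for illustration only.
    @Retention(RetentionPolicy.RUNTIME)
    @interface SecureAction {
        String capability();
        String riskLevel() default "LOW";
    }

    static class Tools {
        @SecureAction(capability = "catalog:search")
        public String searchProducts(String query) { return query; }

        @SecureAction(capability = "payment:refund", riskLevel = "CRITICAL")
        public String processRefund(String orderId) { return orderId; }
    }

    // Collect method name -> declared capability, as an interceptor might
    // before deciding whether a tool invocation is allowed.
    static Map<String, String> scan(Class<?> type) {
        Map<String, String> caps = new HashMap<>();
        for (Method m : type.getDeclaredMethods()) {
            SecureAction sa = m.getAnnotation(SecureAction.class);
            if (sa != null) {
                caps.put(m.getName(), sa.capability());
            }
        }
        return caps;
    }

    public static void main(String[] args) {
        System.out.println(scan(Tools.class));
    }
}
```

The real SDK does this scanning for you; the sketch only shows why the annotations must be present on the tool methods themselves.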
Combine LangChain4j's @Tool annotation with AIM's @SecureAction for verified, logged tool execution:
import dev.langchain4j.agent.tool.Tool;
import org.opena2a.aim.annotations.SecureAction;
import org.opena2a.aim.client.RiskLevel;

import java.util.List;

public class SecuredTools {

    // productService, orderService, and paymentService are assumed to be
    // injected collaborators; their wiring is omitted here.

    @Tool("Search the product catalog")
    @SecureAction(capability = "catalog:search")
    public List<Product> searchProducts(String query) {
        // Tool execution is verified and logged
        return productService.search(query);
    }

    @Tool("Place an order for a customer")
    @SecureAction(
        capability = "order:create",
        riskLevel = RiskLevel.HIGH
    )
    public Order createOrder(String customerId, List<String> productIds) {
        // High-risk action with full audit trail
        return orderService.create(customerId, productIds);
    }

    @Tool("Process a refund")
    @SecureAction(
        capability = "payment:refund",
        riskLevel = RiskLevel.CRITICAL,
        jitAccess = true // Requires admin approval
    )
    public Refund processRefund(String orderId, double amount) {
        // Pauses until an admin approves the action in the AIM dashboard
        return paymentService.refund(orderId, amount);
    }
}

AI Service Integration
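At the service level, the same pattern applies to every call: verify the declared capability, run the action, record it. Here is a self-contained sketch of that verify-then-audit flow, using stand-in types rather than the real AIM API:

```java
import java.util.Set;
import java.util.function.Function;

public class SecuredInvoker {

    // Hypothetical verification step; the real AIMClient API may differ.
    static boolean verify(Set<String> granted, String capability) {
        return granted.contains(capability);
    }

    // Verify the capability, run the action, then record an audit line.
    static <T, R> R invokeSecured(Set<String> granted, String capability,
                                  Function<T, R> action, T input) {
        if (!verify(granted, capability)) {
            throw new SecurityException("capability not granted: " + capability);
        }
        R result = action.apply(input);
        System.out.println("audit: " + capability); // stand-in for AIM's audit trail
        return result;
    }

    public static void main(String[] args) {
        Set<String> granted = Set.of("chat:send", "tool:execute");
        System.out.println(invokeSecured(granted, "chat:send", msg -> "echo: " + msg, "hello"));
    }
}
```

In the real integration below, the AIM client performs these steps whenever a method annotated with @SecureAction executes.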
Wrap LangChain4j AI services with AIM security:
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import org.opena2a.aim.client.AIMClient;
import org.opena2a.aim.client.AgentType;

import java.util.Arrays;

public class SecuredAIService {

    private final AIMClient agent;

    public SecuredAIService() {
        // Initialize AIM with the builder pattern
        this.agent = AIMClient.builder("langchain4j-assistant")
                .agentType(AgentType.LANGCHAIN)
                .capabilities(Arrays.asList("chat:send", "tool:execute", "memory:read"))
                .build();
    }

    public Assistant createAssistant() {
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4")
                .build();

        // Create secured tools
        SecuredTools tools = new SecuredTools();

        return AiServices.builder(Assistant.class)
                .chatLanguageModel(model)
                .tools(tools)
                .build();
    }

    interface Assistant {
        String chat(String userMessage);
    }
}

RAG Pipeline Security
Secure retrieval augmented generation pipelines:
import dev.langchain4j.rag.content.Content;
import dev.langchain4j.rag.content.retriever.ContentRetriever;
import dev.langchain4j.rag.query.Query;
import org.opena2a.aim.annotations.SecureAction;
import org.opena2a.aim.client.RiskLevel;

import java.util.List;

public class SecuredRAG {

    // Assumed to be injected; wiring omitted.
    private final ContentRetriever contentRetriever;

    @SecureAction(capability = "knowledge:retrieve")
    public List<Content> retrieveContext(String query) {
        // Document retrieval is logged
        return contentRetriever.retrieve(Query.from(query));
    }

    @SecureAction(
        capability = "knowledge:query",
        riskLevel = RiskLevel.MEDIUM,
        resource = "customer-data"
    )
    public String queryKnowledgeBase(String query, String dataSource) {
        // Track which data sources are accessed
        List<Content> context = retrieveContext(query);
        return generateResponse(query, context); // generateResponse: elided helper
    }
}
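The resource = "customer-data" scoping above implies that retrieved content should be filtered by data source before it reaches the model. A self-contained sketch of such a filter, using a stand-in Doc record (LangChain4j's Content type differs):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class ScopedRetrieval {

    // Minimal stand-in for a retrieved document (illustration only).
    record Doc(String source, String text) {}

    // Keep only documents whose source the caller's capability covers,
    // mirroring the resource-scoped @SecureAction shown above.
    static List<Doc> filterBySource(List<Doc> docs, Set<String> allowedSources) {
        return docs.stream()
                .filter(d -> allowedSources.contains(d.source()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Doc> docs = List.of(
                new Doc("customer-data", "order history for account"),
                new Doc("internal-hr", "salary bands"));
        List<Doc> visible = filterBySource(docs, Set.of("customer-data"));
        System.out.println(visible.size() + " doc(s): " + visible.get(0).source());
    }
}
```

Applying the filter inside retrieveContext keeps out-of-scope documents from ever entering the prompt, rather than relying on the model to ignore them.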