Limited Partners (LPs) in alternative investments, such as private equity, venture capital, private debt, and infrastructure, face a critical challenge in today's market: extracting actionable insights from complex portfolio data distributed across numerous funds, portfolio companies, and investment vehicles. While the need for quick, accurate portfolio analysis has never been greater, LPs have traditionally relied on time-consuming manual processes - sifting through multiple reports and data tables, or depending on investment teams to compile specific information.
To transform how LPs interact with their private markets data, we have developed Tamarice, an AI-powered portfolio intelligence system. This innovative solution enables Limited Partners to analyze their private equity investments through natural language queries, receiving instant, data-driven insights enhanced by clear visualizations. By combining advanced AI with deep private markets expertise, Tamarice makes sophisticated portfolio analysis accessible and efficient.
While AI-powered portfolio intelligence solutions are becoming increasingly common, Limited Partners remain hesitant to adopt them due to several critical concerns: the risk of incorrect or hallucinated data, lack of transparency in how answers are generated, limited computational capabilities for complex portfolio metrics and questions, and the need to switch between multiple tools for different analytical functions. Tamarice has been purposefully designed to address these fundamental challenges, as outlined below:
Data Accuracy:
Tamarice takes a fundamentally different approach to data accuracy through its agentic architecture. Rather than generating responses that might be plausible but incorrect, the system's language model acts as a decision-maker that systematically interacts with your portfolio database. Because it translates user questions into precise database queries, the system grounds every response in actual portfolio data, eliminating the risk of hallucinated or incorrect information.
Built-In Private Markets Expertise:
Our system has been specifically optimized for private markets through a proprietary Retrieval-Augmented Generation (RAG) system. With a comprehensive knowledge base of private equity concepts and terminology, Tamarice ensures accurate interpretation of complex, industry-specific language. This specialized focus enables an understanding of domain-specific queries and context that generic solutions cannot match.
Integrated Multi-Functional Analysis:
Unlike traditional solutions that require switching between different tools, Tamarice provides a unified interface for all portfolio analysis needs. The system handles everything from data retrieval to visualization in a single conversational flow.
Advanced Computational Capabilities:
Tamarice combines generic mathematical libraries with private markets-specific calculations, enabling financial analysis beyond basic operations.
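To make this concrete, here is a minimal sketch of the kind of private markets-specific calculation involved, using the standard definitions of DPI and TVPI and a simple bisection-based IRR. The function names and cash flow figures are illustrative, not Tamarice's internal implementation.

```python
# Illustrative sketch: standard private markets performance metrics that a
# general-purpose math library does not provide out of the box.
# Cash flow convention: contributions are negative, distributions positive.
from typing import List

def dpi(distributions: float, paid_in: float) -> float:
    """Distributed to Paid-In: cumulative distributions / cumulative contributions."""
    return distributions / paid_in

def tvpi(distributions: float, nav: float, paid_in: float) -> float:
    """Total Value to Paid-In: (distributions + residual NAV) / cumulative contributions."""
    return (distributions + nav) / paid_in

def irr(cash_flows: List[float], tol: float = 1e-7) -> float:
    """Periodic IRR via bisection on the NPV function (assumes NPV changes sign)."""
    def npv(rate: float) -> float:
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    lo, hi = -0.99, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:   # NPV is still positive, so the IRR lies at a higher rate
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: two capital calls, one distribution, and a final value including NAV.
flows = [-100.0, -50.0, 60.0, 120.0]
print(round(irr(flows), 4))
```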
Process Transparency and Validation:
Through a self-evaluation process, Tamarice validates every step of its analysis. The system verifies intermediate results against both technical requirements and its private markets knowledge base, creating a dual validation approach. This ensures not only computational accuracy but also alignment with private markets principles. Users receive clear explanations of data sources and calculation steps, making complex analysis both powerful and comprehensible.
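As an illustration only, a dual validation step of this kind might look like the following sketch; both checks and the `knowledge_base.is_consistent` call are hypothetical placeholders rather than Tamarice's actual code.

```python
# Hypothetical sketch of a dual validation step: a technical check on the
# intermediate result plus a domain check against the knowledge base.
from dataclasses import dataclass

@dataclass
class ValidationReport:
    technical_ok: bool   # e.g. result non-empty and has the expected columns
    domain_ok: bool      # e.g. result consistent with private markets definitions
    notes: list

def validate_step(result_rows, expected_columns, knowledge_base):
    notes = []
    # Technical check: does the intermediate result have the shape we asked for?
    technical_ok = bool(result_rows) and all(
        col in result_rows[0] for col in expected_columns
    )
    if not technical_ok:
        notes.append("Result set is empty or missing expected columns.")
    # Domain check (hypothetical KB call): is the result plausible given private
    # markets definitions, e.g. DPI and RVPI summing to TVPI for the same fund?
    domain_ok = knowledge_base.is_consistent(result_rows)
    if not domain_ok:
        notes.append("Result conflicts with a definition in the knowledge base.")
    return ValidationReport(technical_ok, domain_ok, notes)
```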
At the core of Tamarice lies an agent-based system where a large language model serves dual roles: as a strategic decision-maker orchestrating the analysis process, and as a translator converting natural language queries into precise database operations. Tamarice has access to a sophisticated suite of tools - from database querying to mathematical operations to visualization capabilities - which it can leverage at each step of the analysis process.
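The sketch below illustrates the general shape of such an agent loop. The tool names, the `llm.decide` interface, and the context objects are assumptions made for this example, not Tamarice's actual API.

```python
# Illustrative agent loop: the language model repeatedly chooses a tool (or a
# final answer, or a clarifying question) until the query is resolved.

TOOLS = {
    "query_database":  lambda args, ctx: ctx["db"].run(args["sql"]),
    "run_calculation": lambda args, ctx: ctx["math"].evaluate(args["expression"]),
    "render_chart":    lambda args, ctx: ctx["charts"].render(args["spec"]),
    "ask_user":        lambda args, ctx: ctx["ui"].ask(args["question"]),
}

def resolve_query(question: str, llm, ctx, max_steps: int = 10):
    history = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        # The model picks the next action given the conversation so far.
        decision = llm.decide(history, tools=list(TOOLS))
        if decision["type"] == "final_answer":
            return decision["content"]
        tool = TOOLS[decision["tool"]]
        observation = tool(decision["arguments"], ctx)
        # Feed the tool's output back so the model can plan the next step.
        history.append({"role": "tool", "name": decision["tool"], "content": observation})
    return "Unable to resolve the query within the step budget."
```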
Having outlined the high-level architecture, let's examine each component that makes our portfolio intelligence system possible. From the knowledge management infrastructure to the specialized tools at the agent's disposal, each component plays a vital role in delivering accurate and insightful responses to user queries.
Language Model
Our large language model functions as a sophisticated decision-maker and translator. Unlike traditional chatbots that simply generate responses, our model operates as an intelligent agent with distinct capabilities.
As a decision-maker, the model orchestrates the entire query resolution process. The Language Model constantly evaluates different paths and tools at its disposal. When it generates a database query, it first validates it through a specialized parser - if the validation reveals any issues, the model uses this feedback to refine its approach and generate a better query. Similarly, when data is retrieved, the model can leverage mathematical libraries to perform additional calculations, ensuring comprehensive analysis.
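A validate-and-refine loop of this kind might look like the following sketch, which uses the open-source sqlglot parser as a stand-in for the specialized parser; the `llm.generate_sql` interface is assumed for illustration.

```python
# Sketch of a validate-and-refine loop around query generation.
import sqlglot
from sqlglot.errors import ParseError

def generate_valid_sql(question: str, schema: str, llm, max_attempts: int = 3) -> str:
    feedback = None
    for _ in range(max_attempts):
        sql = llm.generate_sql(question, schema=schema, feedback=feedback)
        try:
            sqlglot.parse_one(sql)   # syntactic validation
            return sql               # the parser accepted the query
        except ParseError as err:
            # Hand the parser's complaint back to the model so it can refine.
            feedback = f"The previous query failed to parse: {err}"
    raise ValueError("Could not produce a syntactically valid query.")
```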
Throughout this process, the model stays grounded by accessing its knowledge base. This might involve retrieving information about private markets concepts, checking similar queries it has processed before, or understanding the structure of the available data.
Importantly, the model isn't confined to a linear process. At any point, if it determines that clarification would be valuable, it can pause to ask the user for additional input. This iterative approach, combining tool usage with user interaction, helps ensure accurate and meaningful results.
Finally, it makes informed decisions about the most effective way to visualize the results for maximum clarity.
Beyond decision-making, the model serves as a bidirectional translator between natural language and structured query languages. When a user asks a question in natural language, the Language Model first retrieves relevant context from multiple sources: database schema definitions to understand data structure, private markets knowledge to grasp industry concepts, and training materials to learn from similar past queries. Armed with this context, it can translate the natural language question into a SQL query.
The translation process works in both directions. When receiving results from the database, the Language Model uses its understanding of private markets concepts and user expectations to translate technical SQL outputs into clear, natural language responses that make sense to the user.
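Put together, the two translation directions might look roughly like this sketch; the retrieval helpers, prompt layout, and `llm` interface are illustrative assumptions rather than the production implementation.

```python
# End-to-end sketch of both translation directions: natural language -> SQL,
# and SQL results -> natural language.

def answer_question(question: str, llm, retriever, db):
    # Forward direction: build a grounded prompt from retrieved context.
    schema_docs   = retriever.search(question, collection="schema")
    domain_docs   = retriever.search(question, collection="pm_knowledge")
    example_pairs = retriever.search(question, collection="training_examples")
    prompt = (
        "Schema:\n" + "\n".join(schema_docs) + "\n\n"
        "Private markets context:\n" + "\n".join(domain_docs) + "\n\n"
        "Similar past queries:\n" + "\n".join(example_pairs) + "\n\n"
        f"Question: {question}\nWrite a single SQL query."
    )
    sql = llm.complete(prompt)

    # Execute against the customer's portfolio database.
    rows = db.run(sql)

    # Reverse direction: phrase the retrieved rows for the user, never invent figures.
    return llm.complete(
        "Explain the following query result to a Limited Partner in plain "
        f"language, citing the figures exactly as returned.\nSQL: {sql}\nRows: {rows}"
    )
```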
Moreover, the model continuously learns and updates the knowledge base through its interactions. When users provide new information or context through their queries, the Language Model processes this information and updates the relevant parts of the knowledge base - whether that's new training examples, additional private markets knowledge, or updated understanding of data structures.
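As a rough sketch, folding a successful interaction back into the vector store could look like the following; the collection name, payload fields, and `embed` helper are assumptions for illustration.

```python
# Sketch of recording a validated question/query pair as a new training example.
import uuid
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

def record_training_example(client: QdrantClient, embed, question: str, sql: str, customer_id: str):
    client.upsert(
        collection_name="training_examples",
        points=[
            PointStruct(
                id=str(uuid.uuid4()),
                vector=embed(question),   # placeholder embedding call
                payload={"question": question, "sql": sql, "customer_id": customer_id},
            )
        ],
    )
```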
Knowledge Base and Vector Store
Our knowledge management infrastructure is built around a vector store powered by Qdrant, whose efficient approximate nearest-neighbor indexing allows us to maintain fast query performance even as the knowledge base grows. Furthermore, its built-in support for filtered search helps us maintain strict data segregation between customers while allowing for flexible querying patterns.
The store maintains three distinct types of information that form the collective intelligence of our system. First, it stores detailed database schema definitions (DDL) that provide a precise map of how portfolio data is structured and interconnected. Second, it contains carefully curated training materials that capture common query patterns and analytical approaches specific to private markets portfolios. Third, it maintains a comprehensive body of general private markets knowledge, enabling the system to understand industry-specific concepts, terminology, and relationships.
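The sketch below shows what a segregated lookup against such a store can look like with Qdrant's filtered search; the collection names, payload fields, and `embed` helper are illustrative assumptions.

```python
# Sketch of a segregated lookup: the payload filter keeps each search inside a
# single customer's data.
from qdrant_client import QdrantClient
from qdrant_client.models import Filter, FieldCondition, MatchValue

def search_knowledge(client: QdrantClient, embed, text: str, collection: str,
                     customer_id: str, k: int = 5):
    return client.search(
        collection_name=collection,   # e.g. "schema", "training_examples", "pm_knowledge"
        query_vector=embed(text),     # placeholder embedding call
        query_filter=Filter(
            must=[FieldCondition(key="customer_id", match=MatchValue(value=customer_id))]
        ),
        limit=k,
    )
```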
Advanced RAG Implementation
Our Retrieval-Augmented Generation (RAG) system plays a crucial role in managing the context provided to the language model at each step of the analysis process. Even with large context windows, language models can struggle to effectively utilize all available information simultaneously. Our RAG implementation addresses this challenge by intelligently selecting only the most relevant pieces of information needed for each specific step of the workflow, ensuring the model can focus its attention on truly pertinent context.
The foundation of our RAG system uses a standard embedding model for initial information retrieval. While embedding fine-tuning is often proposed as a solution for domain-specific applications, our research indicates that the marginal benefits are limited unless massive amounts of domain-specific training data are available. Instead, we've focused our efforts where they make the most impact: the reranking phase.
Our proprietary reranker has been specifically fine-tuned using a carefully curated dataset of human-validated questions and queries from the private markets domain. This training data comes from two sources: an internally developed dataset of verified question-answer pairs, and an evolving collection of successful user interactions gathered through our feedback system. This dual-source approach has allowed us to create a reranker that significantly improves the relevance and ordering of context provided to the language model. By effectively filtering and prioritizing information, our reranker ensures that only the most pertinent context is used at each step, leading to more accurate and reliable responses. Here is an example of how the reranker would work:
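The sketch below captures the idea using an off-the-shelf public cross-encoder in place of our proprietary fine-tuned reranker; the candidate passages are illustrative.

```python
# Minimal retrieve-then-rerank sketch with a generic public cross-encoder.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def rerank(question: str, candidates: list, top_k: int = 5) -> list:
    # Score each (question, candidate) pair and keep only the best matches.
    scores = reranker.predict([(question, doc) for doc in candidates])
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in ranked[:top_k]]

# Illustrative candidates: the DPI definition should be ranked above the
# loosely related passages for a DPI question.
context = rerank(
    "What is the DPI of Fund III as of the latest quarter?",
    ["Definition of DPI: cumulative distributions divided by paid-in capital.",
     "Fund III closed in 2018 with total commitments of $400m.",
     "Definition of NAV and its role in TVPI."],
)
```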
Tools
As described above, these span database querying, mathematical operations, and visualization, and the agent can invoke any of them at each step of the analysis.
Customer-Specific Data Architecture
We've implemented a rigorous data isolation architecture that ensures each customer's portfolio data used by Tamarice exists in a completely separate environment from all others. This separation serves a dual purpose: it maintains the highest standards of data security and privacy while enabling us to optimize the system's performance for each customer's specific needs.
This isolated architecture allows Tamarice to develop a deep understanding of each customer's unique portfolio structure and query patterns. As customers interact with the system, it learns from these interactions, continuously improving its ability to handle customer-specific terminology, data structures, and analytical preferences, all while maintaining strict data segregation.
A key strength of our system is its ability to maintain real-time synchronization with portfolio changes. Whether updates come through customer interactions or scheduled data deployments, Tamarice automatically detects and incorporates these changes into its knowledge base. Just as importantly, it distinguishes between simple updates to portfolio data points, which require only basic synchronization, and structural changes to the data architecture, which trigger a more comprehensive update of its knowledge base.
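One simple way to make this distinction, shown here purely as a sketch, is to compare a fingerprint of the current table definitions (DDL) against the one stored with the knowledge base; the function names and return values are illustrative.

```python
# Sketch: classify an incoming update as a data refresh or a structural change
# by hashing the current DDL and comparing it with the stored fingerprint.
import hashlib

def classify_update(current_ddl: str, stored_ddl_hash: str) -> str:
    current_hash = hashlib.sha256(current_ddl.encode()).hexdigest()
    if current_hash != stored_ddl_hash:
        return "structural_change"   # re-index schema docs and related context
    return "data_refresh"            # lightweight synchronization is enough
```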
We continue to develop new capabilities that will further increase the value delivered to our customers. Our vision for Tamarice extends beyond portfolio intelligence: we aim to transform it into a comprehensive personal portfolio assistant that not only helps users analyze their investments but actively works alongside them to enhance their portfolio management experience.
Personalized Memory and Learning
We are developing capabilities that will allow Tamarice to maintain a long-term memory of each user's interactions. This will enable the system to remember frequently asked questions and automatically update the answers as new data becomes available. For instance, if you regularly ask about the performance of certain funds or sectors, Tamarice will proactively maintain updated analyses of these areas of interest, providing consistent monitoring of the metrics that matter most to you.
Dynamic Platform Customization
A key upcoming feature will integrate Tamarice with our platform's configuration system. This will allow Tamarice to dynamically adjust the user interface based on observed usage patterns and explicitly stated preferences. The system will be able to reorganize dashboards, adjust displayed metrics, and customize visualization defaults to align with each user's specific interests and workflow patterns.
Enhanced Document Intelligence
We are extending Tamarice's capabilities to include comprehensive analysis of portfolio documents. This will enable the system not only to answer questions about numerical data but also to analyze investment memoranda, quarterly reports, and other portfolio documents. By combining this with its understanding of industry trends from its knowledge base, Tamarice will be able to provide comparative analyses and identify unique patterns or anomalies in your portfolio's performance and strategy.
Predictive Analytics Integration
Perhaps most excitingly, we are working to integrate Tamarice with our proprietary forecasting engine. This will enable the system to answer forward-looking questions about portfolio performance, capital calls, and distributions. By combining historical patterns with our sophisticated forecasting models, Tamarice will be able to provide nuanced predictions while clearly communicating the assumptions and confidence levels behind these forecasts.