Fabrice AI: The Technical Journey

As I mentioned in the previous post, developing Fabrice AI proved way more complex than expected, forcing me to explore many different approaches.

The Initial Approach: Llama Index – Vector Search

My first foray into enhancing Fabrice AI’s retrieval abilities involved using Llama Index for vector search. The concept was simple: take the content from my blog, convert it into Langchain documents, and then transform these into Llama documents. These Llama documents would then be stored in a vector index, enabling me to query the index for relevant information.
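
For readers who want to picture it, here is roughly what that first pipeline looked like. This is a minimal sketch, assuming the Llama Index API of the time and its Langchain interop; the directory path is a placeholder, not my actual setup.

```python
# Minimal sketch: blog content -> Langchain documents -> Llama documents -> vector index.
from langchain.document_loaders import DirectoryLoader
from llama_index import Document, VectorStoreIndex

# Load the blog content as Langchain documents (placeholder path).
langchain_docs = DirectoryLoader("blog_posts/").load()

# Convert them into Llama documents.
llama_docs = [Document.from_langchain_format(doc) for doc in langchain_docs]

# Embed them into a vector index and query it.
index = VectorStoreIndex.from_documents(llama_docs)
query_engine = index.as_query_engine()
response = query_engine.query(
    "What are the biggest mistakes marketplace founders make?"
)
print(response)
```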

However, as I began to test the system, it became apparent that this approach was not yielding the results I had hoped for. Specifically, when I queried the system with context-heavy questions like “What are the biggest mistakes marketplace founders make?”, the AI failed to provide meaningful answers. Instead of retrieving the nuanced content I knew was embedded in the data, it returned irrelevant or incomplete responses.

This initial failure led me to reconsider my approach. I realized that simply storing content in a vector index was not enough; the retrieval mechanism needed to understand the context and nuances of the questions being asked. This realization was the first of many lessons that would shape the evolution of Fabrice AI.

Storing Knowledge: MongoDB Document Storage and Retrieval

With the limitations of the Llama Index approach in mind, I next explored storing the Llama documents in MongoDB. MongoDB’s flexible schema and document-oriented structure seemed like a promising solution for managing the diverse types of content I had accumulated over the years.

The plan was to create a more dynamic and responsive search experience. However, this approach quickly ran into issues. The search functionality, which I had anticipated to be more robust, failed to perform as expected. Queries that should have returned relevant documents instead yielded no results or irrelevant content.
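
For reference, the experiment had roughly this shape. It is a minimal sketch, assuming Llama Index’s MongoDocumentStore; the connection string, database name, and document body are placeholders.

```python
# Sketch: persist Llama documents in a MongoDB-backed document store.
from llama_index import Document, StorageContext
from llama_index.node_parser import SimpleNodeParser
from llama_index.storage.docstore import MongoDocumentStore

llama_docs = [Document(text="...blog post body...")]  # corpus from the first sketch

# Parse documents into nodes and persist them in MongoDB.
nodes = SimpleNodeParser.from_defaults().get_nodes_from_documents(llama_docs)
docstore = MongoDocumentStore.from_uri(
    uri="mongodb://localhost:27017", db_name="fabrice_ai"
)
docstore.add_documents(nodes)

# Any index built on this storage context reads its nodes back from MongoDB.
storage_context = StorageContext.from_defaults(docstore=docstore)
```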

This setback was frustrating, but it also underscored a critical lesson: the storage method is just as important as the retrieval strategy. I began to consider other options, such as utilizing MongoDB Atlas for vector searches, which could potentially provide the precision and scalability I needed. However, before committing to this alternative, I wanted to explore other approaches to determine if there might be a more effective solution.

Metadata Retriever and Vector Store: Seeking Specificity

One of the next avenues I explored was the use of a metadata retriever combined with a vector store. The idea behind this approach was to categorize the vast array of information within Fabrice AI and then retrieve answers based on these categories. By structuring the data with metadata, I hoped to improve the AI’s ability to provide specific, targeted answers.
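
Concretely, the setup looked something like the sketch below, assuming Llama Index’s metadata filters; the categories and document bodies are placeholders for my own taxonomy. Note how a question like “Is the author optimistic?” does not map cleanly onto any single category, which foreshadows the problem.

```python
# Sketch: tag documents with metadata, then filter retrieval by category.
from llama_index import Document, VectorStoreIndex
from llama_index.vector_stores.types import ExactMatchFilter, MetadataFilters

docs = [
    Document(
        text="...post on marketplace dynamics...",
        metadata={"category": "marketplaces", "title": "Founder mistakes"},
    ),
    Document(
        text="...post on life philosophy...",
        metadata={"category": "life", "title": "On happiness"},
    ),
]
index = VectorStoreIndex.from_documents(docs)

# Restrict retrieval to one category before the similarity search runs.
retriever = index.as_retriever(
    filters=MetadataFilters(
        filters=[ExactMatchFilter(key="category", value="life")]
    )
)
nodes = retriever.retrieve("Is the author optimistic?")
```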

Yet this method also had its limitations. While it seemed promising on the surface, the AI struggled to deliver accurate responses to all types of queries. For example, when I asked, “Is the author optimistic?”, the system failed to interpret the question in the context of the relevant content. Instead of providing an insightful analysis based on the metadata, it returned either vague answers or none at all.

This approach taught me a valuable lesson about the importance of context in AI. It is not enough to simply categorize information; the AI must also understand how these categories interact and overlap to form a cohesive understanding of the content. Without this depth of understanding, even the most sophisticated retrieval methods can fall short.

Structuring Knowledge: The SummaryTreeIndex

As I continued to refine Fabrice AI, I experimented with creating a SummaryTreeIndex. This approach aimed to summarize all the documents into a tree format, allowing the AI to navigate through these summaries and retrieve relevant information based on the structure of the content.
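
In code, the experiment looked roughly like this. I am using Llama Index’s TreeIndex here as the closest public equivalent of the summary tree described, so treat it as a sketch rather than my exact implementation.

```python
# Sketch: build a tree of summaries over the corpus and query it.
from llama_index import Document, TreeIndex

llama_docs = [Document(text="...blog post body...")]  # placeholder corpus

# TreeIndex recursively summarizes documents into parent summary nodes.
tree_index = TreeIndex.from_documents(llama_docs)

# Queries descend the tree from root summaries toward relevant leaves.
query_engine = tree_index.as_query_engine()
response = query_engine.query("How to make important decisions in life?")
print(response)
```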

The idea was that by summarizing the documents, the AI could quickly identify key points and respond to queries with concise, accurate information. However, this method also faced significant challenges. The AI struggled to provide meaningful answers to complex queries, such as “How to make important decisions in life?” Instead of drawing from the rich, nuanced content stored within the summaries, the AI’s responses were often shallow or incomplete.

This experience underscored the difficulty of balancing breadth and depth in AI. While summaries can provide a high-level overview, they often lack the detailed context needed to answer more complex questions. I realized that any effective solution would need to integrate both detailed content and high-level summaries, allowing the AI to draw on both as needed.

This is why, in the version of Fabrice AI that is currently live, I have the AI first give a summary of the answer before going into more detail.

Expanding Horizons: Knowledge Graph Index

Recognizing the limitations of the previous methods, I turned to a more sophisticated approach: the Knowledge Graph Index. This approach involved constructing a knowledge graph from unstructured text, enabling the AI to engage in entity-based querying. The goal was to create a more dynamic and interconnected understanding of the content, allowing Fabrice AI to answer complex, context-heavy questions more effectively.
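
A minimal sketch of the setup, assuming Llama Index’s KnowledgeGraphIndex, which uses an LLM to extract subject-predicate-object triplets from unstructured text; the corpus here is a placeholder.

```python
# Sketch: extract entity triplets into a knowledge graph and query it.
from llama_index import Document, KnowledgeGraphIndex

llama_docs = [Document(text="...blog post body...")]  # placeholder corpus

kg_index = KnowledgeGraphIndex.from_documents(
    llama_docs,
    max_triplets_per_chunk=5,  # cap triplet extraction per text chunk
)

# Entity-based querying walks the graph rather than a flat embedding space.
query_engine = kg_index.as_query_engine()
response = query_engine.query("What are fair Seed & Series A valuations?")
print(response)
```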

Despite its promise, the Knowledge Graph Index also faced significant hurdles. The AI struggled to produce accurate results, particularly for queries that required a deep understanding of the context. For example, when asked, “What are fair Seed & Series A valuations?” the AI again failed to provide a relevant answer, highlighting the difficulty of integrating unstructured text into a coherent knowledge graph.

This approach, while ultimately unsuccessful, provided important insights into the challenges of using knowledge graphs in AI. The complexity of the data and the need for precise context meant that even a well-constructed knowledge graph could struggle to deliver the desired results. Another drawback of the Knowledge Graph Index was its speed: retrieving related documents took far longer than with a vector store index.

Re-evaluating the Data: Gemini

After several setbacks, I decided to take a different approach by leveraging Google’s AI, Gemini. The idea was to create datasets from JSON/CSV files and then train a custom LLM using this data. I hoped that by using structured data and a robust training process, I could overcome some of the challenges that had plagued previous attempts.
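
To give a flavor of the data-preparation step, here is an illustrative converter from a Q&A CSV into JSONL training records. The target schema shown is an assumption for illustration; as the next paragraph explains, getting this format exactly right is where the process broke down.

```python
# Illustrative only: convert Q&A rows from a CSV into JSONL training records.
# The {"input_text", "output_text"} schema below is an assumption, not
# necessarily what the tuning pipeline expects; matching the required schema
# exactly is the step where my own run went wrong.
import csv
import json

with open("qa_pairs.csv", newline="") as src, open("train.jsonl", "w") as dst:
    for row in csv.DictReader(src):
        record = {"input_text": row["question"], "output_text": row["answer"]}
        dst.write(json.dumps(record) + "\n")
```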

However, this approach also encountered difficulties. The training process was halted due to incorrect data formatting, which prevented the model from being trained effectively. This setback underscored the importance of data integrity in AI training. Without properly formatted and structured data, even the most advanced models can fail to perform as expected.

This experience led me to consider the potential of using BigQuery to store JSON data, providing a more scalable and reliable platform for managing the large datasets needed to train Fabrice AI effectively.
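
I have not built this out yet, but a minimal sketch with the google-cloud-bigquery client might look like this; the project, dataset, and table names are placeholders.

```python
# Sketch: load JSON rows into BigQuery, letting it autodetect the schema.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.fabrice_ai.training_data"  # placeholder identifiers

rows = [
    {"question": "What is the key to happiness?", "answer": "...post excerpt..."},
]

job = client.load_table_from_json(
    rows,
    table_id,
    job_config=bigquery.LoadJobConfig(autodetect=True),
)
job.result()  # block until the load job completes
```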

Combining Strengths: Langchain Documents with Pinecone

Despite the challenges faced so far, I was determined to find a solution that would allow Fabrice AI to effectively store and retrieve knowledge. This determination led me to experiment with Langchain documents and Pinecone. The approach involved creating a Pinecone vector store using Langchain documents and OpenAI embeddings, then retrieving the top similar documents based on the query.
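
A sketch of that pipeline, assuming the classic Langchain Pinecone wrapper and the v2 Pinecone client; the credentials, index name, and document body are placeholders.

```python
# Sketch: embed Langchain documents with OpenAI and store them in Pinecone.
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores import Pinecone
import pinecone

pinecone.init(api_key="...", environment="...")  # placeholder credentials

langchain_docs = [
    Document(page_content="...post body...", metadata={"title": "The key to happiness"})
]

vector_store = Pinecone.from_documents(
    langchain_docs,
    OpenAIEmbeddings(),
    index_name="fabrice-ai",
)

# Retrieve the top-k most similar documents for a query.
top_docs = vector_store.similarity_search("What is the key to happiness?", k=4)
```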

This method showed promise, particularly when the query included the title of the document. For example, when asked, “What is the key to happiness?” the AI was able to retrieve and summarize the relevant content accurately. However, there were still limitations, particularly when the query lacked specific keywords or titles.

This approach demonstrated the potential of combining different technologies to enhance AI performance. By integrating Langchain documents with Pinecone’s vector store, I was able to improve the relevance and accuracy of the AI’s responses, albeit with some limitations.

Achieving Consistency: GPT Builder OpenAI

After exploring various methods and technologies, I turned to OpenAI’s GPT Builder to consolidate and refine the knowledge stored within Fabrice AI. By uploading all the content into a GPT knowledge base, I aimed to create a more consistent and reliable platform for retrieving and interacting with my knowledge.

This approach proved to be one of the most successful, with the AI able to provide better results across a range of queries. The key to this success was the integration of all the knowledge into a single, cohesive system, allowing the AI to draw on the full breadth of content when answering questions.

As mentioned in my previous post, I could not get it to run on my website, and it was only available to paid subscribers of ChatGPT, which I felt was too limiting. Also, while it was better, I still did not love the quality of the answers and was not comfortable releasing it to the public.

Final Refinement: GPT Assistants Using Model 4o

The final piece of the puzzle in developing Fabrice AI came with the introduction of GPT Assistants using Model 4o. This approach represented the culmination of everything I had learned throughout the project. By utilizing a vector database and refining the prompts, I aimed to achieve the highest possible level of accuracy and contextual understanding in the AI’s responses.

This method involved uploading all the knowledge I had accumulated into a vector database, which was then used as the foundation for the AI’s interactions. The vector database allowed the AI to perform more sophisticated searches, retrieving information based on the semantic meaning of queries rather than relying solely on keyword matching. This marked a significant advancement over previous approaches, enabling the AI to better understand and respond to complex, nuanced questions.
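
For the technically inclined, the setup looked roughly like the sketch below, based on the beta Assistants API with its file_search tool backed by a managed vector store; the file name, instructions, and question are placeholders, not my production code.

```python
# Sketch: GPT Assistant on gpt-4o with file_search over a managed vector store.
from openai import OpenAI

client = OpenAI()

# Upload the knowledge base into a vector store the assistant can search.
vector_store = client.beta.vector_stores.create(name="fabrice-ai-knowledge")
with open("knowledge.md", "rb") as f:  # placeholder knowledge file
    client.beta.vector_stores.file_batches.upload_and_poll(
        vector_store_id=vector_store.id, files=[f]
    )

assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions="Answer as Fabrice, drawing only on the attached knowledge.",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)

# Each visitor question runs on its own thread against the assistant.
thread = client.beta.threads.create(
    messages=[{"role": "user", "content": "How to make important decisions in life?"}]
)
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)
```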

One of the key innovations of this approach was the careful refinement of prompts. By meticulously crafting and testing different prompts, I was able to guide the AI towards providing more accurate and relevant answers. This involved not only tweaking the wording of the prompts but also experimenting with different ways of structuring the queries to elicit the best possible responses.
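
As an illustration (not my actual prompt), a refinement pass might look like this, including the summary-first behavior mentioned earlier; the assistant id is a placeholder for the one created in the previous sketch.

```python
# Illustrative only: iterating on the assistant's instructions.
from openai import OpenAI

client = OpenAI()

client.beta.assistants.update(
    "asst_placeholder",  # hypothetical id from the sketch above
    instructions=(
        "You are Fabrice AI. Answer in the first person, in Fabrice's voice. "
        "Open with a two-sentence summary of the answer, then expand with "
        "details drawn from the retrieved documents. If the knowledge base "
        "does not cover the question, say so rather than speculating."
    ),
)
```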

The results were impressive. The AI was now able to handle a wide range of queries with high accuracy, even when the questions were open-ended or required a deep understanding of context. For example, when asked, “How to make the most important decisions in your life?”, the AI provided a comprehensive and insightful answer, drawing on a variety of sources and perspectives to deliver a well-rounded response.

This success was the culmination of hundreds of hours of work and countless experiments. It demonstrated that, with the right combination of technology and refinement, it was possible to create an AI that could not only store and retrieve information effectively but also engage with it in a meaningful way. The development of GPT Assistants using Model 4o marked the point at which Fabrice AI truly came into its own, achieving the level of sophistication and accuracy that I had envisioned from the start. The GPT Assistants API was then integrated into my blog to allow end users to interact with Fabrice AI in the way you see it on the blog right now.

Reflecting on the Journey

The process of developing Fabrice AI highlighted the complexities of working with AI, particularly when it comes to understanding and contextualizing information. It taught me that there are no shortcuts in AI development—every step, every iteration, and every experiment is a necessary part of the journey towards creating something truly effective.

Looking ahead, I’m excited to continue refining and expanding Fabrice AI. As mentioned in the last post, I will review the questions asked to complete the knowledge base where there are gaps. I am also hoping to eventually release an interactive version that looks and sounds like me that you can talk to.
