12/04/2025 - Articles
An AI framework for BCS: Requirements and architecture
Over the past few months, Projektron has been working intensively on developing a powerful and future-proof AI framework for BCS. After presenting our initial experiments with language models, RAG concepts, and prompt engineering in previous articles, we are now focusing on how these findings can be used to create a robust, productive system architecture.
The goal is not to develop individual isolated AI functions, but rather a modular framework in which various applications can be operated, combined, and flexibly developed. This article describes the key requirements that Projektron places on such a framework, the basic architecture behind it, and the technical decisions that lead to its implementation.
General requirements for our AI solution
At Projektron, we didn't want to create isolated solutions for our BCS software, but rather a flexible framework that works as generically as possible. The AI assistant is built from individual applications, each of which determines which task is to be solved, how, and with which resources. It should also be possible to chain applications together into AI workflows.
As a result of the preliminary research and testing completed in the summer of 2024, we will focus on RAG and prompt engineering techniques. We will rely only on the models' language comprehension and avoid drawing on their training knowledge; this reduces the risk of hallucinations.
The framework should be designed in such a way that the individual components, such as the language model, embedding model, or vector database, can be easily replaced. We want to remain independent of specific products due to rapid and unpredictable developments. It is important that the framework can be operated completely locally in order to meet all data security and data protection requirements.
In our view, high-quality responses are crucial for success. The responses should be precise and accurate. If a question cannot be answered based on the context material, the AI should report this and not try to make something up (hallucinations). The traceability of responses is an important quality criterion. Wherever possible, the source of the information should be linked in the response so that it can be verified with a single click.
Since the development of AI applications relies heavily on trial and error, it is important to facilitate learning loops. One focus is comprehensive, easily readable logging of AI actions: this is essential for investigating unexpected results and pinpointing what needs to be improved. User feedback is also recorded and evaluated.
The framework is scheduled to be delivered to customers as part of BCS at the end of 2026. Customers with local BCS installations will receive instructions on how they can also use this service.
The plan is for Projektron to deliver a basic set of ready-made applications in the course of 2026. In a second step, customers will also be able to create their own “applications.” The technology should therefore also work for heterogeneous source data.
Architecture
The following graphic shows the functional view of the framework. The first application implemented is the software help. Interfaces exist to the data sources, the language models, the result display in BCS, and, via a user interface (UI), to the framework administrator. The administrator defines the applications via the UI. At the core of each application is the system prompt, which determines what the application should do. In addition, the administrator selects the language model (internal/external) and can set parameters. For a RAG application such as the help function, it is possible to control how the vector index is generated from the data set, for example the type and size of the text splits, how many hits are returned, and what their minimum rating must be.
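To make this concrete, the following sketch shows how such an application definition might look. All names and values here (the `AppConfig` structure, the model name, the RAG parameters) are our own illustrative assumptions, not the framework's actual configuration format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AppConfig:
    """Hypothetical sketch of an application definition in the framework."""
    name: str
    system_prompt: str            # determines what the application should do
    model: str                    # language model (internal/external)
    rag: Optional[dict] = None    # RAG settings, only for retrieval applications

help_app = AppConfig(
    name="help",
    system_prompt=("Answer questions using only the supplied documentation "
                   "excerpts. If they do not contain the answer, say so."),
    model="local-llm",            # placeholder name; models are configurable
    rag={
        "chunk_size": 500,        # type/size of the text splits
        "top_k": 4,               # number of hits returned
        "min_score": 0.5,         # minimum rating a hit must reach
    },
)
```

The system prompt carries the task definition, while the RAG block only exists for retrieval applications, mirroring the separation the article describes.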
The framework is accessed via an interface from BCS. When a user enters a question in the AI help window, a request is sent to the framework containing the relevant application as a parameter. Depending on the task to be performed (help, summarize ticket), various applications can be requested via the interface.
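A request of this kind could be sketched as follows; the JSON field names and application identifiers are illustrative assumptions, not the actual BCS interface:

```python
import json

def build_request(application: str, user_input: str) -> str:
    """Build the payload BCS could send to the framework. The application
    name tells the framework which task to perform."""
    return json.dumps({"application": application, "input": user_input})

# Depending on the task, a different application is requested:
help_req = build_request("help", "How do I book time on a task?")
ticket_req = build_request("summarize_ticket", "Ticket text goes here")
```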
The framework can contain a large number of applications, as shown in the following graphic. Applications can be chained together. For example, a local model can be used to generate a data set consisting of anonymized summaries from closed tickets. This data set is then available to answer new incoming tickets.
In a later version, customers will be able to define their own individual applications that do not necessarily have anything to do with BCS. This is illustrated in the image on the far right, “Contracts.” This provides support for employees who frequently have to negotiate contracts or explain them, as is the case in a software company where a license agreement is concluded with each new customer. Most of these questions have already been answered in the past. An AI application can draw on this experience to speed up the processing of new cases. The data set consists of documents, each containing a contract clause, the customer question previously asked about it, and the decision made. This can be done simply via a collection of .txt files. These are processed like the help documents, and a vector index is generated that can then be queried.
In the same way, a customer can make company-specific process instructions, security guidelines, and similar data collections queryable by AI.
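The ingestion path for such a collection of .txt files can be sketched like this. The splitter and the `embed` callable are stand-ins (the real framework lets the administrator tune the type and size of the splits, and the embedding model is interchangeable):

```python
from pathlib import Path

def split_text(text: str, chunk_size: int = 500) -> list:
    """Naive splitter: cut a document into fixed-size chunks. The actual
    framework makes type and size of the splits configurable."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def build_index(folder: str, embed) -> list:
    """Sketch: read every .txt file (e.g. one contract clause, the customer
    question about it, and the decision made), split it, and store
    (chunk, vector) pairs. `embed` stands in for the embedding model,
    e.g. BAAI/bge-m3."""
    index = []
    for path in sorted(Path(folder).glob("*.txt")):
        for chunk in split_text(path.read_text(encoding="utf-8")):
            index.append((chunk, embed(chunk)))
    return index
```

At query time, the question is embedded with the same model and the closest chunks in the index are returned as context.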
The technical components are shown in the following graphic. In principle, all of them are interchangeable, so that we can react to the rapidly changing and hard-to-predict developments in the field. Server components such as Ollama and LangServe are more firmly established, have near-standard status, and are widely used in the industry (as of December 2024). Components such as the text embedding model were selected through testing; more on that below. The language models can be swapped out with a simple configuration entry.
To select a local embedding model, we conducted a comparison test with various products, using OpenAI's hosted RAG solution as a benchmark (it cannot be operated locally). The test was based on a dataset of approximately 100 texts and 10 questions, each specific to one text, so that it was clear which text should be the top hit for each question. The texts were imported into OpenAI's RAG solution, the questions were asked, and the top 4 hits (TopK = 4) were recorded. OpenAI answered the questions well, with the expected documents coming in first place. We then compared 6 local embedding models, as well as text-embedding-ada-002, which is available individually but only online. For each model, the 10 questions were asked and the 4 hits were compared with the benchmark. A deviation at position 1 was penalized with 7 points; deviations at positions 2, 3, and 4 with 4, 2, and 1 point, respectively. We selected the local model with the lowest score: BAAI/bge-m3 from the Beijing Academy of Artificial Intelligence.
The results are shown in detail in the following table: the upper part shows the points for each question, and the lower part shows which texts were selected as deviating. We plan to repeat the test at intervals; if there are significant improvements, we could change the embedding model. However, this would require re-indexing the entire document collection, which would involve a certain amount of effort.
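One possible reading of this scoring scheme, a position-wise comparison of each model's hit list against the benchmark's (the exact comparison rule is our assumption), can be sketched as:

```python
# Penalty points for a deviation at positions 1-4 of the result list.
PENALTIES = [7, 4, 2, 1]

def score_question(benchmark_hits: list, model_hits: list) -> int:
    """Points for one question: a penalty is added whenever the model's hit
    at a position differs from the benchmark's hit at the same position."""
    return sum(
        pen for pen, bench, hit in zip(PENALTIES, benchmark_hits, model_hits)
        if bench != hit
    )

def score_model(benchmark: list, model: list) -> int:
    """Total over all questions; the local model with the lowest score wins."""
    return sum(score_question(b, m) for b, m in zip(benchmark, model))
```

A perfect match with the benchmark scores 0; a model that misses the top hit on every question accumulates at least 7 points per question.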
This concludes the description of the technical considerations and aspects of the framework. Through numerous small tests and feedback loops, we have ensured that the basic setup works as we intended.
Outlook
RAG and chat
Many users are accustomed to chat functions thanks to ChatGPT: if the answer is unclear or unsatisfactory, they simply ask again. This is probably the most promising way to achieve further improvements in results, including in the area of help. The challenge with RAG is case differentiation: when the user asks a follow-up question, the AI has to assess whether it belongs to the previous topic and, if so, whether the information already retrieved is sufficient or a new search is needed. If it is a new topic, the previous context must be ignored so that it does not cause confusion. We are therefore working on getting the case differentiation and the follow-up process right.
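The case differentiation described above can be sketched as follows. All three callables (`classify`, `retrieve`, `answer`) are placeholders for LLM and retrieval steps; the verdict labels are our own illustrative assumptions:

```python
def answer_follow_up(question, history, context, classify, retrieve, answer):
    """Sketch of the follow-up logic for chat-style RAG.

    classify(question, history) decides between three cases:
      "new_topic"    - previous context would only cause confusion
      "needs_search" - same topic, but the retrieved information is insufficient
      "same_topic"   - the existing context is enough to answer
    """
    verdict = classify(question, history)
    if verdict == "new_topic":
        history, context = [], retrieve(question)   # discard old context
    elif verdict == "needs_search":
        context = context + " " + retrieve(question)  # search again, keep topic
    return answer(question, history + [question], context)
```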
Feedback
Users can already provide feedback in the help window on whether they found the answer helpful or not. We see potential here for further processes, for example, in the case of negative feedback, adding information to the FAQ that provides the information sought.
About the authors

Maik Dorl is one of the three founders and remains one of the managing directors of Projektron GmbH. Since its founding in 2001, he has shaped the strategic direction of the company and is now responsible for sales, customer service, and product management. As product manager, he is the driving force behind the integration of innovative AI applications into the ERP and project management software BCS.

Dr. Marten Huisinga heads teknow GmbH, a platform for laser sheet metal cutting. In the future, AI methods will simplify the offering for amateur customers. Huisinga was one of the three founders and, until 2015, co-managing director of Projektron GmbH, for which he now works as a consultant. As DPO, he is responsible for implementing the first AI applications in order to assess the benefits of AI for BCS and Projektron GmbH.