RFC 9745 and its impact on API governance
June 10, 2025


The Internet Engineering Task Force (IETF), the open international organization that develops and promotes technical standards for the Internet, such as TCP/IP, HTTP, and DNS, has just released RFC 9745, which defines a standardized way of communicating resource deprecation over HTTP, something especially relevant for APIs built on this protocol. In short, servers can now inform their clients about the deprecation status of a resource using the Deprecation response header, whose value is a date, in the past or future, indicating that the resource has been or will be deprecated. Additionally, a Link header can point to documentation, and the Sunset header can provide the date on which the resource will become unavailable. In this article, we evaluate the feasibility of applying this standard in the real world. Following best practices in API development, we will start from definition files that follow the OpenAPI Specification and end at the KrakenD API gateway.

Impacts

As mentioned, this new standard is especially important for web APIs, that is, APIs that adhere to the HTTP protocol, such as the famous REST. In this context, the RFC provides a means of bringing deprecation policies, once restricted to documentation or design time, to runtime. The novelty therefore has the potential to significantly mitigate integration failures, allowing developers to make the necessary adaptations with a comfortable lead time. It is also worth remembering that we are entering the era of AI (with its agents, MCP servers, etc.), which only increases the impact of this new standard, since such systems can learn and adapt on their own when faced with deprecation signals. In the context of governance, the RFC also makes it possible for API gateway vendors (such as Kong, Tyk, KrakenD, Traefik, APISix, etc.)
to consider the new standard during automated API deployment processes, especially when we think about APIOps based on the OpenAPI Specification. Let's see: the OpenAPI Specification already provides for indicating the deprecation of operations through the deprecated field. With this new RFC, it is natural to link the two, that is, to make the deprecation indication present in the definition files match the configuration of the gateways, which, once running, inject the new response headers into the appropriate operations. This improvement would take governance to the next level of quality!

Proving the concept

We will use a definition file that adheres to the OpenAPI Specification (OAS) to describe our API, build a parser in Go using libopenapi, rely on KrakenD as the API gateway, and use HttpBin as the backend. Full project details can be found in this repository, so I'll just highlight the main points.

The definition file (openapi.yaml)

```yaml
paths:
  (...)
  /users/{userId}:
    (...)
    delete:
      (...)
      deprecated: true
```

Note that the user delete operation relies on the standard OAS field deprecated with the value true. It is easy to see that we face an impedance mismatch when we try to make this boolean interact with the new headers defined by RFC 9745, since the headers carry much richer information. For cases like this, OAS has extensions, which we will use to describe the properties expected by the RFC as follows:

```yaml
paths:
  (...)
  /users/{userId}:
    (...)
    delete:
      (...)
      deprecated: true
      x-deprecated-at: "2025-06-30T23:59:59Z"
      x-deprecated-sunset: "2026-01-01T00:00:00Z"
      x-deprecated-link: https://api.example.com/deprecation-policy
```

The Parser

The parser's role is to read and interpret the openapi.yaml definition file, extract the information relevant to the gateway, and create the operations.json file, which is embedded in the KrakenD image and consumed during its initialization, in an approach called flexible configuration. This is the resulting operations.json:

```json
{
  "list": [
    {
      "path": "/users",
      "method": "get",
      "backend": { "path": "/users", "host": "http://backend:8888" }
    },
    {
      "path": "/users",
      "method": "post",
      "backend": { "path": "/users", "host": "http://backend:8888" }
    },
    {
      "path": "/users/{userId}",
      "method": "get",
      "backend": { "path": "/users/{userId}", "host": "http://backend:8888" }
    },
    {
      "path": "/users/{userId}",
      "method": "delete",
      "deprecated": {
        "at": "@1751327999",
        "link": "https://api.example.com/deprecation-policy",
        "sunset": "Thu, 01 Jan 2026 00:00:00 UTC"
      },
      "backend": { "path": "/users/{userId}", "host": "http://backend:8888" }
    }
  ]
}
```

Notice that the parser has projected the extended OAS elements into the KrakenD configuration file, performing the appropriate value conversions along the way:

x-deprecated-at → deprecated.at
x-deprecated-link → deprecated.link
x-deprecated-sunset → deprecated.sunset

The Plugin

Now that the gateway configuration has been properly generated from the definition file, our custom plugin comes into play. Its job is to identify deprecated API operations and insert the RFC 9745 headers with the appropriate values. More details can be found in the article repository. Once the plugin was embedded in KrakenD, we got the following results:

GET /users/1
DELETE /users/1

Note that only the second operation was deprecated (see operations.json) and the gateway correctly added the headers to the response.
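To make the value conversions and header injection concrete, here is a minimal Go sketch. The function names are mine, not from the project's repository; only the timestamp formats come from the operations.json shown above:

```go
package main

import (
	"fmt"
	"time"
)

// toDeprecationValue converts an RFC 3339 timestamp (the format of the
// x-deprecated-at extension) into the "@<unix-seconds>" form that
// RFC 9745 defines for the Deprecation header value.
func toDeprecationValue(ts string) (string, error) {
	t, err := time.Parse(time.RFC3339, ts)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("@%d", t.Unix()), nil
}

// toSunsetValue converts x-deprecated-sunset into an HTTP date for the
// Sunset header (RFC 8594), using the layout seen in operations.json.
func toSunsetValue(ts string) (string, error) {
	t, err := time.Parse(time.RFC3339, ts)
	if err != nil {
		return "", err
	}
	return t.UTC().Format("Mon, 02 Jan 2006 15:04:05 UTC"), nil
}

func main() {
	dep, _ := toDeprecationValue("2025-06-30T23:59:59Z")
	sunset, _ := toSunsetValue("2026-01-01T00:00:00Z")
	fmt.Println(dep)    // @1751327999
	fmt.Println(sunset) // Thu, 01 Jan 2026 00:00:00 UTC

	// At runtime, a plugin would inject these into matching responses:
	//   w.Header().Set("Deprecation", dep)
	//   w.Header().Set("Sunset", sunset)
	//   w.Header().Set("Link", `<https://api.example.com/deprecation-policy>; rel="deprecation"`)
}
```

Note how the epoch value for the delete operation matches the "@1751327999" seen in operations.json, which is exactly the kind of consistency the parser has to guarantee.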
Conclusions

The experiment showed the viability of the concept: it is possible to take deprecation policies beyond definition and documentation and communicate them easily at runtime. In this way, systems can adopt automated actions to communicate obsolescence to interested parties and significantly reduce the chances of integration failures. Although OpenAPI Specification extensions made this possible in the face of the insufficiency of the deprecated boolean, I imagine the OpenAPI Initiative will include an improvement along these lines in future versions, especially considering that Erik Wilde, co-author of this RFC, is very active in the API world. To the readers who have come this far, thank you very much. I hope these few words have added something and made your time worthwhile.

Embeddings: what they are and their applications
27 May, 2025


We know that with the emergence of new technologies comes a flood of new terms, and "embeddings" is one of them. But what are they? Embedding is a term used in AI and Natural Language Processing (NLP). It refers to the process of embedding complex information (such as words, sentences, or documents) into a vector space. This means that data that would be difficult to process directly is transformed into a numerical form (vectors) that Machine Learning models can understand and use for tasks such as classification and semantic analysis. When combined with vector databases, embeddings enable systems to analyze large volumes of unstructured data, allowing relevant information to be extracted and complex queries to be answered quickly and efficiently. This data transformation technique is essential for building scalable solutions, as the vector representation facilitates the search and retrieval of information, in addition to compressing the information while maintaining its relationship to the original content.

How it works

We know that embeddings are vectors that let machines work with texts, phrases, and documents. But how do we transform this information into vectors? Vectors are produced by AI models trained to identify context, encoding it as numbers that typically range from -1 to 1, spread across hundreds or thousands of dimensions. These models are typically trained on large volumes of text, identifying patterns of co-occurrence between words that appear frequently in similar contexts, such as "cat" and "animal." During training, the model learns to map these words to numeric vectors in a multidimensional space, so that words with related meanings or similar contexts are positioned closer to each other in this vector space.
The goal is for words or phrases with similar meanings to end up closer together in the vector space. For example, "cat" and "dog" should be represented by nearby vectors, while "cat" and "car" will be further apart.

Embedding example | Image: https://arize.com/blog-course/embeddings-meaning-examples-and-how-to-compute/

How is the similarity between two vectors calculated, for example when comparing a text against the many vectors of a trained model? Mathematically, the cosine similarity technique is normally used: cos(θ) = (A · B) / (‖A‖ ‖B‖). Cosine similarity produces a value in the range [-1, 1], with 1 indicating the closest context and -1 the furthest [1].

Cosine similarity equation | Image: Wikipedia

Two vectors with 98% similarity based on the cosine of the angle between them | Image: Richmond Alake

Embeddings in practice

PDF analysis with QA (Question Answering): embeddings are used in document analysis systems, such as PDFs, to perform Question and Answer (QA) tasks. Companies that deal with large volumes of documents, such as contracts or reports, can use embeddings to automatically locate relevant passages in a text. For example, when analyzing a PDF contract, embeddings allow you to semantically map the content and identify passages related to questions like "What is the validity period of this contract?" or "What are the customer's payment obligations?". A generative AI model can then use these passages to interpret the context and generate natural language responses with greater accuracy.

Product recommendation (e-commerce): platforms like Amazon and Netflix use embeddings to recommend products or movies based on users' preferences and past behavior. For example, when recommending movies, embeddings capture the style, genre, and features of the movies the user has watched, and new content is suggested based on vector similarity.
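The cosine similarity described above is straightforward to implement. Here is a small sketch in Go with made-up three-dimensional "embeddings" (real models use hundreds or thousands of dimensions, and these toy values are invented for illustration):

```go
package main

import (
	"fmt"
	"math"
)

// cosineSimilarity returns the cosine of the angle between two
// equal-length vectors: values near 1 mean similar direction (similar
// meaning), 0 means orthogonal, and -1 means opposite.
func cosineSimilarity(a, b []float64) float64 {
	var dot, normA, normB float64
	for i := range a {
		dot += a[i] * b[i]
		normA += a[i] * a[i]
		normB += b[i] * b[i]
	}
	return dot / (math.Sqrt(normA) * math.Sqrt(normB))
}

func main() {
	// Toy vectors: "cat" and "dog" point in a similar direction,
	// while "car" points elsewhere.
	cat := []float64{0.9, 0.8, 0.1}
	dog := []float64{0.85, 0.75, 0.2}
	car := []float64{0.1, 0.2, 0.95}

	fmt.Printf("cat vs dog: %.2f\n", cosineSimilarity(cat, dog))
	fmt.Printf("cat vs car: %.2f\n", cosineSimilarity(cat, car))
}
```

Running this prints a value close to 1 for cat/dog and a much lower one for cat/car, which is exactly the behavior a semantic search or recommendation system exploits when ranking candidates by similarity.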
Sentiment analysis (customer service): companies use embeddings to analyze sentiment in customer feedback and messages. For example, when analyzing a set of social media comments or customer emails, embeddings help automatically identify whether the sentiment is positive, negative, or neutral, enabling a quick and appropriate response.

Conclusion

Embeddings have proven to be a powerful and fast-growing tool in several industries, transforming the way we interact with unstructured data. Their ability to represent complex information numerically has led to improvements in document analysis, recommendation, and even customer service systems. As the technology continues to evolve, it is expected to become increasingly integrated into intelligent and scalable solutions. Furthermore, with the trend towards lower computational costs and advances in processing and storage infrastructure, it becomes increasingly viable to scale these solutions efficiently and at low cost.

How Karpenter optimized the management of our EKS infrastructure on AWS
13 May, 2025


Companies face daily challenges in managing Kubernetes infrastructure, especially in maintaining efficiency and reducing costs. Here at Softplan, we discovered a solution that transformed the way we manage our EKS clusters on AWS: Karpenter.

Challenges in instance management

Before talking about Karpenter, we need to take a few steps back and explain what node auto-scaling is. Suppose our cluster has some machines (instances) running our workloads. What happens if there is a spike in usage in our applications and we need to launch more instances (replicas) of our pods? Without autoscaling, we would need to provision a node and instruct it to join our cluster so that our pods could be started on this new instance. Remember that provisioning an instance is not instantaneous: there is a whole bootstrapping process for the machine, network configuration, and much more before it becomes fully available. We have talked about usage peaks in our applications, but what about idle periods? Do we really want to leave nodes standing with underutilized computing power? To resolve these and other issues, the concept of auto scalers comes into play.

Auto Scalers

Auto scaler implementations are basically responsible for node provisioning and consolidation. Here we are talking about horizontal scaling, that is, adding more machines to our cluster. There are several implementations of node autoscaling, but this article focuses on the AWS implementation and why we decided to migrate to another solution. Below is a figure illustrating how node autoscaling works:

Figure 01: AWS autoscaling - Auto Scaling Groups

When defining a scaling group in AWS, we need to set several properties, such as the minimum/maximum number of node instances allowed for the group, the resources used, the disk type, network configuration (subnets, etc.), and many other details.
For example, for an application type that uses more CPU, we will configure a group containing instance types with more CPU than memory. In the end, we will likely have several distinct groups for certain types of applications.

Putting the pieces together – Cluster Auto Scaler

For the cluster to be able to "talk" to the cloud provider (in this example, AWS), we need a component called Cluster Auto Scaler, or CAS. This component was created by the community that maintains Kubernetes and is available here. A typical CAS configuration, installed via helm, can be seen below:

```yaml
nameOverride: cluster-autoscaler
awsRegion: us-east-1
autoDiscovery:
  clusterName: my-cluster
image:
  repository: registry.k8s.io/autoscaling/cluster-autoscaler
  tag: v1.30.1
tolerations:
  - key: infra
    operator: Exists
    effect: NoSchedule
nodeSelector:
  environment: "infra"
rbac:
  create: true
  serviceAccount:
    name: cluster-autoscaler
    annotations:
      eks.amazonaws.com/role-arn: "role-aws"
extraArgs:
  v: 1
  stderrthreshold: info
```

With this configured and installed, and our autoscaling groups created, we have just enabled automatic management of our nodes!

Why we decided to migrate to Karpenter

Our use case here at Projuris is as follows: we have a development cluster and a production cluster. After migrating to GitLab SaaS, we faced the challenge of how to provision runners to execute our pipelines, and we decided to use the development cluster for these runners. In the "first version" we chose the Cluster Auto Scaler because it was simpler to configure and already matched our production setup. But then we started to face some problems with this choice:

Provisioning time: when starting a pipeline, machine provisioning was a little slow. The big issue is that the Cluster Auto Scaler pays a "toll" to the cloud provider to provision a new node.
Difficulty in configuring groups: since we have several pipeline "profiles", this management became complicated, because each new profile requires a new node group.

Cost: to mitigate the slow startup of new nodes, we kept an "online" machine profile that was always on, even when no pipeline was executing.

What is Karpenter?

Karpenter is a node autoscaling solution created by AWS that promises to provision and consolidate nodes always at the lowest possible cost. It is smart enough to know, for example, when buying an on-demand machine on AWS is more cost-effective in a given situation than a spot machine. And that is just one of this tool's features. Karpenter also works with the idea of "groups" of machines (which in the Karpenter world are called NodePools), but the difference is that we define them through Karpenter's own CRDs (custom resource definitions). That is, we keep manifests inside our cluster with all these configurations, eliminating the need for any node group created in AWS.

How did Karpenter help us overcome the challenges presented?

Provisioning time: since Karpenter talks directly to the cloud provider's APIs, there is no Cluster Auto Scaler toll to pay. We had many timeout issues when provisioning new nodes; after switching to Karpenter, this problem simply disappeared, precisely because provisioning is more efficient.

Difficulty in configuring groups: with Karpenter's NodePool and NodeClass resources, this configuration became trivial and, most importantly, versioned in our GitLab repository. Need to include a new machine profile in the NodePool? No problem: one commit and Karpenter will already consider it in new provisioning.

Cost: we were able to make much better use of our machines, since runners with similar characteristics are now allocated to nodes that match the required memory and CPU.
In other words, we are really using all the computing power that each node provides. The same applies to node consolidation: with the Cluster Auto Scaler, we had complex scripts to drain nodes before consolidation; with Karpenter, this is configured in the NodePool in a very simple way.

A great argument for management to justify investing in this type of change is cost. Below is a cost comparison between the Cluster Auto Scaler and Karpenter in January 2025, where we achieved total savings of 16%:

Figure 02: Period from 01/01 to 15/01 with Cluster Auto Scaler
Figure 03: Period from 16/01 to 31/01 with Karpenter

Final considerations

The migration to Karpenter was a wise choice. We greatly simplified the management of our nodes and their different profiles. There is still room for improvement, such as using a single NodePool to simplify things even further and letting runners set specific labels for the machine profile that should be provisioned (more at https://kubernetes.io/docs/reference/labels-annotations-taints/).

References

Karpenter (official doc): https://karpenter.sh/
Node Auto Scaling (official k8s doc): https://kubernetes.io/docs/concepts/cluster-administration/node-autoscaling/
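For readers curious about what the NodePool CRDs mentioned in this article look like, here is a minimal, hypothetical manifest following the karpenter.sh/v1 API. The name, instance categories, and limits are invented for this sketch and are not our production values:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: gitlab-runners
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
  limits:
    cpu: "64"
```

Note the disruption block: this is the declarative replacement for the drain scripts we used to maintain with the Cluster Auto Scaler, and adding a new machine profile is just another entry in the requirements list.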

Softplan Group completes acquisition of govtech 1Doc and reinforces its strategy for the digital transformation of Brazilian cities
30 April, 2025


The startup, which manages digital processes in city halls and has been majority-owned by the Group since 2019, has shown significant growth of 75% over six years

Florianópolis, April 2025 – The Softplan Group, one of the largest SaaS and digital transformation companies in Brazil, has officially acquired the entirety of 1Doc, a company specializing in the digitalization of processes, institutional communication, and citizen services in city halls and public agencies. In a journey that began in 2017 with a strategic investment and continued in 2019 with the purchase of a majority stake, the company has now finalized the purchase of the remaining shares from Jéferson de Castilhos, one of the founders, and holds 100% of the startup. Last year, the stake of Jaison Niehues, another founding partner, had already been acquired. Part of the group's Public Sector vertical since 2019, 1Doc has delivered a Compound Annual Growth Rate (CAGR) of 75% between 2019 and 2025.

"The journey with 1Doc represents exactly how we believe the relationship with the innovation and startup ecosystem should happen: with a long-term vision, collaboration, and real impact. The startup has grown within the Softplan ecosystem, has evolved autonomously, and is now consolidating itself as a key part of our strategy to transform the public sector," says Eduardo Smith, CEO of the Softplan Group.

Founded in 2014, 1Doc was born with the purpose of digitizing processes and bringing citizens and governments closer together through a 100% cloud-based platform. Today, it serves an average of 1,000 entities throughout the country and has a team of 141 employees, reinforcing its reach and relevance in the public sector and directly impacting the lives of more than 22 million Brazilians.

"We have great appreciation for 1Doc's culture and are very proud of the growth trajectory we have built over the years, always guided by a clear purpose.
Today, we are fully convinced of the positive impact our services generate, facilitating the routine of our customers, who recognize both the value of our solutions and the relevance of the work we deliver," celebrates Alice Luz, CEO of 1Doc.

The acquisition will make it possible to expand the integration that already exists between the products, enabling the development of functionalities in partnership between Softplan Public Sector and 1Doc and bringing expertise in artificial intelligence and machine learning aimed at automating processes, validating and classifying documents, and delivering digital services at scale. "1Doc has added a highly scalable and easy-to-implement solution to our portfolio, capable of serving small municipalities and large urban centers. Today, this solution is a central part of our delivery of more efficient, connected, and citizen-centric cities and entities," explains Márcio Santana, Executive Director of Softplan Public Sector.

Acquisition strategy

With 1Doc fully integrated into its portfolio, the company will expand its capacity to offer a complete digital journey, from document processing to citizen services, with SaaS, cloud-based, and easily scalable solutions. "The journey with 1Doc is a clear example of our acquisition strategy: identifying companies with high growth potential and strong synergy with our business. Since the first investment, our goal has been to boost the solution, combining capabilities and expanding the value delivered to the end customer. The full acquisition consolidates this movement and reinforces our commitment to building more efficient and connected journeys for our customers," says Alex Anton, M&A and Strategy Director of the Softplan Group.

The acquisition of the startup also strengthens the group's role in the smart cities movement, promoting innovation with a focus on transparency, citizen participation, cost reduction, and greater public sector efficiency.

Softplan Group Completes Acquisition of Oystr to Strengthen Legal Intelligence Ecosystem
30 April, 2025


Specializing in automation robots for the legal market, the company is the group's 13th acquisition; it joins Softplan Legal Intelligence and complements the Projuris portfolio of solutions for law firms and legal departments

Florianópolis, April 2025 – The Softplan Group, one of the largest technology and SaaS ecosystems in the country, continues its growth strategy and announces the acquisition of Oystr, a legaltech specializing in the automation of legal workflows and the management of digital certificates. The company becomes part of the portfolio of Softplan Legal Intelligence, a vertical dedicated to solutions for the legal sector and law firms whose main solution is Projuris. This is the company's 13th acquisition in its 34-year history. Since 2020, the Softplan Group has intensified this strategy, totaling 12 acquisitions in the last five years. With this new move, Softplan Legal Intelligence expands its portfolio with solutions that automate legal tasks, such as publication capture, electronic petitioning, and digital certificate management. The solution integrates with external systems, optimizing workflows and increasing the efficiency of professionals in the sector.

"We monitored Oystr's potential in the market and identified the opportunity to further strengthen our vertical. This acquisition expands the integration of automation solutions and enhances the use of Artificial Intelligence in our products," highlights Eduardo Smith, CEO of the Softplan Group.

For Rafael Caillet, CEO of Oystr, the acquisition represents an important milestone in the more-than-10-year history of the startup, which also has Leandro Cruz and Jonas Pacheco as founding partners. "Integrating our solution into the largest legal ecosystem in the country is a fundamental step towards boosting automation in the sector and increasing the efficiency of professionals," says the executive.
The company recently launched Presto, an innovative solution for managing digital credentials and certificates. With the inclusion of Oystr in its portfolio, Softplan Legal Intelligence projects 38% growth for 2025 compared to 2024. Sidney Falcão, executive director of the business unit, emphasizes that the implementation of RPA (Robotic Process Automation) is a strategic differentiator for maximizing clients' productivity and efficiency. "Our commitment is to offer a complete experience, ranging from the optimization of repetitive tasks to the integrated management of documents and legal processes. We identify synergies that not only streamline operations but also add direct value to the market," he concludes.

The Softplan Group's inorganic growth strategy considers pillars such as product synergy and organizational culture. "Our M&A process carefully evaluates the complementarity of solutions and the cultural affinity between companies. This ensures that we deliver a more robust ecosystem to customers and leverage the talents of both operations," explains Smith.

Technology potential

The robots used by Oystr are multi-system, which can increase the flow of tasks. "In total, we have more than 480 active robots, but if we consider that the same robot that performs a task in Bahia also works in Roraima, Paraná, or any other state, this indicator increases considerably, boosting the flow of systems throughout the country," says Rafael Caillet. The company has a high data processing capacity, which in turn enhances the automation of several law firms and legal departments. Over the last six years, Oystr robots have performed more than 250 million tasks. More than 600 companies have been impacted and, through cutting-edge technology, have made their businesses more efficient and sustainable.
Founded in 2014 to automate repetitive and costly legal tasks, Oystr has already developed more than 1,000 robots that collect subpoenas, file petitions in bulk, feed internal systems, and perform other functions. Its more than 300 clients include Nelson Wilians Advogados, Mascarenhas Barbosa Advogados, Góes & Nicoladelli, Pereira Gionedis Advogados, Pellon & Associados, and Unimed Curitiba.

Oystr founding partners: Jonas Pacheco, CTO (left); Rafael Caillet, CEO (center); Leandro Cruz, Head of Big Data and Machine Learning (right)

Softplan Public Sector advances in Latin America and signs contract with Peruvian Judiciary
30 April, 2025


A reference in digital transformation for the Brazilian and Colombian public sectors, the company is expanding its operations in Latin America by implementing technological solutions for legal proceedings throughout Peru

Florianópolis, March 2025 – Softplan Public Sector, a pioneer in process automation for public institutions and one of the Softplan Group's verticals, won an international bid and is now working with the Peruvian Judiciary. The contract is among the largest the company has ever signed and endorses its strategic plan not only to export high technology but also to introduce the Brazilian digital justice model to other countries. The project, called EJE NO PENAL, highlights the robustness of the technological solution, which is a reference in Brazil and was also adopted by the Special Jurisdiction for Peace (JEP) in Colombia, a transitional justice system created for reparation purposes under the peace agreement between the Colombian government and the FARC-EP.

The contract with the Peruvian Judiciary consists of implementing a technological platform for managing non-criminal proceedings throughout Peruvian territory, reinforcing Softplan Public Sector's position as a trusted advisor for digital transformation in government and justice. With this, the Peruvian government will have the technological support it needs to improve judicial processes through cutting-edge technology delivered by the product SAJ Tribunais, which ensures a significant reduction in processing time, increasing efficiency, transparency, and access to judicial services for the population. The solution currently has more than 27 million legal cases in progress, and another 100 million have already been completed.

"This is a relevant project because it covers the judiciary of an entire country.
It will be a moment of digital transformation for our neighbors, and we will be able to contribute with the expertise of Softplan Public Sector. This challenge brings us immense responsibility and pride in our technology and experience. In terms of strategy, internationalization is one of our main growth objectives, and having the Peruvian Judiciary in our portfolio further endorses our strategy," highlights Márcio Santana, Executive Director of Softplan Public Sector.

"This agreement represents a historic milestone for the Judiciary and for the country, a turning point in the modernization of the judicial system, allowing the implementation of innovative technological solutions that will transform the administration of justice and the management of processes," said the President of the Judiciary of Peru, Supreme Justice Janet Ofelia Lourdes Tello Gilardi. (...) "The technological revolution in the administration of justice will then take place, with greater efficiency, reduced procedural times, optimized workloads, digitalization that minimizes unnecessary delays, transparency in judicial processes, and guaranteed access to information, promoting more accessible and reliable justice," she added.

With the contract signed, Peruvian Judiciary employees will have access to a digital process solution that maintains integrity, security, and consistency, in addition to supporting decision-making based on process information. "With the milestone of signing the contract, we are facing a modernization project that involves 34 Superior Courts of Justice throughout Peru. There will be more than 18 thousand users and approximately 1,700 agencies involved in the implementation of this project. We will begin with a stage of parameterizing the solution to the local reality and then continue with training and assisted monitoring of users as they take ownership of the solution.
This is a challenge of national magnitude and impact," reinforces Márcio regarding the execution of the project.

With over 30 years in the market, Softplan Public Sector offers pioneering services in the implementation of digital processes in the Brazilian justice system, promoting solutions that ensure agility, transparency, and efficiency. In 2024 alone, 201 million Brazilians were impacted by the company's solutions, the equivalent of 91% of the Brazilian population. Regarding efficiency, the courts that use SAJ saw a reduction of up to 90% in the time between the distribution of a case and the judge's first act, 6.2 thousand hours saved through the automatic distribution of cases, and 19.5 million hours optimized through the automatic incorporation of documents (figures for the period 2015-2020). Regarding the savings from digitalizing processes and reducing environmental impact, since 2009 the system has avoided 22.6 billion printed pages, preventing the emission of 596 thousand tons of CO2, equivalent to the combined emissions of a fleet of 864.5 thousand vehicles. The elimination of paper gave the SAJ courts financial savings of around 261.3 million reais in 2024 alone. In total, since 2009, the estimated savings amount to 2.26 billion reais.