Tech Writers

This is our blog for technology lovers! Here, Softplayers and other specialists share knowledge that is fundamental for the development of this community.

The strength of collaboration and the customer as the protagonist: impacts of product evolution in the Group Softplan
Tech Writers April 29, 2025

At the Softplan Group, product evolution is an ongoing effort that involves cross-team collaboration and a deep commitment to the customer. In my role in Product Growth, I constantly exchange ideas with other teams and receive valuable feedback from customers, whether through analysis of how they use the product or through specific communication channels, such as email. These interactions give me a clear view of the impact that continuous product evolution has on the success of the company and the value delivered to customers. This article explores how cross-team collaboration and customer focus drive the evolution of our products, fostering the growth of the Softplan Group and the success of those who use our solutions.

Collaboration between teams: the engine of innovation

Product development and improvement at the Softplan Group require continuous integration between different teams. Software solutions need to be effective and aligned with market demands. Although my role is not directly part of defining the roadmap, it allows me to bring valuable insights based on customer interactions and performance data, which contributes directly to the prioritization of development initiatives. Studies cited by Forbes indicate that companies that encourage internal collaboration are 4.5 times more likely to retain top talent and innovate more efficiently (Forbes on Collaboration). At the Softplan Group, effective collaboration is one of the pillars that ensures customer needs are met quickly and efficiently. The product, marketing, growth and sales departments work together continuously, always seeking to align initiatives with market demands. This collaborative work, combined with the support of the Growth team in prioritizing initiatives, integrates different perspectives and areas of the company, allowing constant adjustments to products and driving the creation of innovations born from these interactions between departments.

Customer as the protagonist: the guide for our decisions

At the Softplan Group, the customer is at the center of every decision, especially in the Industry and Construction Unit, where the value of "customer as protagonist" guides our way of working. We use specific channels to collect continuous feedback, and these insights shape product initiatives. As pointed out by Salesforce, 80% of customers consider the experience offered by a company as important as its products and services (Salesforce State of the Connected Customer). In practice, this means that by listening to users and adjusting our products based on their demands, we strengthen the relationship and increase loyalty to our brand. One example was a recent feature update, based on customer feedback, that brought more diversified communications across product modules, aligned with the needs identified. This customer-driven approach not only meets current needs but also allows us to anticipate future demands, solidifying our role as a strategic partner for customers.

Market impact: innovation and growth

The Softplan Group stands out in the market for its commitment to innovation and focus on concrete results. Adjusting our products based on direct customer feedback has a direct impact on the company's growth and user satisfaction. As mentioned, the update that diversified communications across product modules was a direct response to this feedback, highlighting how continuous communication with the customer guides the evolution of our solutions.
According to PwC, companies that prioritize customer experience can see a 16% increase in revenue and greater customer retention (PwC Future of Customer Experience Survey). This reality also applies to the Softplan Group, where continuous adjustment and focus on customer needs help us deliver relevant solutions that stand out in the market. The strategic use of customer feedback not only improves the user experience but also ensures that we are always one step ahead in terms of innovation and competitiveness.

Come grow with us

The Softplan Group stands out for listening to its customers and bringing its teams together to create solutions that drive business. The value of "customer as protagonist" is a practical guide, present throughout our journey of product evolution. We collaborate, innovate and adapt, always ensuring that customer needs are at the center of our decisions. If you value an environment that fosters collaboration and innovation, with opportunities for continuous learning and growth, the Softplan Group is the right place for you. Here, our values and strategic objectives are reinforced by training and the opportunity to work on challenging projects that transform the software market. Join us and be part of a team that transforms the lives of customers and innovates in the market. Visit our careers page.

Digital Evolution in the Public Sector: B2G Product Management
Tech Writers April 16, 2025

In recent years, the public sector has expanded its digital services for citizens. The 2020 pandemic accelerated this trend, driving the modernization of bodies such as Courts of Justice, Public Prosecutors' Offices and Public Defenders' Offices. This transformation aims to improve the efficiency of public services and facilitate access for the population. Historically, the Information Technology areas of these organizations adopted project management models that prioritize the delivery of defined scopes, with deadlines and teams limited to specific demands. However, the growing need for agility has driven the transition to product management. In this context, the concept of Business-to-Government (B2G) gains relevance, highlighting the importance of product management in offering innovative solutions to the government. As a Product Manager working on B2G products, my focus is to deliver solutions aligned with the needs of end users. Unlike the B2B sector, where there is a structured sales funnel, product management in the public sector requires the adoption of metrics and tools adapted to this ecosystem.

Day-to-day life of Product Managers in the public sector

Interactions with customers begin after the contract is signed, when the first contact occurs with the management group, formed by employees responsible for implementing the product. From this point, insights are gained into the needs of end users, allowing an initial understanding of the workflow. To prioritize the backlog, we use the RICE (Reach, Impact, Confidence, Effort) matrix, ensuring that decisions consider both contractual requirements and user needs (a minimal scoring sketch appears at the end of this article). This prioritization happens continuously, following the evolution of the product and the established contracts. In the development cycle, we apply experimentation, prototyping and usability testing techniques with pilot groups, collecting quantitative and qualitative data to measure adoption and define improvements to product functionality.

Example of a RICE matrix

With these premises in mind, we apply experimentation and prototyping techniques to pilot users and run usability tests for new functionality. We also constantly collect quantitative and qualitative data about the journeys they use as adoption increases. Based on the metrics collected, we can decide whether the main or additional features of our users' journey need to be improved.

Example of quantitative insights from the user journey in organizing tasks, using the MixPanel tool

Example of using the INDECX tool for qualitative information about the product or functionality

Product triad delivering efficient results

Product management in the public sector requires a collaborative approach, integrating the technical team, the user experience team and the customer. This ongoing interaction strengthens strategic alignment and clarity about product evolution. The product roadmap is shared with the client to ensure transparency and predictability in deliveries. Softplan has established itself as a reference in the digital transformation of the public sector, generating positive impacts for citizens. Solutions such as the Justice Automation System (SAJ) provide efficiency and speed in public services. As a Product Manager at Softplan, I contribute to the management of products aimed at the public sector. One example is SAJ Defensorias, whose task panel was developed after a business study and technical analysis based on the product triad.
This panel centralizes daily activities, prioritizing tasks to be performed immediately and organizing completed ones for future reference.

SAJ Solution (Softplan)

Our goal is to offer intuitive and efficient products that meet the daily demands of public defenders and contribute to improving the provision of services to society.

Defender's task panel in SAJ Defensorias

Digital initiatives in the public sector have great potential for growth, driven by product culture. Digital transformation is irreversible and will continue to evolve to meet society's expectations for more agile, efficient and transparent services.
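To make the RICE prioritization mentioned above concrete, here is a minimal C# sketch of how a backlog item's score can be computed (score = Reach × Impact × Confidence / Effort). The items, values, class and property names are illustrative assumptions for this example, not data or code from the SAJ backlog.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical backlog item scored with RICE: score = (Reach * Impact * Confidence) / Effort.
record BacklogItem(string Name, double Reach, double Impact, double Confidence, double EffortPersonMonths)
{
    // Reach: users affected per quarter; Impact: 0.25 (minimal) to 3 (massive);
    // Confidence: 0.0 to 1.0; Effort: person-months.
    public double RiceScore => Reach * Impact * Confidence / EffortPersonMonths;
}

class Program
{
    static void Main()
    {
        // Illustrative items and values only; real numbers would come from contracts,
        // analytics and discovery work with the management group.
        var backlog = new List<BacklogItem>
        {
            new("Task panel for defenders", Reach: 1200, Impact: 2.0, Confidence: 0.8, EffortPersonMonths: 3),
            new("Bulk document signing",    Reach:  800, Impact: 3.0, Confidence: 0.5, EffortPersonMonths: 5),
            new("Improved search filters",  Reach: 2000, Impact: 1.0, Confidence: 0.9, EffortPersonMonths: 2),
        };

        foreach (var item in backlog.OrderByDescending(i => i.RiceScore))
            Console.WriteLine($"{item.Name}: RICE = {item.RiceScore:F0}");
    }
}
```

In practice, reach estimates would come from usage analytics (such as the MixPanel data mentioned above), confidence from discovery work with the management group, and the ordering would be revisited as contracts and the product evolve.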

.Net ThreadPool Exhaustion
Tech Writers March 25, 2025

More than once in my career I have come across this scenario: a .Net application frequently showing high response times. This high latency can have several causes, such as slow access to an external resource (a database or an API, for example), CPU usage reaching 100%, or disk access overload, among others. I would like to add another possibility to that list, one that is often overlooked: ThreadPool exhaustion. I will briefly show how the .Net ThreadPool works and give code examples where this can happen. Finally, I will demonstrate how to avoid this problem.

The .Net ThreadPool

The .Net Task-based asynchronous programming model is well known by the development community, but I believe its implementation details are poorly understood, and it is in the details that the danger lies, as the saying goes. Behind the .Net Task execution mechanism there is a scheduler, responsible, as its name suggests, for scheduling the execution of Tasks. Unless explicitly changed, the default .Net scheduler is the ThreadPoolTaskScheduler, which, as the name suggests, uses the default .Net ThreadPool to perform its work. The ThreadPool manages, as expected, a pool of threads, to which it assigns the Tasks it receives using a queue. It is in this queue that Tasks are stored until there is a free thread in the pool to start processing them. By default, the minimum number of threads in the pool is equal to the number of logical processors on the host. And here is the detail in how it works: when there are more Tasks to be executed than the number of threads in the pool, the ThreadPool can either wait for a thread to become free or create more threads. If it chooses to create a new thread and the current number of threads in the pool is equal to or greater than the configured minimum, this growth takes between 1 and 2 seconds for each new thread added to the pool. Note: starting with .Net 6, improvements were introduced to this process, allowing a faster increase in the number of threads in the ThreadPool, but the main idea still holds.

Let's look at an example to make this clearer: suppose a computer has 4 cores. The minimum size of the ThreadPool will be 4. If all the Tasks that arrive quickly finish their work, the pool may even have fewer than the minimum of 4 active threads. Now imagine that 4 Tasks of slightly longer duration arrive simultaneously, using all the threads of the pool. When the next Task arrives in the queue, it will need to wait between 1 and 2 seconds until a new thread is added to the pool, and only then leave the queue and start processing. If this new Task also has a longer duration, the next Tasks will again wait in the queue and will need to "pay the toll" of 1 to 2 seconds before they can start executing. If this behavior of new long-running Tasks continues for some time, clients of this process will perceive slowness for any new Tasks that arrive at the ThreadPool queue. This scenario is called ThreadPool exhaustion (or ThreadPool starvation). It will last until the Tasks finish their work and start returning threads to the pool, enabling the reduction of the queue of pending Tasks, or until the pool grows enough to meet the current demand. This can take several seconds, depending on the load, and only then will the slowdown observed previously cease.
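As an aside, the behavior described above is easy to observe. The console sketch below (not from the original article) fills the pool with blocking work and measures how long a trivial Task waits before a thread picks it up; the sleep duration and the number of blocking tasks are arbitrary values chosen only for illustration.

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class ThreadPoolStarvationDemo
{
    static async Task Main()
    {
        ThreadPool.GetMinThreads(out int minWorkers, out _);
        Console.WriteLine($"Logical processors: {Environment.ProcessorCount}, minimum worker threads: {minWorkers}");

        // Occupy every "minimum" pool thread with blocking work, simulating synchronous I/O calls.
        for (int i = 0; i < minWorkers; i++)
            _ = Task.Run(() => Thread.Sleep(TimeSpan.FromSeconds(10)));

        // Measure how long a trivial Task sits in the queue waiting for a free thread.
        var wait = Stopwatch.StartNew();
        await Task.Run(() => wait.Stop());
        Console.WriteLine($"Queue wait for a trivial Task: {wait.ElapsedMilliseconds} ms");
    }
}
```

On a typical machine, the trivial Task only runs after the pool decides to inject an extra thread, so the printed wait is usually on the order of a second rather than microseconds: precisely the toll described above.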
Synchronous vs. asynchronous code

It is now necessary to make an important distinction between types of long-running work. Generally, they can be classified into two types: CPU- or GPU-bound, such as the execution of complex calculations, and I/O-bound, such as database access or network calls. In the case of CPU-bound tasks, apart from algorithm optimizations, there is not much that can be done: you need enough processors to meet the demand. In the case of I/O-bound tasks, however, it is possible to free up the processor to respond to other requests while waiting for the I/O operation to finish. And this is exactly what the ThreadPool does when asynchronous I/O APIs are used. In this case, even if the specific task is still time-consuming, the thread is returned to the pool and can serve another Task from the queue. When the I/O operation finishes, the Task is requeued and then continues executing. (For more details on how the ThreadPool waits for I/O operations to finish, click here.) However, it is important to note that there are still synchronous I/O APIs, which block the thread and prevent it from being released to the pool. These APIs, and any other type of call that blocks a thread before returning, compromise the proper functioning of the ThreadPool and may cause it to exhaust itself when subjected to sufficiently large and/or long loads. We can therefore say that the ThreadPool, and by extension ASP.NET Core/Kestrel, designed to operate asynchronously, is optimized for executing tasks of low computational complexity with asynchronous I/O-bound loads. In this scenario, a small number of threads is capable of processing a very high number of tasks/requests efficiently.

Thread blocking with ASP.NET Core

Let's see some code examples that cause pool threads to block, using ASP.NET Core 8. Note: these are simple examples and are not intended to represent any particular practice, recommendation or style, except for the points related to the ThreadPool demonstration itself. To keep the behavior identical between examples, each one issues a request to a SQL Server database that simulates a workload taking 1 second to return, using the WAITFOR DELAY statement. To generate a usage load and demonstrate the practical effects of each example, we will use siege, a free command-line utility designed for this purpose. In all examples, a load of 120 concurrent accesses is simulated for 1 minute, with a random delay of up to 200 milliseconds between requests. These numbers are enough to demonstrate the effects on the ThreadPool without generating timeouts when accessing the database.

Synchronous Version

Let's start with a completely synchronous implementation: the DbCall action is synchronous, and the ExecuteNonQuery method of the DbCommand/SqlCommand is synchronous, so it blocks the thread until the database returns. In the load simulation (run with the siege command), we achieved a rate of 27 requests per second (Transaction rate) and an average response time (Response time) of around 4 seconds, with the longest request (Longest transaction) lasting more than 16 seconds: very poor performance.

Asynchronous Version – Attempt 1

Let's now use an asynchronous action (returning Task), but still call the synchronous ExecuteNonQuery method. A sketch of this action, together with the synchronous version, appears below.
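Below is a minimal sketch of what the two variants described above might look like, assuming the Microsoft.Data.SqlClient package, a connection string named "Default" and the WAITFOR DELAY '00:00:01' command mentioned earlier; the controller, route and method names are illustrative, not the article's original listing.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Configuration;

[ApiController]
[Route("[controller]")]
public class LoadDemoController : ControllerBase
{
    private readonly string _connectionString;

    public LoadDemoController(IConfiguration configuration) =>
        _connectionString = configuration.GetConnectionString("Default")!;

    // Synchronous version: the action and ExecuteNonQuery both block a pool thread
    // for the full second the database takes to answer.
    [HttpGet("sync")]
    public IActionResult DbCall()
    {
        using var connection = new SqlConnection(_connectionString);
        connection.Open();
        using var command = new SqlCommand("WAITFOR DELAY '00:00:01'", connection);
        command.ExecuteNonQuery();
        return Ok();
    }

    // Attempt 1: the action returns Task, but ExecuteNonQuery is still synchronous,
    // so the thread is blocked anyway (the compiler even warns that nothing is awaited).
    [HttpGet("attempt1")]
    public async Task<IActionResult> DbCallAttempt1()
    {
        using var connection = new SqlConnection(_connectionString);
        connection.Open();
        using var command = new SqlCommand("WAITFOR DELAY '00:00:01'", connection);
        command.ExecuteNonQuery();
        return Ok();
    }
}
```

Under the siege load described above, both variants keep a pool thread blocked for the full database round trip, which is what produces the multi-second response times discussed next.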
Running the same load scenario as before, the result was even worse in this case: a request rate of 14 per second (compared to 27 for the completely synchronous version) and an average response time of more than 7 seconds (compared to 4 previously).

Asynchronous Version – Attempt 2

In this next version, we have an implementation that exemplifies a common, and not recommended, attempt to turn a synchronous I/O call (in our case, ExecuteNonQuery) into an "asynchronous API" by wrapping it in Task.Run. The simulation shows a result close to the synchronous version: a request rate of 24 per second, an average response time of more than 4 seconds, and the longest request taking more than 14 seconds to return.

Asynchronous Version – Attempt 3

Now the variation known as "sync over async", where we use asynchronous methods, such as ExecuteNonQueryAsync in this example, but call .Wait() on the Task returned by the method. Both .Wait() and the .Result property of a Task have the same behavior: they block the executing thread! Running our simulation, the result is also bad: a rate of 32 requests per second, an average time of more than 3 seconds, and requests taking up to 25 seconds to return. Not surprisingly, the use of .Wait() or .Result on a Task is discouraged in asynchronous code. (Sketches of these variants appear just before the conclusion.)

Problem Solution

Finally, let's look at the code written to work in the most efficient way, using asynchronous APIs and applying async/await correctly, following Microsoft's recommendation: an asynchronous action that calls ExecuteNonQueryAsync with await. The simulation result speaks for itself: a request rate of 88 per second, an average response time of 1.23 seconds and requests taking at most 3 seconds to return; numbers roughly 3 times better than any previous option. The table below summarizes the results of the different versions for easier comparison.

| Code Version   | Request Rate (/s) | Average Time (s) | Max Time (s) |
|----------------|-------------------|------------------|--------------|
| Synchronous    | 27.38             | 4.14             | 16.93        |
| Asynchronous 1 | 14.33             | 7.94             | 14.03        |
| Asynchronous 2 | 24.90             | 4.57             | 14.80        |
| Asynchronous 3 | 32.43             | 3.52             | 25.03        |
| Solution       | 88.91             | 1.23             | 3.18         |

Workaround

It is worth mentioning that we can configure the ThreadPool to have a minimum number of threads greater than the default (the number of logical processors). With this, it will be able to increase the number of threads quickly, without paying that "toll" of 1 or 2 seconds. There are at least three ways to do this: by dynamic configuration, using the runtimeconfig.json file; by project configuration, adjusting the ThreadPoolMinThreads property; or by code, calling the ThreadPool.SetMinThreads method. This should be seen as a temporary measure while the appropriate adjustments are not yet made to the code as shown above, or be adopted only after prior testing confirms that it brings benefits without performance side effects, as recommended by Microsoft.
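For completeness, here is an equally hypothetical sketch of the remaining variants discussed above (the Task.Run wrapper, the sync-over-async call and the properly awaited solution), using the same connection-string and WAITFOR DELAY assumptions as the earlier sketch.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Configuration;

[ApiController]
[Route("[controller]")]
public class LoadDemoVariantsController : ControllerBase
{
    private readonly string _connectionString;

    public LoadDemoVariantsController(IConfiguration configuration) =>
        _connectionString = configuration.GetConnectionString("Default")!;

    // Attempt 2: wrapping the synchronous call in Task.Run only moves the blocking
    // to another pool thread; the I/O itself is still synchronous.
    [HttpGet("attempt2")]
    public async Task<IActionResult> DbCallAttempt2()
    {
        await Task.Run(() =>
        {
            using var connection = new SqlConnection(_connectionString);
            connection.Open();
            using var command = new SqlCommand("WAITFOR DELAY '00:00:01'", connection);
            command.ExecuteNonQuery();
        });
        return Ok();
    }

    // Attempt 3: "sync over async". The database API is asynchronous,
    // but .Wait() blocks the request thread until it completes.
    [HttpGet("attempt3")]
    public IActionResult DbCallAttempt3()
    {
        using var connection = new SqlConnection(_connectionString);
        connection.Open();
        using var command = new SqlCommand("WAITFOR DELAY '00:00:01'", connection);
        command.ExecuteNonQueryAsync().Wait();
        return Ok();
    }

    // Solution: async all the way down; the thread returns to the pool
    // while the database spends its second answering.
    [HttpGet("solution")]
    public async Task<IActionResult> DbCallSolution()
    {
        await using var connection = new SqlConnection(_connectionString);
        await connection.OpenAsync();
        await using var command = new SqlCommand("WAITFOR DELAY '00:00:01'", connection);
        await command.ExecuteNonQueryAsync();
        return Ok();
    }
}
```

And a sketch of the code-based workaround: raising the ThreadPool minimum at application startup. The value 64 is an arbitrary example; the runtimeconfig.json knob (System.Threading.ThreadPool.MinThreads) and the ThreadPoolMinThreads project property mentioned above achieve the same effect without code changes.

```csharp
using System.Threading;

// Hypothetical startup code (for example, at the top of Program.cs):
// keep the I/O completion-port minimum as-is and raise only the worker-thread floor.
ThreadPool.GetMinThreads(out _, out int minCompletionPortThreads);
ThreadPool.SetMinThreads(64, minCompletionPortThreads);
```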
Conclusion

ThreadPool exhaustion is an implementation detail that can have unexpected consequences, and it can be difficult to detect if we consider that .Net has several ways to obtain the same result, even in its best-known APIs; I believe this is a result of years of evolution of the language and of ASP.NET, always aiming at backward compatibility. When we talk about operating at increasing rates or volumes, such as going from dozens to hundreds of requests, it is essential to know the latest practices and recommendations. Furthermore, knowing one or another implementation detail can make a difference in avoiding scale problems or diagnosing them more quickly. In a future article, we will explore how to diagnose ThreadPool exhaustion and identify the source of the problem in code from a running process.

What is UX Writing and everything you need to know to create amazing experiences
Tech Writers February 11, 2025

What is UX Writing and how does it positively impact a business's product? See best practices, responsibilities, methodologies, and much more!

UX Writing is the creation of valuable content for interfaces and digital products, including texts based on the user experience, that is, written to deliver the best possible experience to the public. This practice draws on marketing, design and information architecture concepts, and aims to delight and offer value through informative pieces. An example of UX Writing is when you access an online teaching platform, or an application that, as soon as you log in, walks you through each step with an objective tutorial. Below is an example from Softplan's ProJuris ADV application, which shows a clean and friendly interface before the user decides whether to create an account or log in, highlighting some of the things that can be done in the application.

User experience has become increasingly important for attracting, converting and retaining customers. Aspects such as how quickly your website can be navigated, its scannability, intuitive navigation, and even the colors chosen for the design of the pages directly affect users' decisions on any digital platform. When we talk about digital platforms with UX Writing, we can take Gestor Obras as an example: on the first page of the system, it shows a practical tutorial on how it works, and as you click where indicated, it shows the next steps and the function of each part of the system. Another example of UX Writing that provides direct and objective information is in Sienge, which shows in an image some advantages of using the system, in addition to the direct communication in the CTA "Request a Demonstration", moving away from the common "Learn More" and calling for a very objective action.

Therefore, if your company is not yet constantly optimizing its communication channels with users, especially its website, it is time to review some choices. After all, the user is the one who uses your product or service. To achieve this, not only is the website design crucial when it comes to optimization, but customer service and clear, objective communication on the brand's channels are also essential to create a stronger connection with consumers. To get an idea of how much the user experience matters, a survey conducted this year by Foundever revealed that 80% of customers consider the experience a much more valuable aspect than the products and services themselves. When executed efficiently, the practice of UX Writing becomes a significant competitive advantage in a market that is increasingly rigorous about quality, with users who demand the best digital products.

What are the main characteristics of UX Writing?

UX Writing has some important characteristics that allow it to be executed correctly and consistently. It is important to keep them in mind for creations that will truly impact the user experience positively.

Clarity and Objectivity: the content must be clear and direct, facilitating quick understanding by the user.
Consistency: language and tone should be consistent across all user touchpoints, creating a cohesive experience.
Empathy: understand and anticipate users' needs and expectations to create texts that really help them.
Focus on Action: guide users on what to do next using clear calls to action (CTAs).
Brevity: use as few words as possible without sacrificing clarity, respecting users' time and attention.
Scannability: structure the text so that it is easy to read and skim quickly, using headings, subheadings, lists and short paragraphs.
Accessibility: ensure that content is accessible to all users, including those with some type of disability, through simple and inclusive language.
Visual Orientation: integrate text harmoniously with the visual elements of the interface, contributing to a pleasant and intuitive user experience.
Personalization: adapt content to the user's context and preferences, offering a more relevant and personalized experience.
Brand Tone and Voice: reflect the brand's personality and values in all texts, strengthening the identity and connection with the public.

Examples of the application of UX Writing

It is easy to confuse UX Writing with other writing strategies. Therefore, we will demonstrate how to apply UX Writing to your website or digital applications.

Personalization

Want to see an example of UX Writing with personalization? Spotify is a streaming service that, as you use it, personalizes the songs recommended to you based on similar songs you usually listen to. In addition, at the end of each year, the platform gives each user an annual summary of what they listened to most throughout the period, as well as which artists, podcasts and genres. All of this is done in objective and clear language so that the user can understand the entire summary, with no room for doubt.

Photo: Reproduction/Spotify

Objective and Clear Texts

For an application, for example, it is essential that the texts are very objective. That way, user error rates will certainly be much lower, and navigation will be more intuitive.

Good and bad example of an action button with UX Writing applied. Source: Adobe

Anticipate errors

We have already talked about the importance of giving the user a good experience, and this includes anticipating any possibility of future errors. In the example below, we can see a form being filled out where the email address is not filled out correctly, and the application tells the user to check the message next to the incorrectly filled field, on the left side.

Source: Adobe

Differences between Copywriting, UX Writing and Tech Writing

Although related, copywriting, UX Writing and Tech Writing have their differences. Let's look at the main ones across a few dimensions.

Objective
Copywriting: the objective is to persuade the reader to take a specific action, such as purchasing a product, signing up for a newsletter, or clicking on a link. It is focused on conversions and sales.
UX Writing: facilitates user interaction with a digital product or service, making the experience more intuitive, pleasant and efficient. With UX Writing, the user is guided through the interface and through completing tasks.
Tech Writing: the goal is to explain clearly and precisely how to use complex products or technologies, focusing on detailed and informative instructions.

Approach
Copywriting: uses persuasion and rhetoric techniques to capture the reader's attention and motivate them to take action. The tone is more emotional and appealing.
UX Writing: adopts a functional and informative approach, prioritizing clarity, simplicity and usefulness. The tone is objective, focused on guiding and helping the user.
Tech Writing: focuses on detail and accuracy, providing step-by-step instructions and technical explanations.
The tone is technical and informative, with clear and objective language.

Where it appears
Copywriting: found in marketing materials such as advertisements, promotional emails, sales pages, blog posts, and social media content.
UX Writing: present in digital interfaces, such as applications, websites, e-commerce, dashboards, and any point of user interaction with the system. Examples include buttons, error messages, instructions, and navigation menus.
Tech Writing: appears in user manuals, installation guides, software documentation, FAQs, tutorials, and knowledge bases.

Success metrics
Copywriting: measured by conversion metrics such as click-through rate (CTR), conversion rate, sales volume, and return on investment (ROI).
UX Writing: measured by usability and user satisfaction, such as reduced error rates, task completion time, user retention, and positive feedback on the user experience.
Tech Writing: measured by the clarity and effectiveness of documentation, such as the number of support tickets, user feedback, time to find information, and ease of use of the documentation.

Collaboration
Copywriting: collaborates with marketing, sales and branding teams.
UX Writing: works with UX/UI designers, developers, user experience researchers, and product managers to integrate writing into product design and functionality.
Tech Writing: collaborates with engineers, developers, product managers, and support teams to ensure documentation is accurate and useful.

Ultimately, while copywriting seeks to persuade and convert, UX Writing aims to facilitate and guide, and Tech Writing focuses on explaining and instructing. Each strategy uses writing as its main tool, but with different focuses and applications, which complement each other at some point in the user's journey.

How to apply UX Writing to Products to add value?

Now that you understand what UX Writing is, you can understand how to apply the strategy. When we talk about UX Writing and Product, these terms must go hand in hand in the creation and constant optimization of a product. Here we can even talk about the "Product Writer", a professional totally focused on working with Products, researching and understanding users' point of view about a given product and defining writing solutions. So, we must understand how UX Writing adds value to digital products in different ways, contributing significantly to the user experience and, consequently, to the success of the product. Let's look at some practices that can be implemented in digital products.

1. Clarity in Error and Success Messages
Error messages: should be clear and specific, informing the user what went wrong and how to correct the problem. For example, "Password must be at least 8 characters long" is more useful than "Password error".
Success messages: clear confirmations that inform the user the action was completed successfully. For example, "Your purchase was successful!"

2. Onboarding
Instructions and user guides: provide step-by-step tutorials and guides for new users, helping them become familiar with the product.
Tooltips and pop-ups: contextual instructions that appear at the right time to guide the user without interrupting their experience.

3. Effective Calls to Action (CTAs)
Buttons and links: use clear and direct action verbs, such as "Buy Now", "Sign Up" or "Learn More". Avoid vague terms like "Click Here".
Visual hierarchy: ensure CTAs are visually highlighted to guide user attention.
4. Improved Navigation
Menus and labels: use familiar and intuitive terminology in menus and labels. For example, "Account" instead of "User Profile".
Breadcrumbs: implement breadcrumbs to help users understand where they are in the site navigation and how to return to previous pages.

5. Microcopy
Forms: provide clear, concise instructions for each input field. For example, "Enter your email" instead of just "Email".
Immediate feedback: provide instant feedback when forms are filled out, such as marking correct fields with a green checkmark.

6. Accessibility Adjustments
Alt text: add helpful descriptions to images, graphics, and icons to improve accessibility.
Plain language: avoid jargon and complex technical terms, making content accessible to all users, including those with cognitive disabilities.

7. Consistency in Tone of Voice
Style guide: develop and adhere to a style guide that defines the brand voice and tone, ensuring consistent communication across all platforms.
Regular review: regularly review and update content to maintain consistency and relevance.

8. Educational Content
FAQs and documentation: create and maintain FAQ sections and help documentation that are clear, detailed, and easy to navigate.
Tutorial videos and tips: integrate videos and quick tips that help users better understand and use a product's features.

9. Testing and Iteration
A/B testing: perform A/B tests to evaluate the effectiveness of different versions of microcopy, CTAs, and error messages.
User feedback: collect and analyze user feedback to identify areas for improvement and adjust content as needed.

Conclusion

Notice how many of these actions are very simple and yet greatly help a product deliver a good user experience, with greater efficiency and satisfaction. Your product can end up creating a stronger connection with your users, encouraging loyalty and, in turn, creating a network of consumers who will organically evangelize about your product and how worthwhile it is. Finally, don't waste time.

Angular: Why you should consider this front-end framework for your company
Tech Writers February 02, 2024

A fear for every team is choosing a tool that will quickly become obsolete. If you have been developing applications for a few years, you have probably already experienced this. Choosing good tools is therefore a task that involves responsibility, as it can guide the project (and the company) to success or to a sea of problems and expenses. In this article, we will look at the uses and benefits of the Angular framework.

Choosing a front-end framework is no different, and it also involves research and study. Choosing a "stack", as we call it in this world, is crucial both for the present and for the future. However, some questions will arise in the midst of this choice: Will we find qualified professionals to work with this framework? Will we be able to keep up with the pace of updates? Is there a well-defined plan for the direction the framework is going? Is there an engaged community (and here we also mean large companies supporting it)? All of these questions must be answered before starting any project, as neglecting this step can lead to devastating scenarios for the product, and consequently for the company and its profits.

Motivations for using a framework

Perhaps the most direct answer is that sometimes it is good not to keep reinventing the wheel. Routine problems such as handling routes for a web application, controlling dependencies, or generating optimized bundles for publication in production already have good solutions developed. Choosing a framework that gives you this set of tools is therefore perfect for gaining productivity and solidity in the development of an application, and for keeping it up to date with best practices. Beyond these direct motivations, I can also mention:

The ease of finding tools that integrate with the framework
The pursuit of quality software, integrated with tests and other tools that make the development process mature
Many situations and problems have already been solved (because there are a lot of people working with the technology)

Motivations for using the Angular framework:

Built with TypeScript, one of the most popular languages at the moment
MVC architecture
Inversion of control and dependency injection
Modularization (with a lazy-load option)
Good libraries for integration
A large and engaged community, with 1,835 contributors in the official repository
Officially supported and maintained by the Google team

The solidity of Angular

Currently, we can clearly state that the framework is stable, receiving frequent updates thanks to its open-source nature. It is maintained by the Google team, which always seeks to make the roadmap of what is to come as clear as possible, which is very good. Furthermore, the Angular community is very active and engaged; it is difficult to hit a problem that has not already been solved. One concern of every developer is drastic changes to a tool. Anyone who lived through the change from V1 to V2 of Angular knows this pain: the change was practically total. However, the framework correctly based itself on TypeScript, which brought robustness and another reason for its adoption: with TypeScript, we have possibilities that JavaScript alone cannot offer, such as strong typing, IDE integration that makes developers' lives easier, error detection at development time, and much more. Currently, the framework is on version 17 and has been gaining more and more maturity and solidity, with the addition of innovative features such as the recently launched defer block.
Easy upgrades

The framework provides guidance for every upgrade through the website https://update.angular.io; this resource helps a lot when planning the update of your project.

Complete CLI

When you install the Angular package, you get a CLI ready to launch new projects, generate components, run tests, build the final package and keep your application up to date. To create your first project, simply open your terminal and run the ng new command followed by the project name.

Solid interface designs

If you need a design system for your application that provides ready-to-use components such as alerts, modal windows, snackbar notices, tables and cards, one of the most popular options is Angular Material. A good point in its favor is that it is maintained by Google, so whenever the framework advances a version, Material usually follows the update. In addition to Material, there are other options in the community, such as PrimeNG, which brings a very interesting (and large) set of components.

Nx library support

Angular has full support for the Nx project, which makes it possible to scale your project in a very consistent way, mainly by guaranteeing caching and advanced possibilities for maintaining and scaling your application locally or in your CI environment. Here are some specific examples of how Nx can be used to improve an Angular project: you can create an Angular library that can be reused across multiple projects; you can create a monorepo that contains all your Angular projects, which makes cross-team collaboration easier; and you can automate common development tasks like running tests and deploying your projects.

Tests (unit and E2E)

In addition to Karma and Protractor, which were born with the framework, you are now free to use popular projects like Jest, Vitest and Cypress.

State with Redux

One of the libraries most used by the community is the NgRx Store, which provides reactive state management for Angular applications, inspired by Redux.

Brazilian GDEs

In Brazil we currently have Angular GDEs, which is important for our country and also for generating Angular content in Portuguese, bringing always-updated news and insights to our community straight from the Google team: Loiane Groner, William Grasel and Alvaro Camillo Neto.

Large companies using and supporting it

Perhaps the most notable is Google, the official maintainer of the framework. Others include Checklist Fácil and PicPay.

Want to know more?

Interested in getting started with Angular? Visit https://angular.dev/, the latest documentation for the framework, which includes tutorials, a playground and good, well-explained documentation. Happy coding!

Architectural Model: how to choose the ideal one for your project
Tech Writers January 17, 2024

What is an architectural model and why is it important?

Basically, an architectural model is the abstract structure on which your application will be implemented. "The software architecture of a program or computer system is the structure or structures of the system that encompasses the software components, the externally visible properties of those components, and the relationships between them." (Bass, Clements & Kazman, Software Architecture in Practice)

To define the model that will best suit your project, we need to know the company's short-, medium- and long-term strategies, the software's non-functional and architectural requirements, the user growth curve over time and the volume of requests. Beyond the points discussed throughout this article, there are other factors to take into account when deciding which architectural model to apply, for example: security concerns; data storage; lock-ins; total volume of users; volume of simultaneous users; TPS (transactions per second); availability plan/SLA; legal requirements; availability on one or more types of platform; integrations. The survey of the architecture, the RAs (architectural requirements), VAs (architectural variables), RFs (functional requirements) and RNFs (non-functional requirements), and the criteria that define each of these items, directly influence the choice of the correct model.

The choice of architectural model can impact the entire life cycle of the application, so this is a subject we must treat with great attention. The use of MVPs (especially those that do not go into production) can greatly help with this task. They give a unique opportunity to make mistakes, adjust, make mistakes again, prove concepts, adjust and make mistakes as many times as necessary, so that in the end the software has its architecture in the most correct version, thus bringing the true gains of this choice.

How architectural models are divided

It is worth making clear that, like many definitions in the software world, what architectural models are and which ones exist can vary. In this article, I have divided them into four large groups: monolithic, semi-monolithic (or modular monolith), distributed monolith (or microlith) and microcomponentized.

Monolithic

A model in which all components form a single application or executable, integrated into a single source code. Everything is developed, deployed and scaled as a single unit.

Figure 1 – Example of a Monolithic Model.

Pros

Simplicity: as the application is treated as a single, cohesive unit, it becomes simpler, since all parts are contained in a single source code.
Greater adherence to design patterns: given that we have a single source code, another facilitating factor is that the classic design patterns (Design Patterns, 01/2000) were written in times of monolith dominance, making their application even more natural.
Greater performance: due to the low latency in communication, monoliths tend to perform well, even using older technologies.
Lower resource consumption: low complexity, simplicity and lower communication overhead between layers favor lower resource consumption.
Easier troubleshooting: creating development and debug environments is easier in monoliths, as the components share the same processes. Another factor we can take into account is that monoliths have fewer external failure points, simplifying the search for errors.
Cons

Limited team size: breakages related to continuous integration and merge conflicts happen more regularly in monoliths, making parallel work difficult for large teams.
Scalability: scalability may be limited in certain respects. Even though vertical scaling is easy, horizontal scaling can often become a problem that affects the growth of the application.
Deployment windows: normally, deploying a monolith means swapping executables, which requires a maintenance window with no users accessing the application; this does not happen with other architectural models, which can use deployment techniques such as blue-green or work with images or pods.
Single technology: low technological diversity can become an impediment to the growth of the application, for example by serving only one type of operating system, or by not fully meeting new features requested by customers because the stack lacks the capabilities to solve complex problems.
Greater cost of compilation and execution: large monoliths generally take a long time to compile and run locally, consuming more development time.

When to use

Low scalability and availability: if the application has a limited scale, where, for example, the number of users is low or high availability is not mandatory, the monolithic model is a good solution.
Desktop applications: the monolithic model is highly recommended for desktop applications.
Low-seniority teams: monolithic models, due to their simplicity and the locality of their components, enable low-seniority teams to work with better performance.
Limited resources: for a limited infrastructure with scarce resources.

Semi-monolithic (or Modular Monolith)

A model in which applications are composed of parts of monolithic structures. The combination tries to balance the simplicity of the monolithic model and the flexibility of the microcomponentized model. Currently, this architectural model is often confused with microservices.

Figure 2 – Example of a Semi-monolithic Model.

Pros

It brings benefits of both the monolithic and microcomponentized models: it is possible to keep parts as monolithic structures and microcomponentize only the components that really need it.
Technological diversity: the possibility of using different technological approaches.
Diversified infrastructure: this model can be built to use both on-premise and cloud infrastructure, favoring migration between the two.
Supports larger teams: the segmentation of components allows several teams to work in parallel, each within its own scope.
Technical specialties: thanks to segmentation, the team's hard skills are put to better use, such as frontend, UX, backend, QA, architects, etc.

Cons

Standardization: due to the large number of components that can appear in a semi-monolithic model, standardization (or the lack of it) can become a major problem.
Complexity: the complexity inherent to this type of model also tends to increase with new features. Capabilities such as messaging, caching, integrations, transaction control and testing, among others, can add even more complexity to the model.
Budget: in models that support the use of different technologies with large teams, more specialist professionals with a higher level of seniority are needed, often resulting in higher personnel expenses.
Complex troubleshooting: the complexity of the model and the diversity of technologies make troubleshooting the application increasingly difficult.
This is due to the large number of failure points (including those external to the application) and the communication between them.

When to use

Accepted in various scenarios: it is a flexible model that can meet various scenarios, though not always in the best way.
Little definition: in projects that have uncertainties, or that do not yet have all their requirements fully defined, this model is the most suitable.
Medium and large teams: as mentioned, the division of components into several groups facilitates parallel work in medium and large teams. Typically, each group has its own code repository, which makes parallel work more agile.
Diverse seniority: this model benefits from teams with this profile, as semi-monolithic software presents varied challenges, both in the frontend and backend layers and in infrastructure matters (IaC, Infrastructure as Code).
Infrastructure: this model is more applicable to a cloud-based or hybrid infrastructure. It allows, for example, gradual adoption between on-premise and cloud, facilitating adaptation and minimizing operational impacts.

Distributed Monolith

This is a "modern" model that has also been implemented and confused with the microcomponentized/microservices model. "You shouldn't start a new project with microservices, even if you're sure your application will be big enough to make it worthwhile." (Fowler, Martin, 2015) In summary, in this architectural model the software is designed on the basis of the monolithic model but implemented following the microcomponentized model. Currently, many consider it an antipattern.

Figure 3 – Example of a Distributed Monolith Model.

It would not be worth listing its pros (I do not know if there are any), but it is still worth mentioning the characteristics that count against it: this architectural model brings together the negative points of the two styles with which it is confused. In it, services are highly coupled and also carry various types of complexity: operational, testability, deployment, communication and infrastructure. The high coupling, especially between backend services, creates serious difficulties in deployment, not to mention the significant increase in the software's points of failure.

Microcomponentized

A software model in which all components are segmented into small, completely decoupled parts. Among microcomponents, we can mention: microfrontends, microdatabases, microvirtualizations, microservices, microbatches, BFFs and APIs.

Figure 4 – Example of a Microcomponentized Model.

"A microservice is a service-oriented application component that is tightly scoped, strongly encapsulated, loosely coupled, independently deployable, and independently scalable." (Gartner, n.d.) Opinions converge on saying that every microservice that worked was first a monolith that became too big to be maintained and reached the common point of having to be split.

Pros

Scalability: scalability in this model becomes quite flexible. Depending on the need, components are scaled individually.
Agile development: teams can work independently on each component, facilitating continuous deployment and accelerating the development cycle.
Resilience: if a component fails, it does not necessarily affect the entire application, which improves the overall resilience of the system. It is important to note that there are approaches for handling single points of failure to avoid this type of problem.
Diversified technology: each component can be developed using different technologies, allowing the best tool to be chosen for each specific task. It also takes advantage of the existing skills of each team.
Ease of maintenance: changes to one component do not automatically affect the others, facilitating maintenance and continuous updating.
Decoupling: components are independent of each other, which means that changes to one service do not automatically affect others, facilitating maintenance.

Cons

Cost: the high cost of everything this model involves (input, output, requests, storage, tooling, security, availability, among others).
Size: microcomponentized software tends to be larger in essence; not only the size of the application, but the entire ecosystem around it, from commit to the production environment.
Operational complexity: complexity grows exponentially in this model. Designing good architectural components so that this complexity is managed is very important, as is choosing and managing logging, APM and continuous monitoring tools well. Managing many microservices can be complex, and additional effort is required to monitor, orchestrate and keep services running.
Latency: communication between microservices can become complex, especially in distributed systems, requiring appropriate communication and API management strategies.
Network overhead: network traffic between microservices can increase, especially compared to monolithic architectures, which can affect performance.
Consistency across transactions: ensuring consistency in operations involving multiple microservices can be challenging, especially when it comes to distributed transactions.
Testability: testing interactions between microservices can be more complex than testing a monolithic application, requiring efficient testing strategies.
Infrastructure: you may need to invest in a robust infrastructure to support the execution of multiple microservices, including container orchestration tools and monitoring systems.
Technical dispersion: here we can say that a kind of "reverse" Conway's Law acts, as teams, technologies and tools tend toward dispersion and segregation. Within teams, each person becomes aware of only a small part of a larger whole, and each developer ends up using the framework or tools that suit them best.
Domain-Driven Design: to increase this model's chances of success, teams must have knowledge of DDD.

When to use

Volumetrics: the microservices/microcomponents architecture has proven effective in high-volume systems, that is, those that need to deal with large amounts of transactions, data and users.
Availability: one of the main reasons for adopting this type of architecture is availability. When well constructed, software that adopts microcomponentization does not tend to fail as a whole when small parts have problems; the other components continue to operate while the problematic component recovers.
Scalability: if different parts of your application have different scalability requirements, microservices can be useful. You can scale only the services that need the most resources, rather than scaling the entire application.
Team size: small teams can struggle with this model; configurations, boilerplate, environments, tests, integrations, and input and output processes all add overhead that larger teams absorb more easily.
"Resilience > Performance": in cases of uncertainty about, for example, the volume of requests and how far it can go, such as large e-commerce operations in periods of high traffic (Black Friday), where the software needs to be more resilient and perform better at the median.

Comparative checklist

Figure 5 – Checklist comparison between the models.

Conclusion

In summary, the choice of architectural model is crucial to the success of the project, requiring careful analysis of needs and goals. Each architectural model has its advantages and disadvantages, and we must guide the decision by aligning it with the specific requirements of the project. By considering company strategies, requirements and architectural surveys, it is possible to make a decision that will positively impact the application's life cycle. The work (and support) of the architecture team is extremely important. It is also very important that management and related areas provide support by allowing time to collect this entire range of information. Still in doubt? At first, start with the semi-monolith/modular monolith. Likewise, pay close attention to database modeling.

References

Gartner. (n.d.). Microservice. Retrieved from https://www.gartner.com/en/information-technology/glossary/microservice
Gamma, E., Helm, R., Johnson, R., & Vlissides, J. (1994). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.
Bass, L., Clements, P., & Kazman, R. (2013). Software Architecture in Practice (3rd ed.). Addison-Wesley.
Microservices Architecture (12/2023). Retrieved from https://microservices.io/
Fowler, S. J. (2017). Production-Ready Microservices. Novatec.
ArchExpert Training. (n.d.). Premium content. Retrieved from https://one.archoffice.tech/
Fowler, M. (06/2015). Monolith First. Retrieved from https://martinfowler.com/bliki/MonolithFirst.html
Microservices. Accessed on 01/2024.