
tech writers

This is our blog for technology lovers! Here, Softplayers and other specialists share knowledge that is fundamental to the development of this community.

ESG and Tech: Everything in common!
Tech Writers March 04, 2024


What is ESG

ESG is an acronym that brings together the actions, practices and work fronts related to Environment, Social and Governance. This triad constitutes a set of corporate practices and principles that have become inescapable as a driver of investments, reputation and share of minds and hearts. The positive correlation between ESG performance (when executed strategically, holistically and genuinely) and superior results, both financial and reputational, has already been demonstrated. These papers from Harvard Business Review and NYU's Center for Sustainable Business explain this correlation in detail, and this video from the consultancy KPMG explains more succinctly what ESG is. For anyone who wants a quick course to dive a little deeper, I recommend this one from the consultancy, and Softplan partner, Deep ESG.

Last year, Softplan began to delve deeper into this subject. Mapping the relevance of the company's impacts on its customers (the company's most important stakeholder alongside employees) is one of the many initiatives of an ESG program. In a company like Softplan, where we consider ourselves "experts in translating knowledge and technology into solutions that simplify and impact the lives of thousands of people in Brazil and the world on a daily basis", permeating this knowledge throughout the development area (here meaning product, design, software engineering and architecture) may well be the most effective strategy to live up to that statement. We recently completed this mapping work for the solutions that serve Public Sector clients, following this very strict standard from the Global Reporting Initiative.

Impacts, facts and how ESG and Tech combine

Speaking of Information Security, our "Jedi master" Alexandre Golin listed, back in 2009, the four principles of the SAJ Courts solution at the time: Integrity, Authenticity, Non-repudiation and Non-retroactivity (I dare say there is a fifth, hidden principle: inviolability). Since then, many concepts, practices and requirements have been improved, including here at Softplan. Today, we are very proud to say that these four (or five) principles guide all of the company's solutions. Information Security, combined with Data Privacy, was considered a relevant impact by customers of every solution for the public sector. In doubt about updating a certain application component? Faced with a pile of free DevOps courses and unsure which one to take? Issues related to information security and data privacy will always be a safe haven.

Let's not forget the 'E' of Environment, which also has everything to do with Product, Design, Development and Architecture. Reducing carbon emissions is a relevant impact in practically all solutions. One thing that unites all Softplan solutions is the digitalization of processes and the reduction of paper use. For example, 1Doc's motto is "Let's get it off the ground?". The CO₂ balance of replacing paper with digital processes, counting everything that is not emitted because paper is no longer produced, transported, stored and discarded, against the emissions arising from the energy consumption of data centers and infrastructure, is usually positive by more than 75%. But what if I told you that in 2022 all 92 courts in Brazil printed more than 500 million sheets of paper? This figure refers to the Courts as a whole, not necessarily only to the judicial area. It's a damn shame, right?
But why do they still print so much paper if more than 90% of their case files are digital? Is it just because it's more comfortable? Probably not. Our designers have invested heavily in bringing greater ergonomics and comfort to our screens in order to minimize the need for printing. This subject alone offers a giant roadmap of epics, stories and discoveries, and the result metrics are right in front of us: more and more public bodies have sustainable logistics programs in which paper consumption is measured. For those who like behavioral economics, here's a tip: insert that clever nudge in the footer of each page, something like "Not printing this paper prevented the felling of N trees".

Still within the 'E': DevOps engineers and DBAs who manage data in the cloud, did you know that the AWS console lets you adopt sustainability criteria? You can select more efficient regions (newer, more performant machines and renewable energy sources), implement "design that guarantees high utilization and maximizes energy efficiency of the underlying hardware" and use managed services. For those who don't know it, it's a whole new world and, in most cases, sustainability will generate financial savings and operational efficiency.

Softplan's operations in the public sector allow our solutions to have a direct impact on millions of people. In 2022, the external access portals of our solutions handled an average of 4.3 billion queries and 3.2 million new processes/demands per month! Ensuring quality delivery to customers means greater prosperity, efficiency and well-being for society, which gains access to more efficient and better quality public services.

But how so? Let's think about a Court of Justice. How do we measure the efficiency (oops, another impact, and perhaps the most important one, within the 'S' of Social) of a Court? We know what results are expected from an efficient Court, and we also know how our solution directly and indirectly contributes to these results. This knowledge, disseminated within the teams (to reiterate, our operating model requires knowledge and ownership at all functional levels of the end areas), gives both clients and us a reference for what to prioritize, what the pace of deliveries should be, and how to measure whether deliveries are bringing the expected results.

And what matters for a Court of Justice? It needs to provide a channel for external users to consult and file lawsuits. Improving and expanding society's access to public bodies is also a mapped impact. This channel needs to have broad access (multiple services) and be agile (performant and highly available). Did you notice that these characteristics, which have very objective metrics, already offer the necessary drivers for backlog grooming?

In terms of efficiency, we map a maximum of four indicators per solution. It was difficult (for us and for the clients), but focusing on the most important ones helps us think about the big picture. In the case of the Courts, one indicator measures the variation in the stock of cases from one year to the next, another measures the duration of processes, another the capacity to meet demand (which is dynamic and depends on inputs) and another, Goal 2 of the CNJ, determines that the oldest cases must be judged first. Here is a fantastic guide to encourage teams and clients to design and implement solutions that reduce the stock of cases, prioritizing the oldest ones, within sustainable process durations and always benchmarked against Courts of the same size.
For PMs and POs who love prioritization matrices, here is a full plate for scoring stories and tasks. And this approach works for all of our solutions! Let's think about Sider, which digitizes countless services within infrastructure departments and road and highway agencies. Here, too, we have the impact of citizens' and companies' access to these services. Access portals act as virtual service desks. And how do you measure the level of this service? First, the service needs to be available and broad: you have to be able to do everything there, consult, open demands, request and access information and documents, and pay for things. Then the handling time of the main services can be evaluated. The driver for development is right here: we need to expand access and reduce service time as much as possible. To do this, we also have to be aware of the regulatory framework of the bodies involved. Complying with this framework is another relevant impact indicated by customers. You have to reduce time while respecting legal rites, procedures and deadlines. As Softplan has national reach and expertise, here we can act as a network, as a hub, absorbing, feeding back, transferring and sharing knowledge and good practices among bodies across the country. Pretty sure someone just correlated this with digital transformation. Well, promoting digital transformation was considered a relevant impact in all of our solutions that serve the public sector. Here we extend the reach of ESG to resident, support, communication and relationship teams.

Do you see now how ESG and you (fellow Dev, DBA, PM, PO, UXer and so on) have everything to do with each other? It's not simple, but it's also nothing out of this world. To recap:

Define impacts and their indicators and metrics in a methodologically rigorous way, genuinely engaging customers and internal (and external) experts.
Collect data and evidence and make them available in the form of indicators.
Use data as input, considering context and specificities. Do not underestimate the external environment, as it is complex and dynamic.
Permeate this knowledge throughout the team. Encourage discussion and critical analysis.
Always focus on the big picture. What are the main impacts and indicators that move the needle? Focus on them as a priority.
Ensure that, over time, relevant impacts remain relevant and appropriate indicators remain appropriate.
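Going back to the behavioral-economics nudge suggested earlier, here is a minimal TypeScript sketch of how such a footer message could be computed. The sheets-per-tree conversion factor is purely an assumption for illustration; a real implementation should use a figure your organization has actually sourced.

```typescript
// Minimal sketch of the paper-saving footer nudge. The conversion factor below
// is an assumed, illustrative value, not an official or sourced figure.
const SHEETS_PER_TREE = 10_000;

export function paperNudge(sheetsAvoided: number): string {
  const trees = sheetsAvoided / SHEETS_PER_TREE;
  return `Not printing these pages spared roughly ${trees.toFixed(1)} trees.`;
}

// Example: a body that avoided printing 500,000 sheets in a year
console.log(paperNudge(500_000)); // "Not printing these pages spared roughly 50.0 trees."
```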

Angular: Why you should consider this front-end framework for your company
Tech Writers February 02, 2024


A fear for every team is choosing a tool that will quickly become obsolete. If you've been developing applications for a few years, you've probably already experienced this. Choosing good tools is therefore a task that carries responsibility, as it can guide the project (and the company) to success or to a sea of problems and expenses. In this article, we will look at the uses and benefits of the Angular framework.

Choosing a front-end framework is no different and also involves research and study. Choosing a "stack", as we call it in this world, is vital both for the present and for the future. Some questions will arise in the midst of this choice: Will we find qualified professionals to work with this framework? Will we be able to keep up with its pace of updates? Is there a well-defined plan for the direction the framework is going? Is there an engaged community (and here we also mean large companies supporting it)? All of these questions should be answered before starting any project, as neglecting this step can lead to devastating scenarios for the product, and consequently for the company and its profits.

Motivations for using a framework

Perhaps the most direct answer is that sometimes it's good not to keep reinventing the wheel. Routine problems such as handling routes in a web application, controlling dependencies, or generating bundles optimized for production already have good solutions, so choosing a framework that gives you this set of tools is perfect for gaining productivity and solidity in the development of an application, while keeping it up to date with best practices. Besides these direct motivations, I can also mention:
The ease of finding tools that integrate with the framework
The pursuit of quality software, integrated with tests and other tools that make the development process mature
Many situations and problems have already been solved (because there are a lot of people working with the technology)

Motivations for using the Angular framework:
Built with TypeScript, one of the most popular languages at the moment
MVC architecture
Control and dependency injection
Modularization (with a lazy-load option)
Good libraries for integration
A large and engaged community, with 1835 contributors in the official repository
Officially supported and maintained by the Google team

The solidity of Angular

Currently, we can clearly state that the framework is stable, receiving frequent updates thanks to its open-source nature. It is maintained by the Google team, which always seeks to make the roadmap of what is to come as clear as possible, which is very good. Furthermore, the Angular community is very active and engaged; it's difficult to hit a problem that hasn't already been solved. One concern of every developer is drastic changes to a tool. Anyone who lived through the change from Angular v1 to v2 knows this pain: the change was practically total. However, the framework was then soundly rebuilt on TypeScript, which brought robustness and yet another reason for its adoption. With TypeScript we have possibilities that JavaScript alone cannot offer: strong typing, IDE integration that makes life easier for developers, error detection at development time, and much more. Currently, the framework is at version 17 and has been gaining more and more maturity and solidity, with the addition of innovative features such as the recently launched defer.
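As an illustration of the defer feature mentioned above, here is a minimal sketch, assuming Angular 17+ and standalone components; the heavy chart component and its file path are hypothetical placeholders, not part of any real project.

```typescript
// Minimal sketch of Angular 17's @defer block: the heavy chart component is only
// loaded when it scrolls into the viewport, keeping the initial bundle smaller.
// HeavyChartComponent and its path are hypothetical placeholders.
import { Component } from '@angular/core';
import { HeavyChartComponent } from './heavy-chart.component';

@Component({
  selector: 'app-dashboard',
  standalone: true,
  imports: [HeavyChartComponent],
  template: `
    <h1>Dashboard</h1>
    @defer (on viewport) {
      <app-heavy-chart />
    } @placeholder {
      <p>Loading chart...</p>
    }
  `,
})
export class DashboardComponent {}
```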
Easy upgrades

The framework provides a guide for every upgrade through the website https://update.angular.io; this resource helps a lot when updating your project.

Complete CLI

Angular is a framework, so when you install its package you get a CLI ready to launch new projects, generate components, run tests, generate the final bundle and keep your application up to date. To create your first project, simply open your terminal and run the ng new command followed by your project name (for example, ng new my-first-app).

Solid interface designs

If you need a design for your application that provides ready-to-use components such as alerts, modal windows, snackbar notices, tables and cards, one of the most popular options is Angular Material. A good reason to pair your software with it is that it is also maintained by Google, so whenever the framework advances a version, Material usually follows. In addition to Material, there are other options in the community, such as PrimeNG, which offers a very interesting (and large) set of components.

Nx library support

Angular has full support for the Nx project, which makes it possible to scale your project in a very consistent way, mainly by guaranteeing caching and advanced options for maintaining and scaling your application locally or in your CI environment. Here are some specific examples of how Nx can improve an Angular project:
You can create an Angular library that can be reused across multiple projects.
You can create a monorepo that contains all your Angular projects, which makes cross-team collaboration easier.
You can automate common development tasks like running tests and deploying your projects.

Tests (unit and E2E)

In addition to Karma and Protractor, which were born with the framework, you are now free to use popular projects like Jest, Vitest and Cypress.

State with Redux

One of the libraries most used by the community is NgRx Store, which provides reactive, Redux-inspired state management for Angular applications.

Brazilian GDEs

In Brazil we currently have Angular GDEs (Google Developer Experts), which is important for our country and also for the production of Angular content in Portuguese, bringing always up-to-date news and insights to our community straight from the Google team:
Loiane Groner
William Grasel
Alvaro Camillo Neto

Large companies using and supporting it

Perhaps the most notable is Google, the official maintainer of the framework. The company has several products built with Angular and in recent years has been further supporting the development and evolution of the tool. An important point when choosing a framework is knowing which large companies are using it, because it signals that the tool will have support for updates and evolution, since no one likes rewriting products from scratch. Here are some global companies that use Angular in their products, websites and web services:
Google
Firebase
Microsoft
Mercedes-Benz
Santander
Dell
Siemens
Epic
Blizzard

On the national scene we also have examples of large companies using the framework successfully, among them:
Unimed
Cacau Show
Americanas
Checklist Fácil
PicPay

Want to know more? Interested in starting with Angular?
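To make the NgRx Store mentioned above a little more concrete, here is a minimal counter sketch. It assumes the @ngrx/store package is installed and is only an illustrative fragment, not a full application.

```typescript
// Minimal NgRx Store sketch (assumes @ngrx/store is installed).
// Actions describe what happened; the reducer derives the new state from them.
import { createAction, createReducer, on, props } from '@ngrx/store';

export const increment = createAction('[Counter] Increment');
export const addAmount = createAction('[Counter] Add Amount', props<{ amount: number }>());

export const counterReducer = createReducer(
  0, // initial state
  on(increment, (state) => state + 1),
  on(addAmount, (state, { amount }) => state + amount),
);
```

The reducer would then be registered with the store (for example via StoreModule.forRoot({ counter: counterReducer })), and components would read the state through selectors and dispatch the actions above.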

Architectural Model: how to choose the ideal one for your project
Tech Writers January 17, 2024


What is an architectural model and why is it important?

Basically, an architectural model is the abstract structure on which your application will be implemented. "The software architecture of a program or computer system is the structure or structures of the system that encompasses the software components, the externally visible properties of those components, and the relationships between them." (Bass, Clements & Kazman, Software Architecture in Practice)

To define the model that will best suit your project, we need to know the company's short, medium and long-term strategies, the software's non-functional and architectural requirements, the expected user growth curve over time and the volume of requests. Beyond the points discussed throughout this article, there are still others to take into account when deciding which architectural model to apply, for example:
Security concerns;
Data storage;
Lock-ins;
Total volume of users;
Volume of simultaneous users;
TPS (transactions per second);
Availability plan/SLA;
Legal requirements;
Availability on one or more types of platforms;
Integrations.

The survey of the architecture, the architectural requirements (ARs), architectural variables (AVs), functional requirements (FRs), non-functional requirements (NFRs) and the criteria that define each of these items directly influence the choice of the correct model. The choice of architectural model can impact the entire life cycle of the application, so this is a subject we must treat with great attention. The use of MVPs (especially those that do not go into production) can help a lot with this task. They give a unique opportunity to make mistakes, adjust, make mistakes again, prove concepts, adjust and make mistakes as many times as necessary so that, in the end, the software has its architecture in the most appropriate version, bringing the true gains of this choice.

How the architectural models are divided

It is worth making clear that, like many definitions in the software world, what architectural models are and how they are grouped can vary. In this article I have divided them into four large groups: monolithic, semi-monolithic (or modular monolith), distributed monolith (or microlith) and microcomponentized.

Monolithic

A model in which all components form a single application or executable, integrated into a single source code. Everything is developed, deployed and scaled as a single unit.

Figure 1 – Example of a Monolithic Model.

Pros
Simplicity: as the application is treated as a single, cohesive unit, it becomes simpler because all parts are contained in a single source code.
Greater adherence to design patterns: given that we have a single source code, another factor that helps is that the classic design patterns (Design Patterns, 01/2000) were written in times of monolith dominance, making their application even more natural.
Greater performance: due to low communication latency, monoliths tend to perform well, even when using older technologies.
Lower resource consumption: low complexity, simplicity and lower communication overhead between layers favor lower resource consumption.
Easier troubleshooting: creating development and debug environments is easier in monoliths, as the components share the same processes. Monoliths also have fewer external failure points, simplifying the search for errors.
Cons
Limited team size: breakages related to Continuous Integration and merge conflicts happen more often in monoliths, making parallel work difficult for large teams.
Scalability: scalability may be limited in certain respects. Even with easy vertical scalability, horizontal scalability can often become a problem that limits the growth of the application.
Deployment windows: normally, deploying a monolith means swapping executables, which requires a maintenance window with no users accessing the application; other architectural models avoid this with deployment techniques such as Blue-Green or by working with images or pods.
Single technology: low technological diversity can become an impediment to the growth of the application, for example by serving only one type of operating system, or by not fully meeting new features requested by customers because the stack lacks the capabilities to solve complex problems.
Greater cost of compilation and execution: large monoliths generally take a long time to compile and run locally, consuming more development time.

When to use
Low scalability and availability: if the application has a limited scale where, for example, the number of users is low or high availability is not mandatory, the monolithic model is a good solution.
Desktop applications: the monolithic model is highly recommended for desktop applications.
Low-seniority teams: monolithic models, due to their simplicity and the co-location of components, allow low-seniority teams to work with better performance.
Limited resources: for a limited infrastructure with scarce resources.

Semi-monolithic (or Modular Monolith)

A model in which applications are composed of parts of monolithic structures. The combination tries to balance the simplicity of the monolithic model and the flexibility of the microcomponentized model. Currently, this architectural model is often confused with microservices.

Figure 2 – Example of a Semi-monolithic Model.

Pros
It brings benefits of both the monolithic and microcomponentized models: it is possible to keep parts as monolithic structures and only microcomponentize the pieces that really need it.
Technological diversity: the possibility of using different technological approaches.
Diversified infrastructure: this model can be built to use both on-premise and cloud infrastructure, favoring migration between the two.
Supports larger teams: the segmentation of components allows several teams to work in parallel, each within its own scope.
Technical specialties: thanks to segmentation, better use is made of the team's hard skills, such as frontend, UX, backend, QA, architects, etc.

Cons
Standardization: due to the large number of components that can appear in a semi-monolithic model, standardization (or the lack of it) can become a major problem.
Complexity: the complexity inherent to this type of model also tends to increase with new features. New concerns such as messaging, caching, integrations, transaction control and testing, among others, can add even more complexity to the model.
Budget: in models that support different technologies with large teams, more specialist professionals with a higher level of seniority are needed, often resulting in greater spending on personnel.
Complex troubleshooting: the complexity of the model and the diversity of technologies make troubleshooting the application increasingly difficult, due to the large number of failure points (including ones external to the application) and the communication between them.
When to use
Accepted in various scenarios: it is a flexible model that can meet various scenarios, although not always in the best way.
Little definition: in projects that have uncertainties, or that do not yet have all of their requirements defined, this model is the most suitable.
Medium and large teams: as mentioned, the division of components into several groups facilitates parallel work in medium and large teams. Typically, groups have their own code repositories, which makes parallel work more agile.
Diverse seniority: this model benefits from teams with mixed seniority, as semi-monolithic software presents varied challenges in the frontend and backend layers and in infrastructure matters (IaC, Infrastructure as Code).
Infrastructure: for a cloud-based or hybrid infrastructure, this model is more applicable. It allows, for example, gradual adoption between on-premise and cloud, facilitating adaptation and minimizing operational impacts.

Distributed Monolith

This is a "modern" model that has often been implemented as, and confused with, the microcomponentized/microservices model. "You shouldn't start a new project with microservices, even if you're sure your application will be big enough to make it worthwhile." (Fowler, Martin. 2015) In summary, in this architectural model the software is designed on the basis of the monolithic model but implemented following the microcomponentized model. Many currently consider it an antipattern.

Figure 3 – Example of a Distributed Monolith Model.

It is not worth listing pros (I am not sure there are any), but it is worth mentioning the points against it: this architectural model brings together the negative points of the two styles with which it is confused. In it, services are highly coupled and carry various kinds of complexity: operational, testability, deployment, communication and infrastructure. The high coupling, especially between backend services, brings serious difficulties in deployment, not to mention the significant increase in the software's points of failure.

Microcomponentized

A software model in which all components are segmented into small, completely decoupled parts. Among microcomponents, we can mention:
Microfrontends
Microdatabases
Microvirtualizations
Microservices
Microbatches
BFFs
APIs

Figure 4 – Example of a Microcomponentized Model.

"A microservice is a service-oriented application component that is tightly scoped, strongly encapsulated, loosely coupled, independently deployable, and independently scalable" (Gartner, n.d.). Opinions converge on the observation that every microservice that worked was first a monolith that became too big to maintain and reached the common point of having to be split.

Pros
Scalability: scalability in this model becomes quite flexible. Depending on the need, components are scaled individually.
Agile development: teams can work independently on each component, facilitating continuous deployment and accelerating the development cycle.
Resilience: if a component fails, it does not necessarily affect the entire application, which improves the overall resilience of the system. It is important, though, to design against single points of failure to avoid exactly this type of problem.
Diversified technology: each component can be developed using different technologies, allowing the best tool to be chosen for each specific task. It also takes advantage of the existing skills of each team.
Ease of maintenance: changes to one component do not automatically affect others, facilitating maintenance and continuous updating.
Decoupling: components are independent of each other, which means changes to one service do not automatically affect others, again easing maintenance.

Cons
Cost: high cost across all the elements of this model (input, output, requests, storage, tools, security, availability, among others).
Size: microcomponentized software tends to be larger in essence; not only the application itself, but the entire ecosystem around it, from commit to the production environment.
Operational complexity: complexity grows exponentially in this model. Designing good architectural components so that this complexity stays manageable is very important, and so is choosing and managing logging, APM and continuous monitoring tools well. Managing many microservices can be complex, and additional effort is required to monitor, orchestrate and keep services running.
Latency: communication between microservices can become complex, especially in distributed systems, requiring appropriate communication and API management strategies.
Network overhead: network traffic between microservices can increase, especially compared to monolithic architectures, which can affect performance.
Consistency between transactions: ensuring consistency in operations involving multiple microservices can be challenging, especially when it comes to distributed transactions.
Testability: testing interactions between microservices can be more complex than testing a monolithic application, requiring efficient testing strategies.
Infrastructure: you may need to invest in robust infrastructure to support the execution of multiple microservices, including container orchestration tools and monitoring systems.
Technical dispersion: here we can say that a kind of "reverse" Conway's Law is at work, as teams, technologies and tools tend towards dispersion and segregation. Within teams, each person becomes aware of only a small part of a larger whole; as for technologies and tools, each developer uses the framework or tools that suit them best.
Domain-Driven Design: to increase the chances of success with this model, teams must have knowledge of DDD.

When to use
Volumetrics: the microservices/microcomponents architecture has proven effective in high-volume systems, that is, those that need to deal with large amounts of transactions, data and users.
Availability: one of the main reasons for adopting this type of architecture is availability. When well constructed, software that adopts microcomponentization does not tend to fail as a whole when small parts have problems; the other components keep operating while the problematic component recovers.
Scalability: if different parts of your application have different scalability requirements, microservices can be useful. You can scale only the services that need the most resources, rather than scaling the entire application.
Team size: small teams can struggle here, because configurations, boilerplate, environments, tests, integrations and input/output processes all add overhead.
"Resilience > Performance": for cases of uncertainty, for example about the volume of requests and how high it can go, such as large e-commerce sites in periods of heavy access (Black Friday), where the software needs to be more resilient and sustain good median performance.

Comparative checklist

Figure 5 – Checklist comparison between the models.

Conclusion

In summary, the choice of architectural model is crucial to the success of the project, requiring careful analysis of needs and goals. Each architectural model has its advantages and disadvantages, and we must guide the decision by aligning it with the specific requirements of the project. By considering company strategies, requirements and architectural surveys, it is possible to make a decision that will positively impact the application's life cycle. The work (and support) of the architecture team is extremely important. It is also very important that management and related areas provide support by granting the time needed to collect this entire range of information. Still in doubt? Then start with the semi-monolith/modular monolith. And pay close attention to database modeling.

References
Gartner. (n.d.). Microservice. Retrieved from https://www.gartner.com/en/information-technology/glossary/microservice
Gamma, E., Helm, R., Johnson, R., & Vlissides, J. (1994). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.
Bass, L., Clements, P., & Kazman, R. (2013). Software Architecture in Practice (3rd ed.). Addison-Wesley.
Microservices Architecture (12/2023). Retrieved from https://microservices.io/. Accessed on 01/2024.
Fowler, S. J. (2017). Production-Ready Microservices. Novatec.
ArchExpert Training. (n.d.). Premium content. Retrieved from https://one.archoffice.tech/
Fowler, M. (2015). Monolith First. Retrieved from https://martinfowler.com/bliki/MonolithFirst.html

GraphQL in dotNET applications
Tech Writers January 15, 2024


In this article I will talk about GraphQL with a focus on .NET applications. I'll show how the inherent problems of REST motivated the creation of GraphQL, then present the basic concepts of the language's specification. Next, I will introduce the Hot Chocolate library, one of the many libraries that implement the GraphQL specification, and finally show a small example of using this library in a .NET application.

REST

Before we talk about GraphQL it is necessary to talk about REST. The term was coined by Roy Thomas Fielding (2000) in his doctoral dissertation. In this work, Fielding presents REST as an architectural style for web applications defined by a set of constraints:
Client-server: the user interface must be separated from the system components that process and store data.
Stateless: the client does not need to be aware of the server's state, nor does the server need to keep track of the client's state; each request carries the information needed to process it.
Cache: when possible, the server application must indicate to the client application that data can be cached.
Layered system: the application must be built by stacking layers that add functionality on top of one another.
Uniform interface: the application's resources must be made available in a uniform manner, so that, having learned how to access one resource, one automatically knows how to access the others. According to Fielding's work, this is one of the central characteristics that distinguish REST from other architectural styles. However, the author himself states that it degrades the efficiency of the application, as resources are not made available in a way that meets the specific needs of any given application.

What REST looks like in practice

Figure 1 shows part of Microsoft's OneDrive API, where you can see the uniformity of access to resources: to obtain data, you simply send a GET request to an endpoint that starts with the term drive, followed by the name of the resource and its ID. The same logic applies to creating resources (POST), modifying them (PUT) and removing them (DELETE). Looking at the Google Drive documentation, we can see the typical response of a REST API; that documentation shows the large volume of data that a single REST request can return. Despite being large, a client application may still need to make extra requests to obtain, for example, more data about the owner of a file. Considering the constraints laid out by Fielding and the examples shown, it is easy to see two problems inherent to REST: the first is the traffic of data the consumer does not need, and the second is the possible need to make several requests to obtain the data required to build a web page.

Figure 1 – Part of Microsoft's OneDrive API.

Understanding GraphQL

GraphQL emerged in 2012 at Facebook as a solution to the problems found in the REST style. In 2015, the language became open source and in 2018 the GraphQL Foundation was created, which became responsible for the specification of the technology. It is important to highlight that GraphQL is not a library or tool. Like SQL, GraphQL is a language for querying and manipulating data: while SQL is used in the database, GraphQL is used in APIs. Table 1 shows an SQL expression to retrieve an order number and customer name from a database.
Similarly, Table 2 shows a GraphQL expression to obtain the same data from an API that supports GraphQL. In these examples, we can see two major advantages of GraphQL over REST: first, GraphQL lets the consumer ask only for the data needed to build their web page; second, the consumer can fetch order and customer data in a single call.

Table 1: Example of a select in a relational database.
Table 2: Example of a GraphQL expression.

It is worth mentioning two more characteristics of a GraphQL API. The first is the existence of a single endpoint: unlike REST, where an endpoint is created for each resource, in a GraphQL API all queries and mutations are sent to the same endpoint. The second is the fact that a GraphQL API only supports the POST verb. This is yet another difference from REST, where different HTTP verbs are used depending on the intention of the request. So while in a REST API we use the GET, POST, PUT and DELETE verbs, in a GraphQL API we use the POST verb to read, create, change and remove data.

Schema Definition Language

Let's now talk a little about SDL (Schema Definition Language). When using a relational database, it is first necessary to define the database schema, that is, the tables, columns and relationships. Something similar happens with GraphQL: the API needs to define a schema so that consumers can query the data, and SDL is used to create this schema. The official GraphQL website has a section dedicated to SDL, with a complete description of the language for creating GraphQL schemas. In this text, I will present only the basic syntax.

Figure 2 shows part of a GraphQL schema created using Apollo. The schema begins with the definition of two fundamental types: Query and Mutation. In the first we define all the queries our API will have; in our example, consumers will be able to search for customers, products and orders. The Mutation type defines which data manipulation operations will be available to the consumer. In the example presented, the consumer will be able to create, change and remove customers and products; for orders, they can create an order, add an item, cancel it and close it. In addition to the Query and Mutation types, you can see the Customer and Product types, both with ID, String and Float properties. These three types, together with Int and Boolean, are the built-in scalar types. The schema also defines an enumeration called OrderStatus. Figure 3 shows the definition of Input types, which are used to provide input data for queries and mutations.

It is important to point out that the way the schema is created varies depending on the library you choose. When using the Apollo library for JavaScript, the schema can be defined through a string passed as a parameter to the gql function or through a file (generally called schema.graphql). When using libraries such as Hot Chocolate for .NET, however, the schema is defined by creating classes and configuring services in the application. In short, the way a GraphQL schema is created can vary greatly depending on the language and library chosen.

Figure 2 – Part of a GraphQL schema defined with Apollo (Query, Mutation, Customer and Product types).
Figure 3 – Input type definitions.
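To give a feel for the kind of schema described above, here is an abbreviated sketch written the Apollo way, with the SDL passed as a string to the gql tag. It assumes the graphql-tag package, and the field names are illustrative assumptions, not the schema from the original figures.

```typescript
// Abbreviated, illustrative sketch of a GraphQL schema defined via Apollo's gql tag
// (assumes the graphql-tag package). Field names are assumptions for illustration.
import gql from 'graphql-tag';

export const typeDefs = gql`
  type Query {
    customers: [Customer!]!
    customer(id: ID!): Customer
    products: [Product!]!
  }

  type Mutation {
    createCustomer(input: CustomerInput!): Customer!
    removeProduct(id: ID!): Boolean!
  }

  type Customer {
    id: ID!
    name: String!
    creditLimit: Float
  }

  type Product {
    id: ID!
    name: String!
    price: Float!
  }

  input CustomerInput {
    name: String!
    creditLimit: Float
  }

  enum OrderStatus {
    OPEN
    CLOSED
    CANCELLED
  }
`;
```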
Basic elements of the GraphQL language

As mentioned earlier, GraphQL is a language and therefore has a syntax. You can find the complete syntax guide on the official GraphQL website; here I will describe its basic elements. Data is fetched through queries, which must begin with the keyword query followed by the name of the query. If it has parameters, you open parentheses and, inside them, place the name of each parameter followed by its value, with a colon (:) separating the parameter name from its value. Once the list of parameters is finished, the parentheses are closed. Then you open braces ({) and place the names of the fields you want inside them; with the list of fields finished, you close the braces (}). Table 3 shows a simple example of a query.

Table 3: Example of a query.

There are scenarios where query parameters can be complex. When a parameter is complex, that is, it is an object with one or more fields, braces must be opened immediately after the colon. Within these braces, you place the name of each field of the object and its respective value, separated by a colon (see Table 4). There are also scenarios where query fields can be complex. In these cases, you open braces right after the field name and, inside them, place the names of the object's fields (see Table 5).

Table 4: Example of a query with a complex parameter.
Table 5: Example of a query with a complex field.

The rules described so far also apply to mutations, which however must start with the keyword mutation instead of query. There are other elements in the GraphQL syntax, but the ones described so far are sufficient to write most queries and mutations.

Being a language, GraphQL needs to be implemented by some application or library. For our API to support queries and mutations, we generally need a library; of course, we could implement the language specification on our own, but that would be very unproductive. The "Code" section of the GraphQL.org website lists libraries that implement GraphQL for the most varied languages. For the .NET world, for example, there are the libraries "GraphQL for .NET", "Hot Chocolate" and others.

When talking about GraphQL implementations, it is necessary to talk about the concept of resolvers. A resolver is a function that is triggered by the library that implements GraphQL and is responsible for effectively fetching the data requested by the query. The same happens with mutations: when the library receives a request to execute a mutation, it identifies the resolver that will execute the changes in the database (insert, update or delete). Note, then, that in most libraries the actual reads and writes are carried out by code you write yourself; the library that implements GraphQL is responsible for interpreting the query or mutation sent by the caller and finding the appropriate function to resolve it. To see an example of a simple API that uses Hot Chocolate, visit my GitHub.

To sum it all up, GraphQL is a language created by Facebook with the aim of overcoming the problems inherent to REST. The language provides a simple syntax for obtaining data from an API as well as changing it, and it is implemented by a wide variety of libraries for the most diverse languages, allowing developers to create a GraphQL API using their favorite language.
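To tie the points above together (single endpoint, POST only, query syntax), here is a hedged TypeScript sketch of what calling a GraphQL API could look like from a client. The URL and field names are assumptions for illustration only.

```typescript
// Illustrative sketch of calling a GraphQL API: one endpoint, always POST,
// with the query text and variables in the request body.
// The URL and field names below are assumptions, not a real API.
async function fetchOrder(id: string) {
  const response = await fetch('https://example.com/graphql', {
    method: 'POST', // GraphQL uses POST even for reads
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: `
        query orderWithCustomer($id: ID!) {
          order(id: $id) {
            number
            customer {
              name
            }
          }
        }
      `,
      variables: { id },
    }),
  });

  const { data, errors } = await response.json();
  if (errors) {
    throw new Error(errors[0].message);
  }
  return data.order;
}
```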
References
“GraphQL.” Wikipedia, 9 June 2022, en.wikipedia.org/wiki/GraphQL. Accessed on 6 Nov. 2023.
The GraphQL Foundation. “GraphQL: A Query Language for APIs.” Graphql.org, 2012, graphql.org/.
Fielding, Roy Thomas. “Fielding Dissertation: CHAPTER 5: Representational State Transfer (REST).” Ics.uci.edu, 2000, ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm. Accessed on 6 Nov. 2023.

Multi-brand design systems: what they are and main benefits
Tech Writers December 01, 2023


What are multi-brand design systems

Multi-brand design systems are systems whose attributes make them flexible enough to be used in different contexts, visual patterns and interface designs. They are developed for cases in which a single library aims to serve products from different brands. Generally, this type of design system is also independent of frameworks, platforms or technologies; these are called tech-agnostic design systems. Currently, the most popular agnostic design system is Lightning, developed by Salesforce, which also created the concept.

Benefits

In addition to being a single source of truth, a multi-brand design system spreads the cost of operation, making the work truly collaborative between teams. According to Volkswagen group designers, the implementation of GroupUI brought increased agility, efficiency and cost reduction, some of the main benefits of multi-brand design systems.

Scalability

Built on the concept of design tokens, these systems enable the same library to be replicated in different products, regardless of the framework in which they are developed, while allowing each of these products to use its own visual standards. Another very relevant point is the sharing of characteristics such as good practices, responsiveness, accessibility, performance, UX and ergonomics.

Use in different technologies

Currently, it is common to find in design systems, even those that serve a single brand, different libraries for web, iOS and Android products. This is due to the different specifications for desktop and mobile browsers, as well as for devices with native operating systems, such as Apple's and Google's. By working independently of these technologies, it is possible to instantiate the same design system in different component libraries to meet these particularities.

Gain in efficiency

According to data released by UX and design systems leaders at the Volkswagen Group, in the presentation Multibrand Design System within the Volkswagen group & its brands, there is a great increase in agility, productivity and efficiency when working with the multi-brand concept.

Operational efficiency with the use of multi-brand design systems. (Source: YouTube)

Comparing the effort required for a product without a design system, one with its own design system, and one that adopts the multi-brand methodology, it is possible to see an incremental and considerable reduction in UI (interface design) and development effort. This enables a way of working that is more oriented towards user experience and discovery, by freeing up resources for these activities, which until then were being consumed in the design and implementation of interfaces.

Standardization

A detailed and well-specified design system becomes a single source of truth. When shared within the organization, in addition to making the teams' work much easier, it enables consistent standardization, avoiding the same discussions, discoveries and definitions over and over; they become ready to be reused as a result of the design system's continuous development.

Easy customization

According to experts, the main characteristic of a multi-brand design system is flexibility. In this context, making it customizable means allowing each product to apply its own visual design decisions. To make this possible, design token libraries are created. They can be easily duplicated and customized, generating distinct visual patterns for each brand and product.
Design tokens can be understood as variables that carry style attributes, such as a brand color: applied as a token, changing the value carried by the variable reflects the change everywhere that color appears in the interface. In the example above, we have brand color specifications for three different design systems; in the left column is the token, which remains the same across all products, while the value carried by the variable is different in each case. These definitions apply to any other visual attribute, such as typography, spacing, borders, shadows and even animations.

Structure of multi-brand design systems

According to Brad Frost, one of the most influential design systems consultants today and author of the book Atomic Design, it is recommended that multi-brand design systems have three layers:

Three-level structure of a design system. (Source: Brad Frost)

Tech-agnostic (1st layer)

The agnostic level of a design system is the basis for the others; therefore, it only includes HTML, CSS and JavaScript code, with the aim of rendering components in the browser. This layer is extremely important in the long term, as it allows the future reuse of the design system. For example, in the current scenario it can be said that the most popular technology is React; however, this was not always the case, and it is not known which technology will be the next to stand out. For this reason, it is important to have a base layer that can be applied to new technologies without having to start a new design system from scratch. In this first layer, designers and developers build the design system components in a workshop environment, documented in tools such as Figma and Zeroheight. The result of this work is components rendered in the browser, bearing in mind that the framework adopted today may not be the same as the one adopted in the future.

Tech-specific (2nd layer)

The technology-specific level is where there is already a dependency on some technology and/or platform and, in addition, an opportunity to generate a design system layer for all products that use the same technology. A good example of this type of design system is Bayon DS, which serves the SAJ products; it can also be used to develop any other product built with React.

Prod-specific (3rd layer)

The third layer is where everything becomes very specific and all the effort is directed at a particular product. At this level, documentation can be created for very particular standards that only apply to that context. Following the Atomic Design concept, this layer creates components with greater complexity and less flexibility, such as pages and templates, in order to establish product patterns. In the third layer, individual applications consume the specific version of the selected technology via package managers such as npm and yarn.

How we are putting these new concepts into practice

A few months ago, after the announcement of the Inner Source initiative, we began studying a way to transform Bayon so that it could "receive" this new concept. Personally, I began in-depth research into the topics discussed in this article. Furthermore, my managers gave me the opportunity to take part in an advanced bootcamp on design systems, which brought me a lot of learning.
In parallel with the research, we brought together professionals with knowledge of Bayon, represented by colleagues from the architecture and product design teams of the JUS verticals, to discuss how to convert our design system to the most recent standards. Together, we worked out the most appropriate way to create and apply a design token library, allowing us to remove our current framework, Material UI, and put Softplan's new agnostic design system in its place.

Web components and Stencil

Through recurring meetings with representatives of the Softplan Group companies, the possibility of developing a library of web components is being discussed. In it, each visual attribute or design decision is applied through design tokens, allowing complete customization and guaranteeing that each component presents the visual characteristics of the corresponding product. Web components are a set of APIs that allow the creation of custom, reusable and encapsulated HTML tags for use in web pages and applications.
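As a rough illustration of how these ideas fit together, here is a minimal sketch of a framework-agnostic web component whose visual attributes come from design tokens exposed as CSS custom properties, so each brand can re-theme it without touching the component code. The tag and token names are assumptions, not Bayon's or Softplan's actual naming.

```typescript
// Minimal, illustrative web component (plain Custom Elements API, no framework).
// Its colors and radius come from design tokens (CSS custom properties) that each
// brand defines at the page or theme level; names here are hypothetical.
class SpButton extends HTMLElement {
  connectedCallback() {
    const root = this.attachShadow({ mode: 'open' });
    root.innerHTML = `
      <style>
        button {
          background: var(--sp-color-brand-primary, #0050b3); /* token with fallback */
          color: var(--sp-color-on-primary, #ffffff);
          border: none;
          border-radius: var(--sp-radius-md, 4px);
          padding: 0.5rem 1rem;
          font-family: var(--sp-font-family-base, sans-serif);
          cursor: pointer;
        }
      </style>
      <button><slot></slot></button>
    `;
  }
}

customElements.define('sp-button', SpButton);
```

A brand theme then becomes just a stylesheet that sets --sp-color-brand-primary and the other tokens to that brand's values.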

What is information security and how to protect your data
Tech Writers October


Information security refers to the set of practices, measures and procedures adopted to protect an organization's or individual's sensitive information and data against threats, unauthorized access, loss, theft, damage or unwanted changes. The objective is to guarantee the confidentiality, integrity and availability of data, as well as to preserve its authenticity and prevent it from falling into the wrong hands. Information security is essential in a highly connected world dependent on computer systems, networks and digital technologies. It covers several areas, including:

Confidentiality: ensuring information is only accessible to authorized people, preventing unauthorized access through authentication, encryption and access control.
Integrity: ensuring that data is accurate, complete and does not undergo unauthorized changes during storage, transmission or processing.
Availability: making sure that information is available and accessible when needed, avoiding interruptions caused by failures, attacks or disasters.
Authenticity: ensuring that information is correctly attributed to its authors and that the origin of the data is verifiable.
Non-repudiation: ensuring that the author of an action cannot deny the authorship or completion of a transaction.
Risk management: identifying, analyzing and mitigating threats and vulnerabilities to protect information from possible security incidents.

Impacts of the lack of information security

The lack of information security can cause a series of significant negative impacts for individuals, organizations and even society as a whole. Some of the main impacts include:

Data theft: hackers and cyber criminals can break into unprotected systems and steal confidential information such as personal data, credit card numbers, banking information and other sensitive data. This type of theft can lead to financial fraud, identity theft and extortion.
Information leakage: leaks of sensitive information, such as trade secrets, intellectual property or confidential government information, can harm companies' competitiveness, national security and people's privacy.
Damage to reputation: when an organization suffers a security breach, its reputation can be seriously compromised. The public perception of carelessness with customer data can undermine the trust of customers and business partners.
Business interruption: cyberattacks, such as ransomware or denial of service (DDoS), can render systems inoperable and disrupt business operations, causing lost productivity, financial harm and customer frustration.
Financial loss: for example, the cost of repairing compromised systems, paying ransomware ransoms or facing litigation related to data breaches.
Regulatory and legal violations: in many countries, laws and regulations require adequate protection of customer data and sensitive information. Lack of security can lead to violations of these laws, resulting in fines, penalties and legal action.
Espionage and cyberwarfare: weak security facilitates cyber espionage and even large-scale cyberattacks between countries, undermining national security and geopolitical stability.
Damage to digital trust: failures undermine overall trust in digital technologies, which can slow the adoption of new technologies and harm the digital economy.
Security measures

To mitigate these impacts, it is essential that individuals and organizations invest in information security measures such as encryption, strong authentication, security awareness training, regular software updates and security audits. It is also essential that governments and industry sectors work together to develop more robust cybersecurity policies and standards. Common elements used to ensure information security include firewalls, intrusion detection and prevention systems (IDS/IPS), antivirus, encryption, regular backups, strong password policies, security awareness training and monitoring for suspicious activity.

It is the duty of each of us to ensure information security, recognizing the importance of a vigilant stance in our daily actions. The responsibility does not fall only on technology experts or specific departments, but on each individual. Distrusting, verifying and validating the information we receive is therefore a fundamental practice to avoid falling into social engineering traps; after all, this form of cyber attack can come from unexpected and seemingly trustworthy sources. Only by becoming our own allies in digital security can we build a solid foundation to protect our privacy, personal data and sensitive information. By strengthening our awareness and adopting preventive measures, we contribute significantly to a safer and more trustworthy online environment for everyone.

It is also important to be careful with artificial intelligence tools such as ChatGPT and Google Bard. Never use a corporate email address to access these tools, as this may pose a significant risk to the security of sensitive information belonging to the company you work for. When using these platforms, always use a personal email address.
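To make one of the measures above a little more concrete, here is a minimal sketch of strong password storage using Node.js's built-in crypto module (scrypt, a memory-hard key derivation function). It is only an illustration of the "strong password policies" idea, not a complete authentication system.

```typescript
// Minimal sketch: storing and verifying passwords with scrypt from Node's crypto
// module. Illustrative only; a real system would also handle rate limiting,
// parameter tuning and secure storage of the resulting hashes.
import { scryptSync, randomBytes, timingSafeEqual } from 'node:crypto';

export function hashPassword(password: string): string {
  const salt = randomBytes(16); // unique salt per password
  const hash = scryptSync(password, salt, 64); // memory-hard key derivation
  return `${salt.toString('hex')}:${hash.toString('hex')}`;
}

export function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(':');
  const hash = scryptSync(password, Buffer.from(saltHex, 'hex'), 64);
  // timing-safe comparison avoids leaking information through response time
  return timingSafeEqual(hash, Buffer.from(hashHex, 'hex'));
}
```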