Transformation beyond digital

Artificial Intelligence (AI) is a field of computer science that focuses on developing systems and machines capable of performing tasks that would normally require human intelligence. 

These systems are designed to simulate human cognitive processes, such as learning, reasoning, problem-solving, and perception, making it possible to automate complex tasks and make decisions based on data.

AI encompasses a wide range of techniques and approaches, including machine learning, artificial neural networks, natural language processing, and computer vision. These technologies allow AI systems to learn from data, recognize patterns, make predictions, and interact more naturally with humans.

Artificial intelligence is already a reality 

Artificial Intelligence has become a dominant force in today's society, impacting many aspects of our daily lives. After all, this technology offers results that often resemble those produced by human beings.

Recent advances in the area of machine learning have even allowed software systems to become capable of creating new and original content in the form of text, images, audio, and video. 

The main consequence of these changes is the significant increase in the use of AI in people's daily lives. 

Energy sustainability and AI

Energy sustainability in the development of Artificial Intelligence (AI) technologies proactively contributes to the sustainable future of the industry and the planet. 

A company aware of the challenges related to reducing environmental impact must implement solid strategies to enable the application of this technology.

Organizations, whenever possible, should opt for solutions based on distributed computing or cloud computing, where resources are shared among several organizations, avoiding the need for excessive investment in local hardware and allowing greater energy efficiency. 

In this context, two factors allow for energy savings during the training and execution of machine learning models: lighter model architectures and optimized deep learning algorithms.
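As a simple illustration of how lighter models save resources, the sketch below (a toy example, not tied to any specific framework or to the techniques named above) compares the memory footprint of the same weight matrix stored at different numeric precisions; smaller footprints generally translate into lower energy use during training and inference.

```python
import numpy as np

# Hypothetical weight matrix for a small dense layer (sizes are illustrative).
weights_fp64 = np.random.randn(1024, 1024)       # 64-bit floats
weights_fp32 = weights_fp64.astype(np.float32)   # 32-bit floats
weights_fp16 = weights_fp64.astype(np.float16)   # 16-bit floats

# Memory footprint in megabytes at each precision.
for name, w in [("float64", weights_fp64),
                ("float32", weights_fp32),
                ("float16", weights_fp16)]:
    print(f"{name}: {w.nbytes / 1024**2:.1f} MB")
```

Halving the precision halves the memory, at the cost of some numerical accuracy, which is one reason lighter representations are attractive from an energy standpoint.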

Ethics of Artificial Intelligence models

Machine learning models can reflect and amplify human prejudice, perpetuating discrimination against certain groups of people. 

Artificial neural networks can “learn” what is right and wrong according to the data on which they are trained, just as children can absorb prejudiced biases on different topics. 

Furthermore, a lack of transparency in some machine learning models can make it difficult to identify biases and prejudices. Algorithmic discrimination can reproduce existing patterns of discrimination and inherit biases present in the training data. 

The increasing concern of companies with this topic is illustrated in the launch announcement of the Claude 3 model from the company Anthropic, in March 2024: “We remain committed to advancing techniques that reduce bias and promote greater neutrality in our models, ensuring that they are not biased toward any specific posture.”

Companies must face these challenges decisively, adopting practices to mitigate bias and prejudice in their machine learning models, such as:

  • Data diversification: include representative data from diverse demographic and cultural groups in training, validation, and testing datasets;
  • Continuous monitoring: carry out periodic analyses to detect possible biases and prejudices in the results produced by artificial intelligence models. 
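The continuous-monitoring practice above can be sketched as a simple per-group accuracy check on an evaluation set. All group names, data, and the alert threshold below are hypothetical placeholders, not part of any real system.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, predicted_label, true_label) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy evaluation set with two demographic groups.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]

scores = accuracy_by_group(records)
gap = max(scores.values()) - min(scores.values())
print(scores)   # per-group accuracy
if gap > 0.1:   # alert threshold chosen purely for illustration
    print(f"Warning: accuracy gap of {gap:.2f} between groups")
```

Run periodically, a check like this turns the vague goal of "detecting bias" into a concrete metric that can trigger review when the disparity between groups grows.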

The hallucination of Generative Artificial Intelligence

The concept of hallucination in machine learning models refers to situations in which algorithms generate inconsistent, inaccurate, or even false results. 

These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model. 

According to a survey carried out in 2023 by McKinsey, 79% of people already use Generative Artificial Intelligence (GenAI) in their daily lives. However, AI hallucinations can be a problem for all GenAI systems.

The following measures can be taken to significantly reduce the chances of hallucination in machine learning models:

  • Use high-quality training data: ensuring that GenAI models are trained on diverse, balanced, and well-structured data can greatly reduce the chances of hallucination;
  • Clearly define the purpose of the model: explaining how the AI model will be used, as well as any limitations on its use, will help reduce hallucinations. Teams must establish the responsibilities and limitations of the chosen AI system. As a result, tasks will be solved more efficiently and with fewer irrelevant or hallucinatory results;
  • Carry out constant human supervision: human validation and review of AI results is a final measure to prevent hallucinations. A human reviewer can identify hallucinations in the AI's output and alert the development team to take the technical actions needed to fix the identified issue.
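One minimal way to support the human supervision described above is to automatically route suspicious outputs to a review queue. The sketch below flags outputs whose cited sources fall outside a trusted list; the source names, outputs, and data layout are all invented for illustration.

```python
# Outputs that cite nothing, or cite sources outside the trusted list,
# are sent to a human reviewer before publication.
TRUSTED_SOURCES = {"internal-kb", "product-docs"}

def needs_human_review(output):
    """output: dict with 'text' and 'sources' keys."""
    cited = set(output.get("sources", []))
    # No sources at all, or any untrusted source, goes to review.
    return not cited or not cited <= TRUSTED_SOURCES

outputs = [
    {"text": "Reset via the settings page.", "sources": ["product-docs"]},
    {"text": "Feature X ships next week.",   "sources": ["unknown-blog"]},
    {"text": "Our SLA is 99.9%.",            "sources": []},
]

review_queue = [o for o in outputs if needs_human_review(o)]
print(f"{len(review_queue)} of {len(outputs)} outputs flagged for human review")
```

A filter like this does not detect hallucinations by itself; it simply concentrates the human reviewer's attention on the outputs most likely to contain them.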

Data security and AI

The Open Worldwide Application Security Project (OWASP) is an open, non-profit, international community that helps organizations develop, operate, and maintain their software securely. 

In the context of AI applications, OWASP states that cybersecurity remains a constant concern when it comes to the use of artificial intelligence. However, it also points out that this technology has added new aspects related to data security and privacy.

Ensuring that artificial intelligence tools meet the highest standards of cybersecurity and protection is the best way to mitigate the new risks brought by this technology. 

Among these concerns is the use of data provided to these tools for training or fine-tuning the Artificial Intelligence model. In these cases, the tools may inadvertently share sensitive or confidential data provided by users.

In this scenario, considering that AI systems deal with sensitive information, strict measures have been implemented to protect the privacy of data used by GenAI systems and, consequently, also protect the rights of citizens, as in Brazil's General Data Protection Law (LGPD).

With notable advances in the area of machine learning and the growing adoption of AI in everyday life, it is essential to consider ethical impact, energy sustainability, and data security. 

Companies and researchers are increasingly aware of the importance of addressing these issues proactively, implementing practices to mitigate bias, ensure data security, and promote ecological sustainability. 

By facing these challenges with responsibility and innovation, we can make the most of the transformative potential of Artificial Intelligence, promoting a more ethical, sustainable and safe future for everyone.

Mauricio Seiji

Computer Scientist with a master's degree in Artificial Intelligence from the Federal University of Rio Grande do Sul and a PhD in Knowledge Engineering from the Federal University of Santa Catarina. He has been working on software development projects for 28 years and currently leads an Applied Artificial Intelligence team at the company Softplan. Maurício has been a professor of undergraduate and postgraduate courses since 2003 and currently teaches in the Artificial Intelligence course at PUC in Rio Grande do Sul.
