Artificial Intelligence: The next technological revolution

Dear investors,

The invention of the steam engine at the end of the 18th century initiated a period of great advancement in economic productivity that became known as the First Industrial Revolution. This rapid progress was interpreted at the time in two completely opposite ways: on the one hand, a utopian hope that the increase in productivity would be so great that people could work less and have more free time to enjoy their lives; on the other hand, a fear that machines would replace workers and throw them into unemployment and inevitable poverty. This second interpretation even gave rise to Luddism, a movement of workers who invaded factories and destroyed machines in an attempt to halt technological advances that, supposedly, would lead them to ruin.

Jumping two centuries ahead, there is today a somewhat similar discussion about the impact that artificial intelligence (AI) will have on the world. Enthusiasts see the potential for a new era of accelerated productivity gains and the consequent production of material wealth for the enjoyment of humanity, while others see a great threat, going as far as predicting an apocalyptic future in which machines controlled by superintelligent AIs conclude that the world would be better off without the human race and decide to eliminate it.

We are not experts on the subject, but it is obvious that the impact of this technology will be substantial, so it seems pertinent to follow the matter closely, both to identify investments that could benefit from the advancement of AI and to remain alert to the potential risks that the coming transformation may pose to certain business segments.

We will share our current view, certainly incomplete and imperfect, of the possible impacts of AI, seeking to be as pragmatic and realistic as we are currently able. Any criticisms, disagreements or additions are welcome.

What is Artificial Intelligence today?

Current AI tools are based on machine learning, a branch of computational statistics focused on developing algorithms capable of self-adjusting their parameters, through a large number of iterations, to create statistical models that make predictions without using a pre-programmed formula. In practice, these algorithms analyze millions of data pairs (e.g., a text describing an image and the image described) and adjust their parameters to best fit this huge data set. After this calibration, the algorithm is able to take just one element of a similar pair and, from it, predict the other element (e.g., from a text describing an image, create an image that fits the description).
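To make the idea of "self-adjusting parameters" concrete, below is a toy sketch in Python, written by us purely for illustration (it is not the code behind any real AI system). A model with two parameters nudges them over thousands of iterations until its predictions fit a set of (input, output) pairs; all the numbers involved are arbitrary toy values.

```python
# Toy illustration of machine learning as "self-adjusting parameters":
# the model repeatedly nudges w and b so its predictions fit the data pairs.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)                  # inputs
y = 3.0 * x + 2.0 + rng.normal(0, 1, 200)    # outputs to be predicted (noisy)

w, b = 0.0, 0.0                              # parameters start arbitrary
lr = 0.01                                    # size of each adjustment
for _ in range(5000):                        # a large number of iterations
    pred = w * x + b                         # current prediction
    error = pred - y                         # how far off the model is
    w -= lr * np.mean(error * x)             # adjust toward a better fit
    b -= lr * np.mean(error)

print(w, b)   # ends near 3 and 2: the pattern "learned" from the pairs
```

Systems such as DALL-E and ChatGPT rest on this same principle, only with billions of parameters and vastly richer training pairs.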

Without entering the philosophical discussion of whether this prediction mechanism based on statistical models is equivalent to human intelligence, the fact is that these models perform tasks, until recently impossible for software, quickly, cheaply and better than most humans could. For example, an AI called DALL-E generated the following images from the description: “a diverse group of economists and computer scientists, accompanied by a white and brown dog, trying to learn about AI near a river.”

Note that the images are not photos. They are entirely AI-created and contain certain imperfections that a skilled human painter would not make (look at the people's hands), but they were produced very quickly, at low cost, and by newly released technology that certainly still has a lot of room to improve.

Another notable example is ChatGPT, the chatbot launched in November 2022 that has an “almost human” ability to converse (in several languages) and a certainly superhuman ability to answer questions on a multitude of topics, deriving its “knowledge” from the enormous volume of written content available on the internet. The tool still has limitations: sometimes it produces false information, and the quality of its written texts, although quite reasonable, does not seem to us to exceed human ability (compared with that of erudite writers). Either way, it is undeniably impressive technology, especially considering its few months of massive use.

Curiously, the ideas behind these algorithms are not new. They emerged around the 1960s and went through several cycles of enthusiasm and disillusionment in the scientific community. What made these new solutions gain popularity and notoriety was the advance in computational processing power, which allowed AI algorithms to use datasets large enough to achieve accuracy rates good enough to become truly useful.

Potential Uses of Artificial Intelligence

There is software created to completely automate tasks that would otherwise be performed by people, and software aimed entirely at amplifying human productivity in certain tasks (for example, the Office suite). Artificial intelligence is a new class of software, also used both for automating tasks and for amplifying human capacity. There is, however, an important difference compared to traditional software.

Software without AI can only perform activities whose steps can be formally described and programmed as logical expressions. Identifying whether there is a dog in a photo, for example, is an impossible task for traditional software, given the difficulty of describing what a dog is in logical and mathematical language. In contrast, an AI tool can be trained to recognize dogs by being shown millions of sample dog photos. After this calibration, it will be able to recognize dogs in photos it has never seen as well as a person can, and much faster.
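For illustration, a dog recognizer could be built along the following lines. This sketch, in Python with the PyTorch library, adapts a network already trained on millions of generic images to the two-class problem; the folder paths and training settings are hypothetical, chosen only to show the shape of the approach rather than any specific product's implementation.

```python
# Minimal sketch of training a "is there a dog?" classifier via
# transfer learning with PyTorch. Paths and settings are illustrative.
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Pre-trained networks expect 224x224 inputs normalized with these
# standard ImageNet statistics.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumes example photos in data/train/dog/ and data/train/not_dog/
# (hypothetical paths; each subfolder becomes one class label).
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network pre-trained on millions of generic images and
# replace its final layer with a two-class one: dog / not dog.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Calibration: repeatedly adjust the parameters so that the model's
# predictions fit the labeled example photos.
model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Because the network arrives already knowing generic visual patterns, its parameters only need to be nudged, which is why even a modest set of labeled photos can yield a useful recognizer.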

This ability to recognize patterns in images is already extremely valuable. For example, it is very likely that the physicians who today write reports for diagnostic imaging exams (X-ray, ultrasound, magnetic resonance, etc.) will be replaced by AI software that issues reports faster, with a higher accuracy rate and at a fraction of the current cost. On the one hand, this can be seen as a threat to the role of radiologists; on the other, it will enhance the work of other physicians, who will receive almost instantly the reports they need to decide how to treat their patients.

Another very likely use is for AIs to replace people employed in customer service. There is already widespread use of chatbots in customer service (you have certainly interacted with one), but most are so limited that, in most cases, the customer still ends up being transferred to a person. Soon, AI tools will reach a level sufficient to offer better service than a human, with complete knowledge of all information related to a company's products and services and infinite patience to serve any customer cordially.

The threat of mass unemployment

Current AI technology allows the creation of tools that serve a specific purpose, that is, tools that can become extremely efficient at a given task but are not capable of learning tasks of a different nature. The great fear of those who advocate against AI is that this technology will evolve into Artificial General Intelligence (AGI): software capable of learning anything that, once created, could evolve beyond human capacity and replace people in any cognitive function. However, nothing close to AGI exists today, and there is great debate over whether humanity will ever be able to develop something like it.

In the absence of super-powered AI, it seems more likely that AI's impact will resemble that of other disruptive technologies that have emerged throughout history. For example, before computers existed, every form of mathematical calculation was performed manually by people. Engineering and scientific research employed large numbers of workers to perform and check calculations, in positions seen as low value-added. The emergence of computers completely eliminated this professional category, but it greatly amplified the productivity of people whose professions depended on calculations and in no way diminished the importance of mathematical knowledge.

This same dynamic can be seen in countless cases of technological advancement, even in completely different fields. Today there are agricultural machines that make a single farmer as productive as hundreds of people working a non-mechanized field. In all cases, increasing productivity by reducing the human effort required for a task is obviously a good thing.

Another obvious fact is that technology, which since the 18th century has provoked in some people the fear of making humans obsolete, has not yet done much to reduce the workload required of humanity. People still spend most of their days working to support themselves because the average standard of living has risen greatly over time, and maintaining this standard requires more human effort.

For example, no one in the last century imagined that internet access would be considered a “basic need”, but guaranteeing it today requires gigantic fiber-optic networks, a host of data storage and transmission equipment working uninterruptedly, and billions of electronic devices for personal use. All of the jobs involved in providing broad access to the internet were completely unimaginable a century ago. As more and more products and services become part of our daily lives, new jobs emerge and the demand for human labor persists.

The Hype Cycle

Gartner, a leading technology consulting firm, created a schematic representation of how expectations for a new technology evolve over time. In the first phase, the technology gains attention amid speculation about its potential, but there is little clarity about its possible uses or commercial viability. In a second phase, early experiments succeed and enormous expectations build around the transformation the new technology will bring to the world. Then expectations prove less realistic, or harder to achieve than initially imagined, and generalized excitement gives way to collective frustration in the face of unfulfilled promises. Gradually, however, the problems are overcome and better ways of using the new tools are discovered, until the technology matures and enters a phase of productivity and realistic expectations. The graph below represents this cycle.

Today, it seems to us that AI is at its peak hype phase. Everyone sees enormous potential in the abstract, but we still do not know how far this technology will evolve, how fast that evolution will be, or which applications will be economically efficient enough to become widely adopted. Thus, it is likely that much of what is currently said about AI carries some exaggeration and speculation. New technologies usually take several years to mature and reach their productivity plateau, so the future of AI remains difficult to predict in any concrete, detailed way.

AI's impact on investments

Finally, what always interests us: how can this topic affect investment decisions? Investing in AI companies is probably a bad idea at this point because, in times of hype, the valuations of companies at the forefront of the industry are often hyperinflated, and it is very difficult to predict which company will be the definitive leader in its segment. A great example is the history of Yahoo and Google. Yahoo was founded in 1994 and was the dominant search engine until the early 2000s, worth over US$ 125 billion at its peak, more than Ford, Chrysler and GM combined at the time. Even so, Google, founded in 1998, beat Yahoo and became the absolute leader among search engines. Today, OpenAI is one of the most prominent companies in the industry due to the success of ChatGPT, but it is very difficult to predict whether its future will be analogous to Yahoo's or Google's.

Possibly more interesting investment theses could be found among companies that supply the AI sector, following the maxim that, “in a gold rush, those who sell shovels and picks make the most money”. This is not an easy task either: although it does not take much effort to conclude that AI depends on robust datacenters to store and process massive volumes of data, it is necessary to analyze each supplier sector to judge whether it faces strong competition and risks of technological disruption.

Another possibility is to look at the sectors that should benefit from the use of AI. For example, the music industry benefited greatly from technologies that made it possible to record audio and distribute music on discs, tapes, CDs and now streaming. Some record labels became multibillion-dollar companies thanks to the scalability that technology brought to their businesses. Likewise, AI will certainly enable some businesses to become more efficient, more scalable and more profitable for their shareholders.

As important as identifying the companies that will benefit is understanding which businesses AI puts at risk. New technologies can destroy companies, a fact well illustrated by the history of Kodak, which was founded in 1888 and for more than a century was one of the most important names in the camera and film market. When digital camera technology took over in the 2000s, Kodak was slower than its competitors in developing new products and fell behind. In 2012, the company declared bankruptcy.

Today, we remain more focused on detecting AI-related risks, especially for the companies in our portfolio, than on developing investment theses based on the progress of this technology, given the still considerable uncertainty surrounding the subject. However, we believe AI has the potential to be as revolutionary as the emergence of computers, the internet and smartphones, and it is therefore an issue we intend to continue monitoring closely.
