Is optimism the driving force behind AI? 5 key insights to understand its progress and future

Who is benefiting most from the latest AI advancements?

While many investors are drawn to the latest developments in artificial intelligence and don’t want to miss out on the boom, others wonder whether everything is as positive as it seems. Some argue that the value being generated is distributed unequally, benefiting some players far more than others. Perhaps this is only the first phase, in which the machinery is still being greased. Who is capturing the most value? Big tech companies like Google, Facebook, or Microsoft? Tech consultancies? Chip makers like NVIDIA? Small businesses? Individuals? The investment community also wants a reasonable return for the risks it is taking. Are investors paying an inflated price for startups selling AI products and services?

Excessive optimism?

As Colin Powell once said, “Perpetual optimism is a force multiplier.”

OpenAI and DeepMind have dared to publish their scaling levels, counting down toward artificial general intelligence and outlining the path to what may be the future. Is this optimism reasonable, or are expectations exaggerated? According to an EY report, investment in generative AI is expected to reach $12 billion in 2024, showing continued investor interest despite doubts about long-term viability. It seems investors, who largely fuel the machine, continue to bet on optimism. But for how long?

Will we have enough data?

Building new AI models requires ever more data, yet many companies still lack the infrastructure, processes, and culture needed to manage the data lifecycle properly, which may limit their ability to fully leverage AI technologies. Will we soon run out of data? Let’s not forget that much existing data, despite being public, cannot be freely used to train models, and other high-quality data remains privately held and protected.

Real intelligence or brute force?

Scientific advances in this field occur progressively, with occasional disruptions. In practice, however, brute force (more infrastructure, more data, more computing power) is what seems to produce better AI products. Additionally, as these tools are adopted, the marginal cost and effort of performing cognitive tasks will likely drop to nearly zero, and in the medium term these tasks may become commoditized. Will real progress continue to come from brute force, or from scientific advancement? The answer could radically transform the economy and the value we place on certain skills and knowledge.

Are we going too fast?

New versions of LLMs are being released before previous versions have been consolidated or the market has had enough time to absorb them properly. Have the EU AI Act and other countries’ regulations arrived at the right time to slow the pace and allow time to get things in order?

Moreover, as AI advances rapidly, significant risks remain around bias and prejudice in models, which can lead to unfair or discriminatory decisions. Similarly, the extensive use of personal data raises concerns about privacy and information security. Without proper regulation and a deeper understanding of these issues, the accelerated deployment of AI could have negative consequences for fairness and personal data protection. It is also difficult to retrofit ethical behavior and usage into systems that were not built with an “ethics by design” approach from the outset. This underscores the importance of integrating ethical considerations at every stage of AI development to prevent harm and ensure responsible use of the technology.
