Synthetic data is becoming one of the most strategic components for the next era of artificial intelligence. For Bruno Maia, director of innovation for Latin America at SAS, this technology is more than just a technical solution. It is the key to unlocking productivity, protecting privacy, and accelerating corporate adoption of AI without compromising security.

“Synthetic data protects individuality while maintaining the logic necessary to train models,” Maia stated on Revolução AI, a NeoFeed program supported by Magalu Cloud.

The technique creates artificially generated datasets that are statistically faithful to the originals – in other words, information built to mimic real-world data. This addresses one of the most critical bottlenecks in AI: the time spent preparing data.
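To make the idea concrete, here is a minimal, illustrative sketch of the principle described above – not SAS's actual method. It uses a simple Gaussian fit as a stand-in synthesizer: estimate the summary statistics of a (hypothetical) sensitive dataset, then sample fresh records from the fitted distribution, so the synthetic data preserves the originals' statistical structure without copying any real row.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "real" dataset: two correlated numeric columns
# (say, income and monthly spending), treated as sensitive.
real = rng.multivariate_normal(
    mean=[50_000, 2_000],
    cov=[[1.5e8, 4.0e6], [4.0e6, 2.5e5]],
    size=5_000,
)

# A naive synthesizer: fit the mean vector and covariance matrix
# of the real data, then draw brand-new records from that fitted
# distribution. No individual real record is ever reproduced.
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mu, sigma, size=5_000)

# The synthetic data mirrors the originals' statistics:
print(synthetic.mean(axis=0))                      # close to mu
print(np.corrcoef(synthetic, rowvar=False)[0, 1])  # similar correlation
```

Real synthetic-data tools go far beyond this toy (copulas, GANs, differential privacy), but the contract is the same: a model trained on `synthetic` should behave much like one trained on `real`, while the sensitive rows never leave the vault.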

According to Maia, roughly 70% of the entire analytical cycle is consumed by data cleaning, integration, and processing. With synthetic data, these steps become faster, more scalable, and safer.

Another benefit is the protection of sensitive information: models trained on real data can expose the underlying records or reproduce their biases. The synthetic version, in turn, helps eliminate such vulnerabilities without sacrificing performance.

"This approach allows companies to exchange intelligence without sacrificing privacy or their business strategy," he said.

While synthetic data is already becoming a reality in the AI industry, the next major technological frontier – and perhaps the most disruptive – may be quantum computing. Maia predicts that enterprise applications will begin to appear in about three years, profoundly altering security and processing paradigms, but warns that companies need to start paying attention to the subject now.

While acknowledging the technology's benefits, Maia argues that AI adoption should be guided less by hype and more by strategy. In his view, not every company problem needs to be solved with artificial intelligence. Instead, companies need clarity about where the technology generates value, how to avoid risks and biases, and how to ensure governance.

“In the past, interpreting a model was simple. Today, with machine learning, it’s impossible to explain where the decision tree is going,” he said, highlighting that this opacity has real consequences. “If you can’t explain what the model is doing, first, the client becomes dissatisfied. Second, you could be fired.”