SAN FRANCISCO – There was a time when Google offered a wondrous vision of the future, with driverless cars, augmented reality glasses, unlimited email and photo storage, and predictive text that completed sentences as you typed.
A more modest Google was on display Wednesday as the company kicked off its annual developer conference. The Google of 2022 is more pragmatic and grounded, a little more like its business-minded competitor Microsoft than a fantasy playground for tech enthusiasts.
And that, it seems, is by design. The bold visions are still out there, but they are a long way off. The professional managers who now run Google are increasingly focused on making money from those years of research-and-development spending.
The company’s biggest bet, artificial intelligence, doesn’t mean science fiction come to life, at least for now. It means subtler changes to existing products.
“Artificial intelligence is improving our products, making them more useful, more accessible and delivering innovative new features for everyone,” Sundar Pichai, Google’s chief executive, said Wednesday.
In a presentation short on showstopping moments, Google stressed that its products were “helpful.” In fact, Google executives used the words “help,” “helping” or “helpful” more than 50 times over two hours of keynote speeches, including a marketing tagline for its new hardware products: “When it comes to helping, we can’t help but help.”
The company introduced a cheaper version of its Pixel smartphone, a smartwatch with a round face, and a new tablet coming next year. (“The most helpful tablet in the world.”)
The biggest applause came for a new Google Docs feature in which the company’s AI algorithms automatically summarize a long document in a single paragraph.
At the same time, it wasn’t immediately clear how some of the company’s more groundbreaking work, such as language models that better understand natural conversation or that can break a task down into smaller logical steps, will ultimately lead to the next generation of computing that Google has touted.
Certainly some of the new ideas seem useful. In a demonstration of how Google continues to improve its search technology, the company showed a feature called “multisearch,” in which a user can snap a photo of a shelf full of chocolates and then find the best-reviewed nut-free dark chocolate bar pictured in the image.
In another example, Google showed how a user can take an image of a specific dish, such as Korean stir-fried noodles, and then search for nearby restaurants that serve it.
Many of these capabilities are powered by the deep technical work Google has been doing for years in machine learning, image recognition and natural language understanding. It is a sign of evolution rather than revolution for Google and the other tech giants.
Many companies can build digital services more easily and quickly than in the past thanks to shared technologies such as cloud computing and storage, but building the underlying infrastructure, such as AI language models, is so expensive and time-consuming that only the richest companies can afford to invest in it.
As is often the case at Google events, the company spent little time explaining how it makes money. Google brought up advertising, which still accounts for about 80 percent of the company’s revenue, only after an hour of other announcements, highlighting a new feature called My Ad Center. It will let users request fewer ads from certain brands or flag topics they would like to see more ads about.