AI is Learning to Smell, Touch & Taste

AI Is Advancing Well Beyond Text and Graphics

By Tony Burlinson

To date, AI has been defined by a single dominant input: text. A new frontier is emerging. AI is being trained on multimodal sensory data that we humans take for granted: smell, taste, and touch.

Even the most advanced AI models, capable of writing code, analyzing documents, and generating images, still rely primarily on language as their window into the world.

When people talk about multimodal AI, they are usually referring to text, images, audio, and video. Adding all five human senses into the mix will allow AI to interpret the physical world with a richness that more closely represents our own experiences.

Smell is one of the most surprising breakthroughs. Using chemical signatures and molecular structures, AI models can now predict the scent profile of compounds with astonishing accuracy.

This allows AI systems to design new fragrances and even detect early signs of disease through chemical biomarkers.
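As a loose illustration of the idea, the sketch below maps molecules to odor labels by nearest-neighbor search in a descriptor space. Everything here is hypothetical: the three-number descriptors, the labeled compounds, and the labels themselves are stand-ins, and real scent models work on much richer inputs such as molecular graphs.

```python
import math

# Toy "scent map": hypothetical molecular descriptors -> known odor labels.
# Descriptors here are (molecular weight, polarity index, ring count);
# production models learn from far richer molecular representations.
LABELED_COMPOUNDS = [
    ((152.1, 0.8, 1), "sweet"),     # vanillin-like (illustrative values)
    ((136.2, 0.1, 1), "citrus"),    # limonene-like
    ((131.2, 0.4, 2), "animalic"),  # skatole-like
]

def predict_scent(descriptor):
    """Return the odor label of the closest labeled compound."""
    _, label = min(
        LABELED_COMPOUNDS,
        key=lambda pair: math.dist(pair[0], descriptor),
    )
    return label

# A compound near the vanillin-like entry in descriptor space.
print(predict_scent((150.0, 0.7, 1)))  # → sweet
```

The nearest-neighbor lookup stands in for what is, in practice, a trained model; the point is only that scent prediction reduces to learning a mapping from chemical structure to perceptual labels.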

Taste is following a similar trajectory. By analyzing molecular patterns, AI can estimate flavor, sweetness, bitterness, and even the sensory experience of food combinations.

Early systems are already helping food companies design healthier products that still taste great.
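To make the "analyzing molecular patterns" step concrete, here is a minimal sketch of a linear taste model. The feature names and every weight are invented for illustration; a real system would fit such weights (or a far more complex model) to sensory-panel data.

```python
# Hypothetical linear "taste model": maps simple molecular features to
# sweetness and bitterness scores. All weights are illustrative, not fitted.
FEATURES = ("hydroxyl_groups", "molecular_weight", "hydrophobicity")

SWEETNESS_WEIGHTS = {
    "hydroxyl_groups": 0.9, "molecular_weight": -0.002, "hydrophobicity": -0.3,
}
BITTERNESS_WEIGHTS = {
    "hydroxyl_groups": -0.2, "molecular_weight": 0.001, "hydrophobicity": 0.8,
}

def taste_profile(molecule):
    """Return (sweetness, bitterness) scores for a feature dict."""
    sweet = sum(SWEETNESS_WEIGHTS[f] * molecule[f] for f in FEATURES)
    bitter = sum(BITTERNESS_WEIGHTS[f] * molecule[f] for f in FEATURES)
    return round(sweet, 2), round(bitter, 2)

# A sugar-like molecule: many hydroxyl groups, low hydrophobicity.
sugar_like = {"hydroxyl_groups": 5, "molecular_weight": 180.0, "hydrophobicity": 0.1}
print(taste_profile(sugar_like))  # sweetness well above bitterness
```

A linear scorer is the simplest possible version of the idea; it shows how flavor attributes can be predicted from structure before any molecule is ever synthesized or tasted.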

As models improve, AI could personalize nutrition, optimize recipes for individual palates, or create entirely new categories of cuisine that no human chef has imagined.

Touch is advancing through high‑resolution tactile sensors. Robots can now feel pressure, texture, and temperature with increasing precision.

This unlocks dexterity: the ability to handle delicate objects, perform fine‑grained tasks, or navigate environments where vision alone is not enough.
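The control loop behind that dexterity can be sketched in a few lines: close the gripper a little at a time until the tactile sensor reports enough pressure to hold the object without crushing it. The sensor model, target pressure, and step size below are all simulated stand-ins for real hardware.

```python
# Minimal sketch of a tactile feedback loop: a gripper tightens until a
# simulated pressure sensor reports a target force, then stops.
TARGET_PRESSURE = 2.0   # newtons; assumed safe holding force for this sketch
STEP = 0.5              # grip increment per control cycle

def simulated_pressure(grip):
    """Stand-in for a tactile sensor: pressure rises as the grip closes."""
    return max(0.0, 1.2 * grip - 0.4)

def close_gripper():
    """Tighten in small steps until sensed pressure reaches the target."""
    grip = 0.0
    while simulated_pressure(grip) < TARGET_PRESSURE:
        grip += STEP
    return grip, simulated_pressure(grip)

grip, pressure = close_gripper()
print(f"grip={grip}, pressure={pressure:.1f} N")
```

On a real robot, the simulated sensor would be replaced by hardware readings polled each cycle, and the loop would also watch for slip and temperature; the feedback structure stays the same.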

Tactile AI is transforming manufacturing, healthcare, and robotics, enabling machines that can assist with surgery and provide physical care.

AI is on a trajectory to evolve from a system that reads the world to one that senses it.

As models integrate smell, taste, and touch with language, vision, and sound, they will develop a multidimensional understanding of reality similar to our own; in some cases, it may well surpass it.

This could enhance human experiences and even create new products and industries that we can’t quite imagine today.
