Unveiling Emotional Intelligence in Hume's AI-driven Conversations

Artificial intelligence can now understand human emotions, pull off sarcasm, and even express anger. New York-based startup Hume AI last week launched what it calls the first voice AI with emotional intelligence, designed to generate conversations that support the emotional well-being of its users.

Founded in 2021 by Alan Cowen, a former researcher at Google DeepMind, the startup also raised $50 million in Series B funding from EQT Group, Union Square Ventures, Nat Friedman, Daniel Gross, Northwell Holdings, Comcast Ventures, LG Technology Ventures, and Metaplanet days after the launch.


What is Hume AI?

Hume’s voice interface is powered by its empathic large language model (eLLM), which emphasises the tone of voice behind words to understand different emotions.

It can also emulate these tones across 23 different emotions, such as admiration, adoration, and frustration, to generate human-like conversations.

The conversational AI chatbot is trained on data from millions of human conversations across the world to learn voice tonality, human reflexes, and feelings. Its responses are further optimised in real time depending on the user’s emotional state.
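To make the idea concrete, here is a minimal sketch of such a feedback loop. The `detect_emotions` and `generate_reply` functions, and the emotion scores they return, are hypothetical placeholders for illustration only; they are not Hume’s actual API.

```python
# A minimal, hypothetical sketch of real-time emotional adaptation.
# detect_emotions() and generate_reply() are placeholders, not Hume's
# actual API; the emotion labels mirror those named in this article.

def detect_emotions(audio_chunk: bytes) -> dict[str, float]:
    """Placeholder: score the user's tone across emotion labels."""
    # In a real system this would be a model call on the raw audio.
    return {"admiration": 0.1, "adoration": 0.05, "frustration": 0.7}

def generate_reply(transcript: str, emotions: dict[str, float]) -> str:
    """Placeholder: condition the reply on the dominant detected emotion."""
    dominant = max(emotions, key=emotions.get)
    if dominant == "frustration":
        return "I can hear this is frustrating. Let's take it step by step."
    return "Glad that resonates! What would you like to do next?"

def converse(audio_chunk: bytes, transcript: str) -> str:
    # The feedback loop: each utterance's detected emotional state
    # steers what is said and, by extension, how it should be voiced.
    emotions = detect_emotions(audio_chunk)
    return generate_reply(transcript, emotions)
```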


How is it useful?

While expressive AI chatbots have existed in areas such as virtual dating, Hume’s product is gaining accolades for its potential uses in robotics, healthcare, and wellness.

Some AI researchers predict that AI assistants powered by Hume’s eLLM could not only hold conversations but also help with daily tasks.

“Imagine an AI assistant that understands your frustrations or joys, a customer support agent that can empathize with your complaints, or even a virtual therapist capable of offering genuine emotional support,” according to a post on X.

In a LinkedIn post, Cowen said, “Speech is four times faster than typing; frees up the eyes and hands; and carries more information in its tune, rhythm, and timbre.”

“That’s why we built the first AI with emotional intelligence to understand the voice beyond words. Based on your voice, it can better predict when to speak, what to say, and how to say it.”

Hume AI is preparing to release its platform APIs in beta to developers next month, allowing integration with various applications.

It can also integrate with other large language models, such as GPT and Claude, adding flexibility depending on the enterprise use case.
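As an illustration of what such an integration might look like, the sketch below passes emotion scores of the kind described above into an external LLM’s prompt. The emotion scores and system-prompt wording are assumptions made for this example; the call itself uses the OpenAI Python SDK’s standard chat interface, and a Claude integration would follow the same pattern with Anthropic’s SDK.

```python
# Hypothetical integration sketch: conditioning an external LLM (here,
# OpenAI's chat API) on emotion scores of the kind Hume's eLLM detects.
# The scores and prompt wording are illustrative, not Hume's actual output.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def reply_with_emotion_context(user_text: str, emotions: dict[str, float]) -> str:
    # Summarise the detected emotional state for the model's system prompt.
    dominant = max(emotions, key=emotions.get)
    system_prompt = (
        "You are a voice assistant. The user's dominant vocal emotion is "
        f"'{dominant}' ({emotions[dominant]:.2f}). Match your tone to it."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content

# Example: in practice the scores would come from the voice analysis layer.
print(reply_with_emotion_context("This keeps failing!", {"frustration": 0.8}))
```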

Besides its empathic features, the voice assistant also offers transcription and text-to-speech capabilities.

