⚡️ Goodbye Google Translate

PLUS: Don't forget to load up on Nvidia

Good morning. Imagine being able to communicate in nearly 100 languages without learning them…and having an AI instructor for your programming classes. Oh, and while you are on your path to enlightenment, don’t forget to invest in some Nvidia stock (not financial advice).

Intrigued? Let’s get you up to speed.

DEEP DIVE

Meta’s AI Can Translate Almost 100 Languages in Text and Speech

Imagine being able to communicate with anyone in the world, regardless of the language they speak or write. That’s the vision of Meta, which has released an AI model that can translate and transcribe close to 100 languages across text and speech.

The model, called SeamlessM4T, has been released as open source along with a new translation dataset, SeamlessAlign. Meta claims that SeamlessM4T is a “significant breakthrough” in AI-powered speech-to-speech and speech-to-text translation.

"SeamlessM4T implicitly recognizes the source languages without the need for a separate language identification model."

— Meta’s announcement

SeamlessM4T is the result of scraping publicly available text and speech from the web, totalling tens of billions of sentences and 4 million hours of speech.

Meta says that SeamlessM4T outperforms current state-of-the-art speech transcription models, especially against background noise and “speaker variations,” but admits that it still has issues with gender bias and lexical diversity.
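Because the model is open source, curious readers can already try it themselves. Below is a rough sketch of text-to-text translation using the Hugging Face port of SeamlessM4T; the checkpoint name, language codes and example sentence are illustrative assumptions, and Meta’s own seamless_communication package exposes a different API.

```python
# Minimal sketch: text-to-text translation with the Hugging Face port of SeamlessM4T.
# The checkpoint name and example sentence are placeholders; pick the variant you actually download.
from transformers import AutoProcessor, SeamlessM4TForTextToText

checkpoint = "facebook/hf-seamless-m4t-medium"
processor = AutoProcessor.from_pretrained(checkpoint)
model = SeamlessM4TForTextToText.from_pretrained(checkpoint)

# Tokenize English input ("eng") and ask the model to generate French output ("fra")
inputs = processor(text="Where is the nearest train station?", src_lang="eng", return_tensors="pt")
output_tokens = model.generate(**inputs, tgt_lang="fra")

print(processor.decode(output_tokens[0].tolist(), skip_special_tokens=True))
```

Swapping tgt_lang is all it takes to target a different language, which is the whole pitch: one model, one call, close to 100 languages.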

Will SeamlessM4T make human interpreters obsolete?

Probably not. Human interpreters bring skills of their own: they can handle nuance, context and emotion that AI models might miss or misinterpret.

But SeamlessM4T could be a useful tool for people who want to connect with others across linguistic barriers, opening up new possibilities for communication and collaboration in a globalized world.

Let’s hope that Meta’s AI can help us achieve that goal without compromising our linguistic diversity and cultural identity.

PUNCHLINES

Google Bard or Google Bad? Fake ads for the chatbot lead users to malware-infected webpages.

Look into my eyes: AI can reveal Parkinson’s disease in your retina before you even know it.

The ultimate Python tutor: New study shows that LLMs can not only solve Python tasks correctly, but also provide textual explanations.

Chip in: Nvidia’s shares soared to a record high ahead of its earnings report on Wednesday.

Inpainting is in: Midjourney finally lets you edit individual regions of your AI-generated images with text prompts.

TRENDING TOOLS

📝 Pitchpower: Generate AI-powered proposals and pitches in seconds

🎥 HeyGen: Create engaging avatar videos from text

🎙️ EchoHQ: 24/7 AI customer agents with humanlike voice

🤖 Chapple: One-stop shop for generating AI content

📋 Hurd.ai: Transcribe, summarize and tag meetings and workshops

TLDR

AI’s uphill battle for copyright protection: A federal judge ruled that AI-generated images are not eligible for copyright protection, affirming the U.S. Copyright Office’s stance that only human creations can be copyrighted. This decision, part of the Thaler v. Perlmutter case, highlights the ongoing debate about AI and copyright laws.

OpenAI introduces fine-tuning for GPT-3.5 Turbo: OpenAI announces a new feature for its lightweight text-generating model, GPT-3.5 Turbo, that allows customers to customize it with their own data. Fine-tuning can not only improve the model’s reliability, behavior, and output quality for specific use cases, but also reduce the cost and latency of API calls. OpenAI says that fine-tuned versions of GPT-3.5 can match or surpass GPT-4 on some tasks.
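For the tinkerers, here is roughly what that looks like in practice: a minimal sketch using the openai Python SDK (v1+). The training file name and its contents are placeholders; the only real requirement is OpenAI’s chat-formatted JSONL for fine-tuning data.

```python
# Minimal sketch of fine-tuning GPT-3.5 Turbo with the openai Python SDK (v1+).
# "support_examples.jsonl" is a placeholder: one JSON object per line, each with a
# "messages" list of system/user/assistant turns, per OpenAI's chat fine-tuning format.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1) Upload the training data
training_file = client.files.create(
    file=open("support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2) Start the fine-tuning job
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)

# 3) Once the job completes, call the custom model like any other chat model:
# client.chat.completions.create(model=job.fine_tuned_model, messages=[...])
```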

VMware and Nvidia collaborate on generative AI: VMware and Nvidia have announced a fully integrated solution for generative AI training and deployment, giving enterprises everything they need to fine-tune LLMs and run private generative AI applications on their proprietary data in VMware’s hybrid cloud infrastructure. Expected by early 2024, it aims to address corporate data privacy, security, and control concerns while letting enterprises run their generative AI workloads adjacent to their data.

Goldman Sachs on AI stock market trades: Goldman Sachs has identified two trades in the stock market related to the emergence of AI technologies. The first is to bid up companies poised to immediately benefit from the technology, such as Nvidia, Microsoft, Alphabet, Meta, etc. The second, longer-term AI trade is to invest in companies that will see surging corporate profits due to labor productivity gains from AI adoption. Goldman estimates that the median Russell 1000 stock could see earnings rise 19% between 2025 and 2030.

That’s all for today—if you have any questions or something interesting to share, please feel free to reply to this email. We’d love to hear from you!

P.S. If you want to sign up for the Supercharged newsletter or share it with a friend, you can find us here.

What did you think of today’s newsletter?

Feedback helps us improve!
