• Bionic Marketing

Issue #34: Creativity models, simulate whole worlds, new Mistral

Good morning.

Here we are again, revisiting the "AI can't replicate human creativity" debate.

Turns out it can, and it is.

MKHSTRY’s AI is one model, out of many, that is “creative” in a way that at the very least mimics and looks like human creativity.

More on this, and other incredible models for creativity, below.



  • Why LLMs (and AI) are the creative tools of the century.

  • Create whole new worlds with prompts (thanks to Genie).

  • The fastest chat experience ever? Groq’s tech on fire.

  • Talk to Mistral Large in Le Chat.

Let’s dive in.

The MKHSTRY model is part hype, part reality

This model was built to mimic the mind of Fortune 500 CMO Jeff Charney, and is creativity on tap.

"Light-bulb moments are what power human ingenuity. Today, our ideation engine is focused on marketing, but ultimately our same engine that sparks new creative marketing ideas can be used to spark any type of new idea, from business models to new products – even ideas for future AIs. The scale is quite literally limitless." 

Imagine this: 

The “brain” of an experienced CMO, in the writing room with you, riffing on your marketing campaigns right beside you.

This model could be a huge help for entrepreneurs and marketers on any kind of work or project.

High-impact marketing campaigns? Just a few prompts away.

This is not the only model made for creativity out there. There are many, and you can customize models, or pair one with retrieval-augmented generation (RAG), to achieve similar results.
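To make the RAG idea concrete, here is a minimal sketch of the retrieval step. It scores documents by simple word overlap purely for illustration; a real setup would use an embedding model and a vector database, and the function names are my own, not from any particular library.

```python
# Toy sketch of the "retrieval" half of retrieval-augmented generation (RAG).
# Documents are scored by word overlap with the query; real systems use
# embeddings and a vector store, but the flow is the same.

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k documents sharing the most words with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context to the user's question before calling an LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our spring campaign targets first-time buyers with a referral bonus.",
    "The logo files live in the shared brand-assets folder.",
]
print(build_prompt("What is the spring campaign about?", docs))
```

Swap the overlap scoring for real embeddings and the same two functions become a working RAG pipeline: retrieve your own notes, stuff them into the prompt, and the model riffs on *your* material.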

Feed it your ideas, collaborate, and let it be an endless supply of creativity.

"It's powered by human intelligence, not a replacement. It shifts you into a new creative dimension." 

Pretty cool.

AI isn't here to destroy your job. It's the tool of the century for those daring to think outside the box.

And, speaking of outside-of-the-box creativity:

Google's Genie rewrites the playbook on interactive worlds

Google's latest innovation, Genie, is not just stepping out of the box—it's demolishing it. 

Imagine crafting playable worlds from a mere sketch, photo, or synthetic image.

Genie doesn't settle for single-frame prompts:

“Instead of just one frame, it is likely that it would in-context adapt to the environment dynamics that you show in these frames,” says Tim.

Latent Action Modeling (LAM) is the secret sauce that enables Genie to pinpoint and animate the key subject within any scene.

Take a skateboarder gliding through a cityscape: Genie captures the essence of motion, translating static visuals into a vivid narrative of action and intent.

Why you should care:

  • Insightful Analysis: Genie breaks down complex visuals into clear, actionable insights.

  • Adaptable Across Realms: Its versatility shines through various scenarios, enhancing everything from gaming to training simulations.

  • Revolutionary Potential: Genie is set to transform interactive gaming and even marketing, creating branded environments that engage and captivate.

Genie's toolkit:

  • Video Tokenizer: Cutting through the clutter, it simplifies video data, addressing the big challenges of scale, speed, and memory.

  • Dynamics Model: With a deep understanding of actions and contexts, Genie anticipates what comes next, breathing life into any visual.

  • Latent Action Modeling (LAM): The secret ingredient that allows Genie to understand and animate the protagonist in any video feed, turning passive viewing into an interactive experience.
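The latent-action idea can be boiled down to a toy example. Nothing below reflects Genie's actual internals (which learn discrete latent actions with a neural network); here a "frame" is just the position of one subject, and the "action" is inferred as the change between two frames, then reused to roll the scene forward:

```python
# Toy illustration of latent action modeling: infer an unobserved "action"
# from two consecutive frames, then reuse it to predict the next frame.
# A "frame" here is just the (x, y) position of the subject in the scene.

def infer_latent_action(frame_a, frame_b):
    """Recover the action that transformed frame_a into frame_b."""
    return (frame_b[0] - frame_a[0], frame_b[1] - frame_a[1])

def dynamics_model(frame, action):
    """Predict the next frame by applying the latent action."""
    return (frame[0] + action[0], frame[1] + action[1])

# A skateboarder gliding right across the cityscape:
frame1, frame2 = (0, 0), (2, 0)
action = infer_latent_action(frame1, frame2)   # (2, 0): "move right"
frame3 = dynamics_model(frame2, action)
print(frame3)  # (4, 0)
```

That's the trick in miniature: no action labels were ever given, yet the system recovers "what the subject is doing" from the frames alone and can continue the motion.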

Use cases? 

Interactive advertising and immersive simulations (of, for example, a shopping experience).

Creating an immersive world instantly from anything you can draw? Sign me up, sounds like fun.

Who knows if this model will ever become an actual product, as Genie is an in-house research project for now.

Either way, I’ve said it before and I’ll say it again: 

This is the worst AI will ever be.

Blazing fast chat responses with Groq

One of the biggest bottlenecks for running LLMs and, really, any model is speed—or lack thereof.

ChatGPT is pretty fast at responding to your prompts. Not flawless, but decent.

If you ever deploy your own models, latency (speed) is one of the toughest things to solve.

I’m going to side-step a much more complicated discussion around this and simply say: 

Solving speed is hard. If you can do it, you’ll win the prize.

Well, Groq is doing that: 

"Groq is not just competing in the AI chip race—it’s designing a future where AI technology is more accessible, efficient, and transformative."

You can read more here, it’s worth your time.

Try it out and watch how fast you get a response.

Here's the scoop on TSP, Groq's Tensor Streaming Processor:

  • One-to-one mapping: Each AI operation directly corresponds to a hardware instruction, cutting through the need for complex optimizations.

  • Dynamic computation: Flexibly adapts to a range of AI models and workloads, no alterations needed.

  • Efficient memory use: A unified memory system that reduces data transfers and boosts performance.

What does it mean?

Groq’s is a new category of chip. It’s not designed to train LLMs; it’s designed to run them extremely fast.

The chips, created by Groq founder and CEO Jonathan Ross, are built for rapid scalability and for the efficient flow of data through the chip.

Where does it win?

  • Ideal for chatbots and voice assistants. Minimal latency.

  • Real time image generation, even at high resolutions.

  • Text-to-speech and vice-versa can happen in real time, allowing for natural conversations with an AI assistant, including allowing you to interrupt it.
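To see why raw tokens-per-second matters for all of the above, here's a quick back-of-the-envelope calculation. The throughput figures are illustrative assumptions of mine, not measured benchmarks:

```python
# Back-of-the-envelope: how long does a 300-token chat reply take to generate?
# Both throughput numbers below are illustrative assumptions, not benchmarks.

def seconds_for_reply(tokens: int, tokens_per_second: float) -> float:
    return tokens / tokens_per_second

reply_tokens = 300
typical_gpu_tps = 40    # assumed: a typical GPU-served chatbot
fast_chip_tps = 500     # assumed: a high-throughput inference chip

print(f"Typical: {seconds_for_reply(reply_tokens, typical_gpu_tps):.1f}s")  # 7.5s
print(f"Fast:    {seconds_for_reply(reply_tokens, fast_chip_tps):.1f}s")    # 0.6s
```

Sub-second replies versus multi-second waits is the difference between a conversation and a loading screen, which is why the voice-assistant and interruption use cases only work at the fast end.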

Groq's tech has already made its mark in self-driving cars, medical imaging, and manufacturing.

High-performance, low-power consumption, and easy to use? Yeah, businesses are all over that.

In short, Groq's TSP and high-performance chips are shaking up the AI computing scene.

So, keep an eye on these guys. They're going places. Everyone wants what they’ve got.

Yet another model from Mistral

Mistral’s models are some of the best open source models out there, most of them on par with GPT-3.5 (which is amazing).

They released a new model, Mistral Large, along with Le Chat, a ChatGPT-like interface for their models.

Mistral Large doesn't quite outshine the reigning champion, ChatGPT. Sure, it keeps pace, stride for stride, on many fronts, but does it eclipse it?

Not quite. But almost.

ChatGPT still wears the crown. And while Mistral's competitive pricing nudges at GPT-4's heels, the difference isn't earth-shattering.

It's a game of inches, not miles.

But let's talk strategy – this isn't just about tech specs or token costs. It's about reading between the lines.

This move is less about the tech and more about chess. By integrating with Azure, Mistral isn't just expanding its reach—it's helping Microsoft.

Here's the lowdown:

  • Pricing: Mistral Large dangles a competitive carrot at $8/$24 per 1 million input/output tokens, shadowing GPT-4's $10/$30. Close, but no cigar.

  • The Bigger Picture: This isn't about a groundbreaking leap. It's about the subtle art of refinement within the bounds of compute resources and regulatory hoops.

  • The Strategic Move: Microsoft and Mistral are joining forces, not just to bolster Azure's offerings but to weave through the competitive landscape with agility and foresight.
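"A game of inches" is easy to check with the list prices quoted above. For a workload of 1 million input and 1 million output tokens:

```python
# Cost comparison at the per-1M-token list prices quoted above (USD).

def workload_cost(input_millions, output_millions, in_price, out_price):
    return input_millions * in_price + output_millions * out_price

mistral_large = workload_cost(1, 1, 8, 24)   # $32
gpt4 = workload_cost(1, 1, 10, 30)           # $40
savings_pct = (gpt4 - mistral_large) / gpt4 * 100

print(mistral_large, gpt4, f"{savings_pct:.0f}% cheaper")  # 32 40 20% cheaper
```

A ~20% discount on a roughly comparable (but not superior) model: real, but not the kind of gap that moves anyone off an incumbent by itself.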

The larger implication for this kind of announcement is that it signals saturation of LLM capabilities.

But by now, you’ve surely paid attention, and you’re running an uncensored LLM natively on your computer, not relying on the sterile vacuum of a closed-source LLM in your browser anyway, right?

Good. I thought so.

The underwhelming nature of this rollout seems to be more of a play for Microsoft to expand their offerings. 

The company used the news drop to announce a partnership with Microsoft: “In addition to Mistral’s own API platform, Microsoft is going to provide Mistral models to its Azure customers.”

A solid play to grow Azure’s catalog and help the giants sway any anti-competitive scrutiny aimed their way.  

Some positives: 

  • A beta version of Le Chat is available, making it one of the more capable free chatbots out in the wild. Great for new users to get in and play around with AI without any cost.

  • European users may feel more comfortable using a European model.

If you take away anything from this issue, I hope you embrace the idea that AI isn't a threat to our creativity. 

It's a tool that's unlocking it.

Co-creator and co-intelligence

Remember those concepts.

Talk again soon,
Sam Woods
The Editor