Issue #40: SEO is dead again, GPT-4o, more on AI assistants

Good morning.

Google I/O just happened, and there are some exciting new developments from some of the biggest players in AI.

Wired called Google’s most recent update a “change in the world order”.

If you're in the SEO game and this doesn't make you sit up and take notice, you're not just asleep at the wheel—you're in a full-on coma.

Google Search is about to undergo a seismic shift. We're talking fundamental changes to the way search as we know it is structured.

This new frontier includes fully integrated AI, more personalization, more summarized results – and it’s just the beginning.

But that's not the only thing going on with AI.

OpenAI recently released their latest model: GPT-4o, also known as Omni—a multimodal model that can understand text, images, video, and audio.

The race for AI assistants is also heating up, something I mentioned in Issue #38.

Microsoft just threw its hat in the ring with Copilot+ PCs.

Windows 11 is about to get a major AI boost, and new Surface devices are waiting in the wings with AI functionality built right in.

I never thought I’d see the day that I contemplated becoming a Windows guy, but here we are.

In Issue #38, I also predicted that Apple would enter the race this year with some kind of AI Assistant running locally on a Mac device, or at the very least, a smarter Siri.

Looks like they’ve answered the call with plans for AI-enabled Siri 2.0.

So, what does all this mean for marketers?

For humanity?

As I've said before, AI isn't just a tool—it's a paradigm shift.

But it's not about replacing humans. It's about enhancing what we're capable of.

And as marketers, we have a unique opportunity to harness this power and create experiences that are more personalized, more engaging, and more effective than ever before.

There are some big things going on.

We’ve got a lot to unpack.

Let’s get into it.

—Sam

IN TODAY’S ISSUE 👨‍🚀 

  • Will SEO survive the search apocalypse?

  • GPT-4o, OpenAI's next step in Multimodal AI (and Assistants)

  • Microsoft’s AI Assistants have entered the chat

  • The future of AI is Assistants (agents, workflows under the hood)

Let’s dive in.

Is Google Gemini the final deathblow to SEO? Again?

It’s become a bit of a running joke over the years that “SEO is dead.”

It’s been dead or dying for a while now.

But perhaps there’s something to the rumors this time: 

Google recently held their annual developer conference, Google I/O, and it's all about integrated AI. 

Specifically, how they're supercharging search with Gemini’s newest capabilities.

And this latest set of features should be enough to make any SEO sweat. 

Here’s another snippet from that Wired article I mentioned earlier: 

But before we ponder the death knell of search as we know it, let’s dive into the features: 

1. AI Overviews:  gives you a quick and dirty summary of your search topic, complete with all the relevant info you need to answer your question. No more piecing together bits and pieces from a dozen different sites. 

2. Plan Ahead: trying to figure out what to cook for dinner this week? Planning a vacation? Just ask Google.

You can say something like, "Hey Google, give me a 3-day meal plan that's easy to throw together," and you'll get recipes from all over the web. 

And if you want to keep track of your master plan, you can shoot it over to Google Docs or Gmail – it’s that easy.

3. Video Search Querying: remember dear Aunt Sally? Next time she can’t get her DVD player to work, all she has to do is record a video of it on the fritz, load it into Google, and Gemini will give her some troubleshooting tips.

(Personally, I think this feature is the most impressive).

4. AI Organized Search Pages: if you've ever fallen down a rabbit hole trying to find the perfect recipe or product, you know the struggle of clicking through page after page of search results.

Google is using GenAI to whip up search pages that are easier to navigate, with the most helpful results grouped under AI-generated headlines.

And they're not just sticking to recipes—this is going to work for everything from movies to music to shopping.

Pretty cool. 

Google also unveiled Project Astra, which is worthy of a newsletter of its own:

It’s incredible to see how far this technology has come in the last two or three years.

We’re another step closer to truly companionable AI—a technology that can search and respond alongside your life in real time. 

You don’t have to read between the lines here to see how this spells major problems for traditional search. 

During a pre-I/O press conference, a reporter asked CEO Sundar Pichai what this means for the future of search, and his answer was ambiguous: 

Based on this response, Google seems focused on the UX of search, leveraging AI to create a better, more intuitive, conversational search experience. 

And why shouldn’t they? 

Don’t we as users want a better search experience? 

Some have criticized this latest rollout as Google’s next move toward total world domination, aimed at keeping people inside Google’s SERPs rather than driving traffic to outside sources.

I won’t get into the weeds with that chatter, but you can check out more here and here.

So, what does this all mean for the future of search? 

If you were paying attention, you should have seen this coming. 

Gemini’s latest updates are really an expansion of SGE (Search Generative Experience), an early step from Google toward transforming the search experience with GenAI.

This article was published back in June of 2023, before Google’s latest unveiling, and warns SEOs of the potential negative impacts of SGE: 

In other words, SEOs, you need to get more creative, and you need to stay on your toes.

Here are a few suggestions on what you should do now:

1. Use AI to Analyze Data & Trends: Generative AI isn't just for content creation; it’s also incredibly adept at analyzing trends. By analyzing massive amounts of search data, it can predict future trends and get inside the heads of users, understanding the intent behind every query.

This is a game-changer for SEO strategies. Imagine being able to create content that directly addresses user needs and queries, ensuring optimal visibility and relevance in a landscape where understanding user intent is the key to search engine success.

2. Structure Your Content for AI: implement schema markup and other forms of structured data to help generative AI models better understand and accurately represent your content. This can improve your visibility and the accuracy of AI-generated summaries or responses related to your website. (There’s a quick example of what that markup can look like right after this list.)

3. Keep an Eye on Monetization Strategies: search engines and chatbots are going to look for new ways to monetize. 

We are seeing the rise of hybrid search experiences that blend traditional results with AI-generated responses and sponsored content. 

The way users find information is evolving into a more streamlined experience, combining search results, natural language responses, and relevant links into one neat package.

Keep an eye on ad spend, and follow the money to get downstream on some of your SEO strategies.
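To make point 2 concrete, here’s a minimal sketch of structured data in practice: a small Python helper that builds a schema.org Article snippet as JSON-LD and prints the `<script>` tag you’d drop into a page’s `<head>`. The field values below are placeholders, not a prescription for your site.

```python
import json


def article_jsonld(headline: str, author: str, date_published: str, description: str) -> str:
    """Build a JSON-LD <script> tag for an article using schema.org's Article type.

    Structured data like this helps search engines (and the AI features built on
    top of them) understand what a page is about, who wrote it, and when.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
        "description": description,
    }
    # indent=2 keeps the markup readable when inspecting the page source
    return f'<script type="application/ld+json">\n{json.dumps(data, indent=2)}\n</script>'


if __name__ == "__main__":
    # Placeholder values -- swap in your own page's details
    print(article_jsonld(
        headline="Is Google Gemini the final deathblow to SEO?",
        author="Sam Woods",
        date_published="2024-05-20",
        description="How AI Overviews and Gemini change the search landscape for marketers.",
    ))
```

The same idea extends to FAQPage, Product, Recipe, and the rest of the schema.org vocabulary: the more explicitly your content describes itself, the easier it is for an AI-generated overview to represent (and attribute) it accurately.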

GPT-4o: OpenAI's next step in Multimodal AI (and Assistants)

OpenAI just can't seem to keep their name out of the headlines lately.

But put aside all the hype, hoopla, and potential legal issues at hand: this time, they’ve released GPT-4o.

Now, I know what you're thinking. 

Wasn't Google's Gemini supposed to be the multimodal king of the hill? 

It's true, both models leverage native multimodal architecture, and they're both incredibly impressive. 

But this isn't just another incremental update in the cage fight between AI giants for the top spot.

GPT-4o comes with an impressive list of capabilities that showcase what's possible with multimodal AI.

Want to chat using a mix of text, images, and voice? Go ahead, it can handle it. This is a far cry from the clunky, disjointed experience of using separate models for each type of input and output.

The "o" in GPT-4o  stands for "omni," and that's not just some fancy marketing speak.

This model was trained end-to-end on text, vision, and audio simultaneously, meaning it can understand and generate content across all these modalities with a level of nuance and context that was previously unavailable in one place.

What really clinched things for OpenAI is that they made this model available to their free users – a show of goodwill from the company towards the democratization of AI.

Probably a good move since they’ve been slammed by many for their open-but-not-so-open source methods.

Let’s unpack some of the better features:

1. It’s Multimodal: The model can handle text, images, and voice all in one unified system (there’s a quick sketch of what this looks like from the developer side after this list). 

2. Real-Time Conversation: One of the standout features is its ability to engage in real-time conversation. You can interrupt it, change the topic, or ask it to adjust its tone, and it'll keep up without missing a beat. This is a huge step towards more human-like interaction with AI.

3. Visual Problem-Solving: Just like what Google promises with Astra, it can also reason through visual problems in real-time. Just point your camera at something, and GPT-4o can analyze it and provide guidance or answers. 

4. Live Translation: Support for 50 languages. A game-changer to help break down language barriers and facilitate communication across cultures.

5. Memory & Continuous Learning: It keeps a record of your interactions, which means it can build upon your previous conversations. This continuity allows for a more personalized and context-aware experience over time.

6. Searchable Conversations: Ever struggled to remember a specific detail from a previous chat? With GPT-4o, you can easily search through your conversation history to find what you're looking for.

7. Real-Time Knowledge: GPT-4o doesn't just rely on its pre-trained knowledge – it can also access up-to-date information from the web. This allows it to provide relevant and timely responses to a wide range of queries.  
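For the developer-curious, here’s roughly what multimodal input looks like in code. This is a minimal sketch using the official OpenAI Python SDK, sending a text question and an image URL to gpt-4o in a single request; the prompt and image URL are placeholders, and the real-time voice experience described above lives in the ChatGPT apps rather than this particular call.

```python
# pip install openai   (expects OPENAI_API_KEY in your environment)
from openai import OpenAI

client = OpenAI()

# One request, two modalities: a text question plus an image to reason about.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What's going on in this picture, and how do I fix it?"},
                # Placeholder URL -- point this at any publicly accessible image
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/dvd-player-on-the-fritz.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

One model, one request, mixed inputs – that’s the practical meaning of “omni” for anyone building on top of it.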

Overall, it’s a pretty impressive list of capabilities that dramatically improve the AI experience for even the most basic user.

And let’s remember, these capabilities are available on the free tier. 

Previously, you’d have to create some of your own GPTs to harness this kind of contextual understanding.

And even then, the level of sophistication we’re seeing from Google’s and OpenAI’s latest models simply wasn’t available before.

Not all in one place, anyway.

While GPT-4o's impressive capabilities keep pace with Google’s newest rollout, OpenAI and Google aren’t the only players in the game.

Tech giants like Microsoft and Apple are also making bold moves to bring AI assistants closer to users than ever before, embedding them directly into their devices.

Microsoft’s AI Assistants have entered the chat

Microsoft is throwing its hat into the AI assistant ring.

They just unveiled their new Surface devices, with Copilot embedded right in.

These new Surface devices are packing some serious heat under the hood. 

Microsoft claims they're the "fastest, most intelligent Windows PC ever built," and they might just be onto something.

These PCs can run large language models while connected to Microsoft's Azure cloud service, and they also leverage Microsoft’s small language models like Phi-3 mini.

All of the new models feature Snapdragon X series processors, which enable the Copilot+ PC capabilities, including a dedicated NPU (Neural Processing Unit) delivering up to 45 TOPS (trillion operations per second).

And the best part?

They’ve got a dedicated Copilot button directly on the device, so it’s literally a keystroke away at all times. 

Microsoft isn't the only tech giant making moves in the AI assistant space. 

Apple's been cooking up something special with their new BFF, OpenAI.

While they're keeping the details under wraps for now, this partnership is a clear sign that Apple's doubling down on AI.

But Apple's not putting all its eggs in one basket. They're also flirting with Google's Gemini on the side.

Hey, in the world of AI, it pays to keep your options open.

But let's be real. A partnership can only take Apple so far.

If they want to dominate the AI game, they're gonna have to ditch the training wheels and build their own chatbot.

And not just any chatbot—one that's deeply integrated into every Apple product you can think of.

For now, Apple's betting on a combo of homegrown AI features (both on-device and in the cloud) and the OpenAI deal to keep them in the race.

But Apple's got some serious catching up to do, and they know it.

Let's take a step back and reflect on what this means for us as marketers.

It's easy to get caught up in the hype. I’ve been in this world of Machine Learning since 2016 and Generative AI since 2019, and it’s even hard for me to resist getting caught up in the insane things being built right now.

But my advice to you is to approach AI with equal parts enthusiasm and thoughtfulness.

It’s okay to believe the hype.

As long as you actually build something with it (and discover for yourself what’s true and what’s not).

What does all this mean for you as a marketer?

It means leverage.

It means the chance to create experiences that are more engaging, more impactful, and more finely tuned to your audience than ever before.

It means thinking beyond the hype and focusing on how AI can solve real problems and create real value for your customers.

But it’s not easy.

Most AI tools, tricks, and prompts have a half-life of a few weeks or months. 

You should still use these things but understand where this is all going:

The biggest trend right now is toward AI assistants, on both the consumer and business side.

A single “app” (if you can even call it that) through which you can do anything—and have most things done for you. 

Under the hood, it’s all agents and workflows doing your bidding.

This can either go insanely well—or incredibly wrong.

I chose optimism because pessimism is cowardice.

What a time to be alive.

Fingers on the pulse, eyes on the prize,
Sam Woods
The Editor
