
Issue #36: Is Elon Musk saving marketing? AGI incoming? True value of Generative AI?

Good morning.

I’m going to quote myself here: 

We’re right in the middle of one of the greatest technological advancements mankind has seen since the telephone. 

It’s an exciting time to be alive. 

As we ride this wave, it’s important to remember that with innovation also comes great responsibility. 

“Should we be doing this?” is a phrase used ad nauseam by the AI naysayers.

It’s the 100,000-ft general critique of AI that, frankly, gets us nowhere.

We are already doing it.  

But there’s nuance in this discussion—open source LLMs, AGI, big data, and so on. 

There’s been a lot going on in the last couple of weeks. 

Let’s get into it. 

—Sam

IN TODAY’S ISSUE 👨‍🚀 

  • Elon open sources Grok Chat 

  • Are we even taking the right path to AGI?

  • 3 ways you can leverage Generative AI

Let’s dive in.

Elon open sources Grok

There are players with nefarious intent for AI technology.

Think large-scale surveillance schemes and other mechanisms of Orwellian control. 

They want to keep AI in the hands of the government only, or in the hands of a chosen few large corporations: the only ones “allowed” to develop and use AI.

They want to know your every move, thought, fear, and weakness. 

And then they want to monetize it. 

This is why open-source AI is critical—it's a defense against the plans these players have for total control. 

It’s the only way you, as a marketer, will have access to more tools that are free or cheap, and capable of doing what you need.

Instead of you being given artificial limits to how many, and what kind of, ads you’re “allowed” to make—you’ll have tools that can produce any kind of ad you want, in any style, for any channel.

This might seem trivial to you, but fundamentally, what they’re trying to do with regulations right now is: 

  • Regulatory capture, for a select few companies, so they can control it all.

  • Restrict math. AI is math. Regulating AI is like telling you what kind of math you’re allowed, and not allowed, to use.

To that end, love him or hate him, Elon announced that his startup xAI will open-source its Grok chatbot. 

Elon’s been far from quiet on his stance for open-source AI for some time now, and this announcement comes on the heels of his recent lawsuit against OpenAI. 

Here’s the scoop: 

  • Musk dropped Grok back in November, calling it the “ChatGPT killer with a rebellious streak”. Grok made ripples due to its real-time internet access thanks to the X/Twitter hookup.

  • Musk has ambitious goals for Grok, including a live search engine tied to X, a 25,000-token context window, and eventually running Grok natively in all Teslas.

  • Open-sourcing Grok’s code puts xAI firmly on the side of Meta and Mistral AI. A contrast to OpenAI’s hush-hush approach. 

But why should we marketing plebs care about this pissing match between tech titans? 

Because Musk cracking open Grok’s code is a haymaker of a move—both philosophically and strategically. 

What this is really about is a high-stakes tug-of-war over the soul of AI. 

As AI becomes ubiquitous in our lives, we’ve got to ask ourselves: 

Do we want this technology to serve the greater good, or be just another tool for the corporate fat cats and oligarchs to stuff their pockets—and for our governments to use as means of control against you and me?

The AI community is getting rightfully antsy about the centralization we’re seeing in Big Tech, especially around Generative AI. 

AI could easily become a tool for consolidating power in the hands of a few. 

To me, that sounds more like an AI Apocalypse than a Renaissance.  

And I’ve seen first-hand how a new Renaissance is better for us all.

The push for open-source LLMs? 

It’s not just some geek crusade to get their hands on each other's source code. 

It’s a stand against the centralization of power in the AI world. 

We’re at a crossroads, and the decisions we make now about how AI is developed, and more importantly who has access to it, will shape the future in a big way. 

And the scientific community generally agrees with the case for open-sourcing LLMs. 

The Allen Institute for Artificial Intelligence published a paper in February making the case for open-source models. 

A Stanford paper, On the Societal Impact of Open Foundation Models, mirrors this sentiment, citing the customizability of open-source technology. 

All of these implications are important for us to consider as users of AI. 

Who’s really got their hands on the wheel when it comes to these insanely powerful AI systems?

And are they steering us in the right direction? 

There are some who believe that LLMs will never lead us into the AI Renaissance.

Let’s explore.

Are we even taking the right path to AGI? Verses AI gives us some perspective

AGI (Artificial General Intelligence) is the AI holy grail. 

Some claim we have achieved it, some say it’ll never happen. 

But what is it exactly? 

I touched on the topic a bit in last week’s issue with the announcement of Claude 3. 

Essentially, AGI is AI with human-like intelligence that can perform tasks on par with human abilities. 

From crunching numbers, to penning the next great American novel, to giving relationship advice, AGI for all intents and purposes should be able to act (and reason) exactly as a human would. 

But here’s the catch: 

Science still hasn’t figured out exactly how to define human intelligence. 

It’s a tricky thing. Human intelligence can’t be put into neat little boxes of skills and abilities. 

It’s an amalgamation of complex cognitive functions. 

You’ve got logical reasoning, memory recall, creative problem solving, the ability to learn from past mistakes—it’s a long list. 

And distilling all of that complexity into a single, measurable set of benchmarks—well that’s a tall order. 

The folks at DeepMind took a stab at it and published a study that quantified the qualities of AGI into six primary criteria. 

Big Tech is racing to achieve true AGI with LLMs—Meta, Amazon, Google—they’ve all got horses in the race. 

But according to experts like Yann LeCun, they’re running the race on the wrong track. 

If Auto-Regressive LLMs won’t get us to the AGI promised land, what will?  

Enter Verses AI, the new kid on the block with a fresh perspective. 

Their stance: 

  • Distributed Intelligence is the natural path to AGI, using biology as the starting point for learning. They believe that AGI can only be achieved with a system that can self-organize and retrain in real time – like natural organisms do. 

  • They also believe that the AI we have today through LLMs is just sophisticated pattern matching. They can spit out impressive outputs based on input data, but they’ve got no real agency, no real “intelligence” of their own. 

  • They believe that AGI requires a model that can autonomously learn and act in the world. 

Yann unpacks the nuance of this argument much better than I ever could in a fascinating interview on Lex Fridman’s podcast.

In the episode, Yann uses Moravec’s Paradox to explain why LLMs fall short of truly mastering human learning. 

Yann insists that LLMs will never get us to AGI, arguing that “everything we learn, and everything animals learn, has nothing to do with language.” 

I’m not sure I agree with him, and I’m not alone.

I think language plays an integral role in how we learn (and think, act, and more).

But will Distributed Intelligence modeling be the key to unlocking AGI? 

Only time will tell. 

What’s important is that people are asking the big questions around AI technology. 

And well-intentioned scientists, researchers, and even some corporations are doing their best to answer them. 

Why?

Because in the fight for the soul of AI, some do believe that this technology can and should be used for the greater good. 

Like myself, they believe deeply in the power of human ingenuity and creativity. 

So where does that leave us as individual co-creators with AI technology? 

It’s not about some distant future where AI might match human intelligence. 

The choices we make today, with the AI tools already at our fingertips, matter. 

Here are 3 things you can do right now to leverage Generative AI

As consumers of this technology, we can do our part to stop proliferating more crap and drivel on the internet. 

Here are some things you can do right now to be a better consumer, contributor, and co-creator of AI: 

1. It sounds “tired” but truly, don’t mass-produce crappy content. 

There’s enough of it on the internet already.

And Google continues to press on in its quest to remove this crap from search indexes, thanks to its latest update.

Again, repeating myself here, but AI isn’t going to replace great marketers and great copywriters. 

It’s going to make the great even better, and it’s going to make the mediocre fade into the deluge, never to be heard from again. 

Here’s what I tell most copywriters who ask me—and what many don’t want to hear: 

Hiring content creators, writers, media buyers, etc. will become an active choice by the client. 

Not because they have to, and not because AI won’t be good enough. 

But because they’ll want to, in hopes that you bring something to the table that’s unique and not about copywriting at all. 

Because there are models (LLMs, RAG bots, etc.) that can produce superior copy, much faster, at a fraction of the cost. 

I’ve built these models and systems for a few years now. 

Believe me: they can write better copy and marketing materials than (perhaps) most humans can. 

Don’t let your interactions with ChatGPT alone convince you that “AI will never be good enough”. 

You and I are using a limited, restricted, and filtered version of GPT-4. We’re not getting the good stuff. 

But there are thousands of open source models. And many of them write better copy, and better content, than a human can.

Lots of copywriters will make it. 

Lots of copywriters won’t. 

But this has always been true, in every profession, even before AI.

2. Do NOT skimp on your research.

Never take any AI research or output from an LLM at face value.  

This article does a great job of unpacking ways to find quality AI research, and then run said research through additional tools to ensure accuracy. 

It’s been well documented that LLMs hallucinate and spit out information that is factually incorrect.  

For anything in the research and analysis realm, AI is going to do it all-around better and faster than a human.  

But we still have a responsibility to work as co-researchers to ensure that factual, relevant research gets the traction it deserves in our content. 

3. Don’t plagiarize. Yeah, turns out that matters.

Don’t rip off someone else’s work. This should go without saying. 

But as content creators rush to incorporate AI tools into their everyday workflows, plagiarism can happen unintentionally. 

Google could detect plagiarized content long before AI became what it is now. 

Their machine learning algorithms and models are far more sophisticated than they used to be. 

I don’t think they’ll ever be able to reliably tell the difference between “human” content and “robot” content. 

But they sure can model, “understand”, and detect plagiarized and crappy content. 

Use copyright catchers like Patronus to run your outputs through an additional layer of analysis. 
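If you want a quick first-pass check before reaching for a dedicated tool, even a crude n-gram overlap test will flag copy that hews too closely to a source. To be clear, this is a minimal sketch of the general idea, not how Patronus or any specific product works, and the 5-gram size and 30% threshold are arbitrary assumptions you’d tune for your own content:

```python
def ngrams(text, n=5):
    # Lowercase word n-grams: a crude fingerprint of phrasing.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(draft, source, n=5):
    # Fraction of the draft's n-grams that also appear in the source.
    d, s = ngrams(draft, n), ngrams(source, n)
    return len(d & s) / len(d) if d else 0.0

draft = "the quick brown fox jumps over the lazy dog near the river"
source = "yesterday the quick brown fox jumps over the lazy dog again"

score = overlap_score(draft, source)
if score > 0.3:  # arbitrary threshold for illustration
    print(f"Possible overlap: {score:.0%} of 5-grams shared")
```

Real plagiarism detectors compare against web-scale indexes and catch paraphrase; a check like this only flags near-verbatim reuse between two texts you already have on hand. But it’s better than shipping blind.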

When the cost of producing content—and the value of it—is quickly approaching zero…

Your choice becomes: 

Mass-produce cheap content with the help of AI (this actually works, to a degree, but not for long, and your reputation becomes collateral damage). 

Or…

Raise the bar and go the extra mile—so that what you’re creating with AI is top-notch and pushes the boundaries of what’s possible. 

Generative AI doesn’t have to be a race to the bottom, just another tool for flooding the internet with useless drivel. 

Instead, we can use AI to augment and enhance human creativity. 

We can use it to create works that are more engaging, more valuable than what we could produce alone.

Talk again soon,
Sam Woods
The Editor