The world of artificial intelligence is moving at breakneck speed, with new language models popping up everywhere. Recently, Meta dropped Llama 3.1, a 405 billion parameter model, into the mix. As someone who's spent a fair bit of time tinkering with these models, I thought I'd share my thoughts on where we're at and where we might be heading.
For those keen to get their hands dirty with Llama 3.1, you can grab a semi-okay client from GitHub called Bedrock Client. It's available for Mac users, but I'm unsure about Windows. This client lets you switch between different models on Bedrock, including Llama 3.1 and Claude Sonnet 3.5.
Now, if you've danced with AWS before, you know it can be a bit of a headache when services are spread across different regions. For some reason, they don't host all the language models in one spot. So, you might find yourself juggling us-east-1 for Llama 3.1 and us-west-2 for Sonnet. It's not ideal, but it's workable.
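If you'd rather skip the client and talk to Bedrock directly, the region juggling can be hidden behind a small lookup table. This is a minimal sketch using boto3's bedrock-runtime Converse API; the model IDs and region assignments below are assumptions, so check the Bedrock console for the identifiers actually enabled on your account.

```python
# Sketch: pin each Bedrock model to the region where it's hosted,
# so callers never have to think about us-east-1 vs us-west-2.
# Model IDs and regions are assumptions -- verify them in your console.

MODEL_REGIONS = {
    "meta.llama3-1-405b-instruct-v1:0": "us-east-1",            # Llama 3.1
    "anthropic.claude-3-5-sonnet-20240620-v1:0": "us-west-2",   # Sonnet
}


def client_for(model_id: str):
    """Return a bedrock-runtime client pinned to the model's region."""
    import boto3  # imported lazily so the routing table works without AWS creds
    return boto3.client("bedrock-runtime", region_name=MODEL_REGIONS[model_id])


def ask(model_id: str, prompt: str) -> str:
    """Send a single-turn prompt via the Converse API and return the text."""
    resp = client_for(model_id).converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]
```

The point of the table is that adding a third model is one line, not another round of "which region was that in again?"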
As for pricing, Sonnet through Bedrock is pretty much on par with going straight to the Claude API. The tool was a bit buggy, but the interactions were generally fine.
I played around with Llama 3.1, and to be honest, it was okay. Nothing made me go, "Oh my God, wow, this is amazing." I did notice some weird quirks, though. After I asked for information, it would sometimes carry on a conversation it thought we were having. It was very strange, and I'm unsure whether the bug was in the client, the model, or the API.
When we discuss models like GPT-4o, Claude Sonnet 3.5, and Llama 3.1, we are dealing with absolutely massive systems. They are so big that even with a thriving open-source development community, the average person or small business cannot break free from corporate control of these models.
The simple fact is that these models need enormous GPUs to run. If you try to rent the hardware, you'll pay a hefty hourly rate. Alternatively, you'll pay for an API or subscription. Either way, it's not cheap.
I reckon we will see a split in how language models develop. On one side, enterprises will use these massive models to power specific functions in their products. They might even fine-tune them to home in on particular skills.
Conversely, the general public will likely be left with access to smaller, lower-quality models. That might sound a bit doom and gloom, but hear me out.
Over the past few years, I've explored how these models are built, dabbling in machine learning and grappling with complex concepts. Based on what I've seen, we will see some strong, smaller models hit the market.
Picture this: a marketplace full of highly specialised, small language models people can buy and use on their devices. These models would be experts in specific areas, like having a chat group full of specialist friends.
Instead of going to one all-knowing AI, you'd have a botanist AI for plant questions, a doctor AI for health queries, and so on. These specialised AIs could even handle multimodal inputs. Imagine taking a photo of your wilting plant, and the botanist AI tells you what's wrong and how to fix it.
This setup could open up a whole new market. Knowledge experts could build these specialized models using some open-source foundation and then sell access or the models for independent use.
It's not hard to imagine a future where we each have a personal AI that acts like a router. This AI would have a mind map or word cloud of different knowledge areas. When you ask a question, it sends it off to the relevant expert AIs, gets their responses, and combines them into an answer you can understand.
Let's explore this personal AI router concept a bit more. Imagine you have a central AI that knows where all the specialized knowledge is stored. When you ask a question, it doesn't try to answer itself. Instead, it determines which expert AIs best handle your query.
It might send your question to three or four different expert AIs, similar to the mixture of experts approach. Then, it takes their responses, combines them into something coherent, and presents them to you in a way that makes sense.
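The routing step above can be sketched in a few lines. This toy version scores each expert by keyword overlap with the question, fans the query out to the best matches, and stitches the replies together. The experts, their keyword sets, and their canned answers are all hypothetical stand-ins; in practice each would be a small local model or an API call.

```python
# A toy "personal AI router": pick experts by keyword overlap,
# fan the question out, and combine the answers.
import re

# Hypothetical experts -- real ones would be small models, not lambdas.
EXPERTS = {
    "botanist": {
        "keywords": {"plant", "leaf", "soil", "wilting"},
        "answer": lambda q: "Likely overwatering; let the soil dry out.",
    },
    "doctor": {
        "keywords": {"health", "symptom", "fever"},
        "answer": lambda q: "Rest and fluids; see a GP if it persists.",
    },
}


def route(question: str, top_k: int = 2) -> list[str]:
    """Return the names of up to top_k experts whose keywords match."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    scored = sorted(
        EXPERTS.items(),
        key=lambda kv: len(kv[1]["keywords"] & words),
        reverse=True,
    )
    return [name for name, spec in scored[:top_k] if spec["keywords"] & words]


def ask(question: str) -> str:
    """Fan the question out to matching experts and merge their replies."""
    picks = route(question)
    if not picks:
        return "No expert matched; falling back to a general model."
    return "\n".join(f"[{name}] {EXPERTS[name]['answer'](question)}" for name in picks)
```

A real router would replace the keyword sets with embeddings and the lambdas with model calls, but the shape — score, dispatch, merge — stays the same.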
This system could use a mix of small, self-hosted AIs, enterprise-hosted AIs, and even API connections to platforms like Hugging Face. Of course, there are potential issues to iron out, like latency, processing power, and data quality, but the possibilities are exciting.
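The latency issue in particular has a well-worn answer: query the backends concurrently and keep whatever arrives within a deadline. Here's a minimal asyncio sketch under that assumption; the two expert coroutines are stubs standing in for a local model and a slow remote API.

```python
# Fan out to several expert backends concurrently; keep answers that
# arrive before the deadline and drop the stragglers.
import asyncio


async def local_expert(question: str) -> str:
    """Stub for a small on-device model: fast."""
    await asyncio.sleep(0.01)
    return "local: short answer"


async def slow_remote_expert(question: str) -> str:
    """Stub for a far-away hosted API: too slow for our deadline."""
    await asyncio.sleep(5)
    return "remote: detailed answer"


async def gather_answers(question, experts, timeout=0.5):
    tasks = [asyncio.create_task(e(question)) for e in experts]
    done, pending = await asyncio.wait(tasks, timeout=timeout)
    for t in pending:               # cancel anything that missed the deadline
        t.cancel()
    await asyncio.gather(*pending, return_exceptions=True)
    return [t.result() for t in done]


answers = asyncio.run(gather_answers("q", [local_expert, slow_remote_expert]))
```

Processing power and data quality don't have a one-liner fix like this, but the deadline pattern at least stops one slow expert from holding the whole answer hostage.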
We will see a significant separation between enterprise AI and consumer AI. Businesses will have access to these massive, powerful models, while the average person will work with smaller, more specialized tools.
It's still early days, and the speed at which these technologies develop is mind-boggling. But I believe this divergence is coming and will shape how we interact with AI in the future.
While the idea of specialized, accessible AI is exciting, it's not without its challenges: the latency of bouncing a question between several models, the processing power needed to run even small models locally, and the quality of the data each specialist is trained on.
The AI landscape is changing rapidly, and it's hard to predict where we'll end up. However, I'm excited about the possibility of more accessible, specialized AI tools. I can see a future where we each have an AI assistant that knows when to call in the experts.
This shift could democratize AI in a way we haven't seen before. Instead of relying on a handful of tech giants, we could have a diverse ecosystem of AI models created by experts in various fields. It's a future where AI becomes more personal, more specialized, and hopefully, more useful in our day-to-day lives.
As we stand on the brink of this AI revolution, it's clear that the future holds both challenges and opportunities. The divide between enterprise and consumer AI may grow, but with it comes the potential for more specialized, accessible tools.
Developing personal AI routers and specialized models could change how we interact with AI, making it a more integral and natural part of our lives. It's an exciting time, and I can't wait to see how it unfolds.
What do you think about this potential future? Are you excited about the possibility of having your own personal AI assistant that can tap into a network of specialized experts? Or do you see challenges that I've overlooked? I'd love to hear your thoughts on where we're heading in this rapidly evolving world of AI.