Today’s guest is Tim Hayden from Brain+Trust and he’s schooling us on the real A.I. you need to be worried about (and it’s not ChatGPT or anything on the market – yet.)
Artificial Intelligence: what was once a plot point in many a science fiction story is now shaping up to be the biggest technological leap forward since the advent of the internet. So what does this mean for us “Organic Intelligences,” and for how we do business?
A.I. has become something of a catch-all term for a number of different technologies that are all in their infancy, as well as whatever fictional, sentient machines you can think of (HAL 9000, Mr. Data, Skynet, etc.).
This creates a lot of misconceptions about the A.I. we have now, and what we will have in the future. Artificial General Intelligence (AGI) is the hypothetical intelligence that we see in the media: a sentient machine that can think, make decisions, and learn, similarly to how we do. As it stands, this only exists in our imaginations, but there’s no shortage of people trying to make it a reality.
The A.I. that exists today is something of a precursor to that, known as a Large Language Model. LLMs work by taking the prompt given by the user, cross-referencing it with a massive pool of data in the form of written language, and trying to “predict” what words (and in what order) a response should contain. Think of it like an advanced version of a smartphone’s predictive text feature; it looks for keywords in the prompt, pulls up data containing those words, and “averages out” the data into a response. It is a form of what is known as “Generative Artificial Intelligence.”
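To make the “advanced predictive text” analogy concrete, here is a toy sketch of next-word prediction using a simple bigram model. This is a hypothetical illustration only; real LLMs use neural networks trained on billions of words, not frequency counts, but the core idea of predicting the next word from prior data is the same:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for an LLM's training data (hypothetical example).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word: a bigram model, a drastically
# simplified stand-in for the statistics a real LLM learns.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": seen twice after "the"
```

Just like the real thing, this tiny model can only “predict” from what it has seen; feed it a corpus full of errors and its answers inherit them.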
Because Large Language Models don’t “think” in the traditional sense, but rather “reference,” they are only as intelligent as the data you feed them.
Today’s LLMs are trained on a diverse dataset that ChatGPT says includes:
- Books, literature, and written fiction/non-fiction.
- Articles from various publications, including newspapers, magazines, and online platforms.
- Websites covering a wide array of topics, such as science, technology, history, arts, entertainment, and more.
- Academic papers and research articles from various fields, including mathematics, computer science, psychology, sociology, and more.
- Conversational data from online forums, social media platforms, and messaging apps.
- Technical documentation, manuals, and guides.
- Wikipedia articles and other encyclopedic content.
- Creative writing, poetry, and storytelling.
So ultimately what comes out of today’s “A.I.” is only as good as the data that goes in.
Companies collecting, using, and selling user information is nothing new: tracking demographics and behaviors to direct what ads you see, what products you get recommended, and so on. Now, these companies are buying and selling data to train their A.I. Just recently, the social media platform Reddit struck a deal with an unnamed company developing A.I. to sell its user data and activity for $60 million a year.
This presents some new complications. A.I. can’t detect when inaccurate, or outright false, information is included in its training. Tim Hayden, CEO of Brain+Trust, points out how this can be something of a foil when it comes to how we use A.I. going forward: “…there are all kinds of ways that this is going to be consequential whenever we think about the data sources that will be used for A.I., and what happens and what changes with human behavior?”
Hayden says A.I. will almost certainly replace search engines in the near future, and with that in mind, the quality of the training data becomes paramount. He goes on to say, “Google gives us lists when it gives us search results. If you think about that statement, Google Search never worked. It never did. It always gave us lists, it didn’t give us answers. With A.I., you’ve got your answer right in front of you.”
With A.I. that can formulate direct answers to your inquiries, search engines become obsolete. So what then becomes of things like SEO, or search analytics? Much of the advertising and marketing industry as we know it today hinges on the existence of search engines and how people interact with them, so removing them from the equation entirely creates a conundrum: how do you “optimize for A.I.”?
We cover this topic in much more detail in the podcast. What do you think? Will A.I. become sentient in your lifetime?