
Often, product teams and stakeholders turn to me and ask, “Can AI solve that?” I’m as excited as they are to solve problems with cutting-edge technology, but there’s often an incorrect underlying assumption: the expectation that “AI” will magically produce results or insights that wouldn’t be possible any other way.
For example: given 1,000 resumes, magically find the best match for the job. Or write an outreach email that magically explains why a prospect should buy our product.
AI is not magic; what we have are complex contraptions for automating what humans already do.
It seems like magic when a computer wins at chess, or pilots a self-driving car. But we shouldn’t expect that an algorithm is going to “figure out what is great” about 1 in 1,000 candidates, or “infer what the customer needs to hear.”
AI algorithms are improving rapidly. More and more software functionality seems like magic every day, with a broader range of problems solved in inspiring ways. However, there are very few situations where we can just “whip out some AI to solve the problem,” because there is no actual magic, just a finely tuned contraption.
Let’s stick with that resume example. Can we use AI to filter 1,000 resumes to find the best candidate for the job? Well, we can use a vector store to match resumes to the job description by semantic similarity, without requiring the actual keywords to match. We can use LLMs to summarize each resume into features that we can feed into a classifier. We can train the classifier on 100 manually evaluated examples to pick resumes similar to the ones we like. And if we have multiple human users, we can collaboratively filter resumes (recommend candidates other people liked).
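Here’s a minimal sketch of that contraption in Python, assuming scikit-learn for the classifier. The embed() and summarize_features() helpers are toy stand-ins for a real embedding model and an LLM feature extractor, and the collaborative-filtering step is left out.

```python
# A toy sketch of the resume-screening contraption: embedding similarity,
# LLM-style feature extraction, and a classifier trained on ~100 labeled examples.
# embed() and summarize_features() are illustrative stand-ins, not real models.
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model: hash words into a fixed-size vector.
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec

def summarize_features(resume: str) -> list[float]:
    # Stand-in for an LLM feature extractor; in practice you'd prompt a model
    # to pull out things like years of experience or a skill-match score.
    text = resume.lower()
    return [float(text.count("python")), float(text.count("lead")), float(len(text.split()))]

def screen(resumes: list[str], job_description: str,
           labeled: list[tuple[str, int]]) -> list[tuple[str, float]]:
    # Step 1: keyword-free matching -- cosine similarity between embeddings.
    job_vec = embed(job_description)
    sims = np.array([
        float(np.dot(v, job_vec) / (np.linalg.norm(v) * np.linalg.norm(job_vec) + 1e-9))
        for v in (embed(r) for r in resumes)
    ])

    # Step 2: train a classifier on the ~100 manually evaluated examples.
    clf = LogisticRegression().fit(
        [summarize_features(r) for r, _ in labeled],
        [label for _, label in labeled])

    # Step 3: score new resumes and rank by a blend of classifier probability
    # and similarity to the job description.
    probs = clf.predict_proba([summarize_features(r) for r in resumes])[:, 1]
    return sorted(zip(resumes, 0.5 * probs + 0.5 * sims),
                  key=lambda pair: pair[1], reverse=True)
```

In practice you’d swap the stand-ins for a real embedding model and a real LLM call, but the shape of the system stays the same: similarity, features, and a classifier trained on your own judgments.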
No part of that contraption fundamentally “understands” what makes a great candidate. It’s doing the same thing as if you reviewed all those resumes manually. Maybe it’s faster. If we’re successful, our system has recreated your bias perfectly. It picks the same ones you would’ve picked.
This isn’t to say there aren’t insights to be had. For example, our classifier might be able to tell us, after training, that the most important feature of any candidate is that they live in Seattle, which we didn’t explicitly encode.
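Here’s a hedged sketch of where that kind of insight comes from: with the logistic-regression classifier sketched above, the coefficient magnitudes are a rough proxy for which features drive its picks. The feature names here (lives_in_seattle and friends) are illustrative, not anything the real system would output.

```python
import numpy as np

def top_features(clf, feature_names: list[str], k: int = 3) -> list[tuple[str, float]]:
    # For a linear classifier, larger |coefficient| roughly means the feature
    # matters more to the decision -- that's where the "Seattle" surprise shows up.
    weights = np.abs(clf.coef_[0])
    order = np.argsort(weights)[::-1][:k]
    return [(feature_names[i], float(weights[i])) for i in order]

# Illustrative usage with hypothetical feature names:
# top_features(clf, ["years_experience", "skill_match", "lives_in_seattle"])
```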
“Understanding” is a blurry term these days. You can ask an LLM to justify its reasoning. In the example above, you could sort of build a system that recommends candidates “because they appear well rounded” or because “they have best-in-the-world skills in area x” (without the resume explicitly stating that). With a bit of statistically weighted hallucination, the LLM can often make some good inferences. You’d still need a separate system to evaluate each candidate individually and then rank them. That sounds very similar to how a human would do it (with no magic).
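A rough sketch of that “evaluate each candidate, then rank” step, assuming nothing more than some callable that sends a prompt to your chat model of choice. The prompts and the 0-to-10 scoring scheme are illustrative, not a recommended recipe.

```python
from typing import Callable

def evaluate_candidate(ask: Callable[[str], str], resume: str, job: str) -> tuple[float, str]:
    # Ask for a justification and a numeric score separately, so ranking stays simple.
    justification = ask(
        "In two sentences, explain why this resume might fit the role.\n"
        f"Role: {job}\nResume: {resume}")
    score_text = ask(
        "Rate this resume's fit for the role from 0 to 10. Reply with a number only.\n"
        f"Role: {job}\nResume: {resume}")
    return float(score_text.strip()), justification

def rank_candidates(ask: Callable[[str], str], resumes: list[str], job: str):
    # One evaluation per candidate, then a plain sort: no magic, just bookkeeping.
    scored = [(r, *evaluate_candidate(ask, r, job)) for r in resumes]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```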
Magic means “apparently having supernatural powers,” a definition that I won’t grant to AI. However, I think it’s also used to mean “beyond, or simply faster than, comprehension.” Electricity and running water are magic to me. I love it when my code does magical things. So yes, please let me create some AI magic for you.