AI’s Middle Path Deserves More Attention

Extreme AI narratives are drowning out more practical conversations. AI is better viewed as normal technology, and Small Language Models offer businesses a focused, efficient path to measurable returns on their AI investments without the extreme prophecies.
Published on
February 3, 2026
Stephen DeAngelis
A serial entrepreneur, technology pioneer, and thought leader exploring the future of business, AI, and global affairs.

By Stephen DeAngelis

In a recent article, freelance writer Jackie Snow asked, “What if AI doom and bloom prophecies are derailing more important conversations?” She explains, “Scroll through any tech news feed and you’ll find two dominant narratives about AI. On one side, venture capitalists like Marc Andreessen publish manifestos declaring that ‘intelligence is the ultimate engine of progress’ and promising technology will liberate the human soul. On the other, respected researchers sign open letters warning of ‘extinction risk’ and comparing AI development to nuclear weapons proliferation. Both camps share something in common beyond their certainty. They’re drowning out everyone else, including plenty of smart AI researchers who don’t subscribe to either vision.”[1] She insists there is a middle path between doom and bloom: “A technology that’s genuinely important but ultimately ordinary.”

Artificial Intelligence as Normal Technology

Snow cites an article by Arvind Narayanan, a professor of computer science at Princeton University, and Sayash Kapoor, a Senior Fellow at Mozilla and a Laurance S. Rockefeller Fellow in the Princeton University Center for Human Values, in which they “articulate a vision of artificial intelligence (AI) as normal technology.”[2] They observe, “To view AI as normal is not to understate its impact — even transformative, general-purpose technologies such as electricity and the internet are ‘normal’ in our conception. But it is in contrast to both utopian and dystopian visions of the future of AI which have a common tendency to treat it akin to a separate species, a highly autonomous, potentially superintelligent entity. … We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs. We do not think that viewing AI as a humanlike intelligence is currently accurate or useful for understanding its societal impacts, nor is it likely to be in our vision of the future.”

Enough experts are sounding grave warnings about how AI will affect humanity that those warnings shouldn’t be ignored. One such expert is Dario Amodei, CEO and co-founder of Anthropic, who posted a 38-page essay detailing the risks he sees.[3] He begins the essay with a scene from Carl Sagan’s novel Contact, in which an international panel questions an astronomer being considered as a candidate to represent Earth in making contact with aliens. The panel asks her, “If you could ask [the aliens] just one question, what would it be?” Her reply: “I’d ask them, ‘How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?’” Amodei believes the world remains in the adolescent stage of AI maturity, a state that is “both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.” Writing about Amodei’s concerns, journalist Shannon Carroll observes, “He lands on the uncomfortable thesis that the AI prize is so glittering (and its strategic value is so obvious) that nobody inside the race can be trusted to slow it down, even if the risks are enormous.”[4]

Trust in AI is another challenge. A short paper published last year by Vishal Sikka, founder and CEO of Vianai Systems, and his son, Varin Sikka, a student at Stanford University, discussed the problems associated with large language models (LLMs) and AI agents, especially when dealing with very complex situations.[5] When journalist Steven Levy asked the elder Sikka, “Should [we] forget about AI agents running nuclear power plants?” he answered, “Exactly.”[6] Nevertheless, large tech companies are working relentlessly to overcome the challenges associated with errors and hallucinations. Snow reports that both “doomsayers” and “bloomsayers” admit there are challenges to overcome. She writes, “We need to think about discrimination in hiring algorithms, the erosion of the free press, labor displacement in specific industries, and all the problems that come with any powerful new tool.” The bottom line for most businesses, however, is that they will be better off focusing on the middle path (i.e., treating AI as a normal tool).

The Value of Small Language Models on the Middle Path

“For many companies,” writes author and keynote speaker Dean DeBiase, “LLMs are still the best choice for specific projects. For others, though, they can be expensive for businesses to run, as measured in dollars, energy, and computing resources. ... I suspect there are emerging alternatives that will work better in certain instances — and my discussions with dozens of CEOs support that [prediction].”[7] One of those alternatives is the Small Language Model (SLM). In many cases, an SLM is a better choice than an LLM. In a LinkedIn post, I wrote, “There’s often a misconception that small language models are less effective than large language models — but the reality is, each serves different functions/needs based on the source data and information they use and the trustworthiness they bring.” SLMs are generally more trustworthy because they draw on data from recognized corporate knowledge bases.

As the staff at SymphonyAI explains, “Small Language Models are specialized in specific tasks and built with curated, selective data sources. A small language model is a type of foundation model trained on a smaller dataset compared to Large Language Models. This focused training allows SLMs to learn the nuances and intricacies of specific domains, providing higher quality and more accurate results, increased computational efficiency, and faster training and development times.”[8] In other words, SLMs are AI as normal technology.
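To make that concrete, below is a minimal sketch of what domain-focused training can look like in practice, using the open-source Hugging Face libraries. The model name, corpus file, and hyperparameters are illustrative assumptions, not a recommendation:

```python
# A minimal sketch of domain-focused fine-tuning for a small language model.
# The model name, corpus file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "microsoft/phi-2"  # any small open model; this one has ~2.7B parameters
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Curated, domain-specific text -- e.g., internal manuals or knowledge-base articles.
dataset = load_dataset("text", data_files={"train": "curated_domain_corpus.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-domain",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is scale: a job like this fits on a single workstation-class machine, which is exactly the cost profile DeBiase describes.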

During a discussion between DeBiase and Steve McMillan, president and CEO of Teradata, McMillan noted, “As we look to the future, we think that small and medium language models, and controlled environments such as domain-specific LLMs, will provide much better solutions [for many companies].” Why? DeBiase explains, “A critical advantage of [SLMs is that] the data is kept within the firewall domain, so external SLMs are not being trained on potentially sensitive data. The beauty of SLMs is that they scale both computing and energy use to the project’s actual needs, which can help lower ongoing expenses and reduce environmental impacts.”
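That firewall point is easy to picture. Here is a rough sketch of serving a small model entirely on-premises, so prompts containing sensitive operational data never leave the corporate network; the model choice and prompt are illustrative assumptions:

```python
# Sketch: running a small model entirely on-premises, so prompts containing
# sensitive operational data never leave the corporate network.
# The model choice and prompt are illustrative assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # ~0.5B parameters; runs on modest hardware
    device_map="auto",
)

prompt = "Summarize this shipment-exception report for the operations team: ..."
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```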

As I have noted previously, small language models have the potential to level the playing field in the logistics and retail sectors because they are a more trusted, affordable, and scalable solution. However, it’s important to remember that they aren’t simply plug-and-play solutions. Positioning any language model to perform AI-powered advanced analytics takes time and extensive training before it’s ready to tackle complex operational challenges. SLMs also need access to both real-time data and vast historical datasets that serve as industry-specific knowledge bases, as well as in-house AI expertise to ensure they are trained and scaled effectively.
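For illustration, here is one rough way to ground a small model in such a knowledge base using simple semantic retrieval. The documents, query, and model names are hypothetical:

```python
# Sketch: grounding an SLM in a domain knowledge base with simple semantic
# retrieval. The documents, query, and model names are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# In practice this would be a vast historical dataset, not two toy records.
docs = [
    "Carrier X averages 2.1 days transit time on the Chicago-Dallas lane.",
    "SKU 1042 carries a safety stock of 350 units at the Memphis DC.",
]
doc_vecs = encoder.encode(docs, convert_to_tensor=True)

query = "What is the average transit time on the Chicago-Dallas lane?"
query_vec = encoder.encode(query, convert_to_tensor=True)
hit = util.semantic_search(query_vec, doc_vecs, top_k=1)[0][0]

# The retrieved context would then be prepended to the prompt sent to a
# locally hosted SLM, such as the one in the previous sketch.
prompt = f"Answer using only this context:\n{docs[hit['corpus_id']]}\n\nQuestion: {query}"
print(prompt)
```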

Concluding Thoughts

Snow believes there is a reason that the doomsayers and bloomsayers receive so much press. She explains, “The extreme narratives are useful precisely because they justify extreme responses, whether that’s showering AI labs with cash or exempting them from oversight. The boring middle justifies nothing except careful, deliberate work.” She adds, “The middle path requires something harder than prophecy. It requires patience, empiricism, and the willingness to admit we don’t know how this plays out. That’s not a satisfying story. But it might be the right one.” The truly satisfying story, however, is the one in which businesses enjoy a return on their AI investments. Stephen Ochs, Head of Marketing at Selector, writes, “Instead of asking ‘Are you using AI?’ today, we’re asking a far more pragmatic, or even brutal question: ‘Where’s the AI actually being used, and what’s it allowing my team to do that they hadn’t been able to do before?’ This is essential. This is the difference between investing in an intangible buzzword of a capability and investing in understanding the business. The value of operational intelligence isn’t in the scale of the model and the marketing budget; it’s the way this system thinks about your messy world.”[9] When AI is seen as a tool — and companies invest in AI on those terms — they will find the middle path a course with fewer obstacles and greater benefits.

Footnotes

[1] Jackie Snow, “What if AI doom and bloom prophecies are derailing more important conversations?” Quartz AI & Tech, 27 January 2026.

[2] Arvind Narayanan and Sayash Kapoor, “AI as Normal Technology,” Knight First Amendment Institute at Columbia University, 15 April 2025.

[3] Dario Amodei, “The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI,” January 2026.

[4] Shannon Carroll, “Anthropic's CEO has a stark warning about AI,” Quartz, 26 January 2026.

[5] Vishal Sikka and Varin Sikka, “Hallucination Stations,” 2025.

[6] Steven Levy, “The Math on AI Agents Doesn’t Add Up,” Wired, 23 January 2026.

[7] Dean DeBiase, “Why Small Language Models Are The Next Big Thing In AI,” Forbes, 25 November 2024.

[8] Staff, “Small Language Model (SLM or SMLM),” SymphonyAI.

[9] Stephen Ochs, “The Most Important Question in Operational AI: Show Me Where It Actually Works,” RT Insights, 21 January 2026.
