
Dear MSP and IT Vendor Community,


At Humanize IT, we've been closely monitoring the rapid evolution of AI, particularly Large Language Models (LLMs) and Machine Learning (ML) technologies, with considerable enthusiasm. These powerful tools have already proven invaluable for our internal research initiatives and proof-of-concept development. In fact, we even utilized Claude to help polish and proofread this very open letter—a perfect example of leveraging AI responsibly for productivity while maintaining human oversight and final editorial control.

My introduction to artificial intelligence dates back to approximately 1999, during algorithm analysis and data structures coursework. While the technical specifics have faded over the years, what remains vivid is the palpable excitement surrounding AI's potential impact on our industry—tempered by the understanding that meaningful advancement would require decades of development.

Here we are, decades later, witnessing the dawn of the AI era.

It's crucial to emphasize that we are truly in AI's infancy. Much like the early dot-com period, we have substantial learning ahead of us, particularly regarding data security. (Who among us doesn't remember the concerning prevalence of SQL injection vulnerabilities?)
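For readers who lived through that era, the lesson generalizes: the fix for SQL injection was never cleverness, it was discipline. As a minimal illustrative sketch (using Python's built-in sqlite3 module purely as an example, not anything from our stack), the difference between the vulnerable and the safe pattern is one line:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# Vulnerable pattern (don't do this): interpolating input into SQL.
#   cur.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Safe pattern: a parameterized query, where the driver handles
# escaping, so the injection attempt is treated as a literal string.
user_input = "alice' OR '1'='1"
cur.execute("SELECT * FROM users WHERE name = ?", (user_input,))
rows = cur.fetchall()  # the injection attempt matches no rows
```

The same discipline—never trusting input to cross a privilege boundary unescorted—is exactly what AI integrations will demand of us again.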

I anticipate we'll soon see the first major AI-related security breach make headlines. While there have been whispers of inadvertent data exposures in cloud environments, we haven't yet experienced a catastrophic incident. However, it's likely only a matter of time before credentials and critical access keys are compromised through AI systems. LLM-jacking represents an entirely new frontier of cybersecurity challenges.

This reality has shaped our current approach. We've observed numerous organizations rushing to implement AI solutions at any cost, driven by sales pressures and the desire to showcase cutting-edge technology adoption.

However, we must remember a fundamental truth:

We are the guardians of our clients' digital kingdoms.

As technology professionals, we bear a unique responsibility to approach AI integration with exceptional care.

This principle has guided Humanize IT's decision to adopt a measured approach to AI implementation. In late 2024, we initiated an ambitious AI integration project using Froala, aiming to lead the market with innovative solutions. However, during our comprehensive project review, we discovered a significant security vulnerability involving plain text API keys. This discovery prompted us to pause development, recognizing that the AI ecosystem simply wasn't mature enough for production deployment.
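For colleagues auditing their own integrations, the safer alternative to plain-text keys is well established: load credentials from the environment (or a secrets manager) and fail loudly when they're absent. A minimal sketch—the variable name AI_SERVICE_API_KEY is hypothetical, not a reference to any vendor's actual configuration:

```python
import os

def load_api_key(env_var: str = "AI_SERVICE_API_KEY") -> str:
    """Read an API key from the environment instead of hard-coding it.

    Raising immediately when the variable is missing prevents a build
    from silently shipping with an empty or placeholder credential.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; refusing to start without a credential."
        )
    return key
```

This keeps secrets out of source control and out of the payloads an AI component might log or transmit.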

We strongly encourage our colleagues throughout the technical consulting industry to adopt similar review practices. Thoroughly examine what AI components are accessing, including all API connections and the systems they control. Due diligence in this area isn't optional—it's essential.

Does this mean Humanize IT will abandon AI altogether?

Absolutely not! Instead, we're cultivating strategic partnerships with established AI providers who prioritize security and understand MSP/IT professional workflows.

We're excited to announce that our 2026 roadmap will feature several compelling AI partnerships that place security at the forefront while ensuring IT professionals maintain complete operational control. These solutions will undergo rigorous security reviews and must meet our stringent requirements for administrative oversight.

Pending successful security validation and control verification, we plan to make these AI-enhanced tools available to our clients beginning in 2026. However, we refuse to compromise data integrity, security, or confidentiality simply to add a convenience tool to our software.

We hope everyone in the IT industry will adopt this careful approach to AI integration, so that we can take the lessons of the past and apply them to today.


Sincerely,

Adam Walter
