Artificial intelligence (AI) is steadily penetrating the mainstream of enterprises, but significant challenges remain in getting it to a place where it can make a meaningful contribution to the operating model. Until then, the technology risks losing its reputation as an economic game changer, which could stifle adoption and leave organizations without a clear path forward in the digital economy.
For this reason, much of the discussion this year has centered on the practical use of AI in production. Taking a technology from the lab into production is never easy, but AI is particularly problematic given the wide range of possible outcomes for any problem it is designed to solve. Companies must therefore act both carefully and quickly to avoid falling behind in an increasingly competitive environment.
Steady progress in the use of AI in production
According to IDC, 31 percent of IT decision makers say they have brought AI into production, but only a third of that group consider their deployments mature, defined as the point at which AI begins to benefit enterprise-wide business models by improving customer satisfaction, automating decision-making, or streamlining processes.
As you might expect, managing the data and infrastructure at the scale that AI needs to deliver real value remains one of the biggest hurdles. Building and maintaining a data infrastructure of this magnitude is not easy, even in the cloud. It’s also difficult to properly prepare data to remove bias, duplication, and other factors that can skew results. While many companies use pre-trained, off-the-shelf AI platforms that can be deployed relatively quickly, they tend to be less adaptable and difficult to integrate with legacy workflows.
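To make the data-preparation point concrete, here is a minimal sketch of two of the steps mentioned above: removing duplicate records and surfacing class imbalance, a rough proxy for one kind of sampling bias. The field names (`customer_id`, `churned`) and the records are hypothetical, and real pipelines would do far more.

```python
from collections import Counter

def prepare_records(records, key_fields, label_field):
    """Deduplicate records on key_fields and report the label distribution.

    Duplicates inflate whatever pattern they repeat; a skewed label count
    hints that a model trained on this data may inherit that skew.
    """
    seen = set()
    deduped = []
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        if key not in seen:
            seen.add(key)
            deduped.append(rec)
    label_counts = Counter(rec[label_field] for rec in deduped)
    return deduped, label_counts

# Hypothetical toy dataset with one duplicate on the key field.
records = [
    {"customer_id": 1, "churned": "yes"},
    {"customer_id": 1, "churned": "yes"},  # duplicate customer
    {"customer_id": 2, "churned": "no"},
    {"customer_id": 3, "churned": "no"},
]
deduped, counts = prepare_records(records, ["customer_id"], "churned")
print(len(deduped), dict(counts))  # 3 {'yes': 1, 'no': 2}
```

A quick label count like this is cheap insurance: it catches the most visible skew before the data ever reaches a model.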
However, scaling is not only a question of size, but also of coordination. Sumanth Vakada, founder and CEO of Qualetics Data Machines, says that while infrastructure and a lack of dedicated resources are the main barriers to scaling, so are issues like the siloed architectures and siloed work cultures that still exist in many organizations. These tend to block important data from reaching AI models, leading to inaccurate results. And few organizations have given much thought to enterprise-wide governance, which not only helps direct AI toward common goals but also supports critical functions like security and compliance.
The case for an on-premises AI infrastructure
While it may be tempting to use the cloud to provide the infrastructure for large-scale AI deployments, a recent white paper from Supermicro and Nvidia contradicts that notion, at least in part. The companies argue that on-premises infrastructure is more appropriate in certain circumstances, namely these:
- When applications require sensitive or proprietary data
- If the infrastructure can also be used for other data-intensive applications such as VDI
- When data loads start driving cloud costs to unsustainable levels
- When certain hardware configurations are not available in the cloud or adequate performance cannot be guaranteed
- When enterprise-level support is required to complement in-house staff and expertise
Of course, an on-premises strategy only works if the infrastructure itself is within a reasonable pricing structure and physical footprint. However, when direct control is required, an on-premises deployment can be designed around the same ROI factors as any third-party solution.
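One of those ROI factors, the cloud-cost threshold from the list above, can be sketched as simple break-even arithmetic. Every number here is an illustrative assumption, not a quoted price: real egress fees, GPU rates, amortization periods, and operating costs vary widely by provider and deployment.

```python
def monthly_cloud_cost(tb_processed, gpu_hours,
                       egress_per_tb=90.0, gpu_hour_rate=3.0):
    """Rough monthly cloud bill for an AI workload (hypothetical rates)."""
    return tb_processed * egress_per_tb + gpu_hours * gpu_hour_rate

def monthly_onprem_cost(capex, amortization_months=36, opex_per_month=4000.0):
    """Hardware amortized over its service life, plus power and staffing."""
    return capex / amortization_months + opex_per_month

# Hypothetical workload: 50 TB of data movement and 5,000 GPU-hours a month,
# versus a $360,000 on-premises cluster amortized over three years.
cloud = monthly_cloud_cost(tb_processed=50, gpu_hours=5000)
onprem = monthly_onprem_cost(capex=360000)
print(f"cloud ${cloud:,.0f}/mo vs on-prem ${onprem:,.0f}/mo")
```

The point is not the specific numbers but the shape of the comparison: cloud costs scale with usage, while on-premises costs are mostly fixed, so sustained heavy workloads are where the lines cross.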
Still, many organizations appear to have put the AI cart before the horse, both in terms of scale and operational capabilities. That is, they want to reap the benefits of AI without investing in the means to support it.
Jeff Boudier, head of product and growth at AI language developer Hugging Face, recently told VentureBeat that without proper support, it becomes extremely difficult for data science teams to effectively version and share AI models, code, and datasets. That, in turn, increases the workload of project managers trying to move these elements into production, which only deepens the disillusionment with a technology that is supposed to make things easier, not harder.
In fact, many organizations are still trying to force AI into a style of software development that predates modern collaboration and version control, rather than using it as an opportunity to build a modern MLops environment. Like any technology, AI is only as effective as its weakest link, so if development and training are not adequately supported, the whole initiative could stall.
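The versioning problem Boudier describes can be illustrated with a minimal sketch: pinning a model to the exact dataset and code revision that produced it, so a production rollback can restore all three together. This is a toy stand-in for what MLops tooling does properly; the artifact bytes and the `git:` revision string are hypothetical.

```python
import hashlib
import json

def artifact_digest(data: bytes) -> str:
    """Content hash of an artifact; identical bytes always yield the same ID."""
    return hashlib.sha256(data).hexdigest()[:12]

def make_manifest(model_bytes: bytes, dataset_bytes: bytes, code_version: str) -> dict:
    """Record which model, dataset, and code revision belong together."""
    return {
        "model": artifact_digest(model_bytes),
        "dataset": artifact_digest(dataset_bytes),
        "code": code_version,
    }

# Hypothetical artifacts standing in for real model weights and training data.
manifest = make_manifest(b"model-weights", b"training-data", "git:abc1234")
print(json.dumps(manifest, indent=2))
```

Because the IDs are content hashes, any silent change to the weights or the training data produces a different manifest, which is exactly the property that makes models reproducible and shareable across teams.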
The use of AI in real environments is probably the most important phase of its development, because this is where it will finally prove to be a blessing or a curse for the business model. It may take a decade or more to fully appreciate the value, but for now, at least, there is a greater risk of implementing AI and failing than holding back and risking being outplayed by increasingly intelligent competitors.
VentureBeat’s mission is to be a digital town square for technical decision makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.