Artificial intelligence (AI) utilization in enterprises is rapidly expanding, and organizations have to keep up with AI’s pace of growth. Once customers start to benefit from smart solutions such as predictive schemes or recommendation systems, there is no going back. Yet even the latest models inevitably decay over time because data is dynamic, leading to degraded prediction performance and a poorer customer experience. In this context, a new field has emerged: AI-as-a-service (AIaaS). AIaaS is a cloud-based offering that integrates AI components (large volumes of data, learning algorithms, and the corresponding computing hardware) with services, enabling enterprises to deploy AI for large-scale use cases.
Major cloud service providers (Amazon, Google, Microsoft, IBM) are also major AIaaS providers. However, several issues hamper the development of the AI systems customers actually want. One issue is that AI products are usually sold as fully bundled packages, which implies limited interoperability between vendors and restricts the flexibility and extensibility of practical implementations. Moreover, different AI products focus on specific aspects of the AI workflow, so no single offering covers every need.
The article addresses these AIaaS issues by proposing a layered, modular architecture so that customers can better understand each vendor’s offer, differentiate between vendors, and make informed decisions when acquiring products. The AI tech-stack model consists of seven levels, from bottom to top: AI infrastructure, AI platform, AI framework, AI algorithm, AI data pipeline, AI service, and AI solution. The model is meant to help organizations address management and technology challenges, understand which level best matches a vendor’s offer, and promote interoperability between components.
The first part identifies the features of the main AIaaS products, including their weaknesses, strengths, and potential benefits. The next section details the logic behind the AI tech-stack architecture, the role of each layer in the hierarchy, and the principles on which it is based. The infrastructure layer covers the hardware needed for computing, storage, and network communication. The platform layer incorporates operating systems and programming environments; the AI framework layer contains AI-specific libraries; and the algorithm layer holds open-source and self-developed algorithms. The data pipeline layer integrates various data architectures, and the AI service layer contains general-purpose application programming interfaces (APIs), for example, for image processing and natural language processing (NLP). The uppermost layer, the AI solution layer, provides AI-enabled solutions for different business domains.
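To make the hierarchy concrete, the seven layers could be sketched as an ordered enumeration. The layer names and descriptions below come from the article; the code structure itself, including the `AITechStackLayer` name and the `layers_below` helper, is purely my own illustration of the bottom-to-top dependency the model implies, not something the authors provide:

```python
from enum import IntEnum


class AITechStackLayer(IntEnum):
    """The seven layers of the AI tech-stack model, bottom (1) to top (7)."""
    INFRASTRUCTURE = 1   # hardware: computing, storage, network communication
    PLATFORM = 2         # operating systems and programming environments
    FRAMEWORK = 3        # AI-specific libraries
    ALGORITHM = 4        # open-source and self-developed algorithms
    DATA_PIPELINE = 5    # integration of various data architectures
    SERVICE = 6          # general-purpose APIs (image processing, NLP)
    SOLUTION = 7         # AI-enabled solutions for business domains


def layers_below(layer: AITechStackLayer) -> list[AITechStackLayer]:
    """Return the layers a given layer builds on, in bottom-up order."""
    return [l for l in AITechStackLayer if l < layer]


# Example: the AI service layer builds on the five layers beneath it.
print([l.name for l in layers_below(AITechStackLayer.SERVICE)])
```

Such a representation also hints at how a customer might map a vendor offer to a single level, which is one of the stated goals of the model.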
The last part of the article presents a use case in which the model is applied to build AI-enabled applications, namely smart tourism recommendation systems (STRS) for four tourism companies in Taiwan. The four solutions are tailored to each company’s existing internal data resources, specific tourism activities, and customer types.
The article’s value lies in its practical perspective for customers wishing to implement an AI-enabled solution. Another merit is its concise and clear overview, often supported by synthesizing tables. The framework is interesting and could be applied and validated in other business areas as well.