Enterprise Gen-AI Series: Deployment Options – SaaS or DIY? Shared or Dedicated?


By Paul Shearer, VP, Solution Architecture

Introduction

Deploying large language models (LLMs) in various applications, from customer service bots to complex decision-making tools, requires careful consideration of several critical factors. The primary objective is to ensure that the deployment aligns with specific use cases while being cost-effective and efficient. This blog will explore the key considerations necessary to optimize the deployment of LLMs, focusing on model size, data sensitivity, and the distinction between reasoning and recall capabilities.

Understanding the Use Case

Before diving into the complexities of deploying large language models (LLMs), it's essential to clearly define a specific use case. What challenge is the LLM designed to tackle? Who will be interacting with it? These initial considerations are critical as they directly influence the choice of the model, the handling of data, and the potential risks associated with the deployment.

High-Stakes Applications

Imagine a scenario in which an LLM is tasked with supporting medical diagnostics. Here, the model needs to demonstrate not only high levels of accuracy but also a deep understanding of medical terminology, symptoms, and possible interactions. The risks in such settings are profound. A misdiagnosis or delayed diagnosis could lead to incorrect treatment, potentially endangering patient health. For instance, if an LLM incorrectly identifies a benign condition as malignant, the psychological impact on the patient, along with unnecessary medical procedures, could be substantial. The model must be meticulously tested (and likely certified by the FDA, or the equivalent agency outside the United States) and continuously monitored to ensure reliability in its outputs, given the life-altering consequences of its advice. In high-stakes scenarios we need models to dramatically exceed human performance.

Although not quite life or death, imagine another scenario: using an LLM to interface with and schedule operational activities within your company. This could include employee work rosters, updating and revising shipment dates, transportation scheduling, and many other related use cases. If you get it wrong, you can literally shut your company down.

Lower-Stakes Applications: Customer Service

Contrast this with a lower-stakes scenario such as a chatbot deployed to handle basic customer inquiries. Think of this as an FAQ on steroids. Here, accuracy still matters, but the consequence of providing incorrect information isn't life-threatening; it's customer dissatisfaction and perhaps a refund. (This isn't to say that customer satisfaction in aggregate isn't high stakes, only that the impact of a mistake is limited in scope to a single customer.) For example, in a recent case where a chatbot mistakenly communicated the wrong terms of a bereavement policy, the company was out several hundred dollars and some legal fees. That is a stark contrast to the potentially grave consequences of a medical misdiagnosis or of shutting down a company's operations.

Balancing Risk and Technology

These examples highlight the importance of aligning the LLM's capabilities with the specific requirements and risks of its application domain. In medicine, the premium is on precision and reliability, warranting a deployment strategy that leverages the most advanced models, possibly fine-tuned with specialized datasets. For the operational scenarios, analytical reasoning capabilities are at a premium. In contrast, customer service applications can often employ less complex models, focusing on efficiency and cost-effectiveness.

Understanding the stakes involved in different use cases allows organizations to strategically deploy LLMs that are not only technically capable but also appropriate for the level of risk they carry. This strategic alignment helps in maximizing the benefits of AI while minimizing potential downsides, ensuring that technology serves as a reliable and effective tool across various domains.

Consider Model Size: Reasoning vs. Recall

Model size plays a critical role depending on the application’s requirements for reasoning or recall:

  • Reasoning Tasks: Applications involving complex problem-solving or understanding subtle differences require larger models. These models, typically having in excess of 200 billion parameters, are adept at nuanced reasoning. (I'll be publishing more about this in a future blog article. A simple routing sketch also follows this list.)
  • Recall Tasks: For applications that primarily retrieve and relay factual information, smaller models are adequate. These models can efficiently manage tasks that demand speed over depth, offering cost savings and reduced computational requirements.
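To make the distinction concrete, here is a minimal Python sketch of routing requests to a different model depending on task type. The model names and per-1K-token costs are illustrative placeholders of mine, not references to any specific vendor's catalog.

```python
from dataclasses import dataclass

@dataclass
class ModelChoice:
    name: str                 # hypothetical model identifier
    cost_per_1k_tokens: float # illustrative cost, not a real price list

# Hypothetical catalog: a large model for reasoning, a small one for recall.
REASONING_MODEL = ModelChoice(name="large-reasoning-model", cost_per_1k_tokens=0.03)
RECALL_MODEL = ModelChoice(name="small-recall-model", cost_per_1k_tokens=0.002)

def pick_model(task_type: str) -> ModelChoice:
    """Route a request based on whether it needs reasoning or simple recall."""
    if task_type == "reasoning":
        return REASONING_MODEL   # multi-step analysis, nuanced trade-offs
    return RECALL_MODEL          # FAQ lookup, policy retrieval, status checks

print(pick_model("reasoning").name)  # -> large-reasoning-model
print(pick_model("recall").name)     # -> small-recall-model
```

In practice the routing signal might come from intent classification or from the calling application itself; the point is simply that recall-style traffic does not need to pay the reasoning-model premium.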

Assessing Data Sensitivity and Regulatory Compliance

When deploying large language models, the sensitivity of the data involved significantly dictates the requisite level of security and privacy measures. Handling highly sensitive data, such as personal medical records which fall under HIPAA (Health Insurance Portability and Accountability Act) regulations, demands rigorous security protocols. Similarly, personal data from individuals in the European Union requires compliance with GDPR (General Data Protection Regulation), which mandates strict handling and processing guidelines to protect user privacy.

For example, medical records that include patient diagnoses, treatment information, or any personal identifiers must be protected with enhanced security measures such as encryption both at rest and in transit. Additionally, access controls must be stringent to ensure that only authorized personnel can access or process such sensitive information.
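As a minimal illustration of encryption at rest, the sketch below uses the open-source cryptography package's Fernet interface to encrypt a record before it is stored. In a real deployment the key would be issued and rotated by a KMS or HSM and gated by the access controls described above, not generated alongside the data.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# In production the key lives in a KMS/HSM, never next to the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "..."}'

ciphertext = fernet.encrypt(record)     # what gets written to disk or object storage
plaintext = fernet.decrypt(ciphertext)  # only authorized services hold the key

assert plaintext == record
```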

In contexts where GDPR applies, data like a user’s full name, location information, IP address, or any data that can be used to directly or indirectly identify a person also requires careful handling. 
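One common pattern for shared-tenancy deployments is to redact direct identifiers before a prompt ever leaves your trust boundary. The sketch below is a deliberately simple, regex-based illustration; the patterns and the redact helper are my own examples, and production-grade GDPR handling would rely on a vetted PII-detection pipeline rather than two regular expressions.

```python
import re

# Illustrative patterns only; real redaction needs a proper PII/NER pipeline.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(text: str) -> str:
    """Replace direct identifiers with placeholders before the prompt is sent out."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane.doe@example.com (last login from 203.0.113.7) asked about refunds."
print(redact(prompt))
# -> Customer [EMAIL] (last login from [IP]) asked about refunds.
```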

With our earlier operational examples, the primary consideration is protecting valuable company data. In most cases, large SaaS-based deployments will be fine, as long as you have sufficient trust in the third parties and their data-protection policies.

Considering Deployment Options: Shared vs. Dedicated Tenancy / SaaS vs. DIY

The good news is also the bad news… there are a lot of deployment options! You will need to decide: SaaS or DIY? Public cloud or private cloud? GPUs or ARM for a DIY build? The sensitivity of the data will play a crucial role in this decision. Shared-tenancy SaaS options (shared among multiple customers) are the easiest to get up and running; however, they might not always provide the level of isolation required for regulatory compliance. Although these environments can be cost-effective and resource-efficient, the potential risk of data leakage or insufficient data isolation might pose compliance challenges.

Dedicated tenancy, where resources such as servers and storage are exclusively used by one organization, provides greater control over data security and compliance. In dedicated environments, it is easier to implement custom security measures, manage compliance requirements, and ensure that access to sensitive data is tightly controlled. For organizations handling data covered by HIPAA or GDPR, dedicated tenancy often becomes a necessity to fully comply with legal requirements and protect personal data adequately.
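The tenancy decision ultimately reduces to data sensitivity and regulatory exposure. The toy heuristic below captures that logic in a few lines; it is a sketch of the trade-offs discussed above, not a compliance determination, and the category labels are assumptions of mine.

```python
def recommend_tenancy(data_sensitivity: str, regulated: bool) -> str:
    """Rough heuristic mapping sensitivity and regulation to a deployment style."""
    if regulated or data_sensitivity == "high":
        return "dedicated tenancy (private cloud or on-premises DIY)"
    if data_sensitivity == "medium":
        return "dedicated SaaS instance or VPC-isolated deployment"
    return "shared-tenancy SaaS"

print(recommend_tenancy("high", regulated=True))   # HIPAA/GDPR-style workloads
print(recommend_tenancy("low", regulated=False))   # FAQ-style chatbot
```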

Choosing the Right Deployment Model

With a clear understanding of the use case and data sensitivity, the next step is selecting an appropriate deployment model. This can range from cloud-based solutions to on-premises installations. Cloud deployments offer scalability and flexibility but may raise concerns about data security and residency. On-premises deployments, while more secure, require significant infrastructure and maintenance investments.

Cost and Resource Optimization

Balancing the costs associated with deploying and running LLMs is essential, especially when dealing with larger models. Consider the trade-offs between model performance and operational expenses. Smaller models might be less costly and require fewer resources, making them ideal for tasks with limited budgets or lower risk implications.
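A quick back-of-the-envelope calculation often settles the question. The sketch below compares the monthly bill for a small and a large model at an assumed traffic volume; the request counts and per-1K-token prices are placeholders, not actual vendor rates.

```python
# Back-of-the-envelope comparison; prices and volumes are illustrative assumptions.
MONTHLY_REQUESTS = 500_000
TOKENS_PER_REQUEST = 1_200          # prompt + completion, averaged

def monthly_cost(price_per_1k_tokens: float) -> float:
    return MONTHLY_REQUESTS * TOKENS_PER_REQUEST / 1_000 * price_per_1k_tokens

small_model = monthly_cost(0.002)   # assumed small-model rate
large_model = monthly_cost(0.03)    # assumed large-model rate

print(f"small model: ${small_model:,.0f}/month")   # ~$1,200
print(f"large model: ${large_model:,.0f}/month")   # ~$18,000
```

Even with made-up numbers, the order-of-magnitude gap shows why recall-heavy workloads rarely justify the largest models.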

Choosing the right AI deployment model depends largely on an organization's specific needs such as budget, scalability requirements, and data privacy concerns. Each model presents a different set of advantages and trade-offs, making it important for decision-makers to carefully evaluate their options. Whether opting for the cost-effectiveness of shared cloud services or the robust control of private infrastructure, the key is to align the chosen model with strategic business objectives for optimal impact.

Want to see your question answered in the series, or just want to subscribe for alerts on future issues? Simply fill out the form below!