Are Large Language Models and Generative AI a panacea for what ails the enterprise?

Generative AI models can sometimes ‘hallucinate’, or generate information not present in the training data, resulting in inaccurate or misleading content.

By Sanjeev Menon | Feb 29, 2024 4:38 PM
(Representative image by Possessed Photography via Unsplash)

Giant leaps in AI, particularly Large Language Models (LLMs) and generative AI, are promising to revolutionize businesses, but warrant closer scrutiny. The lawsuit against Ross Intelligence, an AI-powered legal research platform, serves as a cautionary tale. Thomson Reuters has accused Ross Intelligence of plagiarizing content from its own AI platform, Westlaw Precision, and the upcoming jury trial is a landmark case with the potential to set precedents for future AI-related legal disputes.

The above is just one example of the complexities lurking beneath the AI hype. While LLMs and generative AI offer exciting possibilities, they also come with baggage: challenges and legal implications. As we explore these powerful tools, striking a balance between innovation and regulation is key. We need to ensure that AI transforms businesses responsibly and ethically, not just disruptively.

How does Generative AI Tackle Enterprise-Level Challenges?

Imagine a world where marketing messages resonate intimately with each customer, data reveals its hidden secrets in real-time, and customer service flows seamlessly around the clock via intelligent chatbots. This future, once relegated to science fiction, is becoming increasingly possible due to the transformative power of generative AI.

By analyzing vast datasets of text-based information, generative AI empowers data-driven decision-making, uncovering hidden trends and insights that previously remained invisible. Furthermore, this technology promises the potential for 24/7 customer support, handled by AI-powered chatbots that answer routine inquiries and free up human agents for complex issues and personalized interactions.

The Flip Side: Why Enterprises Should Exercise Caution with Generative AI

While generative AI holds immense potential, here are some reasons why enterprises should exercise caution:

• Hallucination: Generative AI models can sometimes ‘hallucinate’, or generate information not present in the training data, resulting in inaccurate or misleading content.

• Compliance: Generative AI could inadvertently violate compliance rules in highly regulated industries. The generated content could potentially breach privacy laws or industry regulations.

• Security: Generative AI models can be exploited by malicious actors to generate harmful content, such as deepfakes or phishing emails. There’s also a risk of reverse-engineering the training data used by these models, leading to data privacy issues.

• Bias: Generative AI models can perpetuate biases present in the training data, leading to potentially unfair or discriminatory outcomes.

• Prompt Toxicity: Toxic prompts can lead generative AI models to produce inappropriate or harmful content. It’s essential to have safeguards in place to detect and mitigate such risks; a minimal sketch of one such safeguard follows this list.
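To make the last bullet concrete, here is a minimal sketch of a prompt safeguard that screens requests before they reach the model. It is illustrative only: the blocklist terms are invented, and `generate` is a hypothetical stand-in for whichever LLM endpoint an enterprise actually calls; production systems would typically rely on a trained moderation classifier rather than simple keyword matching.

```python
# Minimal prompt-safeguard sketch: screen prompts before they reach the model.
# BLOCKLIST is an illustrative placeholder; real deployments usually rely on a
# trained moderation classifier instead of keyword matching.
BLOCKLIST = {"ignore previous instructions", "credit card numbers", "write a phishing email"}

def is_unsafe(prompt: str) -> bool:
    """Flag prompts that match known-risky patterns."""
    text = prompt.lower()
    return any(term in text for term in BLOCKLIST)

def guarded_call(prompt: str, generate) -> str:
    """Refuse risky prompts instead of forwarding them to the model."""
    if is_unsafe(prompt):
        return "This request was blocked by the safety policy and logged for review."
    return generate(prompt)  # `generate` is a stand-in for the actual LLM call
```

A guardrail like this sits in front of the model, so toxic or policy-violating prompts are filtered or routed for human review rather than answered.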

Are Generative AI and LLMs Adequate for Organizations?

In enterprise automation, generative AI, despite its clear benefits, does not fully deliver true intelligence on its own. To achieve intelligence that is specific to an organization’s needs, several key components are required:

• Contextual and Localized Understanding: Organizations need localized insights, drawn from specific corpora, contracts, and in-process transactions, to make informed decisions.

• LLM Trained on Enterprise Data: To achieve intelligence specific to the organization, LLMs must be trained on fair and inclusive enterprise data. Retrieval-Augmented Generation (RAG) grounds responses in that data, improving accuracy and reducing hallucinations while preserving organizational context (a minimal sketch follows this list).

• Fine-Tuned LLMs for Specific Parameters: LLMs need fine-tuning on a range of enterprise parameters and tasks, ensuring adaptability to organizational requirements. Regular monitoring for fairness and optimization through techniques like grid search are essential.

• Generation and Intelligence from Transactional Memory: Enterprises need to derive intelligence from transactional memory to provide personalized outputs for individual roles or specific process instances.

• Seamless Integration with Enterprise Systems: Seamless integration of generative AI and LLMs with enterprise systems is imperative. This supports end-to-end automation across various processes.

• On-Premises/Private Cloud Deployments: Addressing data security and compliance requirements with on-premises or private cloud deployments is crucial for regulated industries like BFSI and healthcare.

• LLM as the Model of a Knowledge Base: LLMs must be recognized as the model of a knowledge base, providing the framework for intelligent content generation.
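To illustrate the Retrieval-Augmented Generation component mentioned above, here is a minimal sketch of the pattern: retrieve the passages from an enterprise corpus that are most relevant to a query and pass them to the model as grounding context. It is a sketch under stated assumptions, not a definitive implementation; `embed` and `generate` are hypothetical stand-ins for whatever embedding model and LLM endpoint an organization actually uses.

```python
# Minimal Retrieval-Augmented Generation (RAG) sketch.
# `embed` and `generate` are hypothetical stand-ins for an embedding model and
# an LLM endpoint; swap in whichever providers your stack uses.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query, documents, embed, top_k=3):
    """Rank enterprise documents by similarity to the query and keep the top few."""
    q_vec = embed(query)
    ranked = sorted(documents, key=lambda doc: cosine(q_vec, embed(doc)), reverse=True)
    return ranked[:top_k]

def answer(query, documents, embed, generate):
    """Ground the model's answer in retrieved organizational context."""
    context = "\n\n".join(retrieve(query, documents, embed))
    prompt = (
        "Answer using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)
```

The grounding instruction is what reduces, though does not eliminate, hallucination: the model is asked to answer from retrieved enterprise content rather than from whatever it absorbed during pretraining.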

Beyond Generative AI and LLMs: The Need for Multifunctional AI Assistants in Enterprises

While generative AI serves as a promising starting point, enterprises require a more comprehensive approach to truly harness the power of artificial intelligence. A set of coordinating multifunctional agents or AI-led virtual assistants can make significant strides in enterprise intelligence.

Large Language Models (LLMs) have their respective roles and advantages, but despite their capabilities, they lack the intelligence for complex enterprise tasks. Their monolithic structure often clashes with the modular software approach, which is preferred for its ease of debugging and problem resolution.

To move towards enterprise general intelligence, we need to incorporate intelligence through knowledge graphs and real-world entities. Multifunctional and intuitive AI assistants can bridge the gap, providing intelligence and adaptability for diverse enterprise tasks.
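As a rough illustration of incorporating knowledge graphs and real-world entities, the sketch below stores enterprise facts as subject-predicate-object triples that coordinating assistants can query before generating a response. The entities and relations shown are invented for the example; a production system would typically use a dedicated graph database.

```python
# Illustrative knowledge-graph triple store for grounding AI assistants.
# The entities and relations below are made up for the example.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # (subject, predicate) -> list of objects
        self.triples = defaultdict(list)

    def add(self, subject, predicate, obj):
        self.triples[(subject, predicate)].append(obj)

    def query(self, subject, predicate):
        return self.triples.get((subject, predicate), [])

kg = KnowledgeGraph()
kg.add("Invoice-1042", "belongs_to", "Acme Corp")
kg.add("Acme Corp", "account_manager", "Priya Sharma")
kg.add("Invoice-1042", "status", "overdue")

# An assistant resolves entities before generating a reply:
customer = kg.query("Invoice-1042", "belongs_to")[0]
manager = kg.query(customer, "account_manager")[0]
print(f"Route the overdue Invoice-1042 for {customer} to {manager}.")
```

Because the graph holds shared, structured knowledge, multiple assistants can work from the same real-world entities instead of each re-deriving them from free text.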

So, are generative AI and LLMs a panacea? Not quite, but if wielded thoughtfully and ethically with coordinating AI assistants, they can uplift your workforce by automating routine tasks, freeing people for higher-level strategic thinking and creative endeavors.

Sanjeev Menon is the co-founder and Head of Product & Tech at E42.ai

Views expressed are personal.

First Published on Feb 28, 2024 1:30 PM
