Databricks Proposes to Anthropic's Claude


Anthropic makes its latest Claude models available through the Databricks Data Intelligence Platform. The partnership enables Databricks customers to develop and deploy AI agents based on their own company data.

Databricks enters into a partnership with Anthropic to make Claude models available within the Data Intelligence Platform. Companies can thus build AI agents based on their own data, with support for reasoning, governance, and integration across multiple cloud platforms.

Based on Their Own Data

More than 10,000 organizations that are Databricks customers can now use Anthropic's Claude models within the Databricks Data Intelligence Platform. The five-year partnership integrates Claude 3.7 Sonnet, a model designed for complex reasoning and coding tasks, directly into the platform, available via AWS, Azure, and Google Cloud.

The collaboration focuses on companies that want to develop AI agents that reason over internal data. Through Databricks Mosaic AI and Claude, organizations can build agents tailored to their industry, while maintaining control over governance, access management, and model performance.

Domain-Specific Agents

The Claude models can process context-rich datasets and support complex AI workflows. In the healthcare sector, for example, AI agents can accelerate the registration process for clinical trials. In retail, they can analyze sales data to optimize inventory management and store layout.

The models are directly accessible via SQL queries or model endpoints. Companies don't need to replicate their data, which saves costs and simplifies workflows. Additionally, they can fine-tune Claude models with their own data, or use retrieval-augmented generation (RAG) with automatic vector indexing.
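As a rough illustration (not code from the announcement), calling a Claude model served on Databricks over the model serving REST API could look roughly like the sketch below. The workspace URL, token, and endpoint name are placeholders, and the response shape is assumed to follow the OpenAI-compatible chat format that Databricks serving endpoints typically expose; the SQL route mentioned above would instead go through a function such as ai_query.

```python
import os
import requests

# Placeholder values: replace with your own workspace URL, access token,
# and the name of a Claude serving endpoint in your Databricks workspace.
WORKSPACE_URL = os.environ["DATABRICKS_HOST"]   # e.g. https://<workspace>.cloud.databricks.com
TOKEN = os.environ["DATABRICKS_TOKEN"]
ENDPOINT_NAME = "claude-3-7-sonnet"             # hypothetical endpoint name

def ask_claude(prompt: str) -> str:
    """Send a single chat message to a Databricks model serving endpoint."""
    response = requests.post(
        f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT_NAME}/invocations",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    response.raise_for_status()
    # Assumes an OpenAI-compatible chat completion response body.
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_claude("Summarize last quarter's sales trends in two sentences."))
```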

The partnership also addresses AI governance. Databricks' Unity Catalog provides access control, data lineage monitoring, and security measures, which organizations can use to comply with regulations and manage the risks associated with AI use.