Overview

The Microsoft Foundry integration connects your Azure AI Foundry project to the 4MINDS platform, allowing you to:
  • Browse deployments from your Azure AI Foundry project
  • Register models deployed in Foundry as 4MINDS models
  • Run chat completions against Foundry-hosted models (GPT-4o, GPT-4, Llama, Mistral, etc.)
  • Fine-tune models using 4MINDS datasets with Foundry compute
Two authentication methods are supported:
Method     Best For                                                         Requires Admin Consent?
OAuth      Organizations with Azure AD admin consent granted                Yes
API Key    Individual users or orgs where admin consent is not available    No
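Under the hood, the two methods differ only in the HTTP authentication header sent with each request to your Foundry endpoint. A minimal sketch of the distinction (the header names follow Azure's common conventions; the endpoint path in the commented usage is illustrative, not guaranteed for every deployment type):

```python
def build_headers(method: str, credential: str) -> dict:
    """Return HTTP headers for a Foundry request, given the auth method.

    method: "api_key" (the key from Keys and Endpoint) or
            "oauth" (an access token obtained via Azure AD sign-in).
    """
    if method == "api_key":
        return {"api-key": credential, "Content-Type": "application/json"}
    if method == "oauth":
        return {
            "Authorization": f"Bearer {credential}",
            "Content-Type": "application/json",
        }
    raise ValueError(f"unknown auth method: {method!r}")

# Illustrative usage (not executed here; 4MINDS sends these requests for you):
# resp = requests.post(
#     "https://your-project-name.region.inference.ml.azure.com/chat/completions",
#     headers=build_headers("api_key", "<your-key>"),
#     json={"messages": [{"role": "user", "content": "Hello"}]},
# )
```

With OAuth, 4MINDS refreshes the bearer token for you; with an API key, the same credential is sent on every request until you regenerate it.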

Azure Setup

Create an Azure AI Foundry Project

  1. Go to Azure AI Foundry and sign in with your Microsoft account
  2. Click + New project
  3. Select or create an Azure AI hub resource
  4. Give your project a name and select a region
  5. Click Create project
Tip: Choose a region close to your users for lower latency. Models available vary by region.

Deploy a Model

Before connecting to 4MINDS, deploy at least one model in your Foundry project:
  1. In your Azure AI Foundry project, go to Model catalog in the left sidebar
  2. Browse or search for a model (e.g., gpt-4o, gpt-4, Mistral-large)
  3. Click Deploy
  4. Choose a deployment name and configure settings:
    • Deployment name: A unique identifier (e.g., gpt-4o)
    • Model version: Select the desired version
    • Deployment type: Standard or Provisioned
    • Rate limits: Configure tokens-per-minute as needed
  5. Click Deploy

Get Your Endpoint URL and API Key

The endpoint URL and API key are available in two places, depending on which Azure portal you use.

Option A: From Azure AI Foundry (ai.azure.com) — Easiest

  1. Go to Azure AI Foundry and sign in
  2. Click on your project/resource
  3. On the Overview page you’ll see:
    • Microsoft Foundry project endpoint — copy this URL
    • API Key — click the copy icon to copy it
  4. The endpoint URL looks like:
    https://your-project-name.region.inference.ml.azure.com
    

Option B: From Azure Portal (portal.azure.com)

  1. Open your Azure AI Foundry resource in the Azure Portal
  2. In the left sidebar, go to Resource Management > Keys and Endpoint
  3. Copy the Endpoint URL
  4. Under Keys, you’ll see Key 1 and Key 2 — click the copy icon next to either key
Important: Copy the full endpoint URL including https://. Do not include a trailing slash.
Security: Treat your API key like a password. Do not share it or commit it to source control. You can regenerate keys at any time — regenerating a key immediately invalidates the old one.
Tip: Azure provides two keys so you can rotate them without downtime. Use Key 1 for your connection, and if you need to rotate, switch to Key 2 before regenerating Key 1.
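The two rules in the Important callout above can be captured in a small helper. This is a sketch of client-side validation, not something 4MINDS requires you to run:

```python
def normalize_endpoint(url: str) -> str:
    """Normalize a Foundry endpoint URL per the rules above:
    it must include https://, and must not end with a trailing slash."""
    url = url.strip()
    if not url.startswith("https://"):
        raise ValueError("endpoint must start with https://")
    return url.rstrip("/")
```

For example, `normalize_endpoint("https://my-proj.eastus.inference.ml.azure.com/")` drops the trailing slash, while a URL missing the `https://` scheme is rejected outright.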

OAuth App Registration (Admin)

If your organization wants to use the OAuth connection method, an Azure AD administrator must register and consent to the 4MINDS application:
  1. Navigate to Azure Portal > Microsoft Entra ID > App registrations
  2. Find the 4MINDS Foundry app registration (or create one if self-hosting)
  3. Go to API permissions
  4. Ensure the following permissions are configured:
    • https://ml.azure.com/user_impersonation — Access Azure ML resources on behalf of the user
    • User.Read — Sign in and read user profile
  5. Click Grant admin consent for [Your Organization]
  6. Confirm the consent prompt
Note: If you see “Admin approval required” when trying to connect via OAuth, your Azure AD admin has not yet granted consent. Use the API Key method as an alternative.
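Administrators who prefer a direct link can grant tenant-wide consent via the Microsoft identity platform's admin-consent endpoint. A sketch of building that URL, assuming the v2.0 endpoint; the `client_id` and `redirect_uri` values here are placeholders for your 4MINDS app registration's actual values:

```python
from urllib.parse import urlencode

def admin_consent_url(tenant_id: str, client_id: str, redirect_uri: str) -> str:
    """Build the Microsoft Entra admin-consent URL (v2.0 endpoint) that an
    administrator can visit to grant tenant-wide consent to an app."""
    query = urlencode({
        "client_id": client_id,
        # The two permissions listed in step 4 above:
        "scope": "https://ml.azure.com/user_impersonation User.Read",
        "redirect_uri": redirect_uri,
    })
    return f"https://login.microsoftonline.com/{tenant_id}/v2.0/adminconsent?{query}"
```

Visiting the resulting URL while signed in as an admin shows the same consent prompt as the Grant admin consent button in the portal.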

Connecting to 4MINDS

Method 1: OAuth

OAuth provides a seamless single sign-on experience and automatically manages token refresh.
  1. Open the Microsoft Foundry integration in 4MINDS Settings (or during onboarding)
  2. Make sure the OAuth tab is selected (it is selected by default)
  3. Click Connect with Microsoft
  4. A Microsoft login window will open — sign in with your Azure account
  5. Grant the requested permissions
  6. After successful authentication, you’ll be redirected back to 4MINDS
  7. If prompted, enter your Endpoint URL (see Get Your Endpoint URL and API Key)
  8. Click Save Endpoint URL
Your Foundry deployments will load automatically once the endpoint URL is saved.

Method 2: API Key

Use the API Key method when:
  • Your organization’s Azure AD admin has not granted consent for OAuth
  • You prefer not to use OAuth
  • You’re connecting from a personal Azure account without organizational admin access
Steps:
  1. Open the Microsoft Foundry integration in 4MINDS Settings (or during onboarding)
  2. Click the API Key tab
  3. Enter your Endpoint URL (see Get Your Endpoint URL and API Key):
    https://your-project-name.region.inference.ml.azure.com
    
  4. Enter your API Key (see Get Your Endpoint URL and API Key)
  5. Click Connect
  6. 4MINDS will test the connection and save your credentials if successful
Note: API keys do not expire automatically, but they can be regenerated by anyone with access to the Azure AI Foundry project settings. If your connection stops working, check that your key hasn’t been regenerated.

Creating an External Model with Microsoft Foundry

There are two ways to create a model backed by a Microsoft Foundry deployment: Guide Me (easy mode) and Advanced mode.

Guide Me (Easy Mode)

  1. From the home screen, click Guide Me
  2. On the Model Source step, you’ll see two options:
    • “Build custom AI”
    • A list of external model providers
  3. Select Microsoft Foundry from the list of providers
  4. Connect to your Foundry account using either OAuth or API Key
  5. Once connected, you’ll see a list of your deployed models — select the one you want to use
  6. Click Next to proceed to the Upload Data step

Advanced Mode

  1. Navigate to the Models page
  2. Click the Create Model button
  3. Enter a model name
  4. Under Or connect an external model, select Microsoft Foundry
  5. Connect to your Foundry account if not already connected
  6. Select a deployed model from the list
  7. Click Next to proceed to the Upload Data step

Using Foundry Models

Browsing Deployments

Once connected, your Foundry deployments appear automatically:
  • Each deployment shows its name, model type, and status
  • Click on a deployment to select it

Chat Completions

Registered Foundry models can be used anywhere in 4MINDS that supports model selection:
  • Chat conversations
  • Model comparison
  • Fine-tuning evaluation
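4MINDS constructs the chat-completion request for you, but it helps to know roughly what is sent. An illustrative request body in the OpenAI-compatible format that Azure chat deployments accept (exact supported parameters vary by model):

```python
import json

# Illustrative chat-completions body; 4MINDS fills in the messages from
# your conversation. Parameter availability varies by model/deployment.
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize our Q3 results."},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}
body = json.dumps(payload)
```

Models that only support embeddings or legacy completions will reject a body like this, which is one of the failure modes covered in Troubleshooting below.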

Troubleshooting

“Admin approval required” when connecting via OAuth

Cause: Your Azure AD tenant requires admin consent for the 4MINDS application.
Solutions:
  1. Ask your Azure AD administrator to grant admin consent
  2. Use the API Key connection method instead — no admin consent required

“Invalid credentials” or “Connection test failed”

Cause: The endpoint URL or API key is incorrect.
Solutions:
  1. Verify the endpoint URL is copied exactly from Azure (including https://) — see Get Your Endpoint URL and API Key
  2. Ensure there’s no trailing slash on the endpoint URL
  3. Re-copy the API key from the Azure AI Foundry Overview page or from the Azure Portal under Resource Management > Keys and Endpoint
  4. Check that the API key hasn’t been regenerated since you last copied it

“No deployments found”

Cause: No models are deployed in the Foundry project, or the endpoint URL points to the wrong project.
Solutions:
  1. Verify you have at least one deployed model in your Azure AI Foundry project
  2. Check that the endpoint URL matches the correct project
  3. Ensure the deployed model’s status is “Succeeded” in Azure AI Foundry

OAuth token refresh failed

Cause: The OAuth refresh token has expired or been revoked.
Solutions:
  1. Disconnect and reconnect via OAuth
  2. Check that admin consent is still granted in Azure AD
  3. Switch to API Key method if the issue persists

Connection works but chat completions fail

Cause: The selected deployment may be rate-limited or in a failed state, or the model may not support the requested operation.
Solutions:
  1. Check the deployment status in Azure AI Foundry — ensure it shows “Succeeded”
  2. Verify rate limits haven’t been exceeded (check Azure portal for 429 errors)
  3. Ensure the deployment supports chat completions (some models only support embeddings or completions)
  4. Try a different deployment to isolate the issue
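For transient rate-limit (429) failures, the standard remedy is retrying with exponential backoff and jitter. A minimal sketch; the `send` callable is a placeholder for whatever performs one request against your deployment:

```python
import random
import time

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff with jitter: base * 2^attempt, capped, then
    scaled by a random factor in [0.5, 1.0] to avoid synchronized retries."""
    return min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.0)

def call_with_retries(send, max_attempts: int = 5):
    """`send` performs one request and returns (status_code, body).
    Retries only on HTTP 429; other statuses are returned immediately."""
    for attempt in range(max_attempts):
        status, body = send()
        if status != 429:
            return status, body
        time.sleep(backoff_delay(attempt))
    return status, body
```

Persistent 429s, by contrast, mean the deployment's tokens-per-minute limit is simply too low for your workload — raise it in the deployment's rate-limit settings rather than retrying harder.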