
LAB: Create a Language Service Client Application

The Conversational Language Understanding feature of the Azure AI Service for Language enables you to define a conversational language model that client apps can use to interpret natural language input from users, predict the user's intent (what they want to achieve), and identify any entities to which the intent should be applied. You can create client applications that consume conversational language understanding models directly through REST interfaces, or by using language-specific software development kits (SDKs).
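
For example, the Clock model used in this lab interprets an utterance such as "What's the time in London?" as a GetTime intent with a Location entity. The sketch below is a simplified illustration of that idea only; the actual response format returned by the service is shown later in this lab.

     # Simplified illustration: a CLU model maps an utterance to a top intent
     # plus any entities it detects (not the exact service response format).
     utterance = "What's the time in London?"
     interpretation = {
         "topIntent": "GetTime",                                    # what the user wants to do
         "entities": [{"category": "Location", "text": "London"}]   # what to apply it to
     }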

Clone the repository for this course

If you have already cloned the AI-102-AIEngineer code repository to the environment where you’re working on this lab, open it in Visual Studio Code; otherwise, follow these steps to clone it now.

  1. Start Visual Studio Code.

  2. Open the palette (SHIFT+CTRL+P) and run the Git: Clone command to clone the https://github.com/MicrosoftLearning/AI-102-AIEngineer repository to a local folder (it doesn’t matter which folder).

  3. When the repository has been cloned, open the folder in Visual Studio Code.

  4. Wait while additional files are installed to support the C# code projects in the repo.

    Note: If you are prompted to add required assets to build and debug, select Not Now.

Create Language service resources

If you already have a Language service resource in your Azure subscription, you can use it in this exercise. Otherwise, follow these instructions to create one.

  1. Open the Azure portal at https://portal.azure.com, and sign in using the Microsoft account associated with your Azure subscription.

  2. Select the +Create a resource button, search for language service, and create a Language service resource with the following settings:

    • Default features: All
    • Custom features: none
    • Subscription: Your Azure subscription
    • Resource group: Choose or create a resource group (if you are using a restricted subscription, you may not have permission to create a new resource group - use the one provided)
    • Region: westus2
    • Name: Enter a unique name
    • Pricing tier: S
    • Responsible AI Notice: select the check box to confirm
  3. Wait for the resources to be created. You can view your resource by navigating to the resource group where you created it.

Import, train, and publish a Conversational language understanding model

If you already have a Clock project from a previous lab or exercise, you can use it in this exercise. Otherwise, follow these instructions to create it.

  1. In a new browser tab, open the Language Studio - Preview portal at https://language.cognitive.azure.com.

  2. Sign in using the Microsoft account associated with your Azure subscription. If this is the first time you have signed into the Language Service portal, you may need to grant the app some permissions to access your account details. Then complete the Welcome steps by selecting your Azure subscription and the authoring resource you just created.

  3. Open the Conversational Language Understanding page.

  4. Next to +Create new project, select Import. Click Choose File and then browse to the 10b-clu-client-(preview) subfolder in the project folder containing the lab files for this exercise. Select Clock.json, click Open, and then click Done.

  5. If a panel with tips for creating an effective Language service app is displayed, close it.

  6. At the left of the Language Studio portal, select Training jobs to train the app. Click Start a training job, name the model Clock, and keep the default training mode (Standard) and data splitting settings. Select Train. Training may take several minutes to complete.

    Note: Because the model name Clock is hard-coded in the clock-client code (used later in the lab), capitalize and spell the name exactly as described.

  7. At the left of the Language Studio portal, select Deploying a model and use Add deployment to create a deployment named production for the Clock model.

    Note: Because the deployment name production is hard-coded in the clock-client code (used later in the lab), capitalize and spell the name exactly as described.

  8. The client application needs the Endpoint URL and Primary key to use your deployed model. After the deployment is complete, to get those values, open the Azure portal at https://portal.azure.com and sign in using the Microsoft account associated with your Azure subscription. In the Search bar, search for Language and select the Azure AI services Language service.
  9. Your Language service resource should be listed; select it.

  10. In the left-hand menu, under the Resource Management section, select Keys and Endpoint.

  11. Make a copy of your KEY 1 and your Endpoint.

  12. Client applications use the prediction endpoint URL and the Language service key to connect to your deployed model and authenticate.

Prepare to use the Language service SDK

In this exercise, you’ll complete a partially implemented client application that uses the Clock model (published Conversational Language Understanding model) to predict intents from user input and respond appropriately.

Note: You can choose to use the SDK for either .NET or Python. In the steps below, perform the actions appropriate for your preferred language.

  1. In Visual Studio Code, in the Explorer pane, browse to the 10b-clu-client-(preview) folder and expand the C-Sharp or Python folder depending on your language preference.

  2. Right-click the clock-client folder and then select Open in Integrated Terminal. Then install the Conversational Language Service SDK package by running the appropriate command for your language preference:

    C#

     dotnet add package Azure.AI.Language.Conversations --version 1.0.0
     dotnet add package Azure.Core
    

    Python

     pip install azure-ai-language-conversations --pre
     python -m pip install python-dotenv
     python -m pip install python-dateutil
    
  3. View the contents of the clock-client folder, and note that it contains a file for configuration settings:

    • C#: appsettings.json
    • Python: .env

    Open the configuration file and update the configuration values it contains to include the Endpoint URL and the Primary key for your Language resource. You can find the required values in the Azure portal or Language Studio as follows:

    • Azure portal: Open your Language resource. Under Resource Management, select Keys and Endpoint. Copy the KEY 1 and Endpoint values to your configuration settings file.
    • Language Studio: Open your Clock project. The Language service endpoint can be found on the Deploying a model page under Get prediction URL, and the Primary key can be found on the Project settings page. The Language service endpoint portion of the Prediction URL ends with .cognitiveservices.azure.com/. For example: https://ai102-langserv.cognitiveservices.azure.com/.
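
    For the Python version, the provided code reads these settings with the python-dotenv package installed earlier. The following is a minimal sketch of that mechanism only (the environment variable names below are illustrative; use the names already present in the supplied .env file):

    Python

     # Minimal sketch of reading settings from a .env file with python-dotenv.
     # NOTE: the variable names here are hypothetical; check the .env file in
     # the clock-client folder for the names the provided code actually uses.
     import os
     from dotenv import load_dotenv

     load_dotenv()  # load key/value pairs from the .env file into environment variables
     ls_prediction_endpoint = os.getenv('LS_CONVERSATIONS_ENDPOINT')  # hypothetical name
     ls_prediction_key = os.getenv('LS_CONVERSATIONS_KEY')            # hypothetical name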
  4. Note that the clock-client folder contains a code file for the client application:

    • C#: Program.cs
    • Python: clock-client.py

    Open the code file, and at the top, under the existing namespace references, find the comment Import namespaces. Then, under this comment, add the following language-specific code to import the namespaces you will need to use the Language service SDK:

    C#

     // Import namespaces
     using Azure;
     using Azure.AI.Language.Conversations;
    

    Python

     # Import namespaces
     from azure.core.credentials import AzureKeyCredential
     from azure.ai.language.conversations import ConversationAnalysisClient
    

Get a prediction from the Conversational Language model

Now you’re ready to implement code that uses the SDK to get a prediction from your Conversational Language model.

  1. In the Main function, note that code to load the prediction endpoint and key from the configuration file has already been provided. Then find the comment Create a client for the Language service model and add the following code to create a prediction client for your Language Service app:

    C#

     // Create a client for the Language service model
     Uri endpoint = new Uri(predictionEndpoint);
     AzureKeyCredential credential = new AzureKeyCredential(predictionKey);
    
     ConversationAnalysisClient client = new ConversationAnalysisClient(endpoint, credential);
    

    Python

     # Create a client for the Language service model
     client = ConversationAnalysisClient(
         ls_prediction_endpoint, AzureKeyCredential(ls_prediction_key))
    
  2. Note that the code in the Main function prompts for user input until the user enters “quit”. Within this loop, find the comment Call the Language service model to get intent and entities and add the following code:

    C#

     // Call the Language service model to get intent and entities
     var projectName = "Clock";
     var deploymentName = "production";
     var data = new
     {
         analysisInput = new
         {
             conversationItem = new
             {
                 text = userText,
                 id = "1",
                 participantId = "1",
             }
         },
         parameters = new
         {
             projectName,
             deploymentName,
             // Use Utf16CodeUnit for strings in .NET.
             stringIndexType = "Utf16CodeUnit",
         },
         kind = "Conversation",
     };
     // Send request
     Response response = await client.AnalyzeConversationAsync(RequestContent.Create(data));
     dynamic conversationalTaskResult = response.Content.ToDynamicFromJson(JsonPropertyNames.CamelCase);
     dynamic conversationPrediction = conversationalTaskResult.Result.Prediction;   
     var options = new JsonSerializerOptions { WriteIndented = true };
     Console.WriteLine(JsonSerializer.Serialize(conversationalTaskResult, options));
     Console.WriteLine("--------------------\n");
     Console.WriteLine(userText);
     var topIntent = "";
     if (conversationPrediction.Intents[0].ConfidenceScore > 0.5)
     {
         topIntent = conversationPrediction.TopIntent;
     }
    

    Python

     # Call the Language service model to get intent and entities
     cls_project = 'Clock'
     deployment_slot = 'production'
    
     with client:
         query = userText
         result = client.analyze_conversation(
             task={
                 "kind": "Conversation",
                 "analysisInput": {
                     "conversationItem": {
                         "participantId": "1",
                         "id": "1",
                         "modality": "text",
                         "language": "en",
                         "text": query
                     },
                     "isLoggingEnabled": False
                 },
                 "parameters": {
                     "projectName": cls_project,
                     "deploymentName": deployment_slot,
                     "verbose": True
                 }
             }
         )
    
     top_intent = result["result"]["prediction"]["topIntent"]
     entities = result["result"]["prediction"]["entities"]
    
     print("view top intent:")
     print("\ttop intent: {}".format(result["result"]["prediction"]["topIntent"]))
     print("\tcategory: {}".format(result["result"]["prediction"]["intents"][0]["category"]))
     print("\tconfidence score: {}\n".format(result["result"]["prediction"]["intents"][0]["confidenceScore"]))
    
     print("view entities:")
     for entity in entities:
         print("\tcategory: {}".format(entity["category"]))
         print("\ttext: {}".format(entity["text"]))
         print("\tconfidence score: {}".format(entity["confidenceScore"]))
    
     print("query: {}".format(result["result"]["query"]))
    

    The call to the Language service model returns a result containing a prediction, which includes the top (most likely) intent as well as any entities detected in the input utterance. Your client application must now use that prediction to determine and perform the appropriate action.
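
    As a rough illustration, for the utterance "What's the time in London?" the prediction portion of the result resembles the following simplified structure (the field names match what the code above reads; the confidence scores shown are purely illustrative):

     # Simplified, illustrative shape of result["result"]["prediction"]
     # (the C# code reads the same fields through the dynamic conversationPrediction object)
     {
         "topIntent": "GetTime",
         "intents": [
             {"category": "GetTime", "confidenceScore": 0.97}
         ],
         "entities": [
             {"category": "Location", "text": "London", "confidenceScore": 0.94}
         ]
     }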

  3. Find the comment Apply the appropriate action, and add the following code, which checks for intents supported by the application (GetTime, GetDate, and GetDay) and determines if any relevant entities have been detected, before calling an existing function to produce an appropriate response.

    C#

     // Apply the appropriate action
     switch (topIntent)
     {
         case "GetTime":
             var location = "local";           
             // Check for a location entity
             foreach (dynamic entity in conversationPrediction.Entities)
             {
                 if (entity.Category == "Location")
                 {
                     //Console.WriteLine($"Location Confidence: {entity.ConfidenceScore}");
                     location = entity.Text;
                 }
             }
             // Get the time for the specified location
             string timeResponse = GetTime(location);
             Console.WriteLine(timeResponse);
             break;
         case "GetDay":
             var date = DateTime.Today.ToShortDateString();            
             // Check for a Date entity
             foreach (dynamic entity in conversationPrediction.Entities)
             {
                 if (entity.Category == "Date")
                 {
                     //Console.WriteLine($"Location Confidence: {entity.ConfidenceScore}");
                     date = entity.Text;
                 }
             }            
             // Get the day for the specified date
             string dayResponse = GetDay(date);
             Console.WriteLine(dayResponse);
             break;
         case "GetDate":
             var day = DateTime.Today.DayOfWeek.ToString();
             // Check for entities            
             // Check for a Weekday entity
             foreach (dynamic entity in conversationPrediction.Entities)
             {
                 if (entity.Category == "Weekday")
                 {
                     //Console.WriteLine($"Location Confidence: {entity.ConfidenceScore}");
                     day = entity.Text;
                 }
             }          
             // Get the date for the specified day
             string dateResponse = GetDate(day);
             Console.WriteLine(dateResponse);
             break;
         default:
             // Some other intent (for example, "None") was predicted
             Console.WriteLine("Try asking me for the time, the day, or the date.");
             break;
     }
    

    Python

     # Apply the appropriate action
     if top_intent == 'GetTime':
         location = 'local'
         # Check for entities
         if len(entities) > 0:
             # Check for a location entity
             for entity in entities:
                 if 'Location' == entity["category"]:
                     # ML entities are strings, get the first one
                     location = entity["text"]
         # Get the time for the specified location
         print(GetTime(location))
    
     elif top_intent == 'GetDay':
         date_string = date.today().strftime("%m/%d/%Y")
         # Check for entities
         if len(entities) > 0:
             # Check for a Date entity
             for entity in entities:
                 if 'Date' == entity["category"]:
                     # Regex entities are strings, get the first one
                     date_string = entity["text"]
         # Get the day for the specified date
         print(GetDay(date_string))
    
     elif top_intent == 'GetDate':
         day = 'today'
         # Check for entities
         if len(entities) > 0:
             # Check for a Weekday entity
             for entity in entities:
                 if 'Weekday' == entity["category"]:
                      # List entities are lists
                     day = entity["text"]
         # Get the date for the specified day
         print(GetDate(day))
    
     else:
         # Some other intent (for example, "None") was predicted
         print('Try asking me for the time, the day, or the date.')
    
  4. Save your changes, return to the integrated terminal for the clock-client folder, and enter the following command to run the program:

    C#

     dotnet run
    

    Python

     python clock-client.py
    
  5. When prompted, enter utterances to test the application. For example, try:

    Hello

    What time is it?

    What’s the time in London?

    What’s the date?

    What date is Sunday?

    What day is it?

    What day is 01/01/2025?

Note: The logic in the application is deliberately simple and has a number of limitations. For example, when getting the time, only a restricted set of cities is supported and daylight saving time is ignored. The goal is to see an example of a typical pattern for using the Language service in which your application must:

  1. Connect to a prediction endpoint.
  2. Submit an utterance to get a prediction.
  3. Implement logic to respond appropriately to the predicted intent and entities.
  6. When you have finished testing, enter quit.
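
To make that pattern concrete, the following condensed Python sketch restates the flow the clock-client application already implements, using the same names (ls_prediction_endpoint, ls_prediction_key, userText, and the GetTime helper) as the lab code above; it is a recap, not additional code you need to add.

    # Condensed recap of the pattern used in this lab (simplified from the
    # clock-client Python code above; GetTime is a helper function that is
    # already defined in the provided code file).
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.language.conversations import ConversationAnalysisClient

    # 1. Connect to a prediction endpoint
    client = ConversationAnalysisClient(
        ls_prediction_endpoint, AzureKeyCredential(ls_prediction_key))

    # 2. Submit an utterance to get a prediction
    result = client.analyze_conversation(task={
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {"id": "1", "participantId": "1", "text": userText}
        },
        "parameters": {"projectName": "Clock", "deploymentName": "production"}
    })
    prediction = result["result"]["prediction"]

    # 3. Respond appropriately to the predicted intent and entities
    if prediction["topIntent"] == "GetTime":
        location = "local"
        for entity in prediction["entities"]:
            if entity["category"] == "Location":
                location = entity["text"]
        print(GetTime(location))
    else:
        print("Try asking me for the time, the day, or the date.")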

More information

To learn more about creating a Language service client, see the developer documentation.