Examples and Applications

This section describes how to use the common example VIs in the VIRobotics/AI Agent directory. Recommended reading order:

  1. Basic chat and streaming chat first

  2. Then tool calling and visual understanding

  3. Finally, run the complete Agent example


Example Entry Points

  • Path: LabVIEW install path\examples\VIRobotics\AI Agent

  • Menu: Help -> Find Examples -> Directory Structure -> VIRobotics -> AI Agent

Current examples are organized by LLM model provider:

  • Aliyun

  • Baidu

  • Deepseek

  • Doubao

  • Kimi

  • Ollama

  • Silliconflow

  • Zhipu

  • tools

Notes:

  • Each provider folder typically contains: basic chat VIs, streaming output VIs, tool calling VIs, and complete Agent VIs.

  • The Aliyun folder includes vision understanding VIs in addition to the common examples above.

  • The Doubao folder includes vision-related VIs (visual understanding, text-to-image, image-to-image, image fusion) in addition to the common examples.


Quick Index by Provider (Select API First, Then Examples)

If you have an API Key from only one provider, use the following quick reference:

  • Aliyun: basic.vi, stream.vi, basic_call_tools.vi, stream_with_tools.vi, AI_Agent_Full.vi, plus vision understanding VIs

  • Baidu / Deepseek / Kimi / Ollama / Silliconflow / Zhipu: basic.vi, stream.vi, basic_call_tools.vi, stream_with_tools.vi, AI_Agent_Full.vi

  • Doubao: common examples plus text_to_img_test.vi, image_to_image.vi, image_fusion.vi, and vision understanding VIs

  • tools: basic_tools and vi_advisor tool directories

Reading recommendations:

  • The core content below is organized by capability category to avoid redundant coverage.

  • Single-provider users can consult the quick reference above, then jump to the corresponding capability sections.


I. Basic Chat VIs

VI Name: basic.vi

Purpose: The most basic Q&A example with non-streaming output, suitable for an initial end-to-end check of the model pipeline.

Implementation Flow:

  1. Initial_xxx.vi initializes the LLM instance and selects the model

  2. Set_System_Prompt.vi sets the system role

  3. Generate_Prompt.vi sends user query and retrieves complete response

  4. Release.vi releases resources
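
The four steps above amount to a single non-streaming chat-completions call. As a conceptual sketch (in Python rather than G code), the request body the VI assembles looks roughly like this; the OpenAI-compatible wire format is an assumption based on the providers listed above:

```python
import json

def build_chat_request(model, prompt, system_prompt=None):
    """Assemble a non-streaming chat request (sketch of steps 1-3)."""
    messages = []
    if system_prompt:  # corresponds to Set_System_Prompt.vi
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    return {"model": model, "messages": messages, "stream": False}

body = build_chat_request("deepseek-chat", "Hello",
                          "You are a helpful assistant.")
print(json.dumps(body, indent=2))
```

The response's content field maps to the VI's content output; models that expose their reasoning return it in a separate reasoning_content field.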

Input Parameters:

  • model: Model name

  • prompt: User query

  • system_prompt: System role (optional)

Output Results:

  • content: Model response

  • reasoning_content: Reasoning process (supported by some models)

Operation Steps:

  1. Open basic.vi

  2. Select a model

  3. Enter the prompt text

  4. Run VI and check content

  5. After completion, release instance resources

Applicable Scenarios:

  • Initial integration verification

  • Simple Q&A and text generation

  • Non-real-time display scenarios

Common Issues

  • No response: Check API Key and account balance first

  • License error: Verify genai and vi_advisor status in the activation section


II. Streaming Output VIs

VI Name: stream.vi

Purpose: Streaming chat example with real-time model response display.

Implementation Flow:

  1. Initialize LLM instance and select model

  2. Set stream=True via property node

  3. Outer loop listens for send button

  4. Inner loop continuously reads streaming chunks and concatenates for display

  5. Click stop or release resources after task completion
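
Conceptually, the inner loop does nothing more than concatenate small deltas as they arrive. A Python sketch with simulated chunks (the chunk shape follows the OpenAI-compatible streaming format, which is an assumption; the VI hides it behind its read node):

```python
def accumulate_stream(chunks):
    """Concatenate content and reasoning deltas from streamed chunks."""
    content, reasoning = "", ""
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        content += delta.get("content") or ""
        reasoning += delta.get("reasoning_content") or ""
    return content, reasoning

# Simulated chunks, as if read one by one inside the inner loop.
chunks = [
    {"choices": [{"delta": {"reasoning_content": "Thinking..."}}]},
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo!"}}]},
]
text, thoughts = accumulate_stream(chunks)
```

In the VI, each pass of the inner loop appends one such delta to the Assistant indicator, which is why the display fills in gradually.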

Input Parameters:

  • model: Model name

  • prompt: User input

  • enable_thinking: Whether to enable chain-of-thought (supported by some models)

  • send / stop: Send and stop controls

Output Results:

  • Assistant: Output displayed in real time and concatenated into the complete response

  • reasoning_content: Chain-of-thought content (if supported by model)

Operation Steps:

  1. Open stream.vi

  2. Select model, enter prompt

  3. Set enable_thinking as needed

  4. Click send to run and observe real-time output

  5. Click stop after completion

Applicable Scenarios:

  • Long text generation

  • Real-time interactive interfaces

  • Chat flows that need to be debugged while observing live output

Common Issues

  • No streaming output: Confirm model supports streaming and check API balance

  • Output interruption: Check network connectivity and proxy settings


III. Tool Calling VIs

VI Name: basic_call_tools.vi

Purpose: Demonstrates the complete pipeline for LLM calling external tools.

Implementation Flow:

  1. Define tool description (name, description, parameters)

  2. Pass tool list to LLM instance

  3. Send user request and retrieve Tools_calls

  4. Execute tool and produce tool results

  5. Use Generate_Tool_Call_Results.vi to return results

  6. Get final natural language response
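
As a conceptual sketch of steps 1-5 (in Python; the tool schema follows the OpenAI-compatible function format, and get_date_and_time is one of the tools shipped in tools/basic_tools):

```python
import json

# Step 1: the tool description passed to the model.
tool_def = {
    "type": "function",
    "function": {
        "name": "get_date_and_time",
        "description": "Return the current date and time.",
        "parameters": {"type": "object", "properties": {}},
    },
}

# Step 3: the model answers with a tool-call request instead of text.
tool_call = {
    "id": "call_1",
    "function": {"name": "get_date_and_time", "arguments": "{}"},
}

# Step 4: execute the tool locally (stubbed here for determinism).
def run_tool(name, arguments):
    if name == "get_date_and_time":
        return "2025-01-01 12:00:00"
    raise ValueError(f"unknown tool: {name}")

result = run_tool(tool_call["function"]["name"],
                  json.loads(tool_call["function"]["arguments"]))

# Step 5: feed the result back as a "tool" message so the model can
# compose its final natural-language answer.
tool_message = {"role": "tool", "tool_call_id": tool_call["id"],
                "content": result}
```

Generate_Tool_Call_Results.vi plays the role of that last step: it packages the tool output into a message the model can consume.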

Input Parameters:

  • prompt: User query

  • tools: Tool definition list (JSON)

  • messages_tools: Tool execution results

Output Results:

  • Tools_calls: Model's requested tool call information

  • content: Final response after integrating tool results

Operation Steps:

  1. Open basic_call_tools.vi

  2. Configure one or more tool definitions

  3. Enter queries with tool intent (e.g., weather, file I/O)

  4. After running, read Tools_calls

  5. Execute tool and return results, check final response

Applicable Scenarios:

  • Intelligent assistants

  • Business data queries

  • IoT/Device control

  • Automated business workflows


VI Name: stream_with_tools.vi

Purpose: Combination example of streaming output + tool calling.

Implementation Flow:

  1. Initialize streaming chat

  2. Load tool list

  3. Receive user requests and perform streaming inference

  4. Trigger tool calls and continue streaming output
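
One subtlety of combining the two capabilities: in a stream, tool-call arguments arrive as fragments that must be accumulated before the tool can run. A Python sketch (the delta shape follows the OpenAI-compatible streaming format, an assumption; the VI hides this behind its read node):

```python
import json

def accumulate_tool_call(deltas):
    """Merge streamed tool-call fragments into one complete call."""
    name, arguments = "", ""
    for delta in deltas:
        call = delta["tool_calls"][0]["function"]
        name += call.get("name") or ""
        arguments += call.get("arguments") or ""
    return {"name": name, "arguments": arguments}

# Simulated fragments of one tool call spread across several chunks.
deltas = [
    {"tool_calls": [{"function": {"name": "web_search", "arguments": ""}}]},
    {"tool_calls": [{"function": {"arguments": '{"query": "LabV'}}]},
    {"tool_calls": [{"function": {"arguments": 'IEW"}'}}]},
]
call = accumulate_tool_call(deltas)
args = json.loads(call["arguments"])  # only valid once fully accumulated
```

A malformed or prematurely parsed arguments string is a common cause of "no continued output after tool execution."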

Input Parameters:

  • model: Model

  • prompt: User query

  • tools: Select required tools

Output Results:

  • Assistant: Final response after integrating tool results

Operation Steps:

  1. Open stream_with_tools.vi

  2. Configure model and tools

  3. Enter tasks with tool requirements

  4. Run and observe streaming + tool pipeline

Applicable Scenarios:

  • Online assistants

  • Real-time control scenarios

  • Visual debugging scenarios

Common Issues

  • No continued output after tool execution: Check tool return data format

  • Significant output delay: Check tool execution time and network quality


IV. Visual Understanding VIs (VLM)

Prerequisites: Ensure AI Vision Toolkit for GPU is installed first.

VI Name: VL.vi / Qwen_VL.vi

Purpose: Image understanding and visual Q&A.

Implementation Flow:

  1. Use imread.vi to read image

  2. Use cvtColor.vi to convert color space (BGR -> RGB)

  3. Call Generate_Prompt_imgs.vi with image and text query

  4. Retrieve and display image understanding results
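
On the wire, the image and the question travel together in one multimodal message. A Python sketch (the base64 data-URL form is an assumption based on OpenAI-compatible vision APIs; in the VI the bytes come from imread.vi, and cvtColor.vi handles the BGR -> RGB step first):

```python
import base64

def build_vision_message(image_bytes, question, mime="image/jpeg"):
    """Pack an image and a text question into one user message."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
            {"type": "text", "text": question},
        ],
    }

# Placeholder bytes stand in for the JPEG that imread.vi would load.
msg = build_vision_message(b"\xff\xd8fake-jpeg", "What is in this image?")
```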

Input Parameters:

  • model: Vision-language model

  • imgName: Image filename or path

  • prompt: Image-related question

Output Results:

  • result: Image analysis text

  • picture: Original image display

Operation Steps:

  1. Open VL.vi or Qwen_VL.vi

  2. Select model and load image

  3. Enter image question

  4. Run and check result

Applicable Scenarios:

  • Image content understanding

  • Defect description and explanation

  • OCR pre-analysis and visual Q&A

Common Issues

  • Image cannot be read: Check path and format

  • Generic analysis results: State the objective explicitly in the prompt (recognition, counting, or description)


VI Name: VL_Draw_Box.vi

Purpose: Image object detection and visual annotation.

Implementation Flow:

  1. Input image and detection task description

  2. Call visual understanding interface to get object positions

  3. Parse bounding boxes and draw on image

  4. Output annotated image and detection results
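
Step 3 is where box-position offsets usually creep in: if the model reports coordinates in the resolution it saw rather than in the original image's, they must be rescaled before drawing. A Python sketch of that mapping (the 1000x1000 model space below is an illustrative assumption; check what your model actually returns):

```python
def scale_bbox(box, model_size, image_size):
    """Map (x1, y1, x2, y2) from model space to original-image pixels."""
    mx, my = model_size   # (width, height) the model reported against
    ix, iy = image_size   # (width, height) of the original image
    x1, y1, x2, y2 = box
    return (round(x1 * ix / mx), round(y1 * iy / my),
            round(x2 * ix / mx), round(y2 * iy / my))

# A box detected in a 1000x1000 model space, drawn on a 640x480 image:
box = scale_bbox((100, 200, 300, 400), (1000, 1000), (640, 480))
```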

Input Parameters:

  • Image input

  • Detection prompt or target category description

Output Results:

  • Detection labels, bbox, confidence scores

  • Annotated image with drawn boxes

Operation Steps:

  1. Open VL_Draw_Box.vi

  2. Load image and enter detection requirements

  3. Run and check box selection results

  4. Verify labels and positions for accuracy

Applicable Scenarios:

  • Industrial visual inspection

  • Intelligent monitoring

  • Robotic vision positioning

Common Issues

  • Box position offset: Check image scaling and coordinate mapping

  • Missed detections: Improve prompt clarity or switch models


VI Name: VL_Full.vi

Purpose: Comprehensive visual understanding example supporting multi-image input and batch analysis.

Implementation Flow:

  1. Load multiple images or image list

  2. Set unified analysis prompt

  3. Execute batch inference

  4. Aggregate and output analysis results

Input Parameters:

  • Image array/path list

  • Analysis prompt

Output Results:

  • Batch image analysis results

  • Optional structured result report

Operation Steps:

  1. Open VL_Full.vi

  2. Import multi-image input

  3. Set batch analysis questions

  4. Run and check overall results

Applicable Scenarios:

  • Batch image inspection

  • Multi-sample analysis

  • Automated visual reporting

Common Issues

  • Long batch processing time: Reduce batch size

  • Inconsistent result formats: Use fixed prompt templates


V. Image Generation VIs

Prerequisites: The image generation examples are currently based primarily on Doubao capabilities. Ensure the doubao-seedream model is activated and the API is configured, and install AI Vision Toolkit for GPU.

VI Name: text_to_img_test.vi

Purpose: Text-to-image generation.

Implementation Flow:

  1. Initialize image generation instance

  2. Set prompt and generation parameters

  3. Call text-to-image interface to generate image

  4. Display and save output image
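
The input parameters listed below map one-to-one onto a request like the following Python sketch (field names mirror the VI's front-panel inputs; the exact wire format is an assumption):

```python
import json

# Sketch of the generation request text_to_img_test.vi assembles.
request = {
    "server": "doubao",
    "model": "doubao-seedream-4-5-251228",
    "prompt": "A line drawing of a robot arm on a lab bench",
    "size": "1024x1024",  # illustrative size value
    "watermark": True,    # keep the "AI Generated" watermark
}
print(json.dumps(request, indent=2))
```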

Input Parameters:

  • Server="doubao"

  • model="doubao-seedream-4-5-251228"

  • prompt: Prompt text

  • size: Generated image size

  • watermark: Whether to retain "AI Generated" watermark

Output Results:

  • Generated image (front panel display/saveable)

Operation Steps:

  1. Open text_to_img_test.vi

  2. Select seedream model

  3. Enter prompt and set size

  4. Run and check output image

Applicable Scenarios:

  • Document illustrations

  • Prototype sketches

  • Creative design generation

Common Issues

  • Generation failure: Confirm seedream model is activated and API configuration is correct

  • Results not as expected: Add constraint words and negative prompts


VI Name: image_to_image.vi

Purpose: Image-to-image (redrawing/style transfer).

Implementation Flow:

  1. Input original image

  2. Set transformation description

  3. Call image-to-image interface

  4. Output and compare transformation results

Input Parameters:

  • Server="doubao"

  • model="doubao-seedream-4-5-251228"

  • prompt: Prompt text

  • size: Generated image size

  • watermark: Whether to retain "AI Generated" watermark

  • imgPath: Path of the original input image

Output Results:

  • Transformed image

Operation Steps:

  1. Open image_to_image.vi

  2. Load original image and enter description

  3. Run and compare before/after images

Applicable Scenarios:

  • Style transfer

  • Image enhancement

  • Design variant generation

Common Issues

  • Large style deviation: Add "preserve subject" constraint in prompt


VI Name: image_fusion.vi

Purpose: Text-image fusion or multi-image fusion generation.

Implementation Flow:

  1. Load 2 or more images as input

  2. Set fusion prompt and parameters

  3. Execute fusion generation

  4. Output fused image

Input Parameters:

  • Server="doubao"

  • model="doubao-seedream-4-5-251228"

  • prompt: Prompt text

  • size: Generated image size

  • watermark: Whether to retain "AI Generated" watermark

  • images: Images to fuse

Output Results:

  • Fused image

Operation Steps:

  1. Open image_fusion.vi

  2. Load images and enter fusion target

  3. Set optional weights

  4. Run and check fusion effect

Applicable Scenarios:

  • Multi-source image synthesis

  • Teaching demo material creation

  • Visual scheme comparison

Common Issues

  • Unnatural fusion: Fuse fewer images at a time

  • Detail loss: Improve input image quality and resolution


VI. Complete Agent VIs

VI Name: AI_Agent_Full.vi (Key Example)

Purpose: Comprehensive single-Agent example integrating streaming output, tool calling, and context memory.

Implementation Flow:

  1. Initialize Agent and model configuration

  2. Enable useTools and load tool list

  3. Process user input and perform intent recognition

  4. Trigger tool calling when needed and return results

  5. Output streaming responses and maintain context
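
The whole loop can be sketched in a few lines of Python. Here llm() is a deterministic stub standing in for the real chat call (an assumption for illustration); the control flow, keep calling the model and executing requested tools until it answers in plain text, is the point:

```python
import json

def run_tool(name, arguments):
    """Stubbed local tool execution (deterministic for this sketch)."""
    if name == "get_date_and_time":
        return "2025-01-01 12:00:00"
    raise ValueError(f"unknown tool: {name}")

def llm(messages):
    """Stub model: first turn requests a tool, second turn answers."""
    if any(m["role"] == "tool" for m in messages):
        return {"content": "It is 2025-01-01 12:00:00.", "tool_calls": None}
    return {"content": None, "tool_calls": [
        {"id": "call_1",
         "function": {"name": "get_date_and_time", "arguments": "{}"}}]}

def agent(prompt):
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = llm(messages)
        if not reply["tool_calls"]:      # plain answer: we are done
            return reply["content"]
        messages.append({"role": "assistant",
                         "tool_calls": reply["tool_calls"]})
        for call in reply["tool_calls"]:  # execute each requested tool
            result = run_tool(call["function"]["name"],
                              json.loads(call["function"]["arguments"]))
            messages.append({"role": "tool", "tool_call_id": call["id"],
                             "content": result})

answer = agent("What time is it?")
```

The growing messages list is also what gives the Agent its context memory across turns.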

Input Parameters:

  • model: Model

  • thinking_type: Whether to enable thinking mode

  • prompt: Prompt text

  • useTools: Whether to enable tools

  • Tools: The available tools; select any combination

  • Session context parameters

Output Results:

  • Streaming response content

  • Tool calling results

  • Multi-turn conversation state

Operation Steps:

  1. Open AI_Agent_Full.vi

  2. Confirm model and API are available first

  3. Check useTools

  4. Enter tasks with tool intent and run

  5. Observe complete pipeline of reasoning, tool calling, and output

Applicable Scenarios:

  • Complex task automation

  • Multi-step workflows

  • Comprehensive capability acceptance testing

Common Issues

  • Tools not called: Check tool descriptions, switch status, and tool package license status

  • Result interruption: Check streaming pipeline and network status


VI Name: AI_Agent_No_Stream.vi

Purpose: Non-streaming complete Agent, suitable for batch processing and backend service scenarios.

Implementation Flow:

  1. Initialize Agent

  2. Enter task and trigger execution

  3. Return complete result at once

  4. Release resources

Input Parameters:

  • model: Model

  • prompt: Prompt text

Output Results:

  • One-time complete response result

Operation Steps:

  1. Open AI_Agent_No_Stream.vi

  2. Set model and task

  3. Run and wait for complete return

  4. Validate result content

Applicable Scenarios:

  • Batch processing

  • Backend tasks

  • API encapsulation

Common Issues

  • Slow response: Switch to streaming version for complex tasks

  • Output too short: Adjust prompt and model parameters


VII. Common Tools in tools Directory

tools/basic_tools common tools include:

  • web_search

  • exec

  • get_file_text

  • write_file

  • get_date_and_time

  • create_folder

  • get_img_size

  • img_resize

  • github_get

  • vlm

  • text_to_image

  • image_fusion

  • image_to_image

tools/vi_advisor common tools include:

  • get_labview_vi_content.vi

  • get_vis_of_a_folder.vi

  • get_labview_example_folder.vi

For tool creation and tool description specifications, refer to: Tool Development Guide.


Example Usage Recommendations

  • Beginner: basic.vi -> stream.vi

  • Intermediate: basic_call_tools.vi -> stream_with_tools.vi

  • Comprehensive: AI_Agent_Full.vi

  • Image Generation: text_to_img_test.vi -> image_to_image.vi -> image_fusion.vi

  • Visual Understanding: VL.vi, VL_Draw_Box.vi, VL_Full.vi

  • Custom Tools: See Tool Development Guide


Technical Support