Advanced Operation Settings
Learn how to fine-tune your agent’s operations with advanced configuration options
Customizing Operation Behavior
Each operation in your agent can be fine-tuned with advanced settings to control exactly how it functions. These settings help you create more reliable, efficient, and secure agents by controlling data flow and execution behavior.
Operation Configuration Tabs
When you select an operation in the builder interface, you’ll see several configuration tabs that control different aspects of the operation’s behavior:
The Details tab shows the original, generated function calling description that defines the operation’s purpose and behavior. This serves as a reference for the operation’s intended use.
This information is especially useful when you’re working with multiple similar operations and need to understand the exact purpose of each one.
The Instructions tab allows you to add custom descriptions to provide more context about the operation. This is particularly useful for:
- Giving the AI agent specific guidance on when to use this operation
- Explaining nuances about the data returned by the operation
- Setting expectations about rate limits or performance considerations
- Providing examples of effective usage patterns
Well-crafted instructions can significantly improve your agent’s decision-making when choosing and using operations. Be specific about when the operation should and shouldn’t be used.
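As a rough sketch of what a well-crafted instruction might look like, here is an example for a hypothetical profile-lookup operation (the operation name, the `offset` parameter, and the limits mentioned are invented for illustration):

```text
Use this operation only when the user asks about a specific person's work
history. Do not call it for company pages. Each call returns at most 50
entries; if you need more, call it again with a higher offset parameter.
This API is rate limited, so avoid calling it more than a few times per task.
```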
The Input Schema tab lets you hard-code parameters to override values that would normally be generated by the AI. This gives you precise control over operation execution, which is useful for:
- Enforcing specific parameter values regardless of AI decisions
- Setting default values that the AI can override when needed
- Restricting certain parameters to prevent undesired behaviors
- Pre-configuring complex parameters that require specific formatting
Example: Hard-coding API Keys
If your operation requires an API key or authentication token, you can hard-code it in the Input Schema rather than having the AI provide it:
- Find the parameter that requires the API key (often called `apiKey` or `authToken`)
- Click the parameter and enter your key as the default value
- Mark it as “Use Default Value” to ensure the AI cannot override it
This keeps sensitive credentials secure while allowing the AI to control other parameters.
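Conceptually, a locked default replaces whatever the AI generates, while an unlocked default only fills gaps. The following sketch illustrates that merge logic; the function and configuration shape are illustrative, not the platform's actual internals:

```python
# Sketch: how hard-coded Input Schema values conceptually interact with
# AI-generated arguments. Names and data shapes are illustrative only.

def resolve_parameters(ai_args, schema_config):
    """Merge AI-generated arguments with configured defaults.

    schema_config maps parameter name -> {"default": value, "locked": bool}.
    A locked default ("Use Default Value") always wins; an unlocked default
    only fills in parameters the AI left unset.
    """
    resolved = dict(ai_args)
    for name, cfg in schema_config.items():
        if cfg.get("locked"):
            resolved[name] = cfg["default"]   # AI cannot override
        elif name not in resolved and "default" in cfg:
            resolved[name] = cfg["default"]   # fallback only
    return resolved

config = {
    "apiKey": {"default": "sk-example-123", "locked": True},
    "limit":  {"default": 10, "locked": False},
}

args = resolve_parameters({"query": "engineers", "apiKey": "ai-guess"}, config)
# The locked apiKey replaces the AI's guess; limit falls back to its default.
```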
The Output Schema tab lets you configure filtering that runs before payload data is returned to the AI. This is extremely valuable for:
- Reducing token usage by removing unnecessary data
- Removing personally identifiable information (PII)
- Focusing the AI on the most relevant parts of the response
- Preventing the AI from seeing sensitive or irrelevant data
How Output Filtering Works
When you configure the Output Schema, you’re essentially creating a template that determines which fields from the API response will be passed to the AI:
- Expand the schema tree to see all available fields
- Toggle fields on/off to include/exclude them from the AI’s view
- For nested data structures, you can selectively include specific sub-fields
For example, if a LinkedIn profile response contains personal contact information you don’t want the AI to access, you can exclude those specific fields while keeping professional details visible.
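The toggled schema tree behaves like a recursive allow-list applied to the response before it reaches the AI. This sketch shows that idea in miniature; the field names and template format are invented for illustration and do not reflect the platform's internal representation:

```python
# Sketch: output filtering as a recursive allow-list template applied to an
# API response before it is handed to the AI. Field names are illustrative.

def filter_output(payload, template):
    """Keep only the fields enabled in the template.

    Template values: True (include the field as-is) or a nested dict
    (include only the listed sub-fields).
    """
    if isinstance(payload, list):
        return [filter_output(item, template) for item in payload]
    filtered = {}
    for field, rule in template.items():
        if field not in payload:
            continue
        value = payload[field]
        filtered[field] = value if rule is True else filter_output(value, rule)
    return filtered

profile = {
    "fullName": "Ada Example",
    "emailAddress": "ada@example.com",  # excluded: not in the template
    "positions": [{"title": "Engineer", "company": "Acme", "startDate": "2020"}],
}

template = {"fullName": True, "positions": {"title": True, "company": True}}
safe = filter_output(profile, template)
# emailAddress and startDate never reach the AI
```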
The Advanced tab contains additional settings for specialized use cases:
Real-time Filtering for Large Payloads
For operations that return large amounts of data, the Advanced tab provides real-time filtering capabilities. This allows the AI to perform in-place queries on large API payloads without consuming excessive tokens.
This is especially useful when working with:
- Large datasets with hundreds or thousands of records
- APIs that return verbose responses with many fields
- Data that requires complex filtering logic
The AI can specify search criteria at runtime, and only matching results will be returned, dramatically reducing token usage and improving response times.
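The effect of runtime filtering can be sketched as follows: rather than loading every record into the model's context, the AI supplies criteria and only matching rows come back. The matching logic and field names here are assumptions for illustration:

```python
# Sketch: runtime filtering of a large payload. The AI supplies search
# criteria at call time and only matching records are returned, instead of
# the full dataset entering the model's context. Names are illustrative.

def query_payload(records, criteria, limit=20):
    """Return at most `limit` records matching all criteria (substring match)."""
    matches = [
        r for r in records
        if all(str(value).lower() in str(r.get(field, "")).lower()
               for field, value in criteria.items())
    ]
    return matches[:limit]

posts = [{"id": i, "topic": "ai" if i % 2 == 0 else "sales", "text": f"post {i}"}
         for i in range(1000)]

hits = query_payload(posts, {"topic": "ai"}, limit=5)
# 5 matching records instead of 1000 enter the model's context
```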
Operation Execution Settings
Controlling Operation Repetition
The blue rotating icon in the operation settings allows you to configure how many times an operation can be executed:
You can set operations to:
- Execute once: The operation can only be used a single time in a given session (useful for initialization operations)
- Execute infinitely: The operation can be used repeatedly as needed (default for most operations)
This setting is particularly important for operations that:
- Have rate limits or usage quotas
- Should only be performed once per session for logical reasons
- Need to be used repeatedly to gather comprehensive data
For example, a search operation might need to run multiple times with different criteria, while a configuration or initialization operation may only need to run once.
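The once-vs-infinite setting amounts to a per-session execution budget. The sketch below models that behavior; the class and method names are invented for illustration, not the platform's internals:

```python
# Sketch: enforcing "Execute once" vs "Execute infinitely" with a simple
# per-session counter. Names are illustrative only.

class OperationBudget:
    def __init__(self, max_runs=None):
        self.max_runs = max_runs  # None means "execute infinitely"
        self.runs = 0

    def try_execute(self):
        if self.max_runs is not None and self.runs >= self.max_runs:
            return False          # budget exhausted for this session
        self.runs += 1
        return True

init_op = OperationBudget(max_runs=1)  # "Execute once" (e.g. initialization)
search_op = OperationBudget()          # "Execute infinitely" (e.g. search)

first = init_op.try_execute()          # allowed
second = init_op.try_execute()         # blocked: already ran once
```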
Practical Examples
Example: Sanitizing LinkedIn Profile Data
When building a LinkedIn researcher agent, you might want to remove personal contact information before passing profile data to the AI:
- Select your “Get Profile Data” operation
- Click the “Output Schema” tab
- Expand the response schema tree
- Uncheck fields like `emailAddress`, `phoneNumbers`, and other personal fields
- Leave professional information like `title`, `company`, and `skills` checked
- Save your changes
Now when your agent retrieves profile data, it will only see the professional information without any personal contact details.
Example: Optimizing for Large Datasets
If your agent needs to work with a large dataset of company posts:
- Select your “Search Posts” operation
- Click the “Advanced” tab
- Enable “Allow in-context filtering”
- Configure which fields should be available for filtering
- Save your changes
Now your agent can efficiently search through large post datasets by specifying filtering criteria, and only the relevant posts will be returned, saving significant token usage.
Best Practices
- Start with minimal output: Only include fields the AI actually needs to complete its task
- Use input defaults wisely: Hard-code parameters that should remain constant, but allow flexibility for case-specific values
- Provide clear instructions: Help the AI understand exactly when and how to use each operation
- Test thoroughly: After configuring advanced settings, test your agent with various inputs to ensure it behaves as expected
- Monitor token usage: Use the Activity view to track how your configuration changes impact token consumption