Perplexity AI provides conversational AI models for generating human-like text responses
Available Tools (6)

Triggers (0)
Create an async chat completion request. Same as /chat/completions but designed for long-running tasks such as sonar-deep-research. Returns immediately with a request_id that can later be used to poll for and retrieve results. Async jobs have a 7-day TTL. Use this action for resource-intensive queries that take longer than typical synchronous requests.
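A minimal sketch of submitting an async job, assuming the endpoint path `https://api.perplexity.ai/async/chat/completions` and an outer `request` wrapper key in the body (both are assumptions to verify against the current API reference):

```python
import json
import urllib.request

API_BASE = "https://api.perplexity.ai/async/chat/completions"  # assumed path

def build_async_body(prompt: str, model: str = "sonar-deep-research") -> dict:
    """Request body for an async job; the outer "request" wrapper is assumed."""
    return {"request": {"model": model,
                        "messages": [{"role": "user", "content": prompt}]}}

def submit_async(prompt: str, api_key: str) -> dict:
    """POST the job; the JSON response carries the request id used for polling."""
    req = urllib.request.Request(
        API_BASE,
        data=json.dumps(build_async_body(prompt)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The returned request_id would then be polled via a GET on the same path until the job completes (within the 7-day TTL).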
Perplexity AI Chat Completions endpoint using Sonar models. Provides web-grounded conversational AI responses with optional citations. The model searches the web to provide accurate, up-to-date answers with sources.

Supports various models optimized for different use cases:
- sonar: Fast, cost-effective for simple queries
- sonar-pro: Enhanced quality for complex questions
- sonar-reasoning-pro: Advanced reasoning capabilities

Features include:
- Web search grounding for accurate, current information
- Citations showing sources for claims
- Streaming responses for real-time interaction
- Configurable temperature, top_p, and other generation parameters

Note: presence_penalty and frequency_penalty cannot be used simultaneously.
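A sketch of building a /chat/completions request body that enforces the documented mutual exclusivity of presence_penalty and frequency_penalty (the builder function itself is illustrative, not part of the API):

```python
def build_chat_body(prompt, model="sonar", temperature=0.2,
                    presence_penalty=None, frequency_penalty=None):
    """Build a /chat/completions request body, rejecting the documented
    invalid combination of presence_penalty and frequency_penalty."""
    if presence_penalty is not None and frequency_penalty is not None:
        raise ValueError(
            "presence_penalty and frequency_penalty cannot be used together")
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    if presence_penalty is not None:
        body["presence_penalty"] = presence_penalty
    if frequency_penalty is not None:
        body["frequency_penalty"] = frequency_penalty
    return body
```

The body is then POSTed to https://api.perplexity.ai/chat/completions with a Bearer token.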
Generate vector embeddings for independent texts (queries, sentences, documents). This action takes one or more input texts and generates vector embeddings using Perplexity AI's embedding models. Embeddings are useful for semantic search, similarity matching, and downstream machine learning tasks.

Supported models:
- pplx-embed-v1-0.6b: Smaller, faster model (1024 dimensions)
- pplx-embed-v1-4b: Larger, more accurate model (2560 dimensions)

The output embeddings are base64-encoded for efficient transmission. Use the dimensions parameter to reduce embedding size for faster processing when full precision is not required (Matryoshka representation).
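The base64-encoded output can be unpacked back into floats; the little-endian float32 packing assumed here is a common convention but should be verified against the embeddings documentation:

```python
import base64
import struct

def decode_embedding(b64: str) -> list:
    """Decode a base64-encoded embedding, assuming little-endian float32 values."""
    raw = base64.b64decode(b64)
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))
```

For pplx-embed-v1-0.6b this would yield a 1024-element list; for pplx-embed-v1-4b, 2560 elements (or fewer if the dimensions parameter was used).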
Agent API - Orchestrates multi-step agentic workflows with built-in tools (web search, URL fetching), reasoning, and multi-model support. Use this action when you need to run complex queries that benefit from automatic web search, URL fetching, and multi-step reasoning. The agent can automatically decide when to use tools and how to synthesize responses. Supports various models including OpenAI models (e.g., gpt-5.2) and Perplexity's sonar models. Configure tools like web_search to enable automatic information gathering.
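A hypothetical sketch of an Agent API request body with web_search enabled; the "input" and "tools" field names are assumptions for illustration, not confirmed parameter names:

```python
def build_agent_body(prompt, model="sonar-pro", enable_web_search=True):
    """Hypothetical Agent API body; "input" and "tools" field names are
    assumptions, not confirmed against the Agent API schema."""
    body = {
        "model": model,
        "input": [{"role": "user", "content": prompt}],
    }
    if enable_web_search:
        # Registering a tool lets the agent decide on its own when to search.
        body["tools"] = [{"type": "web_search"}]
    return body
```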
Perplexity AI Search interfaces with Perplexity AI to perform search queries and return responses from a range of models. This action manages requests to Perplexity AI and processes the resulting completions, which may include text, citations, and images depending on the selected model and settings.

Key features include:
- Autoprompting to enhance and refine queries
- A choice of AI models for different content and performance requirements
- Temperature settings to control response randomness
- Top K and Top P filters to fine-tune response generation

Beta features include citations and images in results, and response streaming for dynamic interaction. Note: The parameters 'presence_penalty' and 'frequency_penalty' are mutually exclusive and cannot be used simultaneously.
Search API (Raw Results, No LLM) - Returns ranked web results directly from Perplexity's index without LLM processing. Faster and cheaper when you don't need a generated answer. Use this action when you need raw web search results without AI-generated summaries. Supports advanced filtering by date, domain, language, and geographic location.
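A sketch of a raw Search API request body with the filtering the description mentions; the filter parameter names (search_domain_filter, search_recency_filter) are assumptions modeled on Perplexity's search options and should be checked against the Search API reference:

```python
def build_search_body(query, max_results=10,
                      domain_filter=None, recency_filter=None):
    """Request body for the raw (no-LLM) Search API; filter parameter
    names below are assumptions, included only when set."""
    body = {"query": query, "max_results": max_results}
    if domain_filter:
        body["search_domain_filter"] = domain_filter   # e.g. ["reuters.com"]
    if recency_filter:
        body["search_recency_filter"] = recency_filter  # e.g. "week"
    return body
```

Because no LLM generation is involved, the response is a ranked list of results rather than a synthesized answer.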
Get started with Agent Jam and connect Perplexity AI along with 700+ other apps to supercharge your workflow.