Fabric provides a powerful and flexible command-line interface (CLI) for interacting with AI models and managing your patterns, contexts, and sessions. This guide details all available flags and options, categorized for easier navigation.
These flags control the general behavior of the fabric command, including setup, version information, and output handling.
- `-S, --setup`: Run setup for all reconfigurable parts of Fabric. See the Getting Started guide for installation and setup instructions.
- `--version`: Print the current version of Fabric.
- `-c, --copy`: Copy the output to the clipboard.
- `-o, --output=`: Specify a file path to save the output.
- `--output-session`: Output the entire session (including temporary ones) to the specified output file.
- `--config=`: Provide a path to a YAML configuration file for default settings.
- `--shell-complete-list`: Output raw lists without headers or formatting, useful for shell completion scripts.
- `--notification`: Send a desktop notification when the command completes.
- `--notification-command=`: Provide a custom command to execute for notifications, overriding built-in notifications.
- `-g, --language=`: Specify the language code for the chat output (e.g., `-g=en` for English, `-g=zh` for Chinese).
- `-h, --help`: Display the help message.

These options are used when running AI patterns or engaging in direct chat, allowing you to define input, context, and how the model processes information.
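As a quick illustration of the general output flags, a typical invocation pipes input in, saves the result to a file, and copies it to the clipboard. The file and pattern names here are placeholders (`summarize` stands in for any pattern you have installed):

```shell
# Pipe a file into Fabric, save the result, and copy it to the clipboard.
# "notes.txt" and "summarize" are placeholders; list real patterns with fabric -l.
cat notes.txt | fabric --pattern summarize --output=summary.md --copy
```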
- `-p, --pattern=`: Choose a specific pattern from the available patterns.
- `-v, --variable=`: Provide values for pattern variables (e.g., `-v=#role:expert -v=#points:30`).
- `-C, --context=`: Choose a context from the available contexts.
- `--session=`: Choose a session from the available sessions to continue a conversation.
- `-a, --attachment=`: Specify a path or URL for an attachment (e.g., for OpenAI image recognition messages).
- `-s, --stream`: Enable streaming mode to receive immediate, real-time results from the AI model.
- `-r, --raw`: Use the default settings of the model without sending chat options (like temperature, etc.) and employ the user role instead of the system role for patterns.
- `--input-has-vars`: Enable variable substitution within the user input itself.
- `--no-variable-replacement`: Disable pattern variable replacement entirely.
- `--dry-run`: Show what would be sent to the model without actually sending it, useful for debugging prompts.
- `--readability`: Convert HTML input into a clean, readable text format before sending to the model.
- `--strategy=`: Choose a prompt strategy from the available strategies (e.g., "Chain of Thought").

Control which AI model and vendor Fabric uses for a given task, along with various model-specific parameters.
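For example, pattern variables combine naturally with `--dry-run` to preview exactly what would be sent to the model. The pattern name `analyze_claims` and the variables `#role` and `#points` are hypothetical; substitute your own:

```shell
# Preview the fully rendered prompt without calling the model.
# "analyze_claims", #role, and #points are placeholders for your pattern's variables.
echo "Some input text" | fabric -p analyze_claims -v=#role:expert -v=#points:30 --dry-run
```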
- `-m, --model=`: Choose a specific AI model. See Advanced Features for more on Model Management.
- `-V, --vendor=`: Specify the vendor for the chosen model (e.g., `-V "LM Studio" -m openai/gpt-oss-20b`).
- `-t, --temperature=`: Set the model's temperature (default: 0.7), controlling creativity.
- `-T, --topp=`: Set the top P value (default: 0.9), another parameter for controlling output randomness.
- `-P, --presencepenalty=`: Set the presence penalty (default: 0.0); positive values encourage the model to introduce new topics.
- `-F, --frequencypenalty=`: Set the frequency penalty (default: 0.0), discouraging repeated phrases.
- `-e, --seed=`: Provide a seed for Large Language Model (LLM) generation to enable reproducible outputs.
- `--modelContextLength=`: Specify the model's context length (only affects Ollama models).
- `--disable-responses-api`: Disable the OpenAI Responses API (default: false).
- `--thinking=`: Set the reasoning/thinking level (e.g., off, low, medium, high, or a numeric token value for Anthropic or Google Gemini models). Learn more about AI Reasoning.

These commands are used to view available patterns, models, contexts, and other configurable elements.
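A sketch of pinning the model and sampling parameters for a more deterministic run; the model name is illustrative (list the real ones with `fabric -L`):

```shell
# Pin the model, lower the temperature, and fix a seed for more reproducible output.
# "gpt-4o" is an example model name; use one reported by fabric -L.
echo "Explain RAID levels briefly" | fabric -m gpt-4o -t 0.2 -T 0.9 -e 42 --stream
```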
- `-l, --listpatterns`: List all available patterns. Explore patterns further in Patterns and Prompting.
- `-L, --listmodels`: List all available AI models.
- `-x, --listcontexts`: List all available contexts.
- `--list-gemini-voices`: List all available Gemini Text-to-Speech voices.
- `--list-transcription-models`: List all available transcription models.
- `-X, --listsessions`: List all available sessions.
- `-n, --latest=`: Specify the number of latest patterns to list (default: 0, lists all).
- `--listextensions`: List all registered template extensions.
- `--liststrategies`: List all available prompt strategies.
- `--listvendors`: List all configured AI vendors.

Commands for managing your stored patterns, contexts, sessions, and template extensions.
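A minimal sketch of the listing flags, assuming `-n` limits the pattern listing as described above:

```shell
# Show only the five most recently added patterns, then the configured vendors.
fabric -l -n 5
fabric --listvendors
```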
- `-U, --updatepatterns`: Update built-in patterns from the Fabric repository. Learn more about managing patterns.
- `-d, --changeDefaultModel`: Change your default AI model and vendor settings.
- `-w, --wipecontext=`: Delete a specified context.
- `-W, --wipesession=`: Delete a specified session.
- `--printcontext=`: Print the content of a specified context.
- `--printsession=`: Print the content of a specified session.
- `--addextension=`: Register a new template extension from a specified configuration file path. See Advanced Features for more on extensions.
- `--rmextension=`: Remove a registered template extension by its name.

Flags for interacting with YouTube videos and playlists to extract transcripts, comments, and metadata.
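A sketch of a typical housekeeping sequence; the session name `triage` is a placeholder for one of your own stored sessions:

```shell
fabric -U                      # refresh built-in patterns from the repository
fabric --printsession=triage   # inspect a stored session ("triage" is a placeholder)
fabric --wipesession=triage    # delete it once it is no longer needed
```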
- `-y, --youtube=`: Provide a YouTube video or playlist URL to fetch its content. This feature is part of the Helper Applications.
- `--playlist`: Prefer processing the playlist from the URL over a single video if both IDs are present.
- `--transcript`: Grab the transcript from a YouTube video and send it to the chat (default behavior).
- `--transcript-with-timestamps`: Grab the transcript with timestamps from a YouTube video.
- `--comments`: Grab comments from a YouTube video.
- `--metadata`: Output video metadata.
- `--yt-dlp-args=`: Pass additional arguments directly to yt-dlp (e.g., `--cookies-from-browser brave`).

Options for extracting content from websites or performing web searches using Jina AI.
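For instance, a fetched transcript can be sent straight through a pattern. `VIDEO_ID` is a placeholder, and `extract_wisdom` is assumed to be a pattern installed locally:

```shell
# Send a video transcript through a pattern.
fabric -y "https://www.youtube.com/watch?v=VIDEO_ID" --transcript -p extract_wisdom

# Fetch only the comments instead:
fabric -y "https://www.youtube.com/watch?v=VIDEO_ID" --comments
```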
- `-u, --scrape_url=`: Scrape a website URL and convert its content to Markdown using Jina AI. This feature is part of the Helper Applications.
- `-q, --scrape_question=`: Perform a search query using Jina AI.
- `--search`: Enable the web search tool for supported models (Anthropic, OpenAI, Gemini). Learn more in Advanced Features.
- `--search-location=`: Set the location for web search results (e.g., 'America/Los_Angeles').

Flags specifically for generating images using supported AI models. More details in Advanced Features.
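A sketch of both scraping modes; the URL and pattern name are placeholders:

```shell
# Convert a page to Markdown and summarize it ("summarize" is a placeholder pattern).
fabric -u "https://example.com/article" -p summarize

# Or run a Jina AI search query directly:
fabric -q "retrieval-augmented generation best practices"
```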
- `--image-file=`: Save the generated image to a specified file path (e.g., 'output.png').
- `--image-size=`: Set image dimensions: 1024x1024, 1536x1024, 1024x1536, or auto (default: auto).
- `--image-quality=`: Set image quality: low, medium, high, or auto (default: auto).
- `--image-compression=`: Set the compression level (0-100) for JPEG/WebP formats (default: not set).
- `--image-background=`: Set the background type: opaque or transparent (default: opaque, only for PNG/WebP).

Tools for converting audio/video to text, and text to speech. See Advanced Features for more details.
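A sketch of an image-generation call, assuming a model with image support is configured; the prompt is ordinary piped input and the output path is a placeholder:

```shell
# Generate a transparent PNG at a fixed size ("fox.png" is a placeholder path).
echo "a minimalist flat-design fox logo" | fabric \
  --image-file=fox.png --image-size=1024x1024 \
  --image-quality=high --image-background=transparent
```

Note that `--image-background=transparent` only applies to PNG/WebP output, which is why the example saves to a `.png` file.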
- `--transcribe-file=`: Specify an audio or video file to transcribe.
- `--transcribe-model=`: Choose a model specifically for transcription (separate from the chat model).
- `--split-media-file`: Automatically split audio/video files larger than 25MB using ffmpeg before transcription.
- `--voice=`: Select a Text-to-Speech (TTS) voice name for supported models (e.g., Kore, Charon, Puck; default: Kore).

These options provide fine-grained control over how the AI model reasons and how its output is displayed. Explore AI Reasoning in Advanced Features.
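A sketch of combining transcription with a pattern; the file and pattern names are placeholders:

```shell
# Transcribe a recording (splitting it first if it exceeds 25MB) and pipe
# the resulting text through a pattern ("summarize" is a placeholder).
fabric --transcribe-file=meeting.mp4 --split-media-file -p summarize
```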
- `--suppress-think`: Suppress text enclosed within thinking tags in the model's output.
- `--think-start-tag=`: Define the custom start tag for thinking sections (default: `<think>`).
- `--think-end-tag=`: Define the custom end tag for thinking sections (default: `</think>`).

Flags for running Fabric's built-in REST API server. Access the Web Interface for more details.
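A sketch of hiding chain-of-thought output, including the case where a model wraps its reasoning in non-default tags (`<reasoning>` is an example tag, not a Fabric default):

```shell
# Hide the model's thinking sections, here assuming custom <reasoning> tags.
echo "Prove that 17 is prime" | fabric --suppress-think \
  --think-start-tag="<reasoning>" --think-end-tag="</reasoning>"
```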
- `--serve`: Start the Fabric REST API server.
- `--serveOllama`: Start the Fabric REST API server with Ollama-compatible endpoints.
- `--address=`: Specify the network address to bind the REST API (default: `:8080`).
- `--api-key=`: Set an API key to secure server routes.

The `--debug` flag controls the verbosity of runtime logging. This is particularly useful for understanding how Fabric processes requests and interacts with AI models.
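A sketch of serving the API on a non-default port; `FABRIC_API_KEY` is a hypothetical environment variable used here so the key stays out of shell history:

```shell
# Serve the REST API on port 9090, protected by an API key.
fabric --serve --address=:9090 --api-key="$FABRIC_API_KEY"
```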
- `--debug=0`: Off (default). No debug output.
- `--debug=1`: Basic. Provides minimal debugging information, useful for high-level troubleshooting.
- `--debug=2`: Detailed. Offers more verbose debugging, showing detailed steps of processing.
- `--debug=3`: Trace. The most verbose level, providing extensive logs for in-depth analysis.
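Debug levels pair well with `--dry-run` when diagnosing prompt construction, since nothing is actually sent to the model; the pattern name is again a placeholder:

```shell
# Trace how a request is built, without spending tokens, by pairing
# detailed logging with a dry run ("summarize" is a placeholder pattern).
echo "test input" | fabric -p summarize --debug=2 --dry-run
```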