# Admin (/apis/ar-io-node/admin)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Access several password-protected features and functions specific to your AR.IO Gateway.

# ArNS (/apis/ar-io-node/arns)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Get Arweave Name System (ArNS) data from the AR.IO Gateway

# Blocks (/apis/ar-io-node/blocks)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Get current or historical Arweave block information

# Chunks (/apis/ar-io-node/chunks)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Upload Arweave data chunks or get existing chunk offset information

# Data (/apis/ar-io-node/data)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Core data retrieval operations for accessing transaction and data item content. Supports manifest resolution, range requests, caching, and verification status. These endpoints serve as the primary interface for retrieving data from the Permaweb.

# Farcaster Frames (/apis/ar-io-node/farcaster-frames)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Retrieve and interact with Farcaster Frames using Arweave transactions.

# Gateway (/apis/ar-io-node/gateway)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Operations related to the AR.IO Gateway server itself, including health checks, metrics, and gateway-specific information

# Index Querying (/apis/ar-io-node/index-querying)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Get data from the AR.IO Gateway index using GQL

# AR.IO Gateway APIs (/apis/ar-io-node)

import {
  Server,
  Network,
  Route,
  Database,
  Search,
  FileText,
} from "lucide-react";

The AR.IO Gateway is the core software of the AR.IO Network, fulfilling the essential gateway responsibilities of accessing, caching, and querying data stored on Arweave. It provides a robust, decentralized infrastructure for interacting with the permanent web.
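As a quick sanity check that a gateway is reachable, you can query its `/ar-io/info` endpoint (part of the Gateway operations listed above). A minimal sketch, assuming `arweave.net` as the example gateway:

```js
// Query a gateway's /ar-io/info endpoint to confirm it is reachable
// and see basic gateway metadata. Any AR.IO gateway URL should work here.
const response = await fetch("https://arweave.net/ar-io/info");

if (!response.ok) {
  throw new Error(`Gateway info request failed: ${response.status}`);
}

const info = await response.json();
console.log("Gateway info:", info);
```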
## Core Responsibilities

The AR.IO Gateway handles fundamental operations for the Arweave ecosystem:

- **Data Access** - Retrieve transaction data, files, and metadata from Arweave
- **Caching** - Intelligent caching strategies for improved performance and availability
- **Data Querying** - Powerful search and indexing capabilities for Arweave data
- **ArNS Resolution** - Resolve human-readable names to Arweave transaction IDs
- **Network Management** - Coordinate with other gateways in the AR.IO network

## Advanced Features

Beyond basic gateway functionality, the AR.IO Gateway includes sophisticated capabilities:

- **Parquet Generation** - Convert Arweave data into optimized Parquet format for analytics
- **Data Verification** - Cryptographic verification of data integrity and authenticity
- **Index Querying** - Advanced search and filtering across Arweave datasets
- **Farcaster Frames** - Support for Farcaster protocol integration
- **Admin Controls** - Comprehensive gateway management and configuration

## API Categories

<Cards>
  <Card title="Data Access" description="Retrieve transaction data, files, and metadata from Arweave" href="/apis/ar-io-node/data" />
  <Card title="ArNS Resolution" description="Resolve human-readable names to Arweave transaction IDs" href="/apis/ar-io-node/arns" />
  <Card title="Transactions & Blocks" description="Access transaction details, block information, and network data" href="/apis/ar-io-node/transactions" />
  <Card title="Index Querying" description="Advanced search and filtering capabilities across Arweave data" href="/apis/ar-io-node/index-querying" />
  <Card title="Network & Gateway" description="Gateway status, network information, and peer coordination" href="/apis/ar-io-node/network" />
  <Card title="Admin & Management" description="Gateway configuration, pricing, and administrative controls" href="/apis/ar-io-node/admin" />
</Cards>

## Get Involved with AR.IO Gateways

<Cards>
  <Card title="Run a Gateway" description="Join the AR.IO network by operating your own gateway and earn rewards" href="/build/run-a-gateway/quick-start" />
  <Card title="Leverage Gateways with Wayfinder" description="Use Wayfinder SDK to access data through the distributed gateway network" href="/sdks/wayfinder" />
  <Card title="Join the Network" description="Learn about the AR.IO network and how to participate in the ecosystem" href="https://ar.io/network" />
</Cards>

## Getting Started

1. **Explore the API endpoints** - Review the comprehensive API documentation
2. **Test with sample requests** - Try out the interactive examples
3. **Choose your integration approach** - Direct API calls or SDK usage
4. **Consider running a gateway** - Contribute to the network infrastructure

The AR.IO Gateway APIs provide the foundation for building robust, decentralized applications on Arweave with reliable data access and advanced querying capabilities.

# Network (/apis/ar-io-node/network)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Get Arweave node info, peers, and network status

# Pricing (/apis/ar-io-node/pricing)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Get the price (in winston) for an amount of bytes
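For example, a minimal sketch of fetching the storage price for 1 MiB via the `/price/<bytes>` endpoint, which gateways proxy from the Arweave node HTTP API:

```js
// Fetch the price (in winston) for storing 1 MiB on Arweave
const bytes = 1024 * 1024;
const response = await fetch(`https://arweave.net/price/${bytes}`);
const winston = await response.text(); // response body is a plain-text winston amount

console.log(`Storing ${bytes} bytes costs ${winston} winston`);
```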
# Transactions (/apis/ar-io-node/transactions)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Submit a new Arweave transaction or get existing transaction information

# Wallets (/apis/ar-io-node/wallets)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Get Arweave wallet balance and last transaction information

# APIs Reference (/apis)

Explore the REST APIs available in the AR.IO ecosystem. Our services are built with a commitment to open source principles, and all repositories are publicly available under AGPL-3 licenses.

## Available Services

<Cards>
  <Card title="AR.IO Gateway" description="The core gateway software providing access to data on Arweave. Includes data retrieval, ArNS resolution, and network management." href="/apis/ar-io-node" />
  <Card title="Turbo" description="Upload and payment services providing fast, reliable data uploads to Arweave with instant confirmation and transparent pricing." href="/apis/turbo" />
</Cards>

## AR.IO Gateway APIs

The AR.IO Gateway serves as the primary interface for accessing Arweave data through the AR.IO network. Key endpoints include:

- **Data Access** - Retrieve transaction data and files from Arweave
- **ArNS Resolution** - Resolve human-readable names to Arweave transaction IDs
- **Network Information** - Query gateway health, pricing, and network status
- **Transaction Management** - Submit and track transactions
- **Admin Functions** - Gateway administration and configuration

## Turbo APIs

Turbo provides high-performance upload services for the Arweave network with additional features:

- **Data Upload** - Fast, reliable uploads with instant confirmation
- **Payment Processing** - Transparent pricing and payment management
- **Upload Tracking** - Monitor upload status and metadata
- **Credit Management** - Handle payment credits and billing

## Open Source Commitment

We believe strongly in open source development. All AR.IO services are:

- **Publicly Available** - Source code is open and accessible
- **AGPL-3 Licensed** - Ensuring software freedom and transparency
- **Community Driven** - Built with input from the developer community
- **Auditable** - Code can be reviewed and verified by anyone

## Getting Started

1. **Choose your service** - Select the APIs that fit your needs
2. **Review the documentation** - Each service has comprehensive API documentation
3. **Test endpoints** - Use the interactive examples to explore functionality
4. **Integrate** - Implement the APIs in your applications

For SDK alternatives to these REST APIs, visit our [SDK documentation](/sdks).

## Explore More

<Cards>
  <Card title="SDK Documentation" description="Use our TypeScript SDKs for easier integration and development" href="/sdks" />
  <Card title="Quick Start - Upload" description="Start uploading data to Arweave with our upload guides" href="/build/upload" />
  <Card title="Quick Start - Access" description="Learn how to retrieve and query data from Arweave" href="/build/access" />
  <Card title="Run a Gateway" description="Deploy your own AR.IO gateway and access these APIs directly" href="/build/run-a-gateway" />
</Cards>

# Turbo APIs (/apis/turbo)

Turbo provides high-performance upload and payment services for the Arweave network, offering fast, reliable data uploads with instant confirmation and transparent pricing.
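As a taste of the pricing flow, here is a minimal sketch of checking the Turbo cost for a given byte count; the `payment.ardrive.io` host and the `/v1/price/bytes/<byteCount>` route are assumptions based on the public Turbo payment service, so verify them against the Payment Service reference below:

```js
// Check the Turbo price for uploading 1 MiB (host and route are assumptions)
const byteCount = 1024 * 1024;
const response = await fetch(
  `https://payment.ardrive.io/v1/price/bytes/${byteCount}`
);
const price = await response.json();

console.log("Estimated upload cost:", price);
```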
## Services

<Cards>
  <Card title="Upload Service" description="Fast, reliable data uploads to Arweave with instant confirmation and metadata management" href="/apis/turbo/upload-service/upload" />
  <Card title="Payment Service" description="Transparent pricing, payment processing, and credit management for Turbo uploads" href="/apis/turbo/payment-service/payments" />
</Cards>

## Upload Service

The Turbo Upload Service provides high-performance data uploads to the Arweave network with features including:

- **Fast Uploads** - Optimized upload processing for quick data submission
- **Instant Confirmation** - Immediate upload confirmations and transaction IDs
- **Metadata Management** - Comprehensive data tagging and organization
- **Account Management** - User account and upload history tracking
- **Service Information** - Real-time service status and capabilities

Key endpoints include account management, upload processing, pricing information, and transaction data retrieval.

## Payment Service

The Turbo Payment Service handles all financial aspects of data uploads with transparent and flexible payment options:

- **Transparent Pricing** - Clear, upfront costs for all upload operations
- **Multiple Currencies** - Support for various payment methods and currencies
- **Credit Management** - Prepaid credits and balance tracking
- **Payment Processing** - Secure payment handling and transaction management
- **Approval Workflows** - Payment authorization and confirmation flows

Key endpoints include balance management, payment processing, pricing calculations, and credit redemption.

## Getting Started with Turbo

1. **Choose your service** - Upload for data submission, Payment for financial operations
2. **Review the API documentation** - Detailed endpoint specifications and examples
3. **Test with sample data** - Try uploads and payment flows with test data
4. **Integrate into your application** - Implement the APIs in your workflow

## Use the Turbo SDK

For a more convenient integration experience, consider using the Turbo SDK instead of direct API calls:

<Cards>
  <Card title="Interact with Turbo via the SDK" description="Use the Turbo SDK for simplified integration with built-in error handling, retries, and TypeScript support" href="/sdks/turbo-sdk/events" />
</Cards>

The SDK provides a higher-level interface with built-in error handling, automatic retries, and full TypeScript support, making it easier to integrate Turbo services into your applications.

# Approvals (/apis/turbo/payment-service/approvals)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Credit sharing and approval management

# Balance (/apis/turbo/payment-service/balance)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Account balance and credit management

# Currencies (/apis/turbo/payment-service/currencies)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Supported currencies and exchange rates

# Info (/apis/turbo/payment-service/info)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Service information and metadata

# Payments (/apis/turbo/payment-service/payments)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Payment processing and top-up operations

# Pricing (/apis/turbo/payment-service/pricing)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Pricing and cost calculation endpoints

# Redemption (/apis/turbo/payment-service/redemption)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Credit redemption and gift processing

# Account (/apis/turbo/upload-service/account)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Account balance and wallet information

# Pricing (/apis/turbo/upload-service/pricing)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Pricing calculation endpoints

# Service Info (/apis/turbo/upload-service/service-info)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Service information and health endpoints

# Transaction Data (/apis/turbo/upload-service/transaction-data)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Transaction status and metadata retrieval

# Upload (/apis/turbo/upload-service/upload)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

Data item upload endpoints (single and multi-part)

# Arweave Name System (ArNS) (/build/access/arns)

ArNS provides **human-readable URLs** for your Arweave data, making it easy to share and remember permanent addresses.

## What is ArNS?

ArNS is a naming system that allows you to register human-readable names that point to your Arweave transactions. Instead of sharing long transaction IDs, you can use memorable URLs.

**Example:**

- **Before:** `https://arweave.net/bVLEkL1SOPFCzIYi8T_QNnh17VlDp4RylU6YTwCMVRw`
- **After:** `https://myapp.arweave.net`

**Learn More:** For detailed information about ArNS architecture and how it works, see our [ArNS Documentation](/learn/arns).

## Get an ArNS Name

The easiest way to get an ArNS name is via [arns.ar.io](https://arns.ar.io), which supports multiple payment methods:

- **Fiat payments** - Credit cards and bank transfers
- **Turbo Credits** - Use existing Turbo credits
- **ARIO tokens** - Pay with ARIO cryptocurrency

**Alternative registration methods:**

- **[Wander Chrome Extension](https://chrome.google.com/webstore/detail/wander)** - Browser-based registration
- **Wander Mobile App** - Register on iOS and Android
- **AR.IO SDK** - Programmatic registration using the `buyRecord` API

### Using the AR.IO SDK

For developers, you can register ArNS names programmatically. Note that `buyRecord` is a write operation and requires a signer:

```js
import { ARIO, ArweaveSigner } from "@ar.io/sdk";

// jwk: your Arweave wallet key file, loaded elsewhere
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });

// Buy a record with Turbo Credits or ARIO tokens
const result = await ario.buyRecord({
  name: "my-domain",
  years: 1,
  // Payment method: 'turbo-credits' or 'ario-tokens'
});

console.log("Record purchased:", result);
```

**Learn More:** For a complete list of AR.IO SDK APIs, see the [ArNS SDK Documentation](/sdks/ar-io-sdk/arweave-name-system-arns).
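Before buying, you may want to check whether a name is already registered. A minimal sketch using the SDK's read API (no signer required); the exact response shape may vary by SDK version:

```js
import { ARIO } from "@ar.io/sdk";

const ario = ARIO.mainnet();

// Look up an existing ArNS record; an unregistered name returns no record
const record = await ario.getArNSRecord({ name: "my-domain" });
console.log("Existing record:", record);
```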
## Fetching Data via ArNS

Once you've set up your ArNS name, fetch data using standard HTTP requests:

```js
// Fetch content from your ArNS name
const response = await fetch("https://my-data.arweave.net");

if (!response.ok) {
  throw new Error(`HTTP error! status: ${response.status}`);
}

const data = await response.text();
console.log(data);
```

## Why Use ArNS?

ArNS provides significant advantages for accessing data on Arweave:

**Decentralized Data Index**

- ArNS creates a decentralized index of data accessible through any gateway in the AR.IO network
- No single point of failure - names resolve across all participating gateways
- Censorship-resistant access to your content

**Flexible Data Management**

- **Permanent references** - Keep stable URLs even when updating underlying data
- **Replaceable data** - Point names to new transaction IDs as content evolves
- **Undernames** - Organize related content under a single name using underscores (e.g., `v2_myapp.arweave.net`, `docs_myapp.arweave.net`)

**Supporting Network Decentralization**

- ArNS purchases contribute to the protocol balance
- Fees reward AR.IO gateway operators for participating in the network
- This economic model preserves decentralized access to data on Arweave
- Your name registration helps maintain the infrastructure that serves your content

## Next Steps

<Cards>
  <Card>Register your own human-readable name on ArNS.</Card>
  <Card>Learn how to participate in the AR.IO ecosystem.</Card>
  <Card>Advanced gateway routing for production applications.</Card>
</Cards>

# Fetch Data (via REST API) (/build/access/fetch-data)

The simplest way to access data on Arweave is through **HTTP requests** to gateways. This method works in any web browser and requires no additional setup.

## Fetching Data from Gateways

Gateways are the most performant way to fetch data from Arweave, providing significant advantages over accessing Arweave nodes directly.

**Why Gateways Are Faster:**

- **Content Caching** - Pre-cached data for instant retrieval
- **Data Indexing** - Fast search and query capabilities
- **Network Optimization** - Distributed infrastructure for better performance
- **Content Delivery** - Optimized serving with compression and CDN features

## REST APIs for Fetching Data

Gateways support multiple API endpoints for accessing data:

### Standard Endpoint

Access any transaction using this URL structure:

```
https://<gateway>/<transaction-id>
```

**Examples:**

- `https://arweave.net/bVLEkL1SOPFCzIYi8T_QNnh17VlDp4RylU6YTwCMVRw`
- `https://permagate.io/FguFk5eSth0wO8SKfziYshkSxeIYe7oK9zoPN2PhSc0`

### Raw Data Endpoint

For raw data access that bypasses manifest path resolution:

```
https://<gateway>/raw/<transaction-id>
```

This endpoint returns the raw data bytes without resolving manifest paths, useful when you need the exact stored data.

**Learn More:** For complete API documentation and testing, see the [AR.IO Node Data APIs](/apis/ar-io-node/data).
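To see the difference in practice, here is a minimal sketch fetching the same (placeholder) transaction through both endpoints:

```js
const txId = "your-transaction-id"; // placeholder transaction ID
const gateway = "https://arweave.net";

// Standard endpoint: resolves manifest paths if the transaction is a manifest
const resolved = await fetch(`${gateway}/${txId}`);

// Raw endpoint: returns the stored bytes exactly as uploaded
const raw = await fetch(`${gateway}/raw/${txId}`);

console.log("standard:", resolved.status, resolved.headers.get("content-type"));
console.log("raw:", raw.status, raw.headers.get("content-type"));
```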
## Sandboxing

AR.IO gateways implement security measures by redirecting requests to sandbox subdomains for enhanced browser security.

**Why Redirects Happen:**

- **Security Isolation** - Content is served from isolated sandbox environments
- **CSP Protection** - Prevents cross-site scripting attacks
- **Resource Isolation** - Limits potential security vulnerabilities
- **Browser Sandboxing** - Leverages same-origin policy for enhanced security

**What to Expect:**

- Initial request: `https://arweave.net/transaction-id`
- Redirects to: `https://<txid-derived-subdomain>.arweave.net/transaction-id` (or similar)
- Final content served from the sandbox subdomain

**Important:** Always follow redirects in your applications - the final sandbox URL contains the actual content.

**Learn More:** For detailed information about how browser sandboxing works and why it's important for security, see our [Browser Sandboxing](/build/advanced/sandboxing) documentation.

## Using in Applications

**JavaScript Example with Fetch:**

```js
// Fetch data from Arweave (follows redirects automatically)
const response = await fetch("https://arweave.net/your-transaction-id", {
  redirect: "follow", // Follow redirects automatically
});

if (!response.ok) {
  throw new Error(`HTTP error! status: ${response.status}`);
}

const data = await response.text();
console.log(data);
```

**HTML Example:**

```html
<!-- Embed Arweave-hosted content directly; the transaction ID is a placeholder -->
<img src="https://arweave.net/your-transaction-id" alt="Arweave-hosted image" />
```

## Manifests

For organized file collections, use manifests to create friendly path-based URLs:

```
https://arweave.net/<manifest-id>/path/to/file
```

**Example:**

- `https://arweave.net/X8Qm…AOhA/index.html`
- `https://arweave.net/X8Qm…AOhA/styles.css`
- `https://arweave.net/X8Qm…AOhA/assets/logo.png`

[Learn more about manifests](/build/upload/manifests)

## Next Steps

<Cards>
  <Card>Discover data by searching with tags, metadata, and filters.</Card>
  <Card>Set up a gateway to serve and cache your specific data.</Card>
  <Card>Start uploading your data to Arweave's permanent storage.</Card>
  <Card>Automatically route requests to the best performing gateway.</Card>
</Cards>

# Find Data (via GraphQL) (/build/access/find-data)

Use **GraphQL** to **find and identify** Arweave data with powerful search and filtering capabilities. GraphQL is used for discovery - you query to get transaction IDs, then use those IDs to fetch the actual data.

**GraphQL is for Discovery, Not Direct Access.** GraphQL finds data; it doesn't access it directly. Use GraphQL to get transaction IDs, then use those IDs with the REST API to fetch the actual data.

## How GraphQL Works

GraphQL on Arweave follows a two-step process:

1. **Find** - Query GraphQL to discover transactions by tags, metadata, owner, or other criteria
2. **Fetch** - Use the transaction IDs from your query results to retrieve the actual data via the REST API

This separation allows for powerful data discovery while keeping data retrieval fast and efficient.

## GraphQL Providers

- **arweave.net** - `https://arweave.net/graphql` - Comprehensive indexing of all Arweave data
- **Goldsky** - `https://arweave-search.goldsky.com/graphql` - High-performance GraphQL service with full data coverage

**AR.IO Gateways:** AR.IO gateways support the `/graphql` endpoint, but they only return data they've indexed. If you're uploading data and want it unbundled and indexed, you can run a gateway and configure it to unbundle your data, or post data items/bundles via the gateway's APIs (recommended). [Learn more](/build/run-a-gateway/manage/filters).

## Quick Start

The easiest way to get started is using the interactive GraphQL playground:

1. Navigate to [https://arweave.net/graphql](https://arweave.net/graphql) in your browser
2. Enter your GraphQL query in the interface
3. Press the "play" button to execute and see results
## Basic Query Structure

Try this example query in the playground - it fetches the most recent 10 HTML pages from "MyApp":

```graphql
query {
  transactions(
    tags: [
      { name: "Content-Type", values: ["text/html"] }
      { name: "App-Name", values: ["MyApp"] }
    ]
    sort: HEIGHT_DESC
    first: 10
  ) {
    edges {
      node {
        id
        tags {
          name
          value
        }
        data {
          size
        }
      }
    }
  }
}
```

## Example Queries

Here's how to find videos using GraphQL:

```js
const query = `
  query {
    transactions(
      tags: [{ name: "Content-Type", values: ["video/mp4"] }]
      first: 10
    ) {
      edges {
        node {
          id
          tags { name value }
          data { size }
        }
      }
    }
  }
`;

const response = await fetch("https://arweave.net/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
});

const data = await response.json();
const videos = data.data.transactions.edges;

// This returns transaction IDs that you can use with HTTP requests
console.log(
  "Found video IDs:",
  videos.map((v) => v.node.id)
);
```

**Find transactions by owner:**

```js
const query = `
  query {
    transactions(owners: ["your-wallet-address"], first: 10) {
      edges {
        node {
          id
          tags { name value }
        }
      }
    }
  }
`;

const response = await fetch("https://arweave.net/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
});

const data = await response.json();
const transactions = data.data.transactions.edges;
console.log("Found transactions:", transactions.map((t) => t.node.id));
```

**Find transactions by block range:**

```js
const query = `
  query {
    transactions(block: { min: 1000000, max: 1100000 }, first: 10) {
      edges {
        node {
          id
          block { height }
        }
      }
    }
  }
`;

const response = await fetch("https://arweave.net/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
});

const data = await response.json();
const transactions = data.data.transactions.edges;
console.log(
  "Found transactions in block range:",
  transactions.map((t) => ({ id: t.node.id, height: t.node.block.height }))
);
```

**Paginate with cursors:**

```js
// First page
const query = `
  query {
    transactions(
      tags: [{ name: "App-Name", values: ["MyApp"] }]
      first: 10
    ) {
      pageInfo { hasNextPage }
      edges {
        cursor
        node {
          id
          tags { name value }
        }
      }
    }
  }
`;

const response = await fetch("https://arweave.net/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
});

const data = await response.json();
const { edges, pageInfo } = data.data.transactions;

// Next page using cursor
if (pageInfo.hasNextPage) {
  const nextQuery = `
    query($cursor: String) {
      transactions(
        tags: [{ name: "App-Name", values: ["MyApp"] }]
        after: $cursor
        first: 10
      ) {
        pageInfo { hasNextPage }
        edges {
          cursor
          node {
            id
            tags { name value }
          }
        }
      }
    }
  `;

  const nextResponse = await fetch("https://arweave.net/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: nextQuery,
      variables: { cursor: edges[edges.length - 1].cursor },
    }),
  });

  const nextData = await nextResponse.json();
  console.log(nextData.data);
}
```

## Pagination

As of September 2025, GraphQL endpoints have different limits:

- **arweave.net**: Supports queries up to **1,000 items at once**
- **Goldsky**: Supports queries up to **100 items at once**

For larger datasets, use cursor-based pagination to navigate through results.
**How Pagination Works:**

- Use the `first` parameter to specify page size (max 1,000 for arweave.net, max 100 for Goldsky)
- Use `pageInfo.hasNextPage` to check if more results exist
- Use the `cursor` from the last item with the `after` parameter for the next page

```js
let allTransactions = [];
let hasNextPage = true;
let cursor = null;

while (hasNextPage) {
  const query = `
    query($cursor: String) {
      transactions(
        tags: [{ name: "App-Name", values: ["MyApp"] }]
        first: 100
        ${cursor ? "after: $cursor" : ""}
      ) {
        pageInfo { hasNextPage }
        edges {
          cursor
          node {
            id
            tags { name value }
          }
        }
      }
    }
  `;

  const response = await fetch("https://arweave.net/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: cursor ? { cursor } : {} }),
  });

  const data = await response.json();
  const { edges, pageInfo } = data.data.transactions;

  allTransactions.push(...edges);
  hasNextPage = pageInfo.hasNextPage;
  cursor = edges.length > 0 ? edges[edges.length - 1].cursor : null;

  console.log(
    `Loaded ${edges.length} transactions. Total: ${allTransactions.length}`
  );
}

console.log(`Found ${allTransactions.length} total transactions`);
```

## Query Optimization Tips

Follow these guidelines for optimal performance:

**Specificity:**

- Use the most precise tags possible to narrow search scope
- Query with essential tags only to reduce processing time

**Schema Design:**

- Design your app's schema to reflect query patterns
- Use tags that encapsulate frequent combinations of criteria

**Include Non-tag Fields:**

- Add fields like `owner` to refine your search
- This makes queries more efficient and targeted

**Order Your Tags:**

- Arrange tags from most specific to most general
- This leverages Arweave's indexing more effectively

**Example Optimized Query:**

```js
// Well-optimized query with specific tags and useful fields
const query = `
  query {
    transactions(
      tags: [
        { name: "App-Name", values: ["MyApp"] }
        { name: "Content-Type", values: ["application/json"] }
        { name: "Version", values: ["1.0"] }
      ]
      owners: ["your-wallet-address"]
      first: 20
    ) {
      edges {
        node {
          id
          data { size type }
          tags { name value }
          block { height timestamp }
          owner { address }
        }
      }
    }
  }
`;
```

## Next Steps

<Cards>
  <Card>Learn how to retrieve the actual data using transaction IDs.</Card>
  <Card>Set up a gateway to index and serve your specific data.</Card>
  <Card>Start uploading your data to Arweave's permanent storage.</Card>
  <Card>Automatically route requests to the best performing gateway.</Card>
</Cards>

# Access Data (/build/access)

Once data is stored on Arweave, it's permanently available. Here's how to access it efficiently for your applications.

## Access Methods

Different methods serve different needs. Each provides unique capabilities for retrieving data from Arweave.
<Cards>
  <Card href="/build/access/find-data">
    Search and discover data on Arweave. Query by tags and metadata. Filter by app, owner, timestamp. Get transaction IDs for fetching.
  </Card>
  <Card href="/build/access/fetch-data">
    Retrieve data bytes from Arweave. REST API endpoints. GET arweave.net/[txId]. Returns raw data/files.
  </Card>
  <Card href="/build/access/arns">
    Assign names to data and apps. Create names like ardrive.ar.io. Point to any Arweave data. Update targets as needed.
  </Card>
</Cards>

## Common Access Patterns

**Finding Data**

- Search for data by tags, owner, or timestamp
- Discover content from specific applications
- Get transaction IDs for data retrieval

**Fetching Data**

- Retrieve the actual files/data using transaction IDs
- Access data via REST API: `GET arweave.net/[txId]`
- Stream large files efficiently

**Naming with ArNS**

- Register memorable names for your apps and data
- Create permanent links like `ardrive.ar.io`
- Update where names point without changing the URL

## Quick Example: Find and Fetch

### Find Data

Use GraphQL to search for data and get transaction IDs:

```graphql
query {
  transactions(
    tags: [{ name: "App-Name", values: ["ArDrive"] }]
    first: 1
  ) {
    edges {
      node {
        id
      }
    }
  }
}
```

### Fetch Data

Use the transaction ID to retrieve the actual data:

```bash
curl https://arweave.net/[transaction-id-from-above]
```

## Additional Access Options

# Wayfinder (/build/access/wayfinder)

Wayfinder is a client-side routing and verification protocol that provides **decentralized, cryptographically verified access** to data stored on Arweave via the AR.IO Network.

## What is Wayfinder?

Wayfinder solves the challenge of reliable data access on the permaweb by:

- **Intelligent Routing** - Automatically selects the best gateway for each request
- **Data Verification** - Cryptographically verifies data integrity
- **Decentralized Access** - Eliminates single points of failure
- **Seamless Integration** - Works behind the scenes for fast, reliable access

**Learn More:** For detailed information about Wayfinder architecture and how it works, see our [Wayfinder Documentation](/learn/wayfinder).

## Get Started

**Installation:**

```npm
npm install @ar.io/wayfinder-core @ar.io/sdk
```

**Basic Usage:**

```js
import { createWayfinderClient } from "@ar.io/wayfinder-core";
import { ARIO } from "@ar.io/sdk";

// Create wayfinder with default settings
const wayfinder = createWayfinderClient({
  ario: ARIO.mainnet(),
});

// Fetch data using ar:// protocol
try {
  const response = await wayfinder.request("ar://transaction-id");
  const data = await response.text();
  console.log("Data:", data);
} catch (error) {
  console.error("Failed to fetch data:", error);
}
```

**Full API Reference:** For complete documentation of all Wayfinder core APIs, see the [Wayfinder Core SDK Reference](/sdks/wayfinder/wayfinder-core).

## React Integration

For React applications, use the wayfinder-react package:

```npm
npm install @ar.io/wayfinder-react @ar.io/sdk
```

```jsx
import { useEffect, useState } from "react";
import { WayfinderProvider, useWayfinderRequest } from "@ar.io/wayfinder-react";

function App() {
  return (
    // Wrap your app with the provider (configuration omitted here)
    <WayfinderProvider>
      <YourComponent txId="your-transaction-id" />
    </WayfinderProvider>
  );
}

function YourComponent({ txId }) {
  const request = useWayfinderRequest();
  const [data, setData] = useState(null);

  useEffect(() => {
    (async () => {
      const response = await request(`ar://${txId}`, {
        verificationSettings: {
          enabled: true,
          strict: true,
        },
      });
      const data = await response.arrayBuffer();
      setData(data);
    })();
  }, [request, txId]);

  return <div>{data && <pre>{new TextDecoder().decode(data)}</pre>}</div>;
}
```

**Full API Reference:** For complete documentation of all Wayfinder React APIs, see the [Wayfinder React SDK Reference](/sdks/wayfinder/wayfinder-react).

## Why Use Wayfinder?
Wayfinder eliminates centralized points of failure by distributing data access across the decentralized AR.IO Network, reducing dependency on arweave.net and providing advanced capabilities for production applications:

**Maximum Reliability**

- Intelligent gateway selection eliminates single points of failure
- Automatic failover ensures data is always accessible
- Built-in retry mechanisms handle network issues gracefully

**Data Verification**

- Cryptographic verification ensures data integrity
- Multiple verification strategies protect against tampering
- Trust-but-verify approach validates all responses

**Performance Optimization**

- Fastest-ping routing selects optimal gateways
- Round-robin distribution balances load across the network
- Caching strategies reduce latency for frequently accessed data

**Production Ready**

- Developer-friendly APIs with React integration
- Comprehensive error handling and logging
- Configurable routing and verification strategies

## Next Steps

<Cards>
  <Card>Start building with the Wayfinder SDK.</Card>
  <Card>Use the REST API for basic data retrieval.</Card>
  <Card>Use GraphQL to search for data.</Card>
  <Card>Create memorable names for your Arweave data.</Card>
</Cards>

# Creating Drives (/build/advanced/arfs/creating-drives)

To properly create a new drive, two new entities need to be created: a new Drive entity and a new Folder entity to serve as the root folder of that drive.

## New Drive Entity

- The user must specify a `name` for the drive, which is stored within the Drive Entity's metadata JSON.
- ArDrive generates a new unique uuidv4 for the drive entity's `Drive-Id`.
- ArDrive also generates a new unique uuidv4 for the drive entity's `rootFolderId`, which refers to the `Folder-Id` of the new folder entity that will be created.
  - This `rootFolderId` is stored within the Drive Entity's metadata JSON.
- Drive Entity metadata transactions must have `Entity-Type: "drive"`.
- ArDrive uses the current local system time, as seconds since Unix epoch, for the Drive Entity's `Unix-Time`.
- The Drive Entity's `Drive-Privacy` must also be set to `public` or `private` in order for its subfolders and files to have the correct security settings.
- If the drive is private:
  - Its `Cipher` tag must be filled out with the correct encryption algorithm (currently `AES256-GCM`).
  - Its `Cipher-IV` tag must be filled out with the generated Initialization Vector for the private drive.
  - The ArFS client must derive the Drive Key and encrypt the Drive Entity's metadata JSON using the assigned `Cipher` and `Cipher-IV`.

## New Root Folder Entity

- The `name` of the drive and folder entities must be the same.
  - This `name` is stored within the Folder Entity's metadata JSON.
- The Folder Entity's `Folder-Id` must match the `rootFolderId` previously created for the Drive Entity.
- The Folder Entity's `Drive-Id` must match the `Drive-Id` previously created for the Drive Entity.
- The Folder Entity must not include a `Parent-Folder-Id` tag.
  - This is how it is determined to be the root folder for a drive.
- Folder Entity metadata transactions must have `Entity-Type: 'folder'`.
- The client gets the user's local time for the `Unix-Time` tag, represented as seconds since Unix epoch.
- Public folders must have the content type `Content-Type: "application/json"`.
- If the folder is private:
  - Its `Cipher` tag must be filled out with the correct encryption algorithm (currently `AES256-GCM`).
  - Its `Cipher-IV` tag must be filled out with the generated Initialization Vector for the private folder.
  - Its content type must be `Content-Type: "application/octet-stream"`.
  - The ArFS client must encrypt the Folder Entity's metadata JSON using the assigned `Cipher` and `Cipher-IV`.

## Creating Files

Files in ArFS require two separate transactions:

1. **File Metadata Transaction** - Contains file information and references
2. **File Data Transaction** - Contains the actual file data

### File Metadata Transaction

```json
ArFS: "0.15",
Cipher?: "AES256-GCM",
Cipher-IV?: "<base64 initialization vector>",
Content-Type: "<application/json | application/octet-stream>",
Drive-Id: "<drive uuid>",
Entity-Type: "file",
File-Id: "<file uuid>",
Parent-Folder-Id: "<parent folder uuid>",
Unix-Time: "<seconds since Unix epoch>"

Metadata JSON
{
  "name": "<file name>",
  "size": <file size in bytes>,
  "lastModifiedDate": <last modified date in milliseconds>,
  "dataTxId": "<data transaction id>",
  "dataContentType": "<the mime type of the file>",
  "isHidden": false,
  "pinnedDataOwner": "<address of the original data owner>"
}
```

### File Data Transaction

```json
Cipher?: "AES256-GCM",
Cipher-IV?: "<base64 initialization vector>",
Content-Type: "<the mime type of the file>",

{ File Data - Encrypted if private }
```

## Creating Folders

Folders are simpler than files as they only require a metadata transaction:

```json
ArFS: "0.15",
Cipher?: "AES256-GCM",
Cipher-IV?: "<base64 initialization vector>",
Content-Type: "<application/json | application/octet-stream>",
Drive-Id: "<drive uuid>",
Entity-Type: "folder",
Folder-Id: "<folder uuid>",
Parent-Folder-Id?: "<parent folder uuid>",
Unix-Time: "<seconds since Unix epoch>"

Metadata JSON
{
  "name": "<folder name>",
  "isHidden": false
}
```

## Creating Snapshots

Snapshots provide a way to quickly synchronize drive state by rolling up all metadata into a single transaction:

```json
ArFS: "0.15",
Drive-Id: "<drive uuid>",
Entity-Type: "snapshot",
Snapshot-Id: "<snapshot uuid>",
Content-Type: "application/json",
Block-Start: "<starting block height>",
Block-End: "<ending block height>",
Data-Start: "<starting block of data>",
Data-End: "<ending block of data>",
Unix-Time: "<seconds since Unix epoch>"
```

## Implementation Example

Here's a practical example of creating a complete drive structure:

```mermaid
sequenceDiagram
    participant User
    participant Client
    participant Wallet
    participant Arweave

    User->>Client: Create drive "My Project"
    Client->>Client: Generate drive UUID
    Client->>Client: Generate root folder UUID
    Client->>Wallet: Request signature (if private)
    Wallet->>Client: Return signature
    Client->>Client: Derive drive key (if private)
    Client->>Client: Encrypt metadata (if private)
    Client->>Arweave: Upload drive entity
    Client->>Arweave: Upload root folder entity

    User->>Client: Create folder "Documents"
    Client->>Client: Generate folder UUID
    Client->>Client: Encrypt folder metadata (if private)
    Client->>Arweave: Upload folder entity

    User->>Client: Upload file "readme.txt"
    Client->>Client: Generate file UUID
    Client->>Client: Encrypt file metadata (if private)
    Client->>Client: Encrypt file data (if private)
    Client->>Arweave: Upload file metadata
    Client->>Arweave: Upload file data
```

## Best Practices

### Naming Conventions

- Use descriptive names for drives, folders, and files
- Avoid special characters that might cause issues
- Keep names under 255 characters
- Use consistent casing

### Organization

- Create logical folder structures
- Use meaningful folder names
- Implement proper versioning
- Document your structure

### Performance

- Batch operations when possible
- Use efficient queries
- Implement caching
- Consider file sizes

### Security

- Use strong passwords for private drives
- Implement proper key management
- Follow encryption best practices
- Regular security audits

## Error Handling

When creating ArFS entities, handle these common scenarios:

### Transaction Failures

- Implement retry logic for failed uploads (see the sketch after this list)
- Validate data before uploading
- Check transaction confirmation status

### Validation Errors

- Verify required tags are present
- Check data format compliance
- Validate UUID formats

### Network Issues

- Implement timeout handling
- Provide user feedback
- Graceful degradation
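A minimal sketch of the retry pattern described above, with a hypothetical `uploadEntity` function standing in for your actual upload call:

```js
// Retry a failed upload with exponential backoff (uploadEntity is hypothetical)
async function uploadWithRetry(uploadEntity, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await uploadEntity();
    } catch (error) {
      if (attempt === maxAttempts) throw error;
      const delayMs = 1000 * 2 ** (attempt - 1); // 1s, 2s, 4s, ...
      console.warn(`Upload attempt ${attempt} failed; retrying in ${delayMs}ms`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```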
## Next Steps

Now that you know how to create ArFS entities, learn how to work with them:

- [Reading Data](/build/advanced/arfs/reading-data) - Query and retrieve your ArFS data
- [Privacy & Encryption](/build/advanced/arfs/privacy) - Secure your data with private drives
- [Upgrading Private Drives](/build/advanced/arfs/upgrading-drives) - Update legacy drives to v0.15

# Data Model (/build/advanced/arfs/data-model)

Because of Arweave's permanent and immutable nature, traditional file structure operations such as renaming and moving files or folders cannot be accomplished by simply updating on-chain data. ArFS works around this by defining an append-only transaction data model based on the metadata tags found in the Arweave [Transaction Headers](https://docs.arweave.org/developers/server/http-api#transaction-format).

This model uses a bottom-up reference method, which avoids race conditions in file system updates. Each file contains metadata that refers to its parent folder, and each folder contains metadata that refers to its parent drive. A top-down data model would require the parent model (i.e. a folder) to store references to its children.

These defined entities allow the state of the drive to be constructed by a client to look and feel like a file system:

- Drive Entities contain folders and files
- Folder Entities contain other folders or files
- File Entities contain both the file data and metadata
- Snapshot Entities contain a state rollup of all entity metadata (drive, folder, file, and snapshot) within a drive

## Entity Relationships

The following diagram shows the high-level relationships between drive, folder, and file entities, and their associated data. More detailed information about each Entity Type can be found in the ArFS specification documentation.

```mermaid
graph TD
    A[Drive Entity] --> B[Root Folder]
    B --> C[Subfolder 1]
    B --> D[Subfolder 2]
    B --> E[File 1]
    C --> F[File 2]
    C --> G[File 3]
    D --> H[File 4]
    D --> I[Subfolder 3]
    I --> J[File 5]

    A --> K[Drive Metadata]
    B --> L[Folder Metadata]
    C --> M[Folder Metadata]
    D --> N[Folder Metadata]
    I --> O[Folder Metadata]
    E --> P[File Metadata + Data]
    F --> Q[File Metadata + Data]
    G --> R[File Metadata + Data]
    H --> S[File Metadata + Data]
    J --> T[File Metadata + Data]

    U[Snapshot Entity] --> V[Complete Drive State]
    V --> A
    V --> B
    V --> C
    V --> D
    V --> I
    V --> E
    V --> F
    V --> G
    V --> H
    V --> J
```

As you can see, each file and folder contains metadata which points to both the parent folder and the parent drive. The drive entity contains metadata about itself, but not its child contents, so clients must build drive states from the lowest level and work their way up.

## Metadata Format

Metadata stored in any Arweave transaction tag is defined in the following manner:

```json
{
  "name": "Example-Tag",
  "value": "example-data"
}
```

Metadata stored in the Transaction Data Payload follows JSON formatting like below:

```json
{
  "exampleField": "exampleData"
}
```

Fields with a `?` suffix are optional.

```json
{
  "name": "My Project",
  "description": "This is a sample project.",
  "version?": "1.0.0",
  "author?": "John Doe"
}
```

Enumerated field values (those which must adhere to certain values) are defined in the format "value 1 | value 2". All UUIDs used for Entity-Ids are based on the [Universally Unique Identifier](https://en.wikipedia.org/wiki/Universally_unique_identifier) standard. There are no requirements to list ArFS tags in any specific order.

## Building Drive State

To construct the current state of a drive, clients must:
1. **Query for all entities** associated with a specific `Drive-Id`
2. **Sort by block height** to establish chronological order
3. **Process entities bottom-up** starting with files and folders
4. **Build the hierarchy** by following parent-child relationships
5. **Handle conflicts** by using the most recent entity version

### Example Drive State Construction

```mermaid
sequenceDiagram
    participant Client
    participant Gateway
    participant Arweave

    Client->>Gateway: Query Drive-Id: abc123
    Gateway->>Client: Return all entities
    Client->>Client: Sort by block height
    Client->>Client: Process files first
    Client->>Client: Process folders
    Client->>Client: Process drive metadata
    Client->>Client: Build hierarchy tree
    Client->>Client: Resolve conflicts
    Client->>Client: Return complete drive state
```

## Entity Lifecycle

Each ArFS entity follows a specific lifecycle pattern:

### Creation

1. Generate a unique UUID for the entity
2. Create a metadata transaction with the required tags
3. For files: create a separate data transaction
4. Upload to the Arweave network

### Updates

1. Create a new entity with the same ID
2. Update metadata as needed
3. Upload the new transaction
4. Clients process both versions and use the latest

### Deletion

1. Mark the entity as hidden (`isHidden: true`)
2. Upload a new transaction
3. The entity remains in history but is hidden from the UI

## Data Integrity

ArFS ensures data integrity through:

- **Immutable transactions** - Once uploaded, data cannot be modified
- **Cryptographic signatures** - All transactions are signed by the owner
- **Version tracking** - Multiple versions of entities can exist
- **Conflict resolution** - Clients use block height and timestamps to resolve conflicts

## Performance Considerations

For large drives, consider these optimization strategies:

- **Use snapshots** for quick state reconstruction
- **Implement caching** for frequently accessed data
- **Batch operations** when possible
- **Query by date ranges** to limit data transfer

## Next Steps

Now that you understand the ArFS data model, learn how to work with it:

- [Privacy & Encryption](/build/advanced/arfs/privacy) - Secure your data with private drives
- [Creating Drives](/build/advanced/arfs/creating-drives) - Start building with ArFS
- [Reading Data](/build/advanced/arfs/reading-data) - Query and retrieve your data

# Entity Types (/build/advanced/arfs/entity-types)

## Overview

Arweave transactions provide for a separation between data and metadata about that data via the use of headers. Key-value tags in the headers provide expressive description of the data, as well as searchability via gateway GraphQL APIs.

ArFS adds an additional layer of separation between data and metadata by using separate transactions for ArFS metadata and, where applicable, ArFS file data. It also makes use of tags and data separation within an ArFS metadata transaction: data critical to tracking drive composition lives in the tag space of the metadata transaction, while most of the other metadata is encoded as JSON in the data body of the metadata transaction. In the case of private entities, JSON data and file data payloads are always encrypted according to the protocol processes defined below.

- Drive entities require a single metadata transaction, with standard Drive tags and encoded JSON with secondary metadata.
- Folder entities require a single metadata transaction, with standard Folder tags and an encoded JSON with secondary metadata.
- File entities require a metadata transaction, with standard File tags and an encoded Data JSON with secondary metadata relating to the file.
- File entities also require a second data transaction, which includes a limited set of File tags and the actual file data itself.
- Snapshot entities require a single transaction, which contains a Data JSON with all of the Drive's rolled-up ArFS metadata and standard Snapshot GQL tags that identify the Snapshot.

ArFS v0.14 introduces the `isHidden` property. `isHidden` is a boolean (true/false) that tells clients whether they should display the file or folder. Hidden files still exist and will be included in [snapshots](#snapshot), but should not be rendered by clients. If `isHidden` is not present, its value should be assumed false.

ArFS v0.15 introduces the `Signature-Type` metadata property on Drive entities, and a new entity type, `DriveSignature`.

## Drive

A drive is the highest-level logical grouping of folders and files. All folders and files must be part of a drive and reference the Drive ID of that drive. When creating a Drive, a corresponding "root" folder must be created as well. This separation of drive and folder entities enables features such as folder view queries, renaming, and linking.

```json
ArFS: "0.15",
Cipher?: "AES256-GCM",
Cipher-IV?: "<base64 initialization vector>",
Content-Type: "<application/json | application/octet-stream>",
Drive-Id: "<drive uuid>",
Drive-Privacy: "<public | private>",
Drive-Auth-Mode?: "password",
Entity-Type: "drive",
Signature-Type?: "1",
Unix-Time: "<seconds since Unix epoch>"

Metadata JSON
{
  "name": "<drive name>",
  "rootFolderId": "<uuid of the root folder>",
  "isHidden": false
}
```

## Drive-Signature

ArFS versions prior to v0.15 applied encryption to drive contents with a signing scheme that, while secure, is now deprecated in modern Arweave software wallets. ArFS v0.15 introduces an updated signing scheme compatible with these wallets, as well as "Drive Signatures", a new entity type that helps bridge the signature derivation schemes across ArFS versions.

A drive signature uses the v0.15 encryption scheme to encrypt and store the pre-v0.15 wallet signature for a private drive that is necessary for deriving the "drive key" for that drive. This allows for continued access to historical drive contents into the future.

```json
ArFS: "0.15",
Entity-Type: "drive-signature",
Signature-Format: "1",
Cipher?: "AES256-GCM",
Cipher-IV: "<base64 initialization vector>"

{ data: <encrypted "type 1" signature> }
```

The encrypted "type 1" signature for the drive must be provided in the `data` field of the transaction creating the drive-signature entity.

## Folder

A folder is a logical grouping of other folders and files. Folder entity metadata transactions without a parent folder id are considered the Drive Root Folder of their corresponding Drives. All other Folder entities must have a parent folder id. Since folders do not have underlying data, there is no Folder data transaction required.

```json
ArFS: "0.15",
Cipher?: "AES256-GCM",
Cipher-IV?: "<base64 initialization vector>",
Content-Type: "<application/json | application/octet-stream>",
Drive-Id: "<drive uuid>",
Entity-Type: "folder",
Folder-Id: "<folder uuid>",
Parent-Folder-Id?: "<parent folder uuid>",
Unix-Time: "<seconds since Unix epoch>"

Metadata JSON
{
  "name": "<folder name>",
  "isHidden": false
}
```

## File

A File contains uploaded data, like a photo, document, or movie. In the Arweave File System, a single file is broken into 2 parts - its metadata and its data.

A File entity metadata transaction does not include the actual File data. Instead, the File data must be uploaded as a separate transaction, called the File Data Transaction. The File JSON metadata transaction contains a reference to the File Data Transaction ID so that it can retrieve the actual data.
This separation allows for file metadata to be updated without requiring the file itself to be re-uploaded. It also ensures that private files can have their JSON Metadata Transaction encrypted as well, ensuring that no one without authorization can see either the file or its metadata.

```json
ArFS: "0.15",
Cipher?: "AES256-GCM",
Cipher-IV?: "<base64 initialization vector>",
Content-Type: "<application/json | application/octet-stream>",
Drive-Id: "<drive uuid>",
Entity-Type: "file",
File-Id: "<file uuid>",
Parent-Folder-Id: "<parent folder uuid>",
Unix-Time: "<seconds since Unix epoch>"

Metadata JSON
{
  "name": "<file name>",
  "size": <file size in bytes>,
  "lastModifiedDate": <last modified date in milliseconds>,
  "dataTxId": "<data transaction id>",
  "dataContentType": "<the mime type of the file>",
  "isHidden": false,
  "pinnedDataOwner": "<address of the original data owner>" # Optional
}
```

### Pinning Files

Since version 0.13, ArFS supports Pins. Pins are files whose data is any transaction already uploaded to Arweave, which may or may not be owned by the wallet that created the pin. When a new File Pin is created, the only transaction created is the Metadata Transaction. The `dataTxId` field points to the pinned transaction on Arweave, and the optional `pinnedDataOwner` field holds the address of the wallet that owns the original copy of the data transaction.

### File Data Transaction Example

The File Data Transaction contains limited information about the file, such as the information required to decrypt it, or the Content-Type (mime-type) needed to view it in the browser.

```json
Cipher?: "AES256-GCM",
Cipher-IV?: "<base64 initialization vector>",
Content-Type: "<the mime type of the file>",

{ File Data - Encrypted if private }
```

### File Metadata Transaction Example

The File Metadata Transaction contains the GQL Tags necessary to identify the file within a drive and folder. Its data contains the JSON metadata for the file. This includes the file name, size, last modified date, data transaction id, and data content type.

```json
ArFS: "0.15",
Cipher?: "AES256-GCM",
Cipher-IV?: "<base64 initialization vector>",
Content-Type: "<application/json | application/octet-stream>",
Drive-Id: "<drive uuid>",
Entity-Type: "file",
File-Id: "<file uuid>",
Parent-Folder-Id: "<parent folder uuid>",
Unix-Time: "<seconds since Unix epoch>",

{ File JSON Metadata - Encrypted if private }
```

## Snapshot

ArFS applications generate the latest state of a drive by querying for all ArFS transactions relating to a user's particular `Drive-Id`. This includes both paged queries for indexed ArFS data via GQL, as well as the ArFS JSON metadata entries for each ArFS transaction.

For small drives (fewer than 1,000 files), a few thousand requests for very small volumes of data can be completed relatively quickly and reliably. For larger drives, however, this results in long sync times to pull every piece of ArFS metadata when the local database cache is empty. It can also trigger rate-limiting delays from Arweave gateways.

Once a drive state has been completely and accurately generated, it can be rolled up into a single snapshot and uploaded as an Arweave transaction. ArFS clients can use GQL to find and retrieve this snapshot in order to rapidly reconstitute the total state of the drive, or a large portion of it. They can then query individual transactions performed after the snapshot.

This optional method offers convenience and resource efficiency when building the drive state, at the cost of paying for uploading the snapshot data. Using this method means a client will only have to iterate through a few snapshots instead of every transaction performed on the drive.

### Snapshot Entity Tags

Snapshot entities require the following tags. These are queried by ArFS clients to find drive snapshots, organize them together with any other transactions not included within them, and build the latest state of the drive.
```json
ArFS: "0.15",
Drive-Id: "<drive uuid>",
Entity-Type: "snapshot",
Snapshot-Id: "<snapshot uuid>",
Content-Type: "application/json",
Block-Start: "<starting block height>",
Block-End: "<ending block height>",
Data-Start: "<starting block of data>",
Data-End: "<ending block of data>",
Unix-Time: "<seconds since Unix epoch>"
```

### Snapshot Entity Data

A JSON data object must also be uploaded with every ArFS Snapshot entity. This data contains all ArFS Drive, Folder, and File metadata changes within the associated drive, as well as any previous Snapshots.

The Snapshot Data contains an array, `txSnapshots`. Each item includes both the GQL and ArFS metadata details of each transaction made for the associated drive within the snapshot's start and end period.

Each `txSnapshot` contains a `gqlNode` object, which uses the same GQL tags interface returned by the Arweave Gateway. It includes all of the important `block`, `owner`, `tags`, and `bundledIn` information needed by ArFS clients. It also contains a `dataJson` object which stores the correlated Data JSON for that ArFS entity.

For private drives, the `dataJson` object contains the JSON-string-escaped encrypted text of the associated file or folder. This encrypted text uses the file's existing `Cipher` and `Cipher-IV`. This ensures clients can decrypt this information quickly using the existing ArFS privacy protocols.

```json
{
  "txSnapshots": [
    {
      "gqlNode": {
        "id": "bWCvIc3cOzwVgquD349HUVsn5Dd1_GIri8Dglok41Vg",
        "owner": {
          "address": "hlWRbyJ6WUoErm3b0wqVgd1l3LTgaQeLBhB36v2HxgY"
        },
        "bundledIn": {
          "id": "39n5evzP1Ip9MhGytuFm7F3TDaozwHuVUbS55My-MBk"
        },
        "block": {
          "height": 1062005,
          "timestamp": 1669053791
        },
        "tags": [
          { "name": "Content-Type", "value": "application/json" },
          { "name": "ArFS", "value": "0.11" },
          { "name": "Entity-Type", "value": "drive" },
          { "name": "Drive-Id", "value": "f27abc4b-ed6f-4108-a9f5-e545fc4ff55b" },
          { "name": "Drive-Privacy", "value": "public" },
          { "name": "App-Name", "value": "ArDrive-App" },
          { "name": "App-Platform", "value": "Web" },
          { "name": "App-Version", "value": "1.39.0" },
          { "name": "Unix-Time", "value": "1669053323" }
        ]
      },
      "dataJson": "{\"name\":\"november\",\"rootFolderId\":\"71dfc1cb-5368-4323-972a-e9dd0b1c63a0\",\"isHidden\":false}"
    }
  ]
}
```

## Schema Diagrams

The following diagrams show complete examples of Drive, Folder, and File entity schemas.
### Public Drive

```mermaid
graph TD
    A[Drive Entity] --> B[Drive Metadata JSON]
    A --> C[Drive Tags]
    C --> D[ArFS: 0.15]
    C --> E[Entity-Type: drive]
    C --> F[Drive-Id: uuid]
    C --> G[Drive-Privacy: public]
    C --> H[Unix-Time: timestamp]
    B --> I[name: string]
    B --> J[rootFolderId: uuid]
    B --> K[isHidden: boolean]
```

### Private Drive

```mermaid
graph TD
    A[Drive Entity] --> B[Encrypted Drive Metadata JSON]
    A --> C[Drive Tags]
    C --> D[ArFS: 0.15]
    C --> E[Entity-Type: drive]
    C --> F[Drive-Id: uuid]
    C --> G[Drive-Privacy: private]
    C --> H[Drive-Auth-Mode: password]
    C --> I[Signature-Type: 1]
    C --> J[Cipher: AES256-GCM]
    C --> K[Cipher-IV: base64]
    C --> L[Content-Type: application/octet-stream]
    C --> M[Unix-Time: timestamp]
    B --> N[Encrypted JSON with name, rootFolderId, isHidden]
```

## Next Steps

Now that you understand the different ArFS entity types, explore how they work together:

- [Data Model](/build/advanced/arfs/data-model) - Learn how entities relate to each other
- [Privacy & Encryption](/build/advanced/arfs/privacy) - Understand how private entities work
- [Creating Drives](/build/advanced/arfs/creating-drives) - Start building with ArFS

# ArFS Protocol (/build/advanced/arfs)

Arweave File System, or "ArFS", is a data modeling, storage, and retrieval protocol designed to emulate common file system operations and to provide aspects of mutability to your data hierarchy on [Arweave](/learn/what-is-arweave)'s otherwise permanent, immutable data storage blockweave.

Due to Arweave's permanent, immutable, and public nature, traditional file system operations such as permissions, file/folder renaming and moving, and file updates cannot be done by simply updating the on-chain data model. ArFS works around this by implementing a privacy and encryption pattern and defining an append-only transaction data model using tags within [Arweave Transaction headers](https://docs.arweave.org/developers/server/http-api#transaction-format).

## Key Features

### File Structure

ArFS organizes files and folders using a hierarchical structure. Files are stored as individual transactions on the Arweave blockchain, while folders are metadata that reference these file transactions.

### Metadata

Each file and folder has associated metadata, such as the name, type, size, and modification timestamp. ArFS leverages Arweave's tagging system to store this metadata in a standardized format, which allows for easy querying and organization.

### File Permissions

ArFS supports public and private file permissions. Public files can be accessed by anyone on the network, while private files are encrypted using the owner's private key, ensuring only they can decrypt and access the content.

### File Versioning

ArFS supports versioning of files, allowing users to store multiple versions of a file and access previous versions at any time. This is achieved by linking new file transactions to previous versions through the use of metadata tags.

### Data Deduplication

To minimize storage redundancy and costs, ArFS employs data deduplication techniques. If a user tries to store a file that already exists on the network, the protocol will simply create a new reference to the existing file instead of storing a duplicate copy.

### Search and Discovery

ArFS enables users to search and discover files based on their metadata, such as file names, types, and tags. This is made possible by indexing the metadata stored within the Arweave blockchain, as illustrated in the query sketch below.
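For example, because every ArFS entity carries standardized tags, discovering a wallet's drives is an ordinary gateway GraphQL query. A minimal sketch (the wallet address is a placeholder):

```js
// Find drive entities owned by a wallet via a gateway's /graphql endpoint
const query = `
  query {
    transactions(
      tags: [{ name: "Entity-Type", values: ["drive"] }]
      owners: ["your-wallet-address"]
      first: 10
    ) {
      edges {
        node {
          id
          tags { name value }
        }
      }
    }
  }
`;

const response = await fetch("https://arweave.net/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
});

const drives = (await response.json()).data.transactions.edges;
console.log("Found drives:", drives.map((d) => d.node.id));
```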
### Interoperability

ArFS is designed to be interoperable with other decentralized applications and services built on the Arweave network. This allows for seamless integration and collaboration between different applications and users.

## Getting Started

To start using ArFS, you'll need to familiarize yourself with the Arweave ecosystem, acquire AR tokens to cover storage costs, and choose a compatible client or library to interact with the ArFS protocol.

## ArFS Version History

| Version | Date           | Release Notes                                                                                                                                  |
| ------- | -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| 0.10    | August 2020    | The brief, beta version that was in use during initial testing of ArDrive across Web (Dart) and legacy CLI (Typescript).                       |
| 0.11    | September 2020 | Includes all of the major functionality supporting file systems on Arweave, including new drives, folders, files, renames, moves, and privacy. |
| 0.12    | December 2022  | Added Snapshot entities to support quick synchronization of drive state.                                                                       |
| 0.13    | August 2023    | Added pins.                                                                                                                                    |
| 0.14    | January 2024   | Added `isHidden` property to file and folder metadata to enable clients to "hide" content from end users.                                      |
| 0.15    | May 2025       | Added `Drive-Signature` entity type and `Signature-Type` metadata property on Drive entities.                                                  |

## Next Steps

Ready to dive deeper into ArFS? Here's what you should explore next:

- [Entity Types](/build/advanced/arfs/entity-types) - Understand the different ArFS entities and their structure
- [Data Model](/build/advanced/arfs/data-model) - Learn how ArFS organizes data hierarchically
- [Privacy & Encryption](/build/advanced/arfs/privacy) - Secure your data with private drives
- [Creating Drives](/build/advanced/arfs/creating-drives) - Get started with your first ArFS drive
- [Reading Data](/build/advanced/arfs/reading-data) - Query and retrieve your ArFS data

## Resources

For more information, documentation, and community support, refer to the following resources:

- [Arweave Official Website](https://www.arweave.org/)
- [Arweave Developer Documentation](https://docs.arweave.org/)
- [Arweave Community Forums](https://community.arweave.org/)

# Privacy & Encryption (/build/advanced/arfs/privacy)

The Arweave blockweave is inherently public. But with apps that use ArFS, like ArDrive, your private data never leaves your computer without using military-grade (and [quantum-resistant](https://blog.boot.dev/cryptography/is-aes-256-quantum-resistant/#:~:text=Symmetric%20encryption%2C%20or%20more%20specifically,key%20sizes%20are%20large%20enough)) encryption. This privacy layer is applied at the Drive level, and users determine whether a Drive is public or private when they first create it. Private drives must follow the ArFS privacy model.

With ArDrive specifically, every file within a Private Drive is symmetrically encrypted using [AES-256-GCM](https://iopscience.iop.org/article/10.1088/1742-6596/1019/1/012008/pdf) (for small files and metadata transactions) or [AES-256-CTR](https://xilinx.github.io/Vitis_Libraries/security/2020.1/guide_L1/internals/ctr.html) (for large files over 100MiB).

Every private drive has a master "Drive Key", which uses a combination of the user's Arweave wallet signature, a user-defined drive password, and a unique drive identifier ([uuidv4](https://en.wikipedia.org/wiki/Universally_unique_identifier)). Each file has its own "File Key" derived from the "Drive Key".
This allows single files to be shared without exposing access to the other files within the Drive. Once a file is encrypted and stored on Arweave, it is locked forever and can only be decrypted using its file key.

**NOTE**: Usable encryption standards are not limited to AES-256-GCM or AES-256-CTR. Any encryption method may be used so long as it is clearly indicated in the `Cipher` tag.

## Deriving Keys

Private drives have a global drive key, `D`, and multiple file keys, `F`, for encryption. This enables a drive to have as many uniquely encrypted files as needed. One key is used for all versions of a single file, since new file versions use the same `File-Id`.

`D` is used for encrypting both Drive and Folder metadata, while `F` is used for encrypting File metadata and the actual stored data. Having these different keys, `D` and `F`, allows a user to share specific files without revealing the contents of their entire drive.

`D` is derived using HKDF-SHA256 with an unsalted RSA-PSS signature of the drive's id and a user-provided password. `F` is also derived using HKDF-SHA256, with the drive key and the file's id.

```mermaid
graph TD
    A[User Password] --> B[Drive Key Derivation]
    C[Wallet Signature] --> B
    D[Drive ID] --> B
    B --> E[Drive Key D]
    E --> F[File Key Derivation]
    G[File ID] --> F
    F --> H[File Key F]
    E --> I[Encrypt Drive Metadata]
    E --> J[Encrypt Folder Metadata]
    H --> K[Encrypt File Metadata]
    H --> L[Encrypt File Data]
    style A fill:#e1f5fe
    style C fill:#e1f5fe
    style D fill:#e1f5fe
    style G fill:#e1f5fe
    style E fill:#c8e6c9
    style H fill:#c8e6c9
```

Other wallets (like [ArConnect](https://www.arconnect.io/)) integrate with this key derivation protocol by exposing an API to collect a signature from a given Arweave Wallet, in order to obtain the RSA-PSS signature needed by the [HKDF](https://en.wikipedia.org/wiki/HKDF) to derive the Drive Key. An example implementation, using Dart, is available [here](https://github.com/ardriveapp/ardrive-web/blob/187b3fb30808bda452123c2b18931c898df6a3fb/docs/private_drive_kdf_reference.dart), with a Typescript implementation [here](https://github.com/ardriveapp/ardrive-core-js/blob/f19da30efd30a4370be53c9b07834eae764f8535/src/utils/crypto.ts).
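A minimal sketch of this derivation using Node's built-in HKDF is shown below. How the password and ids feed the HKDF here is an illustrative assumption; the linked Dart and Typescript references are the normative implementations.

```javascript
// Minimal sketch of the ArFS key hierarchy using Node's built-in HKDF.
// Assumes `walletSignature` is the unsalted RSA-PSS signature of the
// drive's id, obtained from the user's wallet.
const crypto = require("crypto");

// Drive key D: HKDF-SHA256 keyed by the wallet signature, with the
// user-provided password as the HKDF info input (an assumption here)
function deriveDriveKey(walletSignature, password) {
  return Buffer.from(
    crypto.hkdfSync(
      "sha256",
      walletSignature,
      Buffer.alloc(0), // unsalted
      Buffer.from(password, "utf8"),
      32
    )
  );
}

// File key F: HKDF-SHA256 keyed by the drive key, with the file's id as info
function deriveFileKey(driveKey, fileIdBytes) {
  return Buffer.from(
    crypto.hkdfSync("sha256", driveKey, Buffer.alloc(0), fileIdBytes, 32)
  );
}
```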
## Private Drives

Drives can store either public or private data. This is indicated by the `Drive-Privacy` tag in the Drive entity metadata.

```
Drive-Privacy: ""
```

If a Drive entity is private, an additional tag, `Drive-Auth-Mode`, must also be used to indicate how the Drive Key is derived. ArDrive clients currently leverage a secure password along with the Arweave Wallet private key signature to derive the global Drive Key.

```
Drive-Auth-Mode?: 'password'
```

On every encrypted Drive Entity, a `Cipher` tag must be specified, along with the public parameters for decrypting the data. This is done by specifying the parameter with a `Cipher-*` tag, e.g. `Cipher-IV`. If the parameter is byte data, it must be encoded as Base64 in the tag. ArDrive clients currently leverage AES256-GCM for all symmetric encryption, which requires a Cipher Initialization Vector consisting of 12 random bytes.

```
Cipher?: "AES256-GCM"
Cipher-IV?: ""
```

Additionally, all encrypted transactions must have the `Content-Type` tag `application/octet-stream`, as opposed to `application/json`.

Private Drive Entities and their corresponding Root Folder Entities will both use these generated keys and ciphers to symmetrically encrypt the JSON files that are included in the transaction. This ensures that only the Drive Owner (and whomever the keys have been shared with) can open the drive, discover the root folder, and continue to load the rest of the children in the drive.

## Private Files

When a file is uploaded to a private drive, it also becomes private by default and leverages the same drive keys used for its parent drive. Each unique file in a drive gets its own set of file keys based on that file's unique `File-Id`. If a single file gets a new version, its `File-Id` will be reused, effectively leveraging the same File Key for all versions in that file's history. These file keys can be shared by the drive's owner as needed.

A private File entity has both its metadata and data transactions encrypted using the same File Key, ensuring all facets of the data are truly private. As such, the file's metadata and data transactions must each have a unique `Cipher-IV` and a `Cipher` tag:

```
Cipher?: "AES256-GCM"
Cipher-IV?: ""
```

Just like drives, private files must have the `Content-Type` tag set to `application/octet-stream` in both their metadata and data transactions:

```
Content-Type: "application/octet-stream"
```

## Encryption Process

Here's how the encryption process works for private drives:

```mermaid
sequenceDiagram
    participant User
    participant Client
    participant Wallet
    participant Arweave
    User->>Client: Create private drive
    Client->>Wallet: Request signature
    Wallet->>Client: Return signature
    Client->>Client: Derive drive key
    Client->>Client: Encrypt drive metadata
    Client->>Arweave: Upload encrypted drive
    User->>Client: Upload file to private drive
    Client->>Client: Derive file key
    Client->>Client: Encrypt file metadata
    Client->>Client: Encrypt file data
    Client->>Arweave: Upload encrypted metadata
    Client->>Arweave: Upload encrypted data
```
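To make the tagging requirements concrete, here is a minimal Node sketch of encrypting a small file or metadata payload with AES-256-GCM and a fresh 12-byte `Cipher-IV`. Where the GCM authentication tag is stored is a client implementation detail; appending it to the ciphertext below is an assumption for illustration.

```javascript
// Minimal sketch: AES-256-GCM encryption for an ArFS private transaction.
// `fileKey` is a 32-byte key derived as described in Deriving Keys.
const crypto = require("crypto");

function encryptForArFS(fileKey, plaintext) {
  const cipherIV = crypto.randomBytes(12); // 12 random bytes per transaction
  const cipher = crypto.createCipheriv("aes-256-gcm", fileKey, cipherIV);
  const ciphertext = Buffer.concat([
    cipher.update(plaintext),
    cipher.final(),
    cipher.getAuthTag(), // auth tag placement is an implementation choice
  ]);

  return {
    data: ciphertext,
    tags: {
      "Content-Type": "application/octet-stream",
      Cipher: "AES256-GCM",
      // Byte parameters must be Base64 encoded in Cipher-* tags
      "Cipher-IV": cipherIV.toString("base64"),
    },
  };
}
```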
## Security Best Practices

When working with private drives, follow these security guidelines:

### Password Management

- Use strong, unique passwords for each drive
- Consider using a password manager
- Never share passwords in plain text

### Key Storage

- Never store drive keys in plain text
- Use secure key derivation functions
- Implement proper key rotation if needed

### Access Control

- Share file keys only with authorized users
- Implement proper access logging
- Regularly audit drive access

### Data Handling

- Encrypt data before transmission
- Use secure communication channels
- Implement proper error handling

## Drive Signature (ArFS v0.15)

ArFS v0.15 introduces a new `Drive-Signature` entity type to help bridge signature derivation schemes across ArFS versions. This is particularly important for maintaining access to private drives created with older wallet signing methods.

The drive signature entity stores an encrypted version of the pre-v0.15 wallet signature that's necessary for deriving the drive key. This allows continued access to historical drive contents while using modern wallet signing APIs.

```mermaid
graph TD
    A[Legacy Wallet Signature] --> B[Encrypt with v0.15 scheme]
    B --> C[Drive-Signature Entity]
    C --> D[Store on Arweave]
    D --> E[Retrieve when needed]
    E --> F[Decrypt signature]
    F --> G[Use for drive key derivation]
    style A fill:#ffecb3
    style C fill:#c8e6c9
    style G fill:#e1f5fe
```

## Next Steps

Ready to implement privacy in your ArFS applications?

- [Creating Private Drives](/build/advanced/arfs/creating-drives) - Learn how to create secure drives
- [Upgrading Private Drives](/build/advanced/arfs/upgrading-drives) - Update legacy drives to v0.15
- [Reading Data](/build/advanced/arfs/reading-data) - Query and decrypt your private data

# Reading Data (/build/advanced/arfs/reading-data)

Clients can perform read operations to create a timeline of entity write transactions, which can then be replayed to construct the Drive state. This is done by querying an Arweave GraphQL index for the user's respective transactions. The [Arweave GraphQL Guide](https://gql-guide.vercel.app/) provides more information on how to use Arweave GraphQL. If no GraphQL index is available, drive state can only be generated by downloading and inspecting all transactions made by the user's wallet.

This timeline of transactions should be grouped by the block number of each transaction. At every step of the timeline, the client can check whether the entity was written by an authorized user. This also conveniently enables the client to surface a trusted entity version history to the user.

To determine the owner of a Drive, clients must check who created the first Drive Entity transaction using that `Drive-Id` (see the example query after the notes below). Until a trusted permissions or ACL system is put in place, any transaction in a drive created by any wallet other than the one that created the first Drive Entity transaction could be considered spam.

The `Unix-Time` defined on each transaction should be reserved for tie-breaking same-entity updates in the same block, and should not be trusted as the source of truth for entity write ordering. This is unimportant for single-owner drives, but is crucial for multi-owner drives with updateable permissions (currently undefined in this spec), as a malicious user could fake the `Unix-Time` to modify the drive timeline for other users.

- Drives that have been updated many times can have a long entity timeline, which can be a performance bottleneck. To avoid this, clients can cache the drive state locally and sync updates to the file system by only querying for entities in blocks higher than the last time they checked.
- Not checking for Drive Ownership could result in incorrect drive state and GraphQL query results.
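For example, a query along these lines returns the oldest Drive entity for a given `Drive-Id`, whose `owner.address` establishes the drive owner. It is a sketch using the same pattern as the queries below; no ArFS version tag is used, since the first Drive entity may predate the current version.

```graphql
query ($driveId: String!) {
  transactions(
    tags: [
      { name: "Entity-Type", values: ["drive"] }
      { name: "Drive-Id", values: [$driveId] }
    ]
    sort: HEIGHT_ASC
    first: 1
  ) {
    edges {
      node {
        id
        owner {
          address
        }
        block {
          height
        }
      }
    }
  }
}
```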
## Folder/File Paths

ArFS does not store folder or file paths along with entities, as these paths would need to be updated whenever the parent folder name changes, which can require many updates for deeply nested file systems. Instead, folder/file paths are left for the client to generate from the folder/file names.

## Folder View Queries

Clients that want to provide users with a quick view of a single folder can simply query for an entity timeline for a particular folder by its id. Clients with multi-owner permissions will additionally have to query for the folder's parent drive entity for permission-based filtering of the timeline.

## Basic Query Patterns

### Query All Drive Entities

```graphql
query {
  transactions(
    tags: [
      { name: "ArFS", values: ["0.15"] }
      { name: "Entity-Type", values: ["drive"] }
      { name: "Drive-Id", values: ["your-drive-id"] }
    ]
  ) {
    edges {
      node {
        id
        block {
          height
          timestamp
        }
        tags {
          name
          value
        }
      }
    }
  }
}
```

### Query Folder Contents

```graphql
query ($parentFolderId: String!) {
  transactions(
    tags: [
      { name: "ArFS", values: ["0.15"] }
      { name: "Parent-Folder-Id", values: [$parentFolderId] }
    ]
  ) {
    edges {
      node {
        id
        block {
          height
          timestamp
        }
        tags {
          name
          value
        }
      }
    }
  }
}
```

### Query File Entities

```graphql
query ($fileId: String!) {
  transactions(
    tags: [
      { name: "ArFS", values: ["0.15"] }
      { name: "Entity-Type", values: ["file"] }
      { name: "File-Id", values: [$fileId] }
    ]
  ) {
    edges {
      node {
        id
        block {
          height
          timestamp
        }
        tags {
          name
          value
        }
      }
    }
  }
}
```

## Building Drive State

The process of building drive state involves several steps:

```mermaid
graph TD
    A[Query Drive Entities] --> B[Sort by Block Height]
    B --> C[Process Files First]
    C --> D[Process Folders]
    D --> E[Process Drive Metadata]
    E --> F[Build Hierarchy Tree]
    F --> G[Resolve Conflicts]
    G --> H[Return Complete State]
    style A fill:#e3f2fd
    style H fill:#c8e6c9
```

### Step-by-Step Process

1. **Query for all entities** associated with a specific `Drive-Id`
2. **Sort by block height** to establish chronological order
3. **Process entities bottom-up** starting with files and folders
4. **Build the hierarchy** by following parent-child relationships
5. **Handle conflicts** by using the most recent entity version

### Example Implementation

```javascript
async function buildDriveState(driveId) {
  // Query all entities for the drive
  const entities = await queryDriveEntities(driveId);

  // Sort by block height
  entities.sort((a, b) => a.block.height - b.block.height);

  // Process entities
  const driveState = {
    drive: null,
    folders: new Map(),
    files: new Map(),
  };

  for (const entity of entities) {
    const entityType = getTagValue(entity.tags, "Entity-Type");

    switch (entityType) {
      case "drive":
        driveState.drive = processDriveEntity(entity);
        break;
      case "folder":
        driveState.folders.set(
          getTagValue(entity.tags, "Folder-Id"),
          processFolderEntity(entity)
        );
        break;
      case "file":
        driveState.files.set(
          getTagValue(entity.tags, "File-Id"),
          processFileEntity(entity)
        );
        break;
    }
  }

  return driveState;
}
```

## Using Snapshots

For large drives, snapshots can significantly improve performance:

```mermaid
sequenceDiagram
    participant Client
    participant Gateway
    participant Arweave
    Client->>Gateway: Query for latest snapshot
    Gateway->>Client: Return snapshot data
    Client->>Client: Process snapshot data
    Client->>Gateway: Query for newer transactions
    Gateway->>Client: Return newer entities
    Client->>Client: Merge with snapshot data
    Client->>Client: Return complete drive state
```

### Snapshot Query

```graphql
query ($driveId: String!) {
  transactions(
    tags: [
      { name: "ArFS", values: ["0.15"] }
      { name: "Entity-Type", values: ["snapshot"] }
      { name: "Drive-Id", values: [$driveId] }
    ]
    sort: HEIGHT_DESC
    first: 1
  ) {
    edges {
      node {
        id
        block {
          height
          timestamp
        }
        tags {
          name
          value
        }
      }
    }
  }
}
```
## Performance Optimization

### Caching Strategies

- **Local caching** - Store frequently accessed data locally
- **Incremental updates** - Only fetch new transactions since the last sync (see the example query below)
- **Snapshot usage** - Use snapshots for large drives
- **Batch queries** - Combine multiple queries when possible

### Query Optimization

- **Use specific tags** - Narrow down queries with relevant tags
- **Limit results** - Use pagination for large result sets
- **Filter by date** - Query specific time ranges
- **Index utilization** - Leverage GraphQL indexes effectively
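As an example of the incremental update strategy, Arweave GraphQL's block filter can restrict results to blocks above the height a client last synced. The query below is illustrative; `$lastSyncedHeight` is a client-tracked value, not part of the spec.

```graphql
query ($driveId: String!, $lastSyncedHeight: Int!) {
  transactions(
    tags: [{ name: "Drive-Id", values: [$driveId] }]
    block: { min: $lastSyncedHeight }
  ) {
    edges {
      node {
        id
        block {
          height
          timestamp
        }
        tags {
          name
          value
        }
      }
    }
  }
}
```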
## Error Handling

### Common Issues

- **Network timeouts** - Implement retry logic
- **Invalid data** - Validate entity structure
- **Missing entities** - Handle incomplete data gracefully
- **Decryption errors** - Proper error handling for private data

### Best Practices

- **Validate ownership** - Check drive ownership before processing
- **Handle conflicts** - Resolve entity version conflicts
- **Graceful degradation** - Provide fallbacks for missing data
- **User feedback** - Inform users of sync status

## Security Considerations

### Data Validation

- **Verify signatures** - Check transaction signatures
- **Validate ownership** - Ensure drive ownership
- **Check timestamps** - Validate entity timestamps
- **Sanitize data** - Clean user-provided data

### Privacy Protection

- **Decrypt carefully** - Handle private data securely
- **Key management** - Protect encryption keys
- **Access control** - Implement proper permissions
- **Audit logging** - Track data access

## Next Steps

Now that you understand how to read ArFS data, explore these related topics:

- [Privacy & Encryption](/build/advanced/arfs/privacy) - Secure your data with private drives
- [Upgrading Private Drives](/build/advanced/arfs/upgrading-drives) - Update legacy drives to v0.15
- [Creating Drives](/build/advanced/arfs/creating-drives) - Start building with ArFS

# Upgrading Private Drives (/build/advanced/arfs/upgrading-drives)

## Overview

Private drives rely on a combination of a user-set password and a wallet signature for encryption and decryption. [Wander](https://www.wander.app/), formerly ArConnect, is a popular Arweave wallet that is deprecating its `signature()` method in favor of `signDataItem()` or `signMessage()`. In order to preserve access to private drive contents that were secured via drive keys created with `signature()`, ArFS v0.15 introduces a new drive key derivation scheme that both utilizes the modern signing APIs and bridges historical drive keys for usage with it.

Because private drive entities exist on chain and their encryption cannot be altered, an upgrade is required to allow continued access to "V1" private drives. This upgrade essentially takes a signature from the drive owner's wallet, encrypts it using the required signature structure for V2 private drives, and places it on Arweave as a new "Drive-Signature" entity. This allows the signature to be fetched and decrypted using the latest methods before using it to decrypt the private drive in the V1 format.

The below instructions for upgrading a private drive will work during the deprecation period for the `signature()` method from Wander. Once this period is over, and `signature()` loses all support, additional steps will be required to obtain the correct signature format to decrypt V1 private drives in order to upgrade them. There is, at this time, no set date for when the deprecation period will end.

## The Upgrade Process

The upgrade process involves creating a new `Drive-Signature` entity that contains an encrypted version of the legacy signature needed to decrypt the private drive.

```mermaid
sequenceDiagram
    participant User
    participant Client
    participant Wallet
    participant Arweave
    User->>Client: Initiate drive upgrade
    Client->>Wallet: Request legacy signature
    Wallet->>Client: Return signature
    Client->>Client: Encrypt signature with v0.15 scheme
    Client->>Arweave: Upload Drive-Signature entity
    Client->>Client: Update drive with Signature-Type tag
    Client->>Arweave: Upload updated drive entity
    Client->>User: Upgrade complete
```

### Drive-Signature Entity

The `Drive-Signature` entity stores the encrypted legacy signature:

```json
ArFS: "0.15",
Entity-Type: "drive-signature",
Signature-Format: "1",
Cipher?: "AES256-GCM",
Cipher-IV: ""

{data: }
```

### Updated Drive Entity

The drive entity is updated with a new `Signature-Type` tag:

```json
ArFS: "0.15",
Cipher?: "AES256-GCM",
Cipher-IV?: "",
Content-Type: "",
Drive-Id: "",
Drive-Privacy: "",
Drive-Auth-Mode?: "password",
Entity-Type: "drive",
Signature-Type?: "1",
Unix-Time: ""

Metadata JSON
{
  "name": "",
  "rootFolderId": "",
  "isHidden": false
}
```

## Using ArDrive

The upgrade process has been made simple by using the [ArDrive app](https://app.ardrive.io/).

### Step 1: Log into ArDrive

If the connected wallet has V1 private drives that need to be updated, a banner will appear at the top of the screen.

![ArDrive Upgrade Banner](https://arweave.net/kJzzrYY4KIHLTC9VOECfzvVAjaO1_FOkejWgsRNbLx4)

### Step 2: Click "Update Now!"

This will open a modal listing the drives that need to be updated, and linking to more information about the upgrade process.

![ArDrive Upgrade Modal](https://arweave.net/Qa-qeKkr1flXl1-fdLapjKNmme0lKA43YQbjmMvnF-U)

### Step 3: Click "Update"

The process of upgrading the private drives will begin, and will involve signing messages depending on how many drives are being upgraded. When the process is complete, a new modal will appear listing the drives that have been successfully updated.

![ArDrive Upgrade Complete](https://arweave.net/qEwT3oFZbFDpmRQpw9j1c_okN5pIkFqRuoSwBzhc3HQ)

## Manual Upgrade Process

If you need to upgrade drives programmatically, here's the process:

### 1. Identify V1 Drives

Query for drives that don't have the `Signature-Type` tag:

```graphql
query {
  transactions(
    tags: [
      { name: "ArFS", values: ["0.15"] }
      { name: "Entity-Type", values: ["drive"] }
      { name: "Drive-Privacy", values: ["private"] }
    ]
  ) {
    edges {
      node {
        id
        tags {
          name
          value
        }
      }
    }
  }
}
```

### 2. Create Drive-Signature Entity

```javascript
async function createDriveSignature(driveId, legacySignature) {
  // Encrypt the legacy signature; returns the ciphertext and the random
  // Cipher-IV used, so the IV can be included in the entity's tags
  const { encryptedSignature, cipherIV } = await encryptSignature(legacySignature);

  // Create the drive signature entity
  const driveSignature = {
    data: encryptedSignature,
    tags: [
      { name: "ArFS", value: "0.15" },
      { name: "Entity-Type", value: "drive-signature" },
      { name: "Signature-Format", value: "1" },
      { name: "Cipher", value: "AES256-GCM" },
      { name: "Cipher-IV", value: cipherIV },
    ],
  };

  // Upload to Arweave
  return await uploadTransaction(driveSignature);
}
```
### 3. Update Drive Entity

```javascript
async function updateDriveEntity(driveId) {
  // Get existing drive entity
  const driveEntity = await getDriveEntity(driveId);

  // Add Signature-Type tag
  const updatedTags = [
    ...driveEntity.tags,
    { name: "Signature-Type", value: "1" },
  ];

  // Create updated drive entity
  const updatedDrive = {
    data: driveEntity.data,
    tags: updatedTags,
  };

  // Upload to Arweave
  return await uploadTransaction(updatedDrive);
}
```

## Verification

After upgrading, verify the process was successful:

### Check Drive-Signature Entity

```graphql
query ($driveId: String!) {
  transactions(
    tags: [
      { name: "ArFS", values: ["0.15"] }
      { name: "Entity-Type", values: ["drive-signature"] }
      { name: "Drive-Id", values: [$driveId] }
    ]
  ) {
    edges {
      node {
        id
        block {
          height
          timestamp
        }
        tags {
          name
          value
        }
      }
    }
  }
}
```

### Check Updated Drive Entity

```graphql
query ($driveId: String!) {
  transactions(
    tags: [
      { name: "ArFS", values: ["0.15"] }
      { name: "Entity-Type", values: ["drive"] }
      { name: "Drive-Id", values: [$driveId] }
      { name: "Signature-Type", values: ["1"] }
    ]
  ) {
    edges {
      node {
        id
        block {
          height
          timestamp
        }
        tags {
          name
          value
        }
      }
    }
  }
}
```

## Troubleshooting

### Common Issues

- **Signature not found** - Ensure the wallet supports the required signing methods
- **Encryption errors** - Verify the encryption parameters are correct
- **Upload failures** - Check network connectivity and retry
- **Permission denied** - Ensure you own the drive being upgraded

### Error Handling

```javascript
async function upgradeDrive(driveId) {
  try {
    // Get legacy signature
    const legacySignature = await getLegacySignature(driveId);

    // Create drive signature entity
    await createDriveSignature(driveId, legacySignature);

    // Update drive entity
    await updateDriveEntity(driveId);

    console.log("Drive upgraded successfully");
  } catch (error) {
    console.error("Upgrade failed:", error);
    // Handle error appropriately
  }
}
```

## Best Practices

### Before Upgrading

- **Backup your data** - Ensure you have access to your drive contents
- **Test with one drive** - Start with a single drive to verify the process
- **Check wallet compatibility** - Ensure your wallet supports required methods
- **Verify ownership** - Confirm you own the drives being upgraded

### During Upgrading

- **Monitor progress** - Keep track of upgrade status
- **Handle errors gracefully** - Implement proper error handling
- **Batch operations** - Upgrade multiple drives efficiently
- **User feedback** - Provide clear status updates

### After Upgrading

- **Verify functionality** - Test drive access and operations
- **Update clients** - Ensure all clients support v0.15
- **Monitor performance** - Check for any performance issues
- **Document changes** - Keep track of upgraded drives

## Migration Timeline

```mermaid
gantt
    title ArFS v0.15 Migration Timeline
    dateFormat YYYY-MM-DD
    section Phase 1
    Legacy Support :active, legacy, 2024-01-01, 2024-06-30
    section Phase 2
    Migration Period :migration, 2024-07-01, 2024-12-31
    section Phase 3
    Legacy Deprecation :deprecation, 2025-01-01, 2025-06-30
    section Phase 4
    Full v0.15 :full, 2025-07-01, 2025-12-31
```

## Next Steps

After upgrading your drives, explore these related topics:

- [Privacy & Encryption](/build/advanced/arfs/privacy) - Understand the new encryption scheme
- [Reading Data](/build/advanced/arfs/reading-data) - Query your upgraded drives
- [Creating Drives](/build/advanced/arfs/creating-drives) - Create new v0.15 drives
# EthAReum Protocol (/build/advanced/ethareum)

The **EthAReum protocol** enables the generation of private keys for an Arweave wallet using a signature from an Ethereum or Solana wallet. This allows users to create an Arweave wallet directly through popular wallet providers like MetaMask, providing seamless cross-chain wallet management.

Generated private keys provide a fully functional Arweave wallet, equipped to perform all standard operations, including holding AR tokens and Turbo Credits, and uploading data to the Arweave network.

## How It Works

EthAReum uses a deterministic key derivation process that combines:

- **Ethereum/Solana wallet signature** - Provides the cryptographic foundation
- **User-generated password** - Adds additional entropy and security
- **Standardized derivation algorithm** - Ensures reproducible results

The protocol generates a unique Arweave wallet that is cryptographically linked to your Ethereum or Solana wallet but remains completely independent.

## Browser Compatibility

**Recommended Browser**: For optimal performance, use **Chrome** when working with EthAReum and MetaMask. While EthAReum functions correctly in most browsers, there are ongoing efforts to resolve some edge case compatibility issues in other environments.

## Password Security

The EthAReum protocol incorporates a user-generated password in the wallet derivation process. This password provides an extra layer of security by contributing additional entropy to the wallet's derivation, and serves as a critical verification step for wallet access.

**Permanent Password**: The password used during the derivation of private keys is **permanent and cannot be changed or recovered** by any administrator. ArDrive is a decentralized platform with no account administration. It is crucial to keep this password secure.

### Password Requirements

- Must be set during initial wallet creation
- Used for all subsequent logins
- Required for encrypting private uploads
- Cannot be recovered if forgotten

## Wallet Addresses

The public address of the generated Arweave wallet is derived from its public key and will be **different** from the public address of the Ethereum or Solana wallet used to generate it.

### Viewing Your Address

The exact steps to obtain your generated wallet's public address depend on the dApp interface:

- **ArDrive**: Click the user profile icon in the top right when logged in
- **Other dApps**: Check the wallet settings or profile section

## Key Management

### Keyfiles vs Seed Phrases

The Arweave ecosystem primarily uses **keyfiles** rather than seed phrases for wallet access:

- **Keyfile**: A JSON file containing a JSON Web Key (JWK) that acts as the wallet's private key
- **Seed Phrase**: Supported but not universally implemented across all dApps

**Keyfile Security**: Always treat your keyfile with the same care as you would the private keys for an Ethereum wallet. Learn more about keyfiles in the [Arweave Cookbook](https://cookbook.arweave.dev/).
### Accessing Your Keys

Both the keyfile and seed phrase are available for download in most dApps:

- **ArDrive**: Click the user profile icon in the top right when logged in
- **Other dApps**: Check the wallet settings or profile section

## Security Considerations

### One-Way Control

EthAReum generates Arweave wallet private keys using a signature from your Ethereum/Solana wallet, ensuring that control only extends in one direction:

- ✅ **EthAReum can generate** Arweave wallets from Ethereum/Solana signatures
- ❌ **EthAReum cannot access** your Ethereum/Solana wallet or assets
- ✅ **Your Ethereum/Solana assets remain** completely secure and independent

### Signature Security

**Beware of Malicious dApps**: Some malicious dApps or websites may disguise high-risk authorization transactions as simple signature requests. Always ensure that you only provide signatures to reputable and trusted dApps like ArDrive.

### Best Practices

1. **Verify dApp authenticity** before providing signatures
2. **Use strong, unique passwords** for wallet derivation
3. **Backup your keyfile** in a secure location
4. **Never share your password** or keyfile with anyone
5. **Test with small amounts** before committing to large transactions

## Implementation Examples

### Basic Wallet Generation

```javascript
// Example: Generate Arweave wallet from Ethereum signature
async function generateArweaveWallet(ethereumSignature, password) {
  // This is a conceptual example - actual implementation
  // would use the EthAReum protocol specification
  const derivedKey = await deriveKeyFromSignature(
    ethereumSignature,
    password,
    "arweave" // derivation context
  );

  return {
    address: getAddressFromKey(derivedKey),
    keyfile: createKeyfile(derivedKey),
    seedPhrase: generateSeedPhrase(derivedKey),
  };
}
```

### Integration with MetaMask

```javascript
// Example: Request signature from MetaMask
async function requestEthereumSignature() {
  const accounts = await ethereum.request({
    method: "eth_requestAccounts",
  });

  const message = "Sign this message to generate your Arweave wallet";
  const signature = await ethereum.request({
    method: "personal_sign",
    params: [message, accounts[0]],
  });

  return signature;
}
```

## Use Cases

### Cross-Chain dApp Development

- **Unified wallet experience** across Ethereum and Arweave
- **Simplified onboarding** for users familiar with Ethereum
- **Reduced friction** in multi-chain applications

### Data Storage Solutions

- **Decentralized file storage** using existing Ethereum wallets
- **NFT metadata storage** on Arweave with Ethereum wallet access
- **Cross-chain data management** for DeFi applications

### Developer Benefits

- **Familiar wallet interfaces** for users
- **Reduced development complexity** for multi-chain apps
- **Enhanced user experience** with single wallet management

## Next Steps

- Learn about structured data storage on Arweave using your generated wallet.
- Upload data efficiently using Turbo Credits with your EthAReum wallet.
- Learn how to find and access data stored with your generated wallet.

# Advanced (/build/advanced)

## Overview

Explore advanced topics and specialized guides for building on Arweave and AR.IO. These resources are designed for developers and operators who need deeper technical knowledge and advanced configuration options.
## Advanced Topics

- **Advanced ArFS documentation** for structured data storage
- **Understanding wallet address normalization** across different networks
- **Security mechanisms** in AR.IO gateways (key topics: same-origin policy)
- **Generate Arweave wallets** from Ethereum or Solana wallets

## Ready to Go Advanced?

**New to Arweave?** Start with our [Getting Started guide](/build) to understand the basics.

**Building dApps?** Check out [ArFS Protocol](/build/advanced/arfs) for structured data storage solutions.

# Normalized Addresses (/build/advanced/normalized-addresses)

## Overview

Different blockchains use different formats for the [public keys](/glossary) of wallets, and for the [native addresses](/glossary) of those wallets. In most cases, when a system in the Arweave ecosystem needs to display the wallet address of a wallet from a different blockchain, for instance in the `Owner.address` value of an AO process spawned by an ETH wallet, that address will be normalized into the format recognized by Arweave: specifically, a 43-character base64url representation of the SHA-256 hash of the public key. This is done to prevent potential errors by systems in the Arweave ecosystem that expect these values to be a certain size and conform to a specific format.

Essentially, normalized addresses are a way to represent public keys and wallet addresses from other blockchains in a way that is familiar to systems in the Arweave ecosystem.

A tool for easily obtaining a normalized address from a public key can be found at [ar://normalize-my-key](https://normalize-my-key.arweave.net/).

## At A Glance

|  | Arweave | ETH/POL | Solana |
| --- | --- | --- | --- |
| **Native Address** | 9ODOd-\_ZT9oWoRMVmmD4G5f9Z6MjvYxO3Nen-T5OXvU | 0x084af408C8E492aC52dc0Ec76514A7deF8D5F03f | Cd5yb4mvbuQyyJgAkriFZbWQivh2zM68KGZX8Ksn1L85 |
| **base64url Encoded Public Key** | 0jkGWDFYI3DHEWaXhZitjTg67T-enQwXs50lTDrMhy2qb619_91drv_50J5PwrOYJiMmYhiEA5ojMvrrAFY-Dm1bJbJfVBU1kIsPho2tFcXnbSOa2_1bovAys0ckJU07wkbmIUpzp3trdxYReB4jayMMOXWw9B8xS0v81zFmK3IbCtL9N6WNTMONOSMATHFQrGqtDhDUqKyIsQZCBPFvfGykRWaLWzbtAUrApprqG9hfExQzppNsw0gsftNSHZ1emC5tC2fuib6FhQw9TE2ge9tUjEZNALcVZvopTtTX0H2gEfnRJ48UNeV3SKggjXcoPVeivmqXuPBGncXWWq1pHR-Xs4zSLA5Mgcw_tQJc4FIER0i7hUlZXoc991ZHyOvAC-GlHWzQwvrlY11oD38pB47NkHN2WVPtUCAtyYQe5TE6Xznd9kPgqqvVUkV0s0suh5vINGoiPEnMjyhYEN7eOmJRIJ_A87IJesbdPRV4ZzBsqPbd02RG3ZuVpc3gI1xKvwH1WS05XI8eWK-BbvB3oxB7WjaQTWcfBWhMEULiwx-SucuyAzPAw3i6Wjtq61TcL9SdWhmOf9_yo-Np052tj7MQ66nmgdOH_MEKYjAdFypxTsRQoSLbv28HEcSjwx8u3pY0q0gKMK_5X2XKJrp2i2GB_fVgbcpH9YsgrYxh1Q8 | 2W5VMzNKYwr51QsiYBHUS5h5wxZf_uBgG7C6xiHgBHwwLUty5LHKFFBDlAxTCTAhglcmys2_HQoOj_LnCkA3 | rK8XXxd8JqsZFPXVOwkSWS5Gh1SJzftfCOLpLk4i1FY |
| **Normalized Address** | 9ODOd-\_ZT9oWoRMVmmD4G5f9Z6MjvYxO3Nen-T5OXvU | 5JtuS4yOFtUX2Rg3UU7AgBaUqh4s8wyyNTZk9UrzI-Q | K8kpPM1RID8ZM2sjF5mYy0rP4gXSRDbrwPUd9Qths64 |
## Public Keys and Addresses

Crypto wallets consist of two separate components: the public keys, which are public knowledge and can be seen by anyone, and the private keys, which only the owner of a wallet should have access to. Crypto wallet addresses are derived from the public key.

It is important to note that all crypto wallet public and private keys are binary data. The values provided below for Arweave and Ethereum/Polygon public keys are base64url and hex encoded representations of that binary data, respectively.

### Arweave

The public key for an Arweave wallet is the `n` field of the JWK json file.

0jkGWDFYI3DHEWaXhZitjTg67T-enQwXs50lTDrMhy2qb619_91drv_50J5PwrOYJiMmYhiEA5ojMvrrAFY-Dm1bJbJfVBU1kIsPho2tFcXnbSOa2_1bovAys0ckJU07wkbmIUpzp3trdxYReB4jayMMOXWw9B8xS0v81zFmK3IbCtL9N6WNTMONOSMATHFQrGqtDhDUqKyIsQZCBPFvfGykRWaLWzbtAUrApprqG9hfExQzppNsw0gsftNSHZ1emC5tC2fuib6FhQw9TE2ge9tUjEZNALcVZvopTtTX0H2gEfnRJ48UNeV3SKggjXcoPVeivmqXuPBGncXWWq1pHR-Xs4zSLA5Mgcw_tQJc4FIER0i7hUlZXoc991ZHyOvAC-GlHWzQwvrlY11oD38pB47NkHN2WVPtUCAtyYQe5TE6Xznd9kPgqqvVUkV0s0suh5vINGoiPEnMjyhYEN7eOmJRIJ_A87IJesbdPRV4ZzBsqPbd02RG3ZuVpc3gI1xKvwH1WS05XI8eWK-BbvB3oxB7WjaQTWcfBWhMEULiwx-SucuyAzPAw3i6Wjtq61TcL9SdWhmOf9_yo-Np052tj7MQ66nmgdOH_MEKYjAdFypxTsRQoSLbv28HEcSjwx8u3pY0q0gKMK_5X2XKJrp2i2GB_fVgbcpH9YsgrYxh1Q8

The public wallet address for that wallet is `9ODOd-_ZT9oWoRMVmmD4G5f9Z6MjvYxO3Nen-T5OXvU`. This is obtained by decoding the public key from base64url to normalize padding, SHA-256 hashing the result, and then base64url encoding that hash.

### Ethereum/Polygon

The public key for an EVM wallet (Ethereum, Polygon/Matic) is derived from its private key using the [Elliptic Curve Digital Signature Algorithm](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm), or ECDSA.

`0xb5d96e5533334a630af9d50b226011d44b9879c3165ffee0601bb0bac621e0047c302d4b72e4b1ca145043940c53093021825726cacdbf1d0a0e8ff2e70a4037`

The public wallet address is `0x084af408C8E492aC52dc0Ec76514A7deF8D5F03f`. This is obtained by removing the first byte from the public key, Keccak-256 hashing the remainder, taking the last 20 bytes (40 hexadecimal characters), and prepending `0x` to it.

### Solana

A Solana wallet is an array of 64 bytes. The first 32 bytes are the private key, and the last 32 bytes are the public key. Below is the public key portion of a Solana wallet:

`[172, 175, 23, 95, 23, 124, 38, 171, 25, 20, 245, 213, 59, 9, 18, 89, 46, 70, 135, 84, 137, 205, 251, 95, 8, 226, 233, 46, 78, 34, 212, 86]`

The public wallet address for this wallet is `Cd5yb4mvbuQyyJgAkriFZbWQivh2zM68KGZX8Ksn1L85`. This is derived by base58 encoding the public key bytes.

## Normalizing Addresses

As shown in the above examples, the formats of public keys, and the resulting derived wallet addresses, vary widely between blockchains. Arweave manages this by applying the same derivation methods that Arweave uses for its own wallets to the public keys from other chains.

### Ethereum/Polygon

The leading `0x` and the uncompressed flag `04` (if present) are removed from the public key of an EVM wallet, and then the remainder is base64url encoded to obtain the Arweave normalized public key.
Continuing with the same public key from the above example, the normalized public key would be:

`2W5VMzNKYwr51QsiYBHUS5h5wxZf_uBgG7C6xiHgBHwwLUty5LHKFFBDlAxTCTAhglcmys2_HQoOj_LnCkA3`

This value is what is used as the GraphQL `owner` value for data items being uploaded to Arweave using an EVM wallet. The normalized address is then derived from this value by SHA-256 hashing it, and then base64url encoding the result:

`5JtuS4yOFtUX2Rg3UU7AgBaUqh4s8wyyNTZk9UrzI-Q`

### Solana

The normalized public key for a Solana wallet is derived similarly. The 32-byte public key is base64url encoded:

`rK8XXxd8JqsZFPXVOwkSWS5Gh1SJzftfCOLpLk4i1FY`

Again, this value is used for the GraphQL `owner` value when uploading data. It can then be SHA-256 hashed, and base64url encoded again, to derive the normalized address:

`K8kpPM1RID8ZM2sjF5mYy0rP4gXSRDbrwPUd9Qths64`
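As an illustration, both derivations can be reproduced in a few lines of Node. The helper names below are hypothetical; the inputs are the raw public key bytes described above (the base64url-decoded `n` value for Arweave, the EVM key with its leading `0x`/`04` removed, or Solana's 32 public key bytes).

```javascript
// Sketch: normalized public keys and addresses from raw public key bytes
const crypto = require("crypto");

function normalizedPublicKey(publicKeyBytes) {
  return Buffer.from(publicKeyBytes).toString("base64url");
}

function normalizedAddress(publicKeyBytes) {
  // SHA-256 the raw key bytes, then base64url encode the digest
  return crypto.createHash("sha256").update(publicKeyBytes).digest("base64url");
}

// Using the Solana public key bytes from the example above:
const solanaKey = Buffer.from([
  172, 175, 23, 95, 23, 124, 38, 171, 25, 20, 245, 213, 59, 9, 18, 89,
  46, 70, 135, 84, 137, 205, 251, 95, 8, 226, 233, 46, 78, 34, 212, 86,
]);

console.log(normalizedPublicKey(solanaKey)); // rK8XXxd8JqsZFPXVOwkSWS5Gh1SJzftfCOLpLk4i1FY
console.log(normalizedAddress(solanaKey)); // K8kpPM1RID8ZM2sjF5mYy0rP4gXSRDbrwPUd9Qths64
```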
# Browser Sandboxing (/build/advanced/sandboxing)

## Overview

Browser sandboxing allows data requests to a gateway node to benefit from the security advantages of a browser's same-origin policy by redirecting the requests to a pseudo-unique subdomain of the gateway's apex domain. For example, an attempt to access `https://arweave.net/gnWKBqFXMJrrksEWrXLQRUQQQeFhv4uVxesHBcT8i6o` would redirect to `https://qj2yubvbk4yjv24syelk24wqivcbaqpbmg7yxfof5mdqlrh4rova.arweave.net/gnWKBqFXMJrrksEWrXLQRUQQQeFhv4uVxesHBcT8i6o`

Two DNS records are required to link a domain to an Arweave transaction on a gateway node. For example, `www.mycustomsite.com` would need the following records to link it to `www.arweave-gateway.net`:

- A DNS CNAME record pointing to an Arweave gateway: www CNAME `arweave-gateway.net`
- A DNS TXT record linking the domain with a specific transaction ID: arweavetx TXT `kTv4OkVtmc0NAsqIcnHfudKjykJeQ83qXXrxf8hrh0S`

When a browser requests `www.mycustomsite.com`, the user's machine will (through the usual DNS processes) resolve this to the IP address for the gateway node `arweave-gateway.net`. When the gateway receives an HTTP request with a non-default hostname, e.g. `www.mycustomsite.com` instead of `www.arweave-gateway.net`, the gateway will query the DNS records for `www.mycustomsite.com`, and the `arweavetx` TXT record will tell the node which transaction to serve.

## TLS and its Role in Browser Sandboxing

Transport Layer Security (TLS) is a cryptographic protocol designed to provide communications security over a computer network. In the context of Arweave applications and browser sandboxing, TLS plays a critical role in ensuring secure data transmission and enabling the effective use of browser security features.

When Arweave applications are accessed without TLS, most browsers restrict the use of native cryptographic functions. These functions, which include hashing, signing, and verification, are essential for the secure operation of Arweave permaweb apps. Without TLS, not only are these functions unavailable, but the applications also become susceptible to various security threats, notably man-in-the-middle (MITM) attacks. Although Arweave transactions are signed, making direct MITM attacks challenging, the absence of encryption can expose other vulnerabilities. For instance, attackers could intercept and alter the `/price` endpoint, potentially causing transaction failures or leading to overcharging.

To address these concerns, gateway operators are responsible for generating and maintaining TLS certificates for their gateways. This can be achieved through various systems, such as ACME for Let's Encrypt.

An important step in setting up a gateway is obtaining a wildcard TLS certificate for the gateway's domain. This certificate secures traffic on both the apex domain and its single-level subdomains (e.g., `gateway.com` and `subdomain.gateway.com`).

The integration of TLS is crucial for the implementation of browser sandboxing. When a browser requests a transaction from a gateway, the gateway issues a 301 redirect to a subdomain of the gateway, using a Base32 pseudo-unique address derived from the transaction ID. This redirection, secured by TLS, invokes the browser's same-origin policy. As a result, the requested web page is confined within a secure sandbox environment, isolated from other domains. This isolation is vital for maintaining the integrity and security of transactions and interactions within Arweave's permaweb applications.

## Deriving Sandbox Value

AR.IO nodes generate browser sandbox values deterministically. Because of this, it is possible to calculate ahead of time what that value will be for a particular transaction id. Sandbox values are a Base32 encoding of the transaction ID. AR.IO gateways use the following code snippet to accomplish the encoding:

```typescript
const expectedTxSandbox = (id: string): string => {
  return toB32(fromB64Url(id));
};
```

Example:

```typescript
const id = "gnWKBqFXMJrrksEWrXLQRUQQQeFhv4uVxesHBcT8i6o";

const expectedTxSandbox = (id: string): string => {
  return toB32(fromB64Url(id));
};

console.log(expectedTxSandbox(id));
```

Example Output:

```console
qj2yubvbk4yjv24syelk24wqivcbaqpbmg7yxfof5mdqlrh4rova
```

View the full code for generating browser sandbox values [here](https://github.com/ar-io/arweave-gateway/blob/719f43f8d6135adf44c87701e95f58105638710a/src/gateway/middleware/sandbox.ts#L69).
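Since `toB32` and `fromB64Url` are gateway helpers, a self-contained Node sketch may be useful for checking values ahead of time. It assumes the sandbox value is a standard RFC 4648 Base32 encoding (lowercase, unpadded) of the decoded transaction ID bytes, matching the output above.

```javascript
// Self-contained sketch: derive a sandbox subdomain from a transaction id.
// Assumes RFC 4648 Base32, lowercase, without padding.
const B32_ALPHABET = "abcdefghijklmnopqrstuvwxyz234567";

function toB32(bytes) {
  let bits = 0;
  let value = 0;
  let output = "";
  for (const byte of bytes) {
    value = (value << 8) | byte; // accumulate 8 bits
    bits += 8;
    while (bits >= 5) {
      output += B32_ALPHABET[(value >>> (bits - 5)) & 31]; // emit 5 bits
      bits -= 5;
    }
  }
  if (bits > 0) {
    output += B32_ALPHABET[(value << (5 - bits)) & 31]; // final partial group
  }
  return output;
}

const id = "gnWKBqFXMJrrksEWrXLQRUQQQeFhv4uVxesHBcT8i6o";
console.log(toB32(Buffer.from(id, "base64url")));
// qj2yubvbk4yjv24syelk24wqivcbaqpbmg7yxfof5mdqlrh4rova
```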
# Bundler (/build/extensions/bundler)

## Overview

A [Turbo ANS-104](https://github.com/ardriveapp/turbo-upload-service/) data item bundler can be run alongside an AR.IO gateway. This gives gateways the ability to accept data items to be submitted to the Arweave blockweave.

The bundler service can be easily run inside Docker in the same way that the gateway is. It utilizes a separate docker compose file for configuration and deployment, which also allows for the use of a separate file for environmental variables specific to the bundler service. Additionally, the separation allows operators to spin their bundler service up or down at any time without affecting their core gateway service. Despite the use of separate docker compose files, the bundler service shares a docker network with the AR.IO gateway, and so is able to directly interact with the gateway service and data.

For more information on ANS-104 Bundles, see the [ANS-104 Bundles](/learn/ans-104-bundles) page.

## Getting Started

**NOTE**: The bundler service relies on GraphQL indexing of recently bundled and uploaded data to manage its pipeline operations. The AR.IO gateway should have its indexes synced up to Arweave's current block height before starting the bundler's service stack.

### Configure Environmental Variables

Environmental variables must be provided for the bundler to function and integrate properly with an existing AR.IO gateway. The gateway repository provides a `.env.bundler.example` file that can be renamed to `.env.bundler` and used as a starting point. It contains the following:

```bash
BUNDLER_ARWEAVE_WALLET='Stringified JWK wallet. e.g: '{ "n": "...", ... }'
BUNDLER_ARWEAVE_ADDRESS='Address for above wallet'
APP_NAME='AR.IO bundler service'

# Use localstack s3 bucket for shared data source between AR.IO gateway and bundler
AWS_S3_BUCKET=ar.io
AWS_S3_PREFIX='data'
AWS_ACCESS_KEY_ID='test'
AWS_SECRET_ACCESS_KEY='test'
AWS_REGION='us-east-1'
AWS_ENDPOINT='http://localstack:4566'
```

- `BUNDLER_ARWEAVE_WALLET` must be the entire JWK of an Arweave wallet's keyfile, stringified. All uploads of bundled data items to Arweave will be signed and paid for by this wallet, so it must maintain a balance of AR tokens sufficient to handle the uploads.
- `BUNDLER_ARWEAVE_ADDRESS` must be the [normalized public address](/glossary) for the provided Arweave wallet.
- `APP_NAME` is a GraphQL tag that will be added to uploaded bundles.

The remaining lines in the `.env.bundler.example` file control settings that allow the bundler service to share data with the AR.IO gateway. Sharing contiguous data between a bundler and a gateway allows the gateway to serve optimistically cached data without waiting for it to fully settle on chain.

### Configure Upload Permissions

By default, the bundler will only accept data items uploaded by data item signers whose [normalized wallet addresses](/glossary) are in the `ALLOW_LISTED_ADDRESSES` list. This is an additional environmental variable that can be added to your `.env.bundler` file, and must be a comma-separated list of normalized public wallet addresses for wallets that should be allowed to bundle and upload data through your gateway.

```bash
ALLOW_LISTED_ADDRESSES=<normalized-address-1>,<normalized-address-2>
```

The following permissioning configuration schemes are also possible:

| Scheme                     | ALLOW_LISTED_ADDRESSES                      | SKIP_BALANCE_CHECKS | ALLOW_LISTED_SIGNATURE_TYPES | PAYMENT_SERVICE_BASE_URL |
| -------------------------- | ------------------------------------------- | ------------------- | ---------------------------- | ------------------------ |
| **Allow Specific Wallets** | Comma-separated normalized wallet addresses | false               | EMPTY or supplied            | EMPTY                    |
| **Allow Specific Chains**  | EMPTY or supplied                           | false               | arbundles sigtype int        | EMPTY                    |
| **Allow All**              | n/a                                         | true                | n/a                          | n/a                      |
| **Allow None**             | EMPTY                                       | false               | EMPTY                        | EMPTY                    |
| **Allow Payers**           | EMPTY or supplied                           | false               | EMPTY or supplied            | Your payment service url |

### Set Up Indexing

Bundlers submit data to the Arweave network as an [ANS-104 data item bundle](https://github.com/ArweaveTeam/arweave-standards/blob/master/ans/ANS-104.md). This means it is several transactions wrapped into one. A gateway will need to unbundle these transactions in order to index them. A gateway should include the following ANS-104 filters in order to unbundle and index transactions from a particular bundler:

```bash
ANS104_INDEX_FILTER={ "always": true }
ANS104_UNBUNDLE_FILTER={ "attributes": { "owner_address": "$BUNDLER_ARWEAVE_ADDRESS" } }
```

`$BUNDLER_ARWEAVE_ADDRESS` should be replaced with the [normalized public wallet address](/glossary) associated with the bundler.

**NOTE**: The above filters must be placed in the `.env` file for the core gateway service, not the bundler.

Gateways handle data item indexing asynchronously. This means they establish a queue of items to index, and work on processing the queue in the background while the gateway continues with its normal operations. If a gateway has broad indexing filters, there can be some latency in indexing data items from the bundler while the gateway works through its queue.
### Configure Optimistic Indexing

Gateway operators control access to their [optimistic data item indexing](/glossary) API via an admin key that must be supplied by all bundling clients in order for their requests to be accepted. This key should be made available in the environment configuration files for BOTH the core gateway and the bundler, and should be provided as `AR_IO_ADMIN_KEY`:

```bash
AR_IO_ADMIN_KEY="Admin password"
```

**NOTE**: If a gateway is started without providing the admin key, a random string will be generated to protect the gateway's admin endpoints. This can be reset by restarting the gateway with the admin key provided in the `.env` file.

## Starting and Stopping the Bundler

### Starting

The bundler service is designed to run in conjunction with an AR.IO gateway, and so relies on the `ar-io-network` network created in Docker when the core gateway services are spun up. It is possible to spin up the bundler while the core services are down, but the network must exist in Docker.

To start the bundler, specify the env and docker-compose files being used in a `docker compose up` command:

```bash
docker compose --env-file ./.env.bundler --file docker-compose.bundler.yaml up -d
```

The `-d` flag runs the command in "detached" mode, so it will run in the background without requiring the terminal to remain active.

### Stopping

To spin the bundler service down, specify the docker-compose file in a `docker compose down` command:

```bash
docker compose --file docker-compose.bundler.yaml down
```

### Logs

While the bundler service is running in detached mode, logs can be checked by specifying the docker-compose file in a `docker compose logs` command:

```bash
docker compose --file docker-compose.bundler.yaml logs -f --tail=0
```

- `-f` runs the command in "follow" mode, so the terminal will continue to watch and display new logs.
- `--tail=` defines the number of logs to display that existed prior to running the command. `0` displays only new logs.
## Useful Docker Commands

Monitor and manage your bundler service with these commands:

```bash
# View all running services
docker ps

# Start bundler service in background
docker compose --env-file ./.env.bundler --file docker-compose.bundler.yaml up -d

# Stop bundler service
docker compose --file docker-compose.bundler.yaml down

# Pull latest bundler images
docker compose --file docker-compose.bundler.yaml pull

# Follow bundler logs
docker compose --file docker-compose.bundler.yaml logs -f --tail=10

# Check bundler service status
docker compose --file docker-compose.bundler.yaml ps

# Restart bundler service
docker compose --file docker-compose.bundler.yaml restart
```

## Next Steps

Now that you have a bundler set up to accept data uploads, continue building your gateway infrastructure:

- [Set Up Monitoring](/build/extensions/grafana) - Deploy Grafana to visualize your gateway's performance metrics
- [Add ClickHouse](/build/extensions/clickhouse) - Improve query performance with ClickHouse and Parquet integration
- [Run Compute Unit](/build/extensions/compute-unit) - Execute AO processes locally for maximum efficiency
- [Buy an ArNS Name](/learn/arns/name-registration) - Get a human-readable name for your gateway and start serving the permanent web

# ClickHouse & Parquet (/build/extensions/clickhouse)

## Overview

AR.IO gateway Release 33 introduces a new configuration option for using Parquet files and ClickHouse to improve the performance and scalability of your AR.IO gateway for large datasets. This guide will walk you through the process of setting up ClickHouse with your AR.IO gateway, and importing Parquet files to bootstrap your ClickHouse database.

## What is Parquet?

Apache Parquet is a columnar storage file format designed for efficient data storage and retrieval. Unlike row-based storage formats like SQLite, Parquet organizes data by column rather than by row, which provides several advantages for analytical workloads:

- **Efficient compression**: Similar data is stored together, leading to better compression ratios
- **Columnar access**: You can read only the columns you need, reducing I/O operations
- **Predicate pushdown**: Filter operations can be pushed down to the storage layer, improving query performance

For more information about Parquet, see the [Parquet documentation](https://parquet.apache.org/docs/).

## Current Integration with AR.IO Gateways

In the current AR.IO gateway implementation, Parquet and ClickHouse run alongside SQLite rather than replacing it. This parallel architecture allows each database to handle what it does best:

- **SQLite** continues to handle transaction writes and updates
- **ClickHouse** with Parquet files is optimized for fast query performance, especially with large datasets

The gateway continues to operate with SQLite just as it always has, maintaining all of its normal functionality. Periodically, the gateway will export bundled transaction data from SQLite into Parquet files, which are then imported into ClickHouse.

Note that despite Parquet's efficient compression, gateways may not see significant disk space reduction in all cases. While bundled transaction data is exported to Parquet, L1 data remains in SQLite. Without substantial unbundling and indexing filters, minimal data gets exported to Parquet, limiting potential storage savings.

With ClickHouse integration enabled, GraphQL queries are primarily routed to ClickHouse, leveraging its superior performance for large datasets.
This significantly improves response times while maintaining SQLite's reliability for transaction processing.

For more information about gateway architecture and data processing, see our [Gateway Architecture](/learn/gateways/architecture) documentation.

## Parquet vs. SQLite in AR.IO Gateways

While SQLite is excellent for transactional workloads and small to medium datasets, it faces challenges with very large datasets:

| Feature                  | SQLite                        | Parquet + ClickHouse             |
| ------------------------ | ----------------------------- | -------------------------------- |
| Storage model            | Row-based                     | Column-based                     |
| Query optimization       | Basic                         | Advanced analytical optimization |
| Compression              | Limited                       | High compression ratios          |
| Scaling                  | Limited by single file        | Distributed processing capable   |
| Write speed              | Fast for small transactions   | Optimized for batch operations   |
| Read speed for analytics | Slower for large datasets     | Optimized for analytical queries |
| Ideal use case           | Recent transaction data, OLTP | Historical data, OLAP workloads  |

## Benefits for Gateway Operators

Implementing Parquet and ClickHouse alongside SQLite in your AR.IO gateway offers several key advantages:

- **Dramatically improved query performance** for GraphQL endpoints, especially for large result sets
- **Reduced storage requirements** through efficient columnar compression
- **Better scalability** for growing datasets
- **Faster bootstrapping** of new gateways through Parquet file imports
- **Reduced load on SQLite** by offloading query operations to ClickHouse

The primary focus of the Parquet/ClickHouse integration is the significant speed improvement for querying large datasets. Gateway operators managing significant volumes of data will notice substantial performance gains when using this configuration.

## Storage Considerations

While Parquet files offer more efficient compression for the data they contain, it's important to understand the storage impact:

- Bundled transaction data is exported to Parquet and removed from SQLite, potentially saving space
- L1 data remains in SQLite regardless of Parquet configuration
- Space savings are highly dependent on your unbundling filters - without substantial unbundling configurations, minimal data gets exported to Parquet
- The more data you unbundle and index, the more data is exported to Parquet, and the greater the potential savings

For gateway operators, this means proper filter configuration is crucial to realize storage benefits. The primary advantage remains significantly improved query performance for large datasets, with potential space savings as a secondary benefit depending on your specific configuration.

The following sections will guide you through setting up ClickHouse with your AR.IO gateway, exporting data from SQLite to Parquet, and importing Parquet files to bootstrap your ClickHouse database.

The below instructions are designed to be used in a Linux environment. Windows and macOS users must modify the instructions to use the appropriate package manager/command syntax for their platform. Unless otherwise specified, all commands should be run from the root directory of the gateway.

## Installing ClickHouse

ClickHouse is a powerful, open-source analytical database that excels at handling large datasets and complex queries. It is the tool used by the gateway to integrate with the Parquet format.

For more information about ClickHouse, see the [ClickHouse documentation](https://clickhouse.com/docs/).
### Add ClickHouse Repository

It is recommended to use the [official pre-compiled deb packages for Debian or Ubuntu](https://clickhouse.com/docs/install#quick-install). Run these commands to set up the repository:

```bash
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
curl -fsSL 'https://packages.clickhouse.com/rpm/lts/repodata/repomd.xml.key' | sudo gpg --dearmor -o /usr/share/keyrings/clickhouse-keyring.gpg

ARCH=$(dpkg --print-architecture)
echo "deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg arch=${ARCH}] https://packages.clickhouse.com/deb stable main" | sudo tee /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
```

This verifies the installation packages against official sources and enables installation via `apt-get`.

### Install ClickHouse

```bash
sudo apt-get install -y clickhouse-server clickhouse-client
```

This performs the actual installation of the ClickHouse server and client. During installation, you will be prompted to set a password for the `default` user. This is required to connect to the ClickHouse server.

Advanced users may also choose to create a designated user account in ClickHouse for the gateway to use, but the default gateway configuration assumes the `default` user.

## Configure Gateway to use ClickHouse

### Set Basic ClickHouse Configuration

Because the gateway will be accessing ClickHouse, the host address and the password for the selected user must be provided. These are set via the `CLICKHOUSE_URL` and `CLICKHOUSE_PASSWORD` environment variables. Update your .env file with the following:

```bash
CLICKHOUSE_URL="http://clickhouse:8123"
CLICKHOUSE_PASSWORD=
```

If you set a specific user account for the gateway to use, you can set the `CLICKHOUSE_USER` environment variable to that username:

```bash
CLICKHOUSE_USER=
```

If omitted, the gateway will use the `default` user.

### Configure Unbundling Filters

The Parquet file provided below contains an unbundled data set that includes all data items uploaded via an ArDrive product, including Turbo. Because of this, it is recommended to include unbundling filters that match, or expand on, this configuration.

```bash
ANS104_UNBUNDLE_FILTER='{ "and": [ { "not": { "or": [ { "tags": [ { "name": "Bundler-App-Name", "value": "Warp" } ] }, { "tags": [ { "name": "Bundler-App-Name", "value": "Redstone" } ] }, { "tags": [ { "name": "Bundler-App-Name", "value": "KYVE" } ] }, { "tags": [ { "name": "Bundler-App-Name", "value": "AO" } ] }, { "attributes": { "owner_address": "-OXcT1sVRSA5eGwt2k6Yuz8-3e3g9WJi5uSE99CWqsBs" } }, { "attributes": { "owner_address": "ZE0N-8P9gXkhtK-07PQu9d8me5tGDxa_i4Mee5RzVYg" } }, { "attributes": { "owner_address": "6DTqSgzXVErOuLhaP0fmAjqF4yzXkvth58asTxP3pNw" } } ] } }, { "tags": [ { "name": "App-Name", "valueStartsWith": "ArDrive" } ] } ] }'
ANS104_INDEX_FILTER='{ "tags": [ { "name": "App-Name", "value": "ArDrive-App" } ] }'
```

### Set Admin API Key

Lastly, you must have a gateway admin password set. This is used for the periodic export of data from SQLite to Parquet.

```bash
ADMIN_API_KEY=
```

Once the .env file is updated, restart the gateway to apply the changes.

## Downloading and Importing the Parquet File

### Download the Parquet File

A Parquet archive file is available for download from [ar://JVmsuD2EmFkhitzWN71oi9woADE4WUfvrbBYgremCBM](https://arweave.net/JVmsuD2EmFkhitzWN71oi9woADE4WUfvrbBYgremCBM). This file contains an unbundled data set that includes all data items uploaded via an ArDrive product, current to April 23, 2025, and is compressed using tar.gz.
To download the file, run the following command:

```bash
curl -L https://arweave.net/JVmsuD2EmFkhitzWN71oi9woADE4WUfvrbBYgremCBM -o 2025-04-23-ardrive-ans104-parquet.tar.gz
```

or visit the URL [https://arweave.net/JVmsuD2EmFkhitzWN71oi9woADE4WUfvrbBYgremCBM](https://arweave.net/JVmsuD2EmFkhitzWN71oi9woADE4WUfvrbBYgremCBM) and download the file manually.

If downloaded manually, it will arrive as a binary file named `JVmsuD2EmFkhitzWN71oi9woADE4WUfvrbBYgremCBM`. This is normal; convert it to a tar.gz file by renaming it to `2025-04-23-ardrive-ans104-parquet.tar.gz`, and place it in the root directory of the gateway.

The downloaded file will be approximately 3.5GB in size.

### Extract the Parquet Files

With the archive downloaded and placed in the root directory of the gateway, you can extract it and import the contents into ClickHouse.

```bash
tar -xzf 2025-04-23-ardrive-ans104-parquet.tar.gz
```

This extracts the archive into a directory named `2025-04-23-ardrive-ans104-parquet` and may take a while to complete.

### Prepare the Data Directory

Next, if you do not already have a `data/parquet` directory, you must create it. Release 33 does not include this directory by default, but future releases will. You can create the directory with the following command:

```bash
mkdir -p data/parquet
```

or by starting the gateway ClickHouse container with the following command:

```bash
docker compose --profile clickhouse up clickhouse -d
```

Depending on your system configuration, allowing the gateway to create the directory may result in it being created with incorrect permissions. If this is the case, you can remove the restrictions by running the following command:

```bash
sudo chmod -R 777 data/parquet
```

With the directory created, you can now move the extracted parquet files into it:

```bash
mv 2025-04-23-ardrive-ans104-parquet/* data/parquet
```

### Import Data into ClickHouse

When this is complete, you can run the import script to import the parquet files into ClickHouse. If you haven't done so already, start the ClickHouse container with the following command:

```bash
docker compose --profile clickhouse up clickhouse -d
```

Then run the import script with the following command:

```bash
./scripts/clickhouse-import
```

This process will take several minutes and will output the progress of the import.

## Verifying Successful Import

### Verify ClickHouse Import

To verify that the import was successful, run the following command:

```bash
clickhouse client --password <PASSWORD> -h localhost -q 'SELECT COUNT(DISTINCT id) FROM transactions'
```

Be sure to replace `<PASSWORD>` with the password you set for the selected ClickHouse user. This should return the number of unique transactions in the parquet file, which is `32712311`.
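You can run the same sanity check over ClickHouse's HTTP interface, which is the interface the gateway itself uses via `CLICKHOUSE_URL`. This is a minimal sketch assuming the ClickHouse container publishes port 8123 on localhost and you are using the `default` user:

```bash
# Same verification query, sent over ClickHouse's HTTP interface (port 8123).
# Assumes the container publishes 8123 on localhost; replace <PASSWORD> as above.
echo 'SELECT COUNT(DISTINCT id) FROM transactions' | \
  curl 'http://localhost:8123/' --user "default:<PASSWORD>" --data-binary @-
```

If this returns the same count, the gateway should be able to reach ClickHouse with the credentials in your .env file.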
### Test GraphQL Endpoint You can also verify that the data is being served by the gateway's GraphQL endpoint by ensuring the gateway is not proxying its GraphQL queries (Make sure `GRAPHQL_HOST` is not set) and running the following command: ```bash curl -g -X POST \ -H "Content-Type: application/json" \ -d '{"query":"query { transactions(ids: [\"YSNwoYB01EFIzbs6HmkGUjjxHW3xuqh-rckYhi0av4A\"]) { edges { node { block { height } bundledIn { id } } } } }"}' \ http://localhost:3000/graphql # Expected output: # {"data":{"transactions":{"edges":[{"node":{"block":{"height":1461918},"bundledIn":{"id":"ylhb0PqDtG5HwBg00_RYztUl0x2RuKvbNzT6YiNR2JA"}}}]}}} ``` ## Starting and Stopping the Gateway with ClickHouse The gateway ClickHouse container is run as a "profile" in the main docker compose file. That means you must specify the profile when starting or stopping the gateway if you want to include the ClickHouse container in the commands. ### Start Gateway with ClickHouse To start the gateway with the ClickHouse profile, run the following command: ```bash docker compose --profile clickhouse up -d ``` This will start all of the containers normally covered by the `docker compose up` command, but will also start the ClickHouse container. ### Stop Gateway with ClickHouse To stop the gateway with the ClickHouse profile, run the following command: ```bash docker compose --profile clickhouse down ``` This will stop all of the containers normally covered by the `docker compose down` command, but will also stop the ClickHouse container. ### Manage ClickHouse Container Only To start or stop only the ClickHouse container, you can use the following commands: ```bash docker compose --profile clickhouse up clickhouse -d ``` and ```bash docker compose --profile clickhouse down clickhouse ``` ## Useful Docker Commands Monitor and manage your gateway with ClickHouse using these commands: ```bash # View all running services docker ps # Start gateway with ClickHouse profile docker compose --profile clickhouse up -d # Stop gateway with ClickHouse profile docker compose --profile clickhouse down # Pull latest images docker compose --profile clickhouse pull # Start only ClickHouse container docker compose --profile clickhouse up clickhouse -d # Stop only ClickHouse container docker compose --profile clickhouse down clickhouse # Follow gateway logs docker compose logs core -f -n 10 # Follow ClickHouse logs docker compose --profile clickhouse logs clickhouse -f -n 10 # Check ClickHouse container status docker compose --profile clickhouse ps clickhouse # Restart ClickHouse container docker compose --profile clickhouse restart clickhouse ``` ## Next Steps Now that you have ClickHouse set up for improved query performance, continue building your gateway infrastructure: } title="Set Up Monitoring" description="Deploy Grafana to visualize your gateway's performance metrics" href="/build/extensions/grafana" /> } title="Deploy Bundler" description="Accept data uploads directly through your gateway" href="/build/extensions/bundler" /> } title="Run Compute Unit" description="Execute AO processes locally for maximum efficiency" href="/build/extensions/compute-unit" /> } title="Join the Network" description="Register your gateway and start serving the permanent web" href="/build/run-a-gateway/join-the-network" /> # AO Compute Unit (CU) (/build/extensions/compute-unit) ## Overview An AO Compute Unit (CU) is a critical component in the AO ecosystem responsible for executing AO processes and maintaining their state. 
CUs serve as the computational backbone of the AO network by: - **Processing Messages**: CUs receive and process messages sent to AO processes - **Executing WASM Modules**: CUs run the WebAssembly (WASM) code that defines process behavior - **Maintaining State**: CUs track and update the state of AO processes - **Creating Checkpoints**: CUs periodically save process state to the Arweave network as checkpoints Running a CU alongside your gateway allows you to: 1. Process AO requests locally rather than relying on external services 2. Improve response times for AO-related queries 3. Contribute computational resources to the AO network 4. Ensure your gateway has reliable access to AO functionality For more detailed information about Compute Units, please refer to the [AO Cookbook: Units](https://cookbook_ao.arweave.net/concepts/units.html#summary). ## System Requirements Before deploying a CU, ensure your system meets the following requirements: - **Recommended**: At least 16GB RAM for optimal CU operation - **Minimum**: 4GB RAM is possible with adjusted memory limits (see resource allocation settings) - At least 100GB disk space dedicated to CU operation - These requirements are separate from your gateway requirements Running a CU is resource-intensive. Make sure your system has sufficient resources to handle both the gateway and the CU. While you can run a CU with less than the recommended RAM, you'll need to adjust the memory limits accordingly. ## Deploying an AO CU ### Navigate to Gateway Directory First, navigate to the root directory of your gateway: ```bash cd /path/to/your/gateway ``` ### Configure Environment Variables Copy the example environment file: ```bash cp .env.ao.example .env.ao ``` ### Default .env.ao.example Contents The default `.env.ao.example` file contains the following settings: ``` CU_WALLET='[wallet json here]' PROCESS_CHECKPOINT_TRUSTED_OWNERS=fcoN_xJeisVsPXA-trzVAuIiqO3ydLQxM-L4XbrQKzY GATEWAY_URL=http://envoy:3000 UPLOADER_URL=http://envoy:3000/bundler ``` These default settings are configured to work with a gateway running on the same machine, but you'll need to modify them as described below. Open the `.env.ao` file in your preferred text editor: ```bash nano .env.ao ``` Configure the following settings: 1. **CU_WALLET**: Replace `'[wallet json here]'` with the JSON from an Arweave wallet. The entire JSON must be placed on a single line for proper registration. 2. **PROCESS_CHECKPOINT_TRUSTED_OWNERS**: This is a comma-separated list of trusted wallet addresses: ``` PROCESS_CHECKPOINT_TRUSTED_OWNERS=fcoN_xJeisVsPXA-trzVAuIiqO3ydLQxM-L4XbrQKzY ``` If you are uploading your own checkpoints, you should add your own CU wallet address after the default value, separated by a comma: ``` PROCESS_CHECKPOINT_TRUSTED_OWNERS=fcoN_xJeisVsPXA-trzVAuIiqO3ydLQxM-L4XbrQKzY,YOUR_WALLET_ADDRESS_HERE ``` This allows your CU to trust checkpoints from both the official source and your own wallet. 3. **GATEWAY_URL**: By default, this is set to use your own gateway: ``` GATEWAY_URL=http://envoy:3000 ``` A gateway must be set to index all ANS-104 data items from AO or the CU will not operate properly. Most users will want to set this to: ``` GATEWAY_URL=https://arweave.net ``` 4. **UPLOADER_URL**: By default, this is set to use a bundler sidecar run by your gateway: ``` UPLOADER_URL=http://envoy:3000/bundler ``` Checkpoints are uploaded to Arweave, so the upload must be paid for. 
You must ensure your wallet has sufficient funds: - If using `https://up.arweave.net` (recommended), your CU_WALLET must contain Turbo Credits - If using your own bundler or another service, you'll need the appropriate token (AR or other) - Without proper funding, checkpoints will fail to upload and your CU may not function correctly The simplest option for most users is to use: ``` UPLOADER_URL=https://up.arweave.net ``` This requires your CU_WALLET to contain Turbo Credits. 5. **Optional: Disable Checkpoint Creation**: If you want to disable checkpoint uploads, add: ``` DISABLE_PROCESS_CHECKPOINT_CREATION=true ``` ### Example of a Completed .env.ao File Here's an example of what your completed `.env.ao` file might look like with common settings: ``` CU_WALLET='{"kty":"RSA","e":"AQAB","n":"mYM07..."}' PROCESS_CHECKPOINT_TRUSTED_OWNERS=fcoN_xJeisVsPXA-trzVAuIiqO3ydLQxM-L4XbrQKzY GATEWAY_URL=https://arweave.net UPLOADER_URL=https://up.arweave.net ``` After making your changes, save and exit the nano editor: 1. Press `Ctrl+X` to exit 2. Press `Y` to confirm saving changes 3. Press `Enter` to confirm the filename ### Optional Resource Allocation Settings You can fine-tune the CU's resource usage by adding these optional environment variables: 1. **PROCESS_WASM_MEMORY_MAX_LIMIT**: Sets the maximum memory limit (in bytes) for WASM processes. ``` PROCESS_WASM_MEMORY_MAX_LIMIT=17179869184 # 16GB (16 * 1024^3) ``` To work with the AR.IO process, `PROCESS_WASM_MEMORY_MAX_LIMIT` must be at least `17179869184` (16GB). Note: This doesn't mean your server needs 16GB of RAM. This is the maximum memory limit the CU will support for processes. Most processes don't use their maximum allocated memory. You can set this value to 16GB even if your server only has 4GB of RAM. However, if a process requires more memory than your server has available, the CU will fail when evaluating messages that need more memory. 2. **WASM_EVALUATION_MAX_WORKERS**: Sets the maximum number of worker threads for WASM evaluation. ``` WASM_EVALUATION_MAX_WORKERS=4 # Example: Use 4 worker threads ``` This will default to (available CPUs - 1) if not specified. If you're running a gateway and unbundling on the same server, consider setting this to 2 or less to avoid overloading your CPU. 3. **PROCESS_WASM_COMPUTE_MAX_LIMIT**: The maximum Compute-Limit, in bytes, supported for ao processes (defaults to 9 billion) ``` PROCESS_WASM_COMPUTE_MAX_LIMIT=9000000000 ``` 4. **NODE_OPTIONS**: Sets Node.js memory allocation for the Docker container. ``` NODE_OPTIONS=--max-old-space-size=8192 # Example: 8GB for Node.js heap ``` Start with conservative values and monitor performance. You can adjust these settings based on your system's capabilities and the CU's performance. ### Start the CU Container Once your environment file is configured, start the CU container: ```bash docker compose --env-file .env.ao -f docker-compose.ao.yaml up -d ``` This command uses the following flags: - `--env-file .env.ao`: Specifies the environment file to use - `-f docker-compose.ao.yaml`: Specifies the Docker Compose file to use - `up`: Creates and starts the containers - `-d`: Runs containers in detached mode (background) ### Check the Logs To check the logs of your CU container: ```bash docker compose -f docker-compose.ao.yaml logs -f --tail=20 ``` This command uses the following flags: - `-f`: Follows the log output (continuous display) - `--tail=20`: Shows only the last 20 lines of logs Exit the logs by pressing `Ctrl+C`. 
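Before pointing your gateway at the CU, it can be worth confirming the CU is actually listening. A minimal connectivity check, assuming `docker-compose.ao.yaml` publishes the CU's port 6363 on the host (adjust the host and port if your setup differs):

```bash
# Report the HTTP status code returned by the CU's port.
# Any HTTP response (even an error code) confirms the service is listening;
# 000 means nothing is listening on that port.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:6363
```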
## Connecting Your Gateway to the CU

To make your gateway use your local CU:

1. Add the following line to your gateway's `.env` file:

```
AO_CU_URL=http://ao-cu:6363
```

This assumes the CU is running on the same machine as the gateway.

2. Restart your gateway:

```bash
docker compose down
docker compose up -d
```

A CU won't do anything until requests are being made of it. By connecting your gateway to the CU, you'll start generating these requests.

### Accessing Your CU

Once properly set up and connected to your gateway, you can access your CU via:

```
https://<your-domain>/ao/cu
```

replacing `<your-domain>` with your gateway's domain. This endpoint allows you to interact with your CU directly through your gateway's domain.

## Important Notes

- **Initial Processing Time**: A CU will need to process AO history before it can give valid responses. This process can take several hours.
- **Gateway Fallback**: A gateway on release 27 or above will fall back to arweave.net if its default CU is not responding quickly enough, so gateway operations will not be significantly impacted during the initial processing.
- **Monitoring Progress**: Check the CU logs after pointing a gateway at it to watch the process of working through AO history:

```bash
docker compose -f docker-compose.ao.yaml logs -f --tail=20
```

- **Resource Usage**: Running a CU is resource-intensive. Monitor your system's performance to ensure it can handle both the gateway and CU workloads.

## Useful Docker Commands

Monitor and manage your AO Compute Unit with these commands:

```bash
# View all running services
docker ps

# Start CU container with environment file
docker compose --env-file .env.ao -f docker-compose.ao.yaml up -d

# Stop CU container
docker compose -f docker-compose.ao.yaml down

# Pull latest CU images
docker compose -f docker-compose.ao.yaml pull

# Follow CU logs
docker compose -f docker-compose.ao.yaml logs -f --tail=20

# Check CU container status
docker compose -f docker-compose.ao.yaml ps

# Restart CU container
docker compose -f docker-compose.ao.yaml restart

# View CU logs without following
docker compose -f docker-compose.ao.yaml logs --tail=50

# Start CU in foreground (for debugging)
docker compose --env-file .env.ao -f docker-compose.ao.yaml up
```

## Next Steps

Now that you have a Compute Unit running alongside your gateway, continue building your infrastructure:

} title="Set Up Monitoring" description="Deploy Grafana to visualize your gateway's performance metrics" href="/build/extensions/grafana" /> } title="Add ClickHouse" description="Improve query performance with ClickHouse and Parquet integration" href="/build/extensions/clickhouse" /> } title="Deploy Bundler" description="Accept data uploads directly through your gateway" href="/build/extensions/bundler" /> } title="Join the Network" description="Register your gateway and start serving the permanent web" href="/build/run-a-gateway/join-the-network" />

# Grafana (/build/extensions/grafana)

## Overview

AR.IO gateways track extensive performance and operational metrics using [Prometheus](https://prometheus.io/). A [Grafana](https://grafana.com/) sidecar can be deployed to visualize these metrics, providing an easy way to monitor gateway health and performance.

The Grafana sidecar is deployed as a separate Docker container that uses the same network as the gateway, making it simple to integrate with your existing setup.
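If you want to see the raw numbers before wiring up dashboards, the gateway exposes its Prometheus metrics over HTTP. A quick peek, assuming your gateway is reachable on `localhost:3000` and serves metrics at the `/ar-io/__gateway_metrics` path used by recent gateway releases:

```bash
# Preview the first few raw Prometheus metrics that Grafana will chart.
# Assumes the gateway is listening on localhost:3000.
curl -s http://localhost:3000/ar-io/__gateway_metrics | head -n 20
```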
![Grafana Dashboard](/grafana.png)

## Quick Start

### Deploy Grafana

Deploy the Grafana sidecar using the provided Docker Compose file:

```bash
docker compose -f docker-compose.grafana.yaml up -d
```

This command assumes you're running from the root directory of the gateway. If running from a different directory, adjust the path to the docker-compose file accordingly.

### Verify Deployment

Check that Grafana is running properly:

```bash
docker compose -f docker-compose.grafana.yaml logs -f --tail=25
```

Press `Ctrl+C` to exit the logs. Look for any error messages or permission issues.

### Access Grafana

Navigate to `http://localhost:1024` in your browser to access Grafana.

**Default credentials:**

- Username: `admin`
- Password: `admin`

Credential changes may be lost when the Grafana sidecar is restarted. Log into Grafana and update the password immediately after every startup so that it cannot be accessed with the default credentials.

## Exposing Dashboard Publicly

To expose your Grafana dashboard externally through your domain, you'll need to configure nginx as a reverse proxy. This requires DNS setup and SSL certificates as covered in the [gateway installation guide](/build/run-a-gateway/quick-start).

This setup assumes you've already configured DNS, SSL certificates, and nginx as described in the [Installation & Setup guide](/build/run-a-gateway/quick-start).

### Deploy Grafana Sidecar

First, ensure your Grafana container is running:

```bash
docker compose -f docker-compose.grafana.yaml up -d
```

Verify it's accessible locally at `http://localhost:1024`.

### Update Nginx Configuration

Edit your existing nginx configuration file (`/etc/nginx/sites-available/default`) to add the Grafana location block (replace `<your-domain>` with your gateway's domain throughout):

```nginx
# Add this block inside your existing HTTPS server block (port 443)
location /grafana/ {
    proxy_pass http://localhost:1024/grafana/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

Your complete nginx configuration should look like this:

```nginx
# Force redirects from HTTP to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name <your-domain>.com *.<your-domain>.com;

    location / {
        return 301 https://$host$request_uri;
    }
}

# Forward traffic to your node and provide SSL certificates
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name <your-domain>.com *.<your-domain>.com;

    ssl_certificate /etc/letsencrypt/live/<your-domain>.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<your-domain>.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;

        # Forward AR.IO headers if present in the request
        proxy_set_header X-AR-IO-Origin $http_x_ar_io_origin;
        proxy_set_header X-AR-IO-Origin-Node-Release $http_x_ar_io_origin_node_release;
        proxy_set_header X-AR-IO-Hops $http_x_ar_io_hops;
    }

    # Grafana dashboard access
    location /grafana/ {
        proxy_pass http://localhost:1024/grafana/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

### Test and Reload Nginx

Validate your nginx configuration:

```bash
sudo nginx -t
```

If the configuration is valid, reload nginx:

```bash
sudo systemctl reload nginx
```

### Access Your Dashboard

Navigate to `https://<your-domain>.com/grafana/` in your browser to access your Grafana dashboard externally.
**Default credentials:**

- Username: `admin`
- Password: `admin`

## Troubleshooting

### Fix Permission Issues

### Method 1: Modify Directory Permissions

The simplest solution is to modify the permissions of the Grafana data directory:

```bash
sudo chmod -R 777 ./data/grafana
```

This command assumes you're running from the root directory of the gateway. Adjust the path if running from a different directory.

### Method 2: Change Grafana User

Alternatively, modify the `docker-compose.grafana.yaml` file to run Grafana as the root user:

```yaml
grafana:
  image: grafana/grafana:latest
  user: root
  ports:
    - "1024:3000"
```

### Verify Fix

Restart Grafana and check logs:

```bash
docker compose -f docker-compose.grafana.yaml restart
docker compose -f docker-compose.grafana.yaml logs -f
```

### Resolve Connection Problems

### Check Container Status

Verify Grafana is running:

```bash
docker compose -f docker-compose.grafana.yaml ps
```

### Check Port Availability

Ensure the port isn't already in use:

```bash
netstat -tulpn | grep :1024
# or
lsof -i :1024
```

### Review Logs

Check for specific error messages:

```bash
docker compose -f docker-compose.grafana.yaml logs --tail=50
```

### Fix Configuration Issues

### Validate Nginx Configuration

Test your Nginx configuration:

```bash
sudo nginx -t
```

### Check Proxy Settings

Ensure proxy headers are correctly configured:

```nginx
location /grafana/ {
    proxy_pass http://localhost:1024/grafana/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

### Restart Services

Restart both Nginx and Grafana:

```bash
sudo systemctl restart nginx
docker compose -f docker-compose.grafana.yaml restart
```

## Security Considerations

Always change the default admin credentials immediately after first login. Default credentials are publicly known and pose a security risk.

### Best Practices

1. **Change Default Password** - Use a strong, unique password
2. **Enable HTTPS** - Use SSL certificates for external access
3. **Restrict Access** - Use firewall rules to limit access
4. **Regular Updates** - Keep Grafana updated to latest version
5.
**Backup Configuration** - Export and backup dashboard configurations ## Documentation & Support - **Grafana Documentation** - [Official Grafana docs](https://grafana.com/docs/) - **Prometheus Metrics** - [Understanding gateway metrics](https://prometheus.io/docs/concepts/metric_types/) - **Community Support** - Join the [AR.IO Discord](https://discord.gg/cuCqBb5v) for help ## Useful Docker Commands Monitor and manage your Grafana sidecar with these commands: ```bash # View all running services docker ps # Start Grafana sidecar docker compose -f docker-compose.grafana.yaml up -d # Stop Grafana sidecar docker compose -f docker-compose.grafana.yaml down # Pull latest Grafana images docker compose -f docker-compose.grafana.yaml pull # Follow Grafana logs docker compose -f docker-compose.grafana.yaml logs -f --tail=25 # Check Grafana container status docker compose -f docker-compose.grafana.yaml ps # Restart Grafana sidecar docker compose -f docker-compose.grafana.yaml restart # View Grafana logs without following docker compose -f docker-compose.grafana.yaml logs --tail=50 # Start Grafana in foreground (for debugging) docker compose -f docker-compose.grafana.yaml up # Check port availability netstat -tulpn | grep :1024 ``` ## Next Steps Now that you have monitoring set up, continue building your gateway infrastructure: } title="Optimize Performance" description="Learn advanced gateway optimization techniques for better performance" href="/build/run-a-gateway/manage/filters" /> } title="Add ClickHouse" description="Improve query performance with ClickHouse and Parquet integration" href="/build/extensions/clickhouse" /> } title="Deploy Bundler" description="Accept data uploads directly through your gateway" href="/build/extensions/bundler" /> } title="Run Compute Unit" description="Execute AO processes locally for maximum efficiency" href="/build/extensions/compute-unit" /> # Extensions & Sidecars (/build/extensions) ## What are Extensions? Extensions are additional scripts and tools you can run alongside your gateway to expand its capabilities or enhance the operator experience. The full list of community extensions can be found at [gateways.ar.io/#/extensions](https://gateways.ar.io/#/extensions). ## What are Sidecars? Sidecars are dockerized services that add additional functionality, APIs, and services to AR.IO gateways. They run as separate containers alongside your gateway, providing specialized capabilities. ## Getting Started with Team-Supported Sidecars The following sidecars are developed and maintained by the AR.IO team, designed to run alongside your gateway as separate containers. }> Visualize gateway metrics with comprehensive dashboards and performance monitoring. } > Improve query performance for large datasets using columnar storage and analytical optimization. } > Accept and process ANS-104 data item uploads with multiple payment methods and access control. }> Execute AO processes locally with WASM module support and state management. **Ready to enhance your gateway?** Click any sidecar above to get started with detailed setup guides. 
## Explore More

} title="Monitor your gateway with Grafana" description="Set up comprehensive monitoring and analytics for your gateway infrastructure" href="/build/extensions/grafana" /> } title="Performance Optimization" description="Optimize your gateway for large datasets and high-performance queries" href="/build/run-a-gateway/manage/filters" /> } title="Gateway Operations" description="Learn advanced gateway management, troubleshooting, and configuration" href="/build/run-a-gateway/manage" /> } title="Developer SDKs" description="Integrate AR.IO services into your applications with our SDKs" href="/sdks" />

# ArNS Marketplace (/build/guides/arns-marketplace)

**ArNS tokens** have the potential to be traded and sold in decentralized marketplaces. ANTs (Arweave Name Tokens) are both smart contracts and transferable tokens, making them valuable digital assets that could be bought, sold, and traded. However, no established marketplace has yet emerged as the preferred platform for ArNS trading.

## Current State of ArNS Trading

**No established marketplace exists yet** - While ANTs are technically transferable tokens, there is currently no widely adopted marketplace specifically for ArNS trading.

**Direct transfers are possible** - You can transfer ANTs directly between wallets, but this requires technical knowledge and direct coordination between buyer and seller.

**Future potential** - As the ArNS ecosystem grows, dedicated marketplaces may emerge to facilitate easier trading of ArNS tokens and domains.

## What Are ANTs?

**Arweave Name Tokens (ANTs)** are:

- **Smart contracts** - Define the rules and functionality of your domain
- **Transferable tokens** - Can be bought, sold, and traded
- **Digital assets** - Represent ownership of ArNS domains
- **Permanent** - Stored on Arweave forever

## How Trading Could Work

### 1. Token Ownership

**When you own an ANT:**

- You control the domain name
- You can update where it points
- You can transfer ownership
- You could potentially sell it to others

### 2. Potential Marketplace Dynamics

**Possible trading mechanisms:**

- **Direct transfers** - Send tokens directly to another wallet
- **Marketplace platforms** - Use dedicated trading platforms (when available)
- **Auction systems** - Bid on available domains (when implemented)
- **Fixed price sales** - Set a price and wait for buyers (when supported)

### 3. Name Characteristics

**What makes ANTs desirable:**

- **Domain length** - Shorter names are more memorable
- **Memorability** - Easy-to-remember names are more useful
- **Brand potential** - Names that could become recognizable
- **Uniqueness** - Creative and distinctive names
- **Content attached** - Domains with established content

## Potential Trading Examples

### Popular Domain Types

**Short names:**

- `ar://ai` - Ultra-short domains
- `ar://web3` - Industry keywords
- `ar://nft` - Popular terms

**Brandable names:**

- `ar://crypto` - Industry terms
- `ar://decentralized` - Descriptive names
- `ar://permanent` - Arweave-related terms

### Potential Use Cases

**Personal branding** - Use memorable names for your identity
**Project organization** - Create names for different projects
**Content management** - Organize content under specific names
**Community building** - Create recognizable names for communities

## Getting Started

### 1.
Acquire ANTs **Ways to get ANTs:** - **Register new domains** - Create your own primary names - **Buy from others** - Purchase existing domains - **Participate in auctions** - Bid on available names - **Trade with others** - Exchange domains you own ### 2. Choose Names **Consider these factors:** - **Domain length** - Shorter names are more memorable - **Memorability** - Easy to remember and type - **Brand potential** - Could become recognizable - **Current content** - What's already attached to the domain - **Personal preference** - What fits your needs and style ### 3. Trade Safely **Best practices:** - **Verify ownership** - Confirm the seller owns the domain - **Check domain status** - Ensure it's not expired or locked - **Use escrow services** - Protect both buyer and seller - **Document transfers** - Keep records of all transactions ## Benefits - **Transferable ownership** - Move domains between wallets - **Creative expression** - Own and manage creative domain names - **Community participation** - Engage with the ArNS ecosystem - **Content organization** - Structure your permanent web presence - **Identity management** - Use names for personal or project identity ## Ready to Trade? } > Learn about ArNS Primary Names for domain creation. } > Check out hosting decentralized websites for content creation. } > Explore the ArNS documentation for advanced features. # Working With Primary Names (/build/guides/arns-primary-names) Create **web3 identity** using ArNS names. Primary names allow you to use human-readable names as your identity in the Arweave ecosystem, making it easy for others to find and interact with you. ## What Are Primary Names? **Primary names** are ArNS names used as identity that: - **Resolve to wallet addresses** - Link human-readable names to wallet addresses - **Provide web3 identity** - Give users friendly names for their Arweave identity - **Are bidirectional** - Can resolve from name to address or address to name - **Require ownership** - Only the owner of an ArNS name can set it as their primary name - **Enable secure verification** - Ownership requirement ensures identity authenticity - **Work across gateways** - Accessible from any AR.IO gateway ## How It Works ### 1. Identity Registration **Register a primary name:** - Choose a unique name (e.g., `jonniesparkles`) - Pay the registration fee - Link the name to your wallet address - Use as your web3 identity ### 2. Bidirectional Resolution **Name to address resolution:** - `jonniesparkles` → `OU48aJtcq3KjsEqSUWDVpynh1xP2Y1VI-bwiSukAktU` - Others can find your wallet using your name - Use in dApps and applications **Address to name resolution:** - `OU48aJtcq3KjsEqSUWDVpynh1xP2Y1VI-bwiSukAktU` → `jonniesparkles` - Find the name associated with any wallet - Verify identity in transactions ### 3. 
Application Integration

**Use in supported apps:**

- **Send tokens to "jonniesparkles"** instead of copying long wallet addresses
- **Display friendly names** as usernames when connecting wallets
- **Apps resolve names** to wallet addresses using the AR.IO SDK
- **Seamless user experience** with human-readable identifiers

## Basic Integration

### Using the AR.IO SDK

**Get a primary name by address:**

```javascript
import { ARIO } from "@ar.io/sdk";

const ario = ARIO.mainnet();

// Get the primary name for a wallet address
const nameData = await ario.getPrimaryName({
  address: "OU48aJtcq3KjsEqSUWDVpynh1xP2Y1VI-bwiSukAktU",
});

console.log(nameData.name); // e.g., "jonniesparkles"
```

**Get primary name data:**

```javascript
import { ARIO } from "@ar.io/sdk";

const ario = ARIO.mainnet();

// Get primary name data for a name
const nameData = await ario.getPrimaryName({
  name: "jonniesparkles",
});

console.log(nameData.owner); // e.g., "OU48aJtcq3KjsEqSUWDVpynh1xP2Y1VI-bwiSukAktU"
console.log(nameData.name); // e.g., "jonniesparkles"
```

## How Apps Use Primary Names

**Token transfers:**

- Send tokens to "jonniesparkles" instead of copying `OU48aJtcq3KjsEqSUWDVpynh1xP2Y1VI-bwiSukAktU`
- Apps automatically resolve the name to the wallet address
- Much more user-friendly than long wallet addresses

**User interfaces:**

- Display "jonniesparkles" as username when wallet is connected
- Show friendly names in transaction histories
- Make interactions more personal and memorable

**Developer integration:**

- Use the [AR.IO SDK](/sdks/ar-io-sdk/primary-names#getprimaryname) to resolve names
- Support primary names in your dApp
- Enhance user experience with human-readable identifiers
- **Trust identity ownership** - Only name owners can set primary names, ensuring secure verification

## Benefits

- **Web3 identity** - Use human-readable names as your identity
- **Easy discovery** - Others can find you by name instead of wallet address
- **Bidirectional resolution** - Resolve name to address or address to name
- **Secure verification** - Only name owners can set primary names, preventing impersonation
- **Permanent ownership** - Own your identity forever
- **App integration** - Works in any app that supports primary names

## Ready to Learn More?

} > Check out ArNS Marketplace for buying and selling. } > See hosting decentralized websites for website setup. } > Explore the ArNS documentation for advanced features.

# ArNS Undernames for Permasite Versioning (/build/guides/arns-undernames-versioning)

Use **ArNS undernames** to organize and version your permanent website components. Undernames allow you to create sub-domains under your main ArNS name, making it easy to manage different versions, pages, and assets.

## What Are Undernames?

**Undernames** are sub-domains under your main ArNS name that can point to different Arweave transactions. They provide a structured way to organize your permanent website content.
**Example structure:** - `yourname.arweave.dev` - Main site - `v1_yourname.arweave.dev` - Version 1 - `v2_yourname.arweave.dev` - Version 2 - `api_yourname.arweave.dev` - API endpoints - `docs_yourname.arweave.dev` - Documentation ## Real-World Example: ArDrive The ArDrive website uses undernames to organize different components and versions: ```json { "@": { "priority": 0, "ttlSeconds": 3600, "transactionId": "Vrf5_MrC1R-6rAk7o_E52DwOsKhyJmkSUqh0h5q4mDQ", "index": 0 }, "dapp": { "ttlSeconds": 3600, "transactionId": "1ubf6cW8T5dYN3COApn8Yii4bA0HKoGeid-z2IjelTo", "index": 1 }, "home": { "ttlSeconds": 900, "transactionId": "V9rQR06L1w9eLBHh2lY7o4uaDO6OqBI8j7TM_qjmNfE", "index": 2 }, "v1_home": { "ttlSeconds": 900, "transactionId": "YzD_Pm5VAfYpMD3zQCgMUcKKuleGhEH7axlrnrDCKBo", "index": 9 }, "v2_home": { "ttlSeconds": 900, "transactionId": "nOXJjj_vk0Dc1yCgdWD8kti_1iHruGzLQLNNBHVpN0Y", "index": 10 }, "v3_home": { "ttlSeconds": 900, "transactionId": "YvGRDf0h2F7LCaGPvdH19m5lqbag5DGRnw607ZJ1oUg", "index": 11 } } ``` **This structure provides:** - **`@`** - Main site (ardrive.arweave.dev) - **`dapp`** - Application interface (dapp_ardrive.arweave.dev) - **`home`** - Homepage (home_ardrive.arweave.dev) - **`v1_home`** - Version 1 homepage (v1_home_ardrive.arweave.dev) - **`v2_home`** - Version 2 homepage (v2_home_ardrive.arweave.dev) - **`v3_home`** - Version 3 homepage (v3_home_ardrive.arweave.dev) ## Use Cases for Undernames ### 1. Website Versioning **Maintain multiple versions:** - `v1_yourname.arweave.dev` - Previous version - `v2_yourname.arweave.dev` - Current version - `beta_yourname.arweave.dev` - Beta testing - `staging_yourname.arweave.dev` - Staging environment ### 2. Component Organization **Separate different parts:** - `api_yourname.arweave.dev` - API endpoints - `docs_yourname.arweave.dev` - Documentation - `assets_yourname.arweave.dev` - Static assets - `blog_yourname.arweave.dev` - Blog content ### 3. Content Management **Organize by content type:** - `home_yourname.arweave.dev` - Homepage - `about_yourname.arweave.dev` - About page - `contact_yourname.arweave.dev` - Contact page - `privacy_yourname.arweave.dev` - Privacy policy ## Benefits of Undername Versioning **Easy access to versions:** - Users can access any version directly via URL - No need to remember transaction IDs - Clear versioning structure **Permanent version history:** - All versions remain accessible forever - Historical record of your website evolution - Easy rollback to previous versions **Organized content:** - Logical structure for different components - Easy to manage and update - Clear separation of concerns **Transferable with ANT:** - Undernames transfer with the main ArNS name - Maintain ownership of all versions - Sell or transfer entire website structure ## How to Set Up Undernames **1. Register your main ArNS name:** - Choose your primary name (e.g., `myapp`) - Register through [ArNS App](https://arns.app) **2. Create undernames:** - Use the ANT interface to add undernames - Point each undername to different transaction IDs - Set appropriate TTL values **3. 
Deploy different versions:** - Upload each version to Arweave - Get transaction IDs for each version - Update undername records ## Example Implementation **Deploy version 1:** ```bash # Deploy to main site npx permaweb-deploy --arns-name myapp --deploy-folder ./v1-build # Deploy to v1 undername npx permaweb-deploy --arns-name myapp --undername v1 --deploy-folder ./v1-build ``` **Deploy version 2:** ```bash # Deploy to main site npx permaweb-deploy --arns-name myapp --deploy-folder ./v2-build # Deploy to v2 undername npx permaweb-deploy --arns-name myapp --undername v2 --deploy-folder ./v2-build ``` **Access different versions:** - `myapp.arweave.dev` - Current version - `v1_myapp.arweave.dev` - Version 1 - `v2_myapp.arweave.dev` - Version 2 ## Ready to Version Your Site? **Want to learn more?** Check out [ArNS Primary Names](/build/guides/arns-primary-names) for identity management. **Need deployment help?** See [Hosting Decentralized Websites](/build/guides/hosting-decentralized-websites) for setup guides. **Want to trade domains?** Explore [ArNS Marketplace](/build/guides/arns-marketplace) for buying and selling. # Crossmint NFT Minting App (/build/guides/crossmint-nft-minting-app) Build a **completely decentralized NFT minting app** that leverages the power of Arweave for permanent storage and Crossmint for simplified NFT creation. Learn how to store NFT content permanently, create and mint NFTs, build a frontend with authentication and payment options, and deploy your application to Arweave. ## What You'll Learn - How to store NFT content permanently on Arweave - How to create and mint NFTs using Crossmint's API - How to build a frontend with authentication and payment options - How to deploy your application to Arweave - How to configure a human-readable ArNS domain ## Example Project - **Live Demo**: [https://crossmint_zerotoarweave.arweave.net](https://crossmint_zerotoarweave.arweave.net) - **GitHub Repository**: [https://github.com/ar-io/crossmint-arweave-example](https://github.com/ar-io/crossmint-arweave-example) ## Prerequisites - Node.js environment - Arweave wallet with AR tokens - Crossmint developer account - Basic understanding of React and JavaScript ## Quick Start ### Storage Setup Store your NFT image permanently on Arweave using [ArDrive.io](http://ArDrive.io): #### Generate AI Image **Create an AI-generated image for your NFT:** 1. Visit [ChatGPT](https://chat.openai.com/) or another AI image generation tool 2. Use a prompt to generate an interesting image for your NFT 3. Download the generated image to your local machine 4. Make sure to save it in a common format like PNG or JPG #### Upload to Arweave **Store the image permanently on Arweave:** 1. Visit [ArDrive.io](http://ArDrive.io) and log in to your account 2. Fund your ArDrive wallet if needed (requires AR tokens) 3. Create a new folder for your NFT project 4. Drag and drop your AI-generated image into this folder 5. Wait for the upload to complete and for the transaction to be processed #### Get Transaction ID **Retrieve the Arweave Transaction ID:** 1. Click on the uploaded image in your ArDrive folder 2. Look for the "Transaction ID" or "TX ID" in the file details 3. Copy this Transaction ID - it looks like `Abc123XYZ...` 4. Save this Transaction ID - you'll need it for creating your NFT metadata **Important:** This Transaction ID is the permanent reference to your image on the Arweave network. 
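Before moving on, you can sanity-check that the image is being served. A minimal check, with `YOUR_TX_ID` standing in for the Transaction ID you just copied:

```bash
# Confirm the uploaded image resolves on a gateway (replace YOUR_TX_ID).
# A 200 response with an image content-type means the upload is live.
curl -I https://arweave.net/YOUR_TX_ID
```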
### Collection and Template Creation

Create an ERC-1155 collection and template using Crossmint's API:

#### Create Account

**Set up your Crossmint developer account:**

1. Visit the [Crossmint Staging Console](https://staging.crossmint.com/console)
2. Sign in and accept the dialog to continue
3. Note that Crossmint provides two environments:
   - **Staging**: For development and testing (what we'll use first)
   - **Production**: For your final, live application

#### Get API Key

**Get a server-side API key:**

1. After logging in, navigate to the "Integrate" tab
2. Click on "API Keys" at the top of the page
3. In the "Server-side keys" section, click "Create new key"
4. Select the following scopes under "Minting API":
   - `collections.create` - Required for creating a new collection
   - `nfts.create` - Required for minting NFTs
   - `nfts.read` - Needed to read NFT information
5. Create and save this API key securely

#### Create Collection

**Create an ERC-1155 collection:**

```javascript
const apiKey = "YOUR_API_KEY";
const env = "staging"; // Using staging environment for development

const url = `https://${env}.crossmint.com/api/2022-06-09/collections`;
const options = {
  method: "POST",
  headers: {
    "accept": "application/json",
    "content-type": "application/json",
    "x-api-key": apiKey,
  },
  body: JSON.stringify({
    chain: "ethereum-sepolia", // Using Ethereum testnet for development
    fungibility: "semi-fungible", // For ERC-1155 tokens
    metadata: {
      name: "lil dumdumz SFT Collection",
      imageUrl: "https://arweave.net/YOUR_ARWEAVE_TX_ID", // Optional collection image
      description: "A collection of semi-fungible tokens with images stored on Arweave"
    }
  }),
};

fetch(url, options)
  .then((res) => res.json())
  .then((json) => {
    console.log("Collection created! Collection ID:", json.id);
    console.log("Save this Collection ID for the next steps");
  })
  .catch((err) => console.error("Error:", err));
```

#### Create Template

**Create an SFT template:**

```javascript
const apiKey = "YOUR_API_KEY";
const collectionId = "YOUR_COLLECTION_ID";
const env = "staging";

const url = `https://${env}.crossmint.com/api/2022-06-09/collections/${collectionId}/templates`;
const options = {
  method: "POST",
  headers: {
    "accept": "application/json",
    "content-type": "application/json",
    "x-api-key": apiKey,
  },
  body: JSON.stringify({
    name: "lil dumdumz SFT",
    description: "A semi-fungible token with image stored on Arweave",
    imageUrl: "https://arweave.net/YOUR_ARWEAVE_TX_ID",
    attributes: [
      { trait_type: "Rarity", value: "Common" },
      { trait_type: "Storage", value: "Arweave" }
    ]
  }),
};

fetch(url, options)
  .then((res) => res.json())
  .then((json) => {
    console.log("Template created! Template ID:", json.id);
    console.log("Save this Template ID for minting NFTs");
  })
  .catch((err) => console.error("Error:", err));
```
### Frontend Development

Clone and set up the Zero-to-Arweave starter kit:

#### Clone Repository

**Clone the starter kit:**

```bash
git clone https://github.com/ar-io/crossmint-arweave-example.git
cd crossmint-arweave-example
```

#### Install Dependencies

**Install required packages:**

```bash
npm install
# or
yarn install
```

#### Configure Environment

**Set up your environment variables:**

Create a `.env` file in the root directory:

```
VITE_CROSSMINT_API_KEY=your_api_key_here
VITE_CROSSMINT_ENV=staging
VITE_COLLECTION_ID=your_collection_id_here
VITE_TEMPLATE_ID=your_template_id_here
```

### Authentication Integration

Implement Crossmint's client-side authentication (the JSX markup below is illustrative; adapt it to your app's components):

```javascript
function App() {
  const { user, login, logout, isLoading } = CrossmintAuth.useAuth();

  return (
    <div>
      {user ? (
        <>
          <p>Welcome, {user.email}!</p>
          <button onClick={logout}>Logout</button>
        </>
      ) : (
        <button onClick={login}>Login with Crossmint</button>
      )}
    </div>
  );
}
```

### Payment Integration

Add Crossmint's embedded checkout for NFT purchases (the component markup is a placeholder; see Crossmint's docs for the checkout component matching your SDK version):

```javascript
function NFTMinting() {
  const handlePaymentSuccess = (result) => {
    console.log("Payment successful:", result);
    // Handle successful payment
  };

  return (
    <div>
      {/* Render Crossmint's embedded checkout component here and
          wire its success callback to handlePaymentSuccess */}
    </div>
  );
}
```

### Deploy to Arweave

Deploy your completed application to Arweave:

#### Build Application

**Build your React application:**

```bash
npm run build
# or
yarn build
```

#### Deploy with ArDrive

**Deploy using ArDrive:**

1. Visit [ArDrive.io](http://ArDrive.io)
2. Create a new folder for your application
3. Upload the contents of your `dist` folder
4. Wait for the upload to complete

#### Get Manifest ID

**Retrieve the manifest ID:**

1. Click on your uploaded application folder
2. Look for the "Manifest ID" in the folder details
3. Copy this ID - you'll need it for domain configuration

### Domain Configuration

Connect your application to a human-readable domain name using ArNS:

#### Purchase ArNS Name

**Get an ArNS name (if needed):**

1. Visit [arns.app](https://arns.app/)
2. Connect your Arweave wallet
3. Search for an available name
4. Purchase it with $ARIO tokens

#### Get Process ID

**Get your Process ID:**

1. Visit [arns.app](https://arns.app/)
2. Connect your Arweave wallet
3. Click "Manage Assets" in the top-right
4. Find your ArNS name and click on the settings icon
5. Copy the Process ID displayed

#### Update Configuration

**Update the configuration:**

```javascript
import { ANT, ArweaveSigner } from "@ar.io/sdk";

const ant = ANT.init({
  signer: new ArweaveSigner(jwk), // jwk: your Arweave wallet JSON
  processId: 'YOUR_PROCESS_ID_HERE' // Replace with your Process ID
});

const result = await ant.setRecord({
  name: '@',
  ttlSeconds: 900, // 15 minutes
  dataLink: 'YOUR_MANIFEST_ID' // Replace with the manifest ID
});
```

#### Set Base Record

**Set the base record:**

```bash
# Using pnpm
pnpm run set-base

# Using yarn
yarn set-base
```

When successful, you'll see:

```
✅ Base record update successful!
🔗 Your application is now available at: https://YOUR-NAME.ar.io
```

## Advanced Features

### Custom NFT Metadata

```javascript
const customMetadata = {
  name: "Custom NFT Name",
  description: "A unique NFT with custom attributes",
  imageUrl: "https://arweave.net/YOUR_TX_ID",
  attributes: [
    { trait_type: "Rarity", value: "Legendary" },
    { trait_type: "Power", value: 95 },
    { trait_type: "Element", value: "Fire" }
  ]
};
```

### Batch Minting NFTs

```javascript
const batchMint = async (templateId, quantity) => {
  const promises = Array(quantity).fill().map(() =>
    mintNFT(templateId)
  );
  const results = await Promise.all(promises);
  return results;
};
```

### Track Sales and Engagement

```javascript
const trackMint = (nftId, userEmail) => {
  // Send analytics data
  analytics.track('nft_minted', {
    nftId,
    userEmail,
    timestamp: Date.now()
  });
};
```

### Comprehensive Error Handling

```javascript
const mintWithErrorHandling = async (templateId) => {
  try {
    const result = await mintNFT(templateId);
    return { success: true, data: result };
  } catch (error) {
    console.error('Minting failed:', error);
    return { success: false, error: error.message };
  }
};
```

## Benefits of This Approach

- **True Permanence**: NFT images are stored permanently on Arweave
- **Accessibility**: Credit card payments make NFTs accessible to mainstream users
- **Complete Decentralization**: Both application and assets are stored on decentralized networks
- **User-Friendly Experience**: Seamless experience for both creators and collectors
- **No Server Maintenance**: No need to manage servers or renew domains

## Ready to Build?

} arrow > Get started with the complete example project } > Learn more about Crossmint's APIs and features } > Understand Arweave's file system for advanced storage

# Storing DePIN Data on Arweave Using Turbo (/build/guides/depin)

DePIN networks require **scalable and cost-effective storage solutions** they can trust. With vast amounts of data generated by decentralized physical infrastructure networks, traditional on-chain storage is prohibitively expensive, yet networks need reliable, long-term access to their device data.

Arweave via the AR.IO Network provides **chain-agnostic, permanent and immutable storage** for a one-time fee, ensuring networks can access any device data previously stored and verify it has not been tampered with.

## Getting Started with DePIN Data Storage

### Prepare Your Data Structure

Organize your DePIN device data in a consistent format. Here's an example for environmental sensor data:

```json
{
  "device_id": "airmon-007",
  "timestamp": "2025-09-22T14:31:05Z",
  "location": { "lat": 51.5098, "lon": -0.118 },
  "pm25": 16,
  "co2_ppm": 412,
  "noise_dB": 41.2
}
```

**Best Practices:**

- Use consistent field names across all devices
- Include timestamps in ISO format
- Add device identifiers for tracking
- Consider data compression for large datasets

### Tag Your Data for Discovery

Proper tagging is essential for [finding your data](/build/access/find-data) later.
Consider these tags for DePIN data:

```json
[
  { "name": "App-Name", "value": "AirQuality-DePIN-v1.0" },
  { "name": "Device-ID", "value": "airmon-007" },
  { "name": "Device-Type", "value": "Environmental-Sensor" },
  { "name": "Network-Name", "value": "AirQuality-Network" },
  { "name": "Data-Category", "value": "Air-Quality" },
  { "name": "Location", "value": "London-UK" },
  { "name": "Device-Timestamp", "value": "2025-09-22T14:31:05Z" }
]
```

**Tagging Strategy:**

- Use consistent naming conventions
- Include geographic identifiers
- Add device type classifications
- Include data categories for filtering

For more detailed information on tagging, see our [Tagging documentation](/build/upload/tagging).

### Upload to Arweave

Select the best method for your DePIN network's needs:

```typescript
import { TurboFactory } from '@ardrive/turbo-sdk'

// Initialize with your wallet
const turbo = await TurboFactory.authenticated({
  privateKey: jwk, // Your Arweave wallet JWK
  token: 'arweave'
})

// Upload device data
const result = await turbo.upload({
  data: JSON.stringify(deviceData),
  dataItemOpts: {
    tags: [
      { name: "Content-Type", value: "application/json" },
      { name: "App-Name", value: "AirQuality-DePIN-v1.0" },
      { name: "Device-ID", value: "airmon-007" },
      { name: "Device-Type", value: "Environmental-Sensor" },
      { name: "Network-Name", value: "AirQuality-Network" },
      { name: "Data-Category", value: "Air-Quality" },
      { name: "Location", value: "London-UK" },
      { name: "Device-Timestamp", value: "2025-09-22T14:31:05Z" }
    ]
  }
})
```

```bash
# Install Turbo CLI
npm install -g @ardrive/turbo-sdk

# Upload a single file
turbo upload-file --file-path sensor-data.json \
  --tag "Content-Type:application/json" \
  --tag "App-Name:AirQuality-DePIN-v1.0" \
  --tag "Device-Type:Environmental-Sensor" \
  --tag "Network-Name:AirQuality-Network" \
  --tag "Data-Category:Air-Quality" \
  --tag "Location:London-UK" \
  --tag "Device-Timestamp:2025-09-22T14:31:05Z"

# Upload entire folder
turbo upload-folder --folder-path ./sensor-data \
  --tag "App-Name:AirQuality-DePIN-v1.0" \
  --tag "Network-Name:AirQuality-Network" \
  --tag "Data-Category:Air-Quality" \
  --index-file index.json
```

For more advanced uploading options, see our [Advanced Uploading with Turbo](/build/upload/advanced-uploading-with-turbo) guide, or the [Turbo SDK documentation](/sdks/turbo-sdk) directly.

## Querying Your DePIN Data

### Find Your Data

Use GraphQL to search for your DePIN data by tags and criteria:

```graphql
# Find all data for a specific device, most recent results first
query {
  transactions(
    tags: [
      { name: "App-Name", values: ["AirQuality-DePIN-v1.0"] }
      { name: "Device-ID", values: ["airmon-007"] }
    ]
    first: 100
    sort: HEIGHT_DESC
  ) {
    edges {
      node {
        id
        tags {
          name
          value
        }
        data {
          size
        }
      }
    }
  }
}
```

```graphql
# Find data by location
query {
  transactions(
    tags: [
      { name: "App-Name", values: ["AirQuality-DePIN-v1.0"] }
      { name: "Location", values: ["London-UK"] }
    ]
    first: 50
  ) {
    edges {
      node {
        id
        tags {
          name
          value
        }
      }
    }
  }
}
```

For more advanced querying options, see our [Find Your Data](/build/access/find-data) documentation.
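If you want to run these queries from code rather than a GraphQL playground, a minimal sketch is shown below. The helper name is hypothetical, and any AR.IO gateway's `/graphql` endpoint can stand in for `arweave.net`:

```javascript
// Hypothetical helper: run the device query against a gateway's GraphQL
// endpoint and return matching transaction IDs, newest first.
async function findDeviceData(deviceId) {
  const query = `query {
    transactions(
      tags: [
        { name: "App-Name", values: ["AirQuality-DePIN-v1.0"] }
        { name: "Device-ID", values: ["${deviceId}"] }
      ]
      first: 100
      sort: HEIGHT_DESC
    ) { edges { node { id } } }
  }`

  const response = await fetch('https://arweave.net/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query })
  })
  const json = await response.json()
  return json.data.transactions.edges.map((edge) => edge.node.id)
}
```

The transaction IDs it returns feed directly into the processing functions in the next section.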
### Access and Use Your Data

Once you have transaction IDs from your queries, choose how to fetch and process the data:

**Direct data fetching:**

```javascript
// Example: Process air quality data
async function processAirQualityData(transactionIds) {
  const results = []

  for (const txId of transactionIds) {
    const response = await fetch(`https://arweave.net/${txId}`)
    const data = await response.json()

    // Process the data
    const processed = {
      Device_ID: data.device_id,
      timestamp: data.timestamp,
      location: data.location,
      pm25: data.pm25,
      co2_ppm: data.co2_ppm,
      noise_dB: data.noise_dB
    }

    results.push(processed)
  }

  return results
}
```

For more information on fetching data, see our [Fetch Data](/build/access/fetch-data) documentation.

**Verified data with optimized routing:**

```javascript
import { ARIO } from "@ar.io/sdk";
import {
  createWayfinderClient,
  PreferredWithFallbackRoutingStrategy,
  FastestPingRoutingStrategy,
  HashVerificationStrategy
} from "@ar.io/wayfinder-core";

const wayfinder = createWayfinderClient({
  ario: ARIO.mainnet(),
  routingStrategy: new PreferredWithFallbackRoutingStrategy({
    preferredGateway: 'https://your-gateway.com',
    fallbackStrategy: new FastestPingRoutingStrategy({ timeoutMs: 500 }),
  }),
  verificationStrategy: new HashVerificationStrategy({
    trustedGateways: ['https://arweave.net'],
  }),
  telemetrySettings: {
    enabled: true,
    clientName: 'AirQuality-DePIN-v1.0',
  },
});

// Fetch and verify data using the ar:// protocol
async function processVerifiedAirQualityData(transactionIds) {
  const results = []

  for (const txId of transactionIds) {
    const response = await wayfinder.request(`ar://${txId}`)
    const data = await response.json()

    // Process the verified data
    const processed = {
      Device_ID: data.device_id,
      timestamp: data.timestamp,
      location: data.location,
      pm25: data.pm25,
      co2_ppm: data.co2_ppm,
      noise_dB: data.noise_dB,
      verified: true // Data is cryptographically verified
    }

    results.push(processed)
  }

  return results
}
```

Learn more about data verification with [Wayfinder](/build/access/wayfinder).

## Next Steps

In production, teams have several options to take this further and provide significantly more value to the network and its users, including:

} > Pay in different Tokens and organise device data files with folders or manifests. }> Operate a gateway optimised to index and serve your device data fast. }> Create mutable data structures for permanent device data and decentralised apps.

These approaches can make your DePIN data even more resilient and useful. See more detailed guides about this below, or join our Discord to find out more.

## Need Help?

If you're interested in exploring these advanced features for your DePIN network, join our [Discord community](https://discord.gg/cuCqBb5v) or reach out to our team.

# Deploy a dApp with ArDrive Web (/build/guides/deploy-dapp-with-ardrive-web)

Create **permanent dApps** using the ArDrive web interface. This guide shows you how to deploy your dApp or website to the permaweb using ArDrive's user-friendly interface.
## What You'll Learn - How to deploy dApps using ArDrive web - Creating manifests for proper file routing - Assigning friendly ArNS names - Updating your dApp with new versions ## Prerequisites **For simple apps and websites:** - Your dApp files ready for deployment - ArDrive account (free to create) **For advanced applications:** - dApp prepared with hash routing and relative file paths - Static files built (for frameworks like React) - Learn more about [preparing your dApp for deployment](https://docs.ardrive.io/docs/misc/deploy/) ## Step-by-Step Deployment ### Log into ArDrive Go to the [ArDrive web app](https://app.ardrive.io/#/sign-in) and log in using your preferred method. If you don't have an account, follow the instructions to create one. ### Select or Create a Drive Navigate to the drive where you want your project hosted. If you need a new drive: - Click the big red "New" button at the top left - Create a new drive - **Important:** Set the drive to **public** for others to access your dApp ### Upload Your Project With your drive selected: - Click the big red "New" button again - Select "Upload Folder" - Navigate to your project's root directory (or built directory if required) - Select the entire directory to maintain your project's file structure ### Confirm Upload Review the upload and associated cost. If everything looks correct, click "Confirm". **Cost Note:** Uploading to Arweave isn't free, but costs are usually quite small compared to the benefits of permanent hosting. ### Create the Manifest While ArDrive displays files as a traditional file structure, they don't actually exist that way on Arweave. The manifest acts as a map to all your dApp files: - Navigate into your newly created folder by double-clicking it - Click the big red "New" button again - Select "New Manifest" in the "Advanced" section - Name the manifest and save it inside the folder you just created ### Get the Data TX ID Once the manifest is created: - Click on it to expand its details - Go to the "Details" tab - Find the "Data TX ID" on the bottom right - Copy this unique identifier for your dApp ### View and Share Your dApp Your dApp is now live on the permaweb forever! - Append the Data TX ID to an Arweave gateway URL: `https://arweave.net/YOUR-TX-ID` - It may take a few minutes for files to propagate through the network - Once propagated, your dApp is accessible to anyone, anywhere, at any time ### Assign a Friendly Name (Optional) Make your dApp easier to access with an ArNS name: - If you own an ArNS name, you'll be prompted during manifest creation - If not, purchase one from [arns.app](https://arns.arweave.net) - You can also assign an ArNS name later by clicking the three dots next to any file and selecting "Assign ArNS name" ## Updating Your dApp Files uploaded to Arweave are **permanent and immutable** - they cannot be changed. However, the [Arweave File System (ArFS)](/build/advanced/arfs) protocol lets you "replace" them with new versions while keeping old ones accessible. ### How Updates Work **To update your dApp:** 1. **Make your changes** and build the static directory 2. **Upload the entire folder again** to the same location 3. **Follow the same steps** as the original upload 4. **Create a new manifest** with the same name as the old one 5. 
**The new manifest generates a new TX ID** for the updated dApp **Important Notes:** - The old version remains accessible to anyone with the correct TX ID - Old files won't display in ArDrive unless you view file history - Each version gets its own unique transaction ID ## Benefits of ArDrive Web Deployment - **User-friendly interface** - No command line required - **Automatic manifest creation** - Handles file routing for you - **Integrated ArNS support** - Easy domain name assignment - **Version management** - Built-in file history and updates - **Cost transparency** - See upload costs before confirming ## Ready to Deploy? } arrow > Deploy your dApp using the ArDrive web interface } > Learn how to create friendly domain names for your dApp } > Explore more advanced deployment options and tools # Hosting Decentralized Websites (/build/guides/hosting-decentralized-websites) Create **permanent websites** that can't be censored, taken down, or modified after deployment. Host your content on Arweave and serve it through AR.IO gateways for a truly decentralized web experience. ## What Makes It Different? **Traditional websites:** - Hosted on centralized servers - Can be taken down or censored - Require ongoing hosting costs - Single point of failure **Decentralized websites:** - Stored permanently on Arweave - Censorship-resistant - Pay once, host forever - Distributed across the network ## How It Works ### 1. Arweave Manifests [Arweave manifests](/build/upload/manifests) are JSON files that define how your website's files are organized and linked together. They enable: - **Friendly URLs** - Access files with readable paths instead of transaction IDs - **Relative linking** - Use `./style.css` instead of full transaction IDs - **Fallback pages** - Handle 404 errors gracefully - **File organization** - Structure your website like a traditional site **Example manifest:** ```json { "manifest": "arweave/paths", "version": "0.2.0", "index": { "path": "index.html" }, "fallback": { "id": "404-page-transaction-id" }, "paths": { "index.html": { "id": "main-page-transaction-id" }, "style.css": { "id": "css-transaction-id" }, "script.js": { "id": "js-transaction-id" } } } ``` ### 2. Deployment Tools **Permaweb Deploy** - [CLI deployment tool](https://github.com/permaweb/permaweb-deploy) that: - Uploads your build folder to Arweave using Turbo - Creates a manifest automatically - Updates your ArNS domain - Integrates with GitHub Actions - Requires `--deploy-folder` and `--arns-name` parameters **Arlink** - [User-friendly web interface](https://arlink.ar.io/) for: - Quick website uploads through the browser - Manifest generation - ArNS integration **ArDrive Web** - [User-friendly interface](/build/guides/deploy-dapp-with-ardrive-web) for: - File management - Website building - Deployment workflows ### 3. ArNS Integration **Primary Names** - [Decentralized domain names](/learn/arns) that: - Point to your website's manifest - Provide human-readable URLs - Can be updated to point to new versions - Are owned and controlled by you ## Quick Start **1. Build your website** - Create a static website with HTML, CSS, and JavaScript **2. Choose a deployment tool:** ```bash # Using permaweb-deploy (CLI tool) npm install permaweb-deploy --save-dev npx permaweb-deploy --deploy-folder ./build --arns-name your-domain # Using arlink (web interface) # Visit the arlink website and upload through the browser ``` **3. Get an ArNS domain** - Register a primary name to point to your website **4. 
Access your site** - Visit `https://your-domain.arweave.net` or any AR.IO gateway ## Benefits - **Permanent hosting** - Your website will exist forever - **Censorship resistance** - Cannot be taken down by authorities - **Cost efficiency** - Pay once, host forever - **Global distribution** - Served from multiple AR.IO gateways - **Version control** - Update your ArNS domain to point to new versions # Guides (/build/guides) Explore real-world applications and use cases for **Arweave** and **AR.IO** infrastructure. These examples show what's possible with permanent data storage and decentralized web services. ## What You Can Build **Arweave and AR.IO enable:** - **Decentralized websites** - Host permanent, censorship-resistant web content - **ArNS domains** - Create and manage decentralized domain names - **Data marketplaces** - Trade and sell digital assets and data - **Permanent applications** - Deploy apps that can't be taken down - **And much more** - The permanent web is only limited by your imagination ## Getting Started **Build permanent websites** that can't be censored or taken down **Key topics:** - Arweave manifests for file routing - Permaweb deployment tools - ArNS domain integration **Create and manage** decentralized domain names **Key topics:** - Primary name registration - Domain management - Integration with applications **Version and organize** your permanent website content **Key topics:** - Undername management - Website versioning - Component organization **Trade and sell** ArNS tokens and digital assets **Key topics:** - Arweave Name Token (ANT) trading - Marketplace dynamics - Asset ownership **Deploy dApps easily** using the ArDrive web interface **Key topics:** - ArDrive web deployment - Manifest creation - ArNS name assignment - Version management **Build a decentralized NFT minting app** with Arweave and Crossmint **Key topics:** - Permanent NFT storage on Arweave - Crossmint API integration - Payment processing - Decentralized deployment ## Why Use Arweave? **Permanent storage** - Data stored on Arweave is permanent and cannot be deleted **Decentralized** - No single point of failure or control **Cost-effective** - Pay once, store forever **Censorship-resistant** - Content cannot be taken down by authorities ## Ready to Build? **Start with websites** - Learn how to host decentralized websites with [Hosting Decentralized Websites](/build/guides/hosting-decentralized-websites). **Want domains?** Explore [ArNS Primary Names](/build/guides/arns-primary-names) for decentralized domain management. **Interested in trading?** Check out [ArNS Marketplace](/build/guides/arns-marketplace) for digital asset trading. # Using Turbo SDK with Vanilla HTML (/build/guides/using-turbo-in-a-browser/html) # Using Turbo SDK with Vanilla HTML **Firefox Compatibility**: Some compatibility issues have been reported with the Turbo SDK in Firefox browsers. At this time the below framework examples may not behave as expected in Firefox. ## Overview This guide demonstrates how to integrate the `@ardrive/turbo-sdk` directly into vanilla HTML pages using CDN imports. No build tools, bundlers, or polyfills are required - just modern ES modules support in browsers. **Note**: Vanilla HTML implementation is the simplest way to get started with the Turbo SDK. It's perfect for prototyping, simple applications, or when you want to avoid build complexity. 
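As a smallest-possible starting point, the entire integration can be a single module script served from any static host. This sketch uses the `esm.sh` import recommended later in this guide:

```html
<!-- Minimal page: import the SDK from a CDN and log the current fiat rates -->
<script type="module">
  import { TurboFactory } from "https://esm.sh/@ardrive/turbo-sdk";

  const turbo = TurboFactory.unauthenticated();
  const rates = await turbo.getFiatRates();
  console.log(rates.fiat); // price per GiB, keyed by currency
</script>
```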
## Prerequisites

- Modern browser with ES modules support (Chrome 61+, Firefox 60+, Safari 10.1+, Edge 16+)
- Basic understanding of HTML, CSS, and JavaScript
- HTTPS hosting for production (required for browser wallet integrations)

Create a basic HTML file with Turbo SDK integration:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>Turbo SDK Example</title>
    <style>
      body { font-family: Arial, sans-serif; max-width: 800px; margin: 0 auto; padding: 20px; }
      .section { margin: 20px 0; padding: 20px; border: 1px solid #ddd; border-radius: 8px; }
      .loading { color: #666; font-style: italic; }
      .error { color: red; }
      .success { color: green; }
      button { background: #007cba; color: white; border: none; padding: 10px 20px; border-radius: 4px; cursor: pointer; margin: 5px; }
      button:hover { background: #005a87; }
      button:disabled { background: #ccc; cursor: not-allowed; }
    </style>
  </head>
  <body>
    <h1>Turbo SDK - Vanilla HTML Demo</h1>

    <div class="section">
      <h2>Current Rates</h2>
      <div id="rates" class="loading">Loading rates...</div>
    </div>

    <div class="section">
      <h2>Upload File</h2>
      <form id="uploadForm">
        <input type="file" id="fileInput" />
        <button type="submit" id="uploadBtn">Upload File</button>
      </form>
      <div id="uploadStatus"></div>
    </div>

    <script type="module">
      import { TurboFactory } from "https://esm.sh/@ardrive/turbo-sdk";

      // Initialize Turbo client
      const turbo = TurboFactory.unauthenticated();

      // Fetch and display rates
      async function loadRates() {
        try {
          const rates = await turbo.getFiatRates();
          const ratesDiv = document.getElementById("rates");
          const ratesText = Object.entries(rates.fiat)
            .map(
              ([currency, rate]) => `${currency.toUpperCase()}: $${rate} per GiB`
            )
            .join("<br/>");
          ratesDiv.innerHTML = ratesText;
        } catch (error) {
          document.getElementById(
            "rates"
          ).innerHTML = `Error loading rates: ${error.message}`;
        }
      }

      // Handle file upload
      document
        .getElementById("uploadForm")
        .addEventListener("submit", async (e) => {
          e.preventDefault();
          const fileInput = document.getElementById("fileInput");
          const uploadBtn = document.getElementById("uploadBtn");
          const statusDiv = document.getElementById("uploadStatus");

          if (!fileInput.files.length) {
            statusDiv.innerHTML = '<span class="error">Please select a file</span>';
            return;
          }

          const file = fileInput.files[0];
          uploadBtn.disabled = true;
          statusDiv.innerHTML = '<span class="loading">Preparing upload...</span>';

          try {
            // Show upload cost first
            const costs = await turbo.getUploadCosts({ bytes: [file.size] });
            const cost = costs[0];
            statusDiv.innerHTML = `
              Upload cost: ${cost.winc} winc<br/>
              File size: ${file.size.toLocaleString()} bytes<br/>
              Note: This example cannot complete uploads without wallet
              authentication. See wallet integration examples below for full
              upload functionality.
            `;
          } catch (error) {
            statusDiv.innerHTML = `<span class="error">Error: ${error.message}</span>`;
          } finally {
            uploadBtn.disabled = false;
          }
        });

      // Load rates on page load
      loadRates();
    </script>
  </body>
</html>
```

Select the appropriate CDN import method for your needs:

**Use esm.sh for best compatibility**: The `unpkg.com` CDN has known issues with ES module exports for complex packages like Turbo SDK.

The import paths below follow each provider's standard ESM URL pattern for npm packages.

**Latest Version (Recommended for Development)**

```javascript
import { TurboFactory } from "https://esm.sh/@ardrive/turbo-sdk";
```

**Specific Version (Recommended for Production)**

```javascript
// Pin the version you have tested against (1.20.0 shown as an example)
import { TurboFactory } from "https://esm.sh/@ardrive/turbo-sdk@1.20.0";
```

**Alternative CDN Providers**

```javascript
// jsDelivr
import { TurboFactory } from "https://cdn.jsdelivr.net/npm/@ardrive/turbo-sdk/+esm";

// SkyPack
import { TurboFactory } from "https://cdn.skypack.dev/@ardrive/turbo-sdk";

// unpkg.com (not recommended - has ES module issues)
import { TurboFactory } from "https://unpkg.com/@ardrive/turbo-sdk?module";
```

Connect your browser wallet to enable file uploads:

**Never expose private keys in browser applications!** Always use browser wallet integrations.

**Uploading with Wander**

**Deprecation Notice**: The signature API used by ArConnect wallets is deprecated and will be removed. Visit [Wander wallet documentation](https://docs.wander.app/api/signature) for alternatives.
Complete HTML page with Wander wallet integration: ```html Turbo SDK - Wander Wallet body { font-family: Arial, sans-serif; max-width: 600px; margin: 0 auto; padding: 20px; background: #f5f5f5; } .container { background: white; padding: 30px; border-radius: 10px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1); } .wallet-section { border: 2px solid #e0e0e0; border-radius: 8px; padding: 20px; margin: 20px 0; } .connected { border-color: #4caf50; background-color: #f9fff9; } button { background: #000; color: white; border: none; padding: 12px 24px; border-radius: 6px; cursor: pointer; font-size: 16px; margin: 5px; } button:hover { background: #333; } button:disabled { background: #ccc; cursor: not-allowed; } .status { margin: 10px 0; padding: 10px; border-radius: 4px; } .success { background: #d4edda; color: #155724; border: 1px solid #c3e6cb; } .error { background: #f8d7da; color: #721c24; border: 1px solid #f5c6cb; } .info { background: #d1ecf1; color: #0c5460; border: 1px solid #bee5eb; } ⚡ Turbo SDK + Wander Wallet Wander Wallet Connection Connect your Wander wallet to upload files to Arweave using your AR balance. Connect Wander Wallet 📁 File Upload Upload to Arweave import { TurboFactory, ArconnectSigner, } from "https://esm.sh/@ardrive/turbo-sdk"; let connectedAddress = null; let turboClient = null; // Connect to Wander wallet async function connectWanderWallet() { const statusDiv = document.getElementById("walletStatus"); const connectBtn = document.getElementById("connectBtn"); try { if (!window.arweaveWallet) { statusDiv.innerHTML = ` Wander wallet is not installed! Install Wander Wallet `; return; } connectBtn.disabled = true; statusDiv.innerHTML = 'Connecting to Wander wallet...'; // Required permissions for Turbo SDK const permissions = [ "ACCESS_ADDRESS", "ACCESS_PUBLIC_KEY", "SIGN_TRANSACTION", "SIGNATURE", ]; // Connect to wallet await window.arweaveWallet.connect(permissions); // Get wallet address connectedAddress = await window.arweaveWallet.getActiveAddress(); // Create authenticated Turbo client const signer = new ArconnectSigner(window.arweaveWallet); turboClient = TurboFactory.authenticated({ signer }); // Update UI document.getElementById("walletSection").classList.add("connected"); statusDiv.innerHTML = ` ✅ Connected to Wander Wallet Address: ${connectedAddress.slice( 0, 8 )}...${connectedAddress.slice(-8)} `; connectBtn.style.display = "none"; document.getElementById("uploadSection").style.display = "block"; } catch (error) { console.error("Wander wallet connection failed:", error); statusDiv.innerHTML = `Connection failed: ${error.message}`; } finally { connectBtn.disabled = false; } } // Upload file function async function uploadFile() { const fileInput = document.getElementById("fileInput"); const statusDiv = document.getElementById("uploadStatus"); if (!fileInput.files.length) { statusDiv.innerHTML = 'Please select a file first'; return; } if (!turboClient) { statusDiv.innerHTML = 'Please connect Wander wallet first'; return; } const file = fileInput.files[0]; let uploadStartTime = Date.now(); statusDiv.innerHTML = 'Preparing upload...'; try { // Get upload cost first const costs = await turboClient.getUploadCosts({ bytes: [file.size], }); const cost = costs[0]; statusDiv.innerHTML = ` Upload cost: ${cost.winc} winc Starting upload... 
`;

          // Upload with comprehensive progress tracking
          const result = await turboClient.uploadFile({
            fileStreamFactory: () => file.stream(),
            fileSizeFactory: () => file.size,
            dataItemOpts: {
              tags: [
                {
                  name: "Content-Type",
                  value: file.type || "application/octet-stream",
                },
                { name: "App-Name", value: "Turbo-HTML-Wander-Demo" },
                { name: "File-Name", value: file.name },
                { name: "Upload-Timestamp", value: new Date().toISOString() },
              ],
            },
            events: {
              onProgress: ({ totalBytes, processedBytes, step }) => {
                const percent = Math.round((processedBytes / totalBytes) * 100);
                const elapsed = Math.round(
                  (Date.now() - uploadStartTime) / 1000
                );
                statusDiv.innerHTML = `
                  ${step}: ${percent}%<br/>
                  Progress: ${processedBytes.toLocaleString()} / ${totalBytes.toLocaleString()} bytes<br/>
                  Elapsed: ${elapsed}s
                `;
              },
              onError: ({ error, step }) => {
                console.error(`Error during ${step}:`, error);
                statusDiv.innerHTML = `Error during ${step}: ${error.message}`;
              },
            },
          });

          const totalTime = Math.round((Date.now() - uploadStartTime) / 1000);

          // Use original file size for display (result object doesn't contain size info)
          const displayBytes = file.size;

          statusDiv.innerHTML = `
            <span class="success">🎉 Upload Successful!</span><br/>
            Transaction ID: ${result.id}<br/>
            File Size: ${displayBytes.toLocaleString()} bytes<br/>
            Upload Time: ${totalTime}s<br/>
            Timestamp: ${new Date(result.timestamp).toLocaleString()}<br/>
            View File: <a href="https://arweave.net/${result.id}" target="_blank">arweave.net/${result.id}</a><br/>
            Explorer: <a href="https://viewblock.io/arweave/tx/${result.id}" target="_blank">ViewBlock</a>
          `;
        } catch (error) {
          console.error("Upload failed:", error);
          statusDiv.innerHTML = `Upload failed: ${error.message}`;
        }
      }

      // Make functions available globally for onclick handlers
      window.connectWanderWallet = connectWanderWallet;
      window.uploadFile = uploadFile;
```

Browser wallet integrations require HTTPS in production.

Configure CSP headers to allow CDN imports:

```html
<!-- Minimal example: allow module imports from esm.sh (adjust for your CDN and endpoints) -->
<meta
  http-equiv="Content-Security-Policy"
  content="script-src 'self' https://esm.sh;"
/>
```

Implement comprehensive error handling:

```javascript
// Network error handling with retries and exponential backoff
async function robustApiCall(apiFunction, retries = 3) {
  for (let i = 0; i < retries; i++) {
    try {
      return await apiFunction();
    } catch (error) {
      if (i === retries - 1) throw error;
      await new Promise((resolve) =>
        setTimeout(resolve, 1000 * Math.pow(2, i))
      );
    }
  }
}

// Usage example
const rates = await robustApiCall(() => turbo.getFiatRates());
```

Optimize for production environments:

```html
<script type="module">
  // Production code here
</script>
```

## Best Practices

### 1. User Experience

- **Loading States**: Always show loading indicators during API calls
- **Error Recovery**: Provide clear error messages with recovery options
- **Progress Tracking**: Show upload progress for large files
- **Wallet Detection**: Guide users to install wallets if missing

### 2. Security

- **Never expose private keys** in browser applications
- **Validate user inputs** before API calls
- **Use HTTPS** for all production deployments
- **Implement CSP headers** to prevent XSS attacks

### 3. Performance

- **Cache API responses** where appropriate (rates, costs)
- **Use specific CDN versions** in production
- **Implement retry logic** for network failures
- **Optimize file handling** for large uploads

### 4. Development
- **Use development endpoints** during testing
- **Test wallet integrations** across different browsers
- **Validate upload functionality** with small files first
- **Monitor API rate limits** and implement backoff

## Troubleshooting Common Issues

### CDN Import Errors

If you encounter errors like:

- `The requested module does not provide an export named ...`
- Module resolution failures

**Solution**: Use `esm.sh` instead of `unpkg.com`:

```javascript
// ❌ Problematic
import { TurboFactory } from "https://unpkg.com/@ardrive/turbo-sdk";

// ✅ Working
import { TurboFactory } from "https://esm.sh/@ardrive/turbo-sdk";
```

### Function Scope Issues

If onclick handlers throw `ReferenceError: function is not defined`:

**Solution**: Use explicit global assignment:

```javascript
// ❌ Problematic - module scope, invisible to inline onclick handlers
async function myFunction() { ... }

// ✅ Working - explicitly assign the function to the global scope
async function myFunction() { ... }
window.myFunction = myFunction;
```

### Upload Result Properties

If upload results have undefined properties:

**Solution**: Use original file size for display:

```javascript
// ❌ Problematic - these properties don't exist in result object
const totalBytes = result.totalBytes || result.dataSizeBytes;

// ✅ Correct - use original file size
const displayBytes = originalFile.size;

// Available result properties: id, timestamp, winc, version,
// deadlineHeight, dataCaches, fastFinalityIndexes, public, signature, owner
```

### Upload Cost Properties

If cost calculations fail:

**Solution**: Use correct cost object structure:

```javascript
// ❌ Problematic - adjustedBytes doesn't exist in cost objects
const cost = costs[0];
console.log(`Adjusted: ${cost.adjustedBytes.toLocaleString()}`);

// ✅ Correct - use available properties
const cost = costs[0];
console.log(`Cost: ${cost.winc} winc`);
console.log(`File size: ${originalFile.size.toLocaleString()} bytes`);

// Available cost properties: winc (string), adjustments (array)
```

## Testing Your Implementation

### 1. Basic Functionality Test

```javascript
// Test CDN import
console.log("Testing Turbo SDK import...");
const turbo = TurboFactory.unauthenticated();
console.log("✅ SDK imported successfully");

// Test rate fetching
const rates = await turbo.getFiatRates();
console.log("✅ Rates fetched:", rates);
```

### 2. Wallet Integration Test

- Connect to MetaMask/Wander wallet
- Verify address display
- Test small file upload (under 1 MB)

### 3. Progress Event Structure

```javascript
onProgress: (progress) => {
  // Actual structure:
  // {
  //   processedBytes: 4326, // number - bytes processed so far
  //   totalBytes: 8652, // number - total bytes to process
  //   step: "signing" // string - current step: "signing" or "upload"
  // }
}
```

## Additional Resources

- [Turbo SDK Documentation](https://docs.ardrive.io)
- [Browser Wallet Security Guide](https://docs.wander.app)
- [Arweave Developer Documentation](https://docs.arweave.org)
- [CDN Import Best Practices](https://esm.sh)

---

For more advanced implementations, see the [Next.js](./nextjs.mdx) and [Vite](./vite.mdx) framework guides, or explore the [Turbo SDK examples](https://github.com/ardriveio/turbo-sdk) repository.

# Using Turbo in a Browser (/build/guides/using-turbo-in-a-browser)

# Using Turbo in a Browser

Integrate the **Turbo SDK** directly into your web applications for fast, reliable data uploads to Arweave. Choose the approach that best fits your development workflow and framework preferences.
## What You Can Build **With Turbo SDK in browsers, you can:** - **Upload files directly** from web applications to Arweave - **Pay with different tokens** (AR, Ethereum, and more) - **Integrate with popular wallets** (MetaMask, Wander, ArConnect) - **Build permanent web apps** that store data on Arweave - **Create data marketplaces** and decentralized applications ## Getting Started **Start with the simplest approach** - no build tools required **Key topics:** - CDN imports for instant setup - Wallet integration examples - Production deployment considerations - Error handling and troubleshooting **Full-stack React applications** with server-side rendering **Key topics:** - Webpack polyfill configuration - Client-side component setup - TypeScript integration - Production optimization **Fast development** with modern build tools **Key topics:** - Vite plugin configuration - React and TypeScript setup - Hot module replacement - Bundle optimization ## Why Use Turbo SDK? **Fast uploads** - Upload data to Arweave in seconds, not minutes **Multiple payment options** - Pay with AR, Ethereum, or other supported tokens **Wallet integration** - Seamlessly connect with popular browser wallets **Reliable infrastructure** - Built on Arweave's permanent storage network **Developer-friendly** - Simple APIs with comprehensive documentation # Using Turbo SDK with Next.js (/build/guides/using-turbo-in-a-browser/nextjs) # Using Turbo SDK with Next.js **Firefox Compatibility**: Some compatibility issues have been reported with the Turbo SDK in Firefox browsers. At this time the below framework examples may not behave as expected in Firefox. ## Overview This guide demonstrates how to configure the `@ardrive/turbo-sdk` in a Next.js application with proper polyfills for client-side usage. Next.js uses webpack under the hood, which requires specific configuration to handle Node.js modules that the Turbo SDK depends on. **Polyfills**: Polyfills are required when using the Turbo SDK in Next.js applications. The SDK relies on Node.js modules like `crypto`, `buffer`, `process`, and `stream` that are not available in the browser by default. ## Prerequisites - Next.js 13+ (with App Router or Pages Router) - Node.js 18+ - Basic familiarity with Next.js configuration Install the main Turbo SDK package: ```bash npm install @ardrive/turbo-sdk ``` Add required polyfill packages for browser compatibility: ```bash npm install --save-dev crypto-browserify stream-browserify process buffer ``` **Wallet Integration Dependencies**: The Turbo SDK includes `@dha-team/arbundles` as a peer dependency, which provides the necessary signers for browser wallet integration (like `InjectedEthereumSigner` and `ArconnectSigner`). You can import these directly without additional installation. 
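Since these signers arrive through a peer dependency, a quick way to confirm your install resolves them is a throwaway import check. A minimal sketch (the file name is illustrative):

```tsx
// lib/turbo-check.ts (illustrative file name)
// If this module type-checks, the peer-dependency chain for the browser
// signers is resolving correctly in your project.
import { TurboFactory } from "@ardrive/turbo-sdk/web";
import { ArconnectSigner, InjectedEthereumSigner } from "@dha-team/arbundles";

export { TurboFactory, ArconnectSigner, InjectedEthereumSigner };
```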
Create or update your `next.config.js` file to include the necessary polyfills:

```javascript
/** @type {import('next').NextConfig} */
const nextConfig = {
  webpack: (config, { isServer, webpack }) => {
    // Only configure polyfills for client-side bundles
    if (!isServer) {
      config.resolve.fallback = {
        ...config.resolve.fallback,
        crypto: require.resolve("crypto-browserify"),
        stream: require.resolve("stream-browserify"),
        buffer: require.resolve("buffer"),
        process: require.resolve("process/browser"),
        fs: false,
        net: false,
        tls: false,
      };

      // Provide global process and Buffer
      config.plugins.push(
        new webpack.ProvidePlugin({
          process: "process/browser",
          Buffer: ["buffer", "Buffer"],
        })
      );
    }
    return config;
  },
};

module.exports = nextConfig;
```

If you're using TypeScript, update your `tsconfig.json` to include proper module resolution:

```json
{
  "compilerOptions": {
    "moduleResolution": "bundler",
    "lib": ["es2015", "dom", "dom.iterable"]
    // ... other options
  }
}
```

**TypeScript Wallet Types**

Create a `types/wallet.d.ts` file to properly type wallet objects:

```typescript
// types/wallet.d.ts
interface Window {
  ethereum?: {
    request: (args: { method: string; params?: any[] }) => Promise<any>;
    on?: (event: string, handler: (...args: any[]) => void) => void;
    removeListener?: (event: string, handler: (...args: any[]) => void) => void;
    isMetaMask?: boolean;
  };
  arweaveWallet?: {
    connect: (permissions: string[]) => Promise<void>;
    disconnect: () => Promise<void>;
    getActiveAddress: () => Promise<string>;
    getPermissions: () => Promise<string[]>;
    sign: (transaction: any) => Promise<any>;
    getPublicKey: () => Promise<string>;
  };
}
```

Select between MetaMask or Wander wallet integration:

**Never expose private keys in browser applications!** Always use browser wallet integrations for security.

Create a React component for MetaMask wallet integration:

For MetaMask integration, you'll need to use `InjectedEthereumSigner` from `@dha-team/arbundles`, which is available as a peer dependency through the Turbo SDK.

```tsx
"use client";

import { useState, useCallback } from "react";
import { TurboFactory } from "@ardrive/turbo-sdk/web";
import { InjectedEthereumSigner } from "@dha-team/arbundles";

// Component name is illustrative
export default function MetaMaskUploader() {
  const [connected, setConnected] = useState(false);
  const [address, setAddress] = useState("");
  const [uploading, setUploading] = useState(false);
  const [uploadResult, setUploadResult] = useState<any>(null);

  const connectMetaMask = useCallback(async () => {
    try {
      if (!window.ethereum) {
        alert("MetaMask is not installed!");
        return;
      }

      // Request account access
      await window.ethereum.request({
        method: "eth_requestAccounts",
      });

      // Get the current account
      const accounts = await window.ethereum.request({
        method: "eth_accounts",
      });

      if (accounts.length > 0) {
        setAddress(accounts[0]);
        setConnected(true);

        // Log current chain for debugging
        const chainId = await window.ethereum.request({
          method: "eth_chainId",
        });
        console.log("Connected to chain:", chainId);
      }
    } catch (error) {
      console.error("Failed to connect to MetaMask:", error);
    }
  }, []);

  const uploadWithMetaMask = async (event) => {
    const file = event.target.files?.[0];
    if (!file || !connected) return;

    setUploading(true);

    try {
      // Create a provider wrapper for InjectedEthereumSigner
      const providerWrapper = {
        getSigner: () => ({
          signMessage: async (message: string | Uint8Array) => {
            const accounts = await window.ethereum!.request({
              method: "eth_accounts",
            });

            if (accounts.length === 0) {
              throw new Error("No accounts available");
            }

            // Convert message to hex if it's Uint8Array
            const messageToSign =
              typeof message === "string" ?
message : "0x" + Array.from(message) .map((b) => b.toString(16).padStart(2, "0")) .join(""); return await window.ethereum!.request({ method: "personal_sign", params: [messageToSign, accounts[0]], }); }, }), }; // Create the signer using InjectedEthereumSigner const signer = new InjectedEthereumSigner(providerWrapper); const turbo = TurboFactory.authenticated({ signer, token: "ethereum", // Important: specify token type for Ethereum }); // Upload file with progress tracking const result = await turbo.uploadFile({ fileStreamFactory: () => file.stream(), fileSizeFactory: () => file.size, dataItemOpts: { tags: [ { name: "Content-Type", value: file.type }, { name: "App-Name", value: "My-Next-App" }, { name: "Funded-By", value: "Ethereum" }, ], }, events: { onProgress: ({ totalBytes, processedBytes, step }) => { console.log( `${step}: ${Math.round((processedBytes / totalBytes) * 100)}%` ); }, onError: ({ error, step }) => { console.error(`Error during ${step}:`, error); console.error("Error details:", JSON.stringify(error, null, 2)); }, }, }); setUploadResult(result); } catch (error) { console.error("Upload failed:", error); console.error("Error details:", JSON.stringify(error, null, 2)); alert(`Upload failed: ${error.message}`); } finally { setUploading(false); } }; return ( MetaMask Upload {!connected ? ( Connect MetaMask ) : ( ✅ Connected: {address.slice(0, 6)}...{address.slice(-4)} Select File to Upload: {uploading && ( 🔄 Uploading... Please confirm transaction in MetaMask )} {uploadResult && ( ✅ Upload Successful! Transaction ID: {uploadResult.id} Data Size: {uploadResult.totalBytes} bytes )} )} ); } ``` Create a React component for Wander wallet integration: ```tsx "use client"; const [connected, setConnected] = useState(false); const [address, setAddress] = useState(""); const [uploading, setUploading] = useState(false); const [uploadResult, setUploadResult] = useState(null); const connectWanderWallet = useCallback(async () => { try { if (!window.arweaveWallet) { alert("Wander wallet is not installed!"); return; } // Required permissions for Turbo SDK const permissions = [ "ACCESS_ADDRESS", "ACCESS_PUBLIC_KEY", "SIGN_TRANSACTION", "SIGNATURE", ]; // Connect to wallet await window.arweaveWallet.connect(permissions); // Get wallet address const walletAddress = await window.arweaveWallet.getActiveAddress(); setAddress(walletAddress); setConnected(true); } catch (error) { console.error("Failed to connect to Wander wallet:", error); } }, []); const uploadWithWanderWallet = async (event) => { const file = event.target.files?.[0]; if (!file || !connected) return; setUploading(true); try { // Create ArConnect signer using Wander wallet const signer = new ArconnectSigner(window.arweaveWallet); const turbo = TurboFactory.authenticated({ signer }); // Note: No need to specify token for Arweave as it's the default // Upload file with progress tracking const result = await turbo.uploadFile({ fileStreamFactory: () => file.stream(), fileSizeFactory: () => file.size, dataItemOpts: { tags: [ { name: "Content-Type", value: file.type }, { name: "App-Name", value: "My-Next-App" }, { name: "Funded-By", value: "Arweave" }, ], }, events: { onProgress: ({ totalBytes, processedBytes, step }) => { console.log( `${step}: ${Math.round((processedBytes / totalBytes) * 100)}%` ); }, onError: ({ error, step }) => { console.error(`Error during ${step}:`, error); }, }, }); setUploadResult(result); } catch (error) { console.error("Upload failed:", error); alert(`Upload failed: ${error.message}`); } finally { 
      setUploading(false);
    }
  };

  return (
    <div>
      <h2>Wander Wallet Upload</h2>
      {!connected ? (
        <button onClick={connectWanderWallet}>Connect Wander Wallet</button>
      ) : (
        <div>
          <p>
            ✅ Connected: {address.slice(0, 6)}...{address.slice(-4)}
          </p>
          <label>
            Select File to Upload:
            <input type="file" onChange={uploadWithWanderWallet} />
          </label>
          {uploading && (
            <p>🔄 Uploading... Please confirm transaction in Wander wallet</p>
          )}
          {uploadResult && (
            <div>
              <p>✅ Upload Successful!</p>
              <p>Transaction ID: {uploadResult.id}</p>
              <p>Data Size: {uploadResult.totalBytes} bytes</p>
            </div>
          )}
        </div>
      )}
    </div>
  );
}
```

## Common Issues and Solutions

### Build Errors

If you encounter build errors related to missing modules:

1. **"Module not found: Can't resolve 'fs'"**
   - Ensure `fs: false` is set in your webpack fallback configuration
2. **"process is not defined"**
   - Make sure you have the `ProvidePlugin` configuration for process
3. **"Buffer is not defined"**
   - Verify the Buffer polyfill is properly configured in `ProvidePlugin`

### Runtime Errors

1. **"crypto.getRandomValues is not a function"**
   - This usually indicates the crypto polyfill isn't working. Double-check your webpack configuration.
2. **"TypeError: e.startsWith is not a function"**
   - This indicates incorrect signer usage. For MetaMask integration, use `InjectedEthereumSigner` from `@dha-team/arbundles`, not `EthereumSigner`.
   - `EthereumSigner` expects a private key string, while `InjectedEthereumSigner` expects a provider wrapper.
3. **"No accounts available" during wallet operations**
   - Ensure the wallet is properly connected before attempting operations
   - Add validation to check account availability after connection
4. **Message signing failures with wallets**
   - For `InjectedEthereumSigner`, ensure your provider wrapper correctly implements the `getSigner()` method
   - Handle both string and Uint8Array message types in your `signMessage` implementation
   - Use MetaMask's `personal_sign` method with proper parameter formatting
5. **Server-side rendering issues**
   - Always use `'use client'` directive for components that use the Turbo SDK
   - Consider dynamic imports with `ssr: false` for complex cases:

```tsx
import dynamic from "next/dynamic";

const TurboUploader = dynamic(() => import("./TurboUploader"), {
  ssr: false,
});
```

### Wallet Integration Issues

1. **Incorrect Signer Import**

```tsx
// ❌ INCORRECT - For Node environments
import { EthereumSigner } from "@dha-team/arbundles";

// ✅ CORRECT - For browser wallets
import { InjectedEthereumSigner } from "@dha-team/arbundles";
```

2. **Provider Interface Mismatch**

```tsx
// ❌ INCORRECT - window.ethereum doesn't have getSigner()
const signer = new InjectedEthereumSigner(window.ethereum);

// ✅ CORRECT - Use a provider wrapper
const providerWrapper = {
  getSigner: () => ({
    signMessage: async (message: string | Uint8Array) => {
      // Implementation here
    },
  }),
};
const signer = new InjectedEthereumSigner(providerWrapper);
```

3. **Missing Dependencies**

If you encounter import errors for `@dha-team/arbundles`, note that it's available as a peer dependency through `@ardrive/turbo-sdk`. You may need to ensure it's properly resolved in your build process.

## Best Practices

1. **Use Client Components**: Always mark components using the Turbo SDK with `'use client'`
2. **Error Handling**: Implement proper error handling for network requests and wallet interactions
3. **Environment Variables**: Store sensitive configuration in environment variables:

```javascript
// next.config.js
const nextConfig = {
  env: {
    TURBO_UPLOAD_URL: process.env.TURBO_UPLOAD_URL,
    TURBO_PAYMENT_URL: process.env.TURBO_PAYMENT_URL,
  },
  // ... webpack config
};
```

4. **Bundle Size**: Consider code splitting for large applications to reduce bundle size
5. **Wallet Security**:

- **Never expose private keys** in client-side code
- Always use browser wallet integrations (MetaMask, Wander, etc.)
- Request only necessary permissions from wallets - Validate wallet connections before use - Handle wallet disconnection gracefully ## Production Deployment Checklist For production deployments: 1. **Verify polyfills work correctly** in your build environment 2. **Test wallet connections** with various providers (Wander, MetaMask, etc.) 3. **Monitor bundle sizes** to ensure polyfills don't significantly increase your app size 4. **Use environment-specific configurations** for different Turbo endpoints 5. **Implement proper error boundaries** for wallet connection failures 6. **Add loading states** for wallet operations to improve UX 7. **Test across different browsers** to ensure wallet compatibility ## Implementation Verification To verify your MetaMask integration is working correctly: 1. **Check Console Logs**: After connecting to MetaMask, you should see: ``` Connected to chain: 0x1 (or appropriate chain ID) ``` 2. **Test Balance Retrieval**: Add this to verify your authenticated client works: ```tsx // After creating authenticated turbo client const balance = await turbo.getBalance(); console.log("Current balance:", balance); ``` 3. **Verify Signer Setup**: Your implementation should: - Use `InjectedEthereumSigner` from `@dha-team/arbundles` - Include a proper provider wrapper with `getSigner()` method - Handle both string and Uint8Array message types - Use MetaMask's `personal_sign` method 4. **Common Success Indicators**: - No `TypeError: e.startsWith is not a function` errors - Successful wallet connection and address display - Ability to fetch balance without errors - Upload operations work with proper MetaMask transaction prompts ## Additional Resources - [Turbo SDK Documentation](https://docs.ardrive.io) - [Web Usage Examples](https://docs.ardrive.io) - [Next.js Webpack Configuration](https://nextjs.org/docs/pages/api-reference/next-config-js/webpack) - [ArDrive Examples Repository](https://github.com/ardriveio/turbo-sdk) --- For more examples and advanced usage patterns, refer to the [Turbo SDK examples directory](https://github.com/ardriveio/turbo-sdk) or the main [SDK documentation](https://docs.ardrive.io). # Using Turbo SDK with Vite (/build/guides/using-turbo-in-a-browser/vite) # Using Turbo SDK with Vite **Firefox Compatibility**: Some compatibility issues have been reported with the Turbo SDK in Firefox browsers. At this time the below framework examples may not behave as expected in Firefox. ## Overview This guide demonstrates how to configure the `@ardrive/turbo-sdk` in a Vite application with proper polyfills for client-side usage. Vite provides excellent support for modern JavaScript features and can be easily configured to work with the Turbo SDK through plugins. **Polyfills**: Vite simplifies polyfill management compared to other bundlers. The `vite-plugin-node-polyfills` plugin handles most of the complexity automatically. ## Prerequisites - Vite 5+ - Node.js 18+ - React 18+ (or your preferred framework) - Basic familiarity with Vite configuration Install the main Turbo SDK package: ```bash npm install @ardrive/turbo-sdk ``` Add the Vite node polyfills plugin for browser compatibility: ```bash npm install --save-dev vite-plugin-node-polyfills ``` **Wallet Integration Dependencies**: The Turbo SDK includes `@dha-team/arbundles` as a peer dependency, which provides the necessary signers for browser wallet integration (like `InjectedEthereumSigner` and `ArconnectSigner`). You can import these directly without additional installation. 
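During development it can help to turn on the SDK's debug logging before creating any clients. This uses the `TurboFactory.setLogLevel` call referenced in the Best Practices section below, guarded by Vite's standard `import.meta.env.DEV` flag; the file placement is illustrative:

```typescript
// e.g. near the top of src/main.tsx (illustrative location)
import { TurboFactory } from "@ardrive/turbo-sdk/web";

// Enable verbose SDK logs in dev builds only
if (import.meta.env.DEV) {
  TurboFactory.setLogLevel("debug");
}

const turbo = TurboFactory.unauthenticated();
```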
Add React and TypeScript dependencies (if using React):

```bash
npm install react react-dom
npm install --save-dev @vitejs/plugin-react @types/react @types/react-dom
```

Create or update your `vite.config.js` file:

```javascript
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import { nodePolyfills } from "vite-plugin-node-polyfills";

export default defineConfig({
  base: "/",
  plugins: [
    react(),
    nodePolyfills({
      // Enable specific polyfills for Turbo SDK requirements
      include: ["crypto", "stream", "buffer", "process"],
      globals: {
        Buffer: true,
        global: true,
        process: true,
      },
    }),
  ],
  define: {
    // Define globals for browser compatibility
    global: "globalThis",
  },
});
```

If you're using TypeScript, update your `tsconfig.json`:

```json
{
  "compilerOptions": {
    "target": "ESNext",
    "lib": ["DOM", "DOM.Iterable", "ESNext"],
    "allowJs": true,
    "skipLibCheck": true,
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true,
    "strict": true,
    "forceConsistentCasingInFileNames": true,
    "module": "ESNext",
    "moduleResolution": "Bundler",
    "isolatedModules": true,
    "jsx": "react-jsx",
    "paths": {
      "buffer/": ["./node_modules/vite-plugin-node-polyfills/shims/buffer"]
    }
  },
  "include": ["src"]
}
```

**TypeScript Wallet Types**

Create a `types/wallet.d.ts` file to properly type wallet objects:

```typescript
// types/wallet.d.ts
interface Window {
  ethereum?: {
    request: (args: { method: string; params?: any[] }) => Promise<any>;
    on?: (event: string, handler: (...args: any[]) => void) => void;
    removeListener?: (event: string, handler: (...args: any[]) => void) => void;
    isMetaMask?: boolean;
  };
  arweaveWallet?: {
    connect: (permissions: string[]) => Promise<void>;
    disconnect: () => Promise<void>;
    getActiveAddress: () => Promise<string>;
    getPermissions: () => Promise<string[]>;
    sign: (transaction: any) => Promise<any>;
    getPublicKey: () => Promise<string>;
  };
}
```

Select between MetaMask or Wander wallet integration:

**Never expose private keys in browser applications!** Always use browser wallet integrations for security.

Create a React component for MetaMask wallet integration:

```tsx
import { useState, useCallback } from "react";
import { TurboFactory } from "@ardrive/turbo-sdk/web";
import { InjectedEthereumSigner } from "@dha-team/arbundles";

// Component name is illustrative
export default function MetaMaskUploader() {
  const [connected, setConnected] = useState(false);
  const [address, setAddress] = useState("");
  const [uploading, setUploading] = useState(false);
  const [uploadResult, setUploadResult] = useState<any>(null);

  const connectMetaMask = useCallback(async () => {
    try {
      if (!window.ethereum) {
        alert("MetaMask is not installed!");
        return;
      }

      // Request account access
      await window.ethereum.request({
        method: "eth_requestAccounts",
      });

      // Get the current account
      const accounts = await window.ethereum.request({
        method: "eth_accounts",
      });

      if (accounts.length > 0) {
        setAddress(accounts[0]);
        setConnected(true);

        // Log current chain for debugging
        const chainId = await window.ethereum.request({
          method: "eth_chainId",
        });
        console.log("Connected to chain:", chainId);
      }
    } catch (error) {
      console.error("Failed to connect to MetaMask:", error);
    }
  }, []);

  const uploadWithMetaMask = async (event) => {
    const file = event.target.files?.[0];
    if (!file || !connected) return;

    setUploading(true);

    try {
      // Create a provider wrapper for InjectedEthereumSigner
      const providerWrapper = {
        getSigner: () => ({
          signMessage: async (message: string | Uint8Array) => {
            const accounts = await window.ethereum!.request({
              method: "eth_accounts",
            });

            if (accounts.length === 0) {
              throw new Error("No accounts available");
            }

            // Convert message to hex if it's Uint8Array
            const messageToSign =
              typeof message === "string" ?
message : "0x" + Array.from(message) .map((b) => b.toString(16).padStart(2, "0")) .join(""); return await window.ethereum!.request({ method: "personal_sign", params: [messageToSign, accounts[0]], }); }, }), }; // Create the signer using InjectedEthereumSigner const signer = new InjectedEthereumSigner(providerWrapper); const turbo = TurboFactory.authenticated({ signer, token: "ethereum", // Important: specify token type for Ethereum }); // Upload file with progress tracking const result = await turbo.uploadFile({ fileStreamFactory: () => file.stream(), fileSizeFactory: () => file.size, dataItemOpts: { tags: [ { name: "Content-Type", value: file.type }, { name: "App-Name", value: "My-Vite-App" }, { name: "Funded-By", value: "Ethereum" }, ], }, events: { onProgress: ({ totalBytes, processedBytes, step }) => { console.log( `${step}: ${Math.round((processedBytes / totalBytes) * 100)}%` ); }, onError: ({ error, step }) => { console.error(`Error during ${step}:`, error); console.error("Error details:", JSON.stringify(error, null, 2)); }, }, }); setUploadResult(result); } catch (error) { console.error("Upload failed:", error); console.error("Error details:", JSON.stringify(error, null, 2)); alert(`Upload failed: ${error.message}`); } finally { setUploading(false); } }; return ( MetaMask Upload {!connected ? ( Connect MetaMask ) : ( ✅ Connected: {address.slice(0, 6)}...{address.slice(-4)} Select File to Upload: {uploading && ( 🔄 Uploading... Please confirm transaction in MetaMask )} {uploadResult && ( ✅ Upload Successful! Transaction ID: {uploadResult.id} Data Size: {uploadResult.totalBytes} bytes )} )} ); } ``` Create a React component for Wander wallet integration: ```tsx const [connected, setConnected] = useState(false); const [address, setAddress] = useState(""); const [uploading, setUploading] = useState(false); const [uploadResult, setUploadResult] = useState(null); const connectWanderWallet = useCallback(async () => { try { if (!window.arweaveWallet) { alert("Wander wallet is not installed!"); return; } // Required permissions for Turbo SDK const permissions = [ "ACCESS_ADDRESS", "ACCESS_PUBLIC_KEY", "SIGN_TRANSACTION", "SIGNATURE", ]; // Connect to wallet await window.arweaveWallet.connect(permissions); // Get wallet address const walletAddress = await window.arweaveWallet.getActiveAddress(); setAddress(walletAddress); setConnected(true); } catch (error) { console.error("Failed to connect to Wander wallet:", error); } }, []); const uploadWithWanderWallet = async (event) => { const file = event.target.files?.[0]; if (!file || !connected) return; setUploading(true); try { // Create ArConnect signer using Wander wallet const signer = new ArconnectSigner(window.arweaveWallet); const turbo = TurboFactory.authenticated({ signer }); // Note: No need to specify token for Arweave as it's the default // Upload file with progress tracking const result = await turbo.uploadFile({ fileStreamFactory: () => file.stream(), fileSizeFactory: () => file.size, dataItemOpts: { tags: [ { name: "Content-Type", value: file.type }, { name: "App-Name", value: "My-Vite-App" }, { name: "Funded-By", value: "Arweave" }, ], }, events: { onProgress: ({ totalBytes, processedBytes, step }) => { console.log( `${step}: ${Math.round((processedBytes / totalBytes) * 100)}%` ); }, onError: ({ error, step }) => { console.error(`Error during ${step}:`, error); }, }, }); setUploadResult(result); } catch (error) { console.error("Upload failed:", error); alert(`Upload failed: ${error.message}`); } finally { setUploading(false); } }; 
return ( Wander Wallet Upload {!connected ? ( Connect Wander Wallet ) : ( ✅ Connected: {address.slice(0, 6)}...{address.slice(-4)} Select File to Upload: {uploading && ( 🔄 Uploading... Please confirm transaction in Wander wallet )} {uploadResult && ( ✅ Upload Successful! Transaction ID: {uploadResult.id} Data Size: {uploadResult.totalBytes} bytes )} )} ); } ``` Here's a complete `package.json` example for a Vite + React + Turbo SDK project: ```json { "name": "vite-turbo-app", "version": "0.1.0", "private": true, "type": "module", "scripts": { "dev": "vite", "build": "vite build", "preview": "vite preview", "lint": "eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 0" }, "dependencies": { "@ardrive/turbo-sdk": "^1.20.0", "react": "^18.3.1", "react-dom": "^18.3.1" }, "devDependencies": { "@types/react": "^18.3.1", "@types/react-dom": "^18.3.0", "@vitejs/plugin-react": "^4.2.1", "typescript": "^5.3.3", "vite": "^5.2.14", "vite-plugin-node-polyfills": "^0.17.0" } } ``` ## Common Issues and Solutions ### Build Errors 1. **"global is not defined"** - Ensure you have `global: 'globalThis'` in your Vite config's `define` section 2. **Buffer polyfill issues** - Make sure `vite-plugin-node-polyfills` is properly configured with Buffer globals - Add the buffer path mapping in your `tsconfig.json` 3. **Module resolution errors** - Use `moduleResolution: "Bundler"` in TypeScript configuration - Ensure you're importing from `@ardrive/turbo-sdk/web` for browser usage ### Runtime Errors 1. **"process is not defined"** - Enable process globals in the node polyfills plugin configuration 2. **Wallet integration errors** - For MetaMask, use `InjectedEthereumSigner` from `@dha-team/arbundles` - For Wander wallet, use `ArconnectSigner` from the Turbo SDK - Always check wallet availability before attempting connection ### Development Experience 1. **Hot reload issues with wallet connections** - Wallet state may not persist across hot reloads - Consider using localStorage to persist connection state 2. **Console warnings about dependencies** - Some peer dependency warnings are normal for wallet libraries - Focus on runtime functionality rather than dependency warnings ## Best Practices 1. **Development vs Production** - Use debug logs during development: `TurboFactory.setLogLevel('debug')` - Remove debug logs in production builds 2. **Error Handling** - Always wrap wallet operations in try-catch blocks - Provide meaningful error messages to users - Log detailed error information for debugging 3. **Performance** - Initialize Turbo clients once and reuse them - Consider lazy loading wallet integration components - Use loading states for better user experience 4. **Security** - Never expose private keys in browser applications - Always validate wallet connections before operations - Use secure wallet connection methods in production ## Production Deployment Checklist For production builds: 1. **Build optimization** - Vite automatically optimizes builds with tree shaking - Polyfills are only included when needed 2. **Testing** - Test wallet connections across different browsers - Verify polyfills work in production builds - Test with actual wallet extensions 3. **Monitoring** - Monitor bundle sizes to ensure polyfills don't bloat your app - Set up error tracking for wallet connection failures ## Implementation Verification To verify your Vite setup is working correctly: 1. **Check Development Server**: Start your dev server and verify no polyfill errors 2. 
**Test Wallet Connections**: Ensure both MetaMask and Wander wallet integrations work 3. **Build Verification**: Run `npm run build` and check for any build errors 4. **Bundle Analysis**: Use `vite-bundle-analyzer` to inspect your bundle size ## Additional Resources - [Vite Documentation](https://vitejs.dev/) - [vite-plugin-node-polyfills](https://www.npmjs.com/package/vite-plugin-node-polyfills) - [Turbo SDK Documentation](https://docs.ardrive.io) - [Web Usage Examples](https://docs.ardrive.io) - [ArDrive Examples Repository](https://github.com/ardriveio/turbo-sdk) --- For more examples and advanced usage patterns, refer to the [Turbo SDK examples directory](https://github.com/ardriveio/turbo-sdk) or the main [SDK documentation](https://docs.ardrive.io). # Get Started (/build) Welcome to AR.IO Network's developer documentation. This section will guide you through everything you need to know for building on top of AR.IO Network from uploading and accessing your data, to operating and extending your own infrastructure to support your unique use-case. If you're unfamiliar with Arweave's permanent storage and AR.IO Network we recommend reading this [introduction section](/learn) first. ## Get Started Building with AR.IO } title="Uploading Data" description="Learn how to permanently store files, websites, and application data on Arweave" href="/build/upload" /> } title="Accessing Data" description="Query, retrieve, and interact with data stored on the permanent web" href="/build/access" /> } title="Running Your Own Gateway" description="Deploy and operate AR.IO gateway infrastructure to support the network" href="/build/run-a-gateway" /> ## Developer Resources } title="Turbo SDK Reference" description="Fast upload service SDK with payment processing and instant confirmation" href="/sdks/turbo-sdk/events" /> } title="Wayfinder SDK Reference" description="Decentralized data access SDK with built-in verification and routing" href="/sdks/wayfinder" /> } title="AR.IO SDK Reference" description="Complete SDK documentation for interacting with AR.IO Network protocols" href="/sdks/ar-io-sdk" /> } title="Gateway API Reference" description="Direct API access for custom implementations and integrations" href="/apis" /> ## Use Cases & Guides } title="Hosting Decentralized Websites" description="Build and deploy censorship-resistant web applications on the permanent web" href="/build/guides/hosting-decentralized-websites" /> } title="ArNS Primary Names" description="Set up primary names for user-friendly wallet addresses" href="/build/guides/arns-primary-names" /> } title="ArNS Undernames & Versioning" description="Manage subdomains and versioning for your ArNS names" href="/build/guides/arns-undernames-versioning" /> } title="ArNS Marketplace" description="Explore the ArNS marketplace for name trading and management" href="/build/guides/arns-marketplace" /> ## Get Help } title="Browse Examples" description="Explore code examples and sample applications" href="https://github.com/ar-io" /> } title="Join the Community" description="Connect with other developers on Discord" href="https://discord.gg/cuCqBb5v" /> Ready to build on the permanent web? Choose your path above and start creating applications that last forever. # Run a Gateway (/build/run-a-gateway) Join the decentralized network that powers permanent data access. Run your own **AR.IO Gateway** to support the permaweb infrastructure and earn rewards. 
## Gateway Options Choose the deployment approach that fits your needs - from local testing to production infrastructure. Production Gateway Earn Rewards} description={ Join the AR.IO Network and earn ARIO tokens Earn ARIO token rewards Serve Wayfinder traffic Cache and serve Arweave data } href="/build/run-a-gateway/quick-start#production-setup-with-custom-domain" icon={} /> Perfect for development and testing Quick Docker setup Test gateway features No commitment required } href="/build/run-a-gateway/quick-start" icon={} /> Optimize for specific use cases • Data filtering options • Performance tuning • Advanced features } href="/build/run-a-gateway/manage/filters" icon={} /> ## Why Run a Gateway? **Economic Benefits** - Earn ARIO tokens through network participation - Set custom pricing for premium services - Build sustainable infrastructure business **Technical Advantages** - Full control over data access and caching - Custom configuration for your applications - Direct integration with your services **Network Impact** - Support decentralized web infrastructure - Increase network reliability and redundancy - Enable censorship-resistant data access ## Quick Start in 30 Seconds Get a gateway running locally with a single command: ```bash # Prerequisites: Docker installed on your system docker run -p 4000:4000 ghcr.io/ar-io/ar-io-core:latest ``` Test your gateway: ```bash # Fetch a transaction curl localhost:4000/4jBV3ofWh41KhuTs2pFvj-KBZWUkbrbCYlJH0vLA6LM ``` Your gateway is now serving Arweave data! This local setup is perfect for: - Testing gateway functionality - Developing applications - Understanding gateway operations ## Learn Before You Build Understanding gateways helps you make informed infrastructure decisions. } /> } /> } /> ## Ready to Deploy? Whether you're exploring gateway capabilities or ready to join the network, we have resources to help: # Join the Network (/build/run-a-gateway/join-the-network) Take control of the permanent web by running your own **AR.IO Gateway**. Join the decentralized network that powers the permaweb and earn rewards for providing infrastructure services. ## Prerequisites ### Running Gateway Required You must have a fully functional AR.IO Gateway running with a custom domain and SSL certificates. **Don't have a gateway yet?** Follow our [Production Setup Guide](/build/run-a-gateway/quick-start#production-setup-with-custom-domain) to get your gateway running with proper DNS configuration. **Requirements:** - Gateway accessible via your custom domain (e.g., `https://yourdomain.com`) - SSL certificates properly configured - ArNS subdomain resolution working - Gateway responding to test requests ### Minimum Stake Requirement To join the network as a gateway operator, you need **10,000 ARIO tokens** as the minimum stake requirement. **Need to acquire ARIO tokens?** Visit our [Get the Token guide](/learn/token/get-the-token) to learn about all available methods including exchanges, DEXs, and network participation. **Acquisition Options:** - Purchase on centralized exchanges like Gate.io - Trade on decentralized exchanges (Dexi, Botega, Vento) - Use Wander wallet for easy exchange and swap functionality - Earn through network participation and community programs ## Join the Network Choose your preferred method to register your gateway: ### Visit the Network Portal Go to [gateways.ar.io](https://gateways.ar.io/#/gateways) to access the AR.IO Network Portal. 
The portal shows all active gateways on the network and provides the interface to register your own gateway. ### Connect Your Wallet Click the "Start your own gateway" button to begin the registration process. You'll be prompted to connect your wallet. Choose your preferred wallet (Wander, MetaMask, or Beacon) to connect. Use the same wallet address that you configured in your gateway's `AR_IO_WALLET` environment variable. This wallet will be the owner of your gateway registration. ### Fill Out Gateway Information Complete the gateway registration form with your gateway details: **Required Fields:** - **Label**: A display name for your gateway (e.g., "My New Gateway") - **Address**: Your gateway's domain with port (e.g., `https://fastandfurious.io:443`) - **Observer Wallet**: The public address of your observer wallet - **Properties ID**: Transaction ID of your gateway properties - **Stake (ARIO)**: Minimum stake required (typically 10,000 ARIO) - **Delegated Staking**: Enable to allow others to delegate stake to your gateway - **Minimum Delegated Stake**: Set minimum delegation amount (e.g., 100 ARIO) - **Reward Share Ratio**: Percentage of rewards shared with delegators (e.g., 50%) - **Note**: Additional information about your gateway (e.g., "AR.IO rules!") ### Confirm Registration Review all information carefully and click "Confirm" to submit your gateway registration to the network. **What happens next:** - Your gateway will be added to the Gateway Address Registry - Observers will start observing your gateway at the next Epoch (day) - You will begin to receive rewards based on your gateway performance - You can monitor your gateway's performance in the portal **Confirm your gateway registration:** Your gateway should now be viewable at `gateways.ar.io/#/` with the wallet address you used to join. This dashboard shows your gateway's information, stats, and performance metrics including join date, uptime, operator stake, and delegated stake details. ### Install the AR.IO CLI First, install the AR.IO CLI tool if you haven't already: ```bash npm install -g @ar.io/sdk ``` ### Run the Join Network Command Use the `ar.io join-network` command with your gateway configuration: ```bash ar.io join-network \ --wallet-file ./path/to/wallet.json \ --qty 10000000000 \ --auto-stake true \ --allow-delegated-staking true \ --min-delegated-stake 100000000 \ --delegate-reward-share-ratio 10 \ --label "My Test Gateway" \ --note "Test gateway for development" \ --observer-wallet 0VE0wIhDy90WiQoV3U2PeY44FH1aVetOoulPGqgYukj \ --fqdn my-gateway.example.com \ --port 443 \ --protocol https \ --mainnet ``` The wallet file used in the `--wallet-file` parameter must be the same wallet configured in your AR.IO Gateway's `AR_IO_WALLET` environment variable. This ensures your gateway registration is properly linked to your running gateway instance. **Parameter explanations:** - `--qty 10000000000` - 10,000 ARIO in mARIO (1 ARIO = 1,000,000 mARIO) - `--min-delegated-stake 100000000` - 100 ARIO in mARIO - `--delegate-reward-share-ratio 10` - 10% shared with delegators - `--observer-wallet` - Must match your gateway's OBSERVER_WALLET env var - `--fqdn` - Your gateway's domain name ### Verify Registration After running the command, verify your gateway registration using the CLI: ```bash ar.io get-gateway 0VE0wIhDy90WiQoV3U2PeY44FH1aVetOoulPGqgYukj ``` The status should show `joined` when your gateway is successfully registered.
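If you want to script this verification, you can filter the CLI output for the status field. This is a minimal sketch that assumes the CLI prints the gateway record as JSON and that `jq` is installed; the exact field names may differ between `@ar.io/sdk` versions:

```bash
# Print only the gateway's status from the CLI's JSON output
# ("status" is the field name suggested above; verify against your CLI version)
ar.io get-gateway 0VE0wIhDy90WiQoV3U2PeY44FH1aVetOoulPGqgYukj | jq -r '.status'
# Expected once registration completes: joined
```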
You can also verify by visiting: `gateways.ar.io/#/` The CLI will output transaction details and your gateway should appear in the network portal within a few minutes. ## What Happens After Registration After joining the network: - Your gateway will be monitored by the Observer system - You'll earn rewards for providing reliable service - You can monitor your gateway's performance and earnings in the portal - You may be selected as an Observer to help monitor other gateways ## Next Steps Your gateway is now part of the AR.IO Network! Here are some next steps to maximize your participation: } /> } /> } /> } /> # Content Moderation (/build/run-a-gateway/manage/content-moderation) ## Overview Arweave is a network designed for permanent storage of data. It is a practical impossibility for data to be wholly removed from the network once it has been uploaded. The AR.IO Network has adopted Arweave's voluntary content moderation model, whereby every participant of the network has the autonomy to decide which content they want to (or can legally) store, serve, and see. Each gateway operating on the network has the right and ability to blocklist any content, ArNS name, or address that is deemed in violation of its content policies or is non-compliant with local regulations. Gateway operators can block content by submitting a `PUT` request to their gateway that defines the content to be blocked. This requires the `ADMIN_API_KEY` environment variable to be set in order to authenticate the moderation request. The simplest method for submitting moderation requests to a gateway is to use `curl` in a terminal. ## Quick Start ### Set Up Admin API Key Configure your admin API key in your `.env` file: ```bash # Set a secure admin API key ADMIN_API_KEY=your_secure_admin_key_here ``` Choose a strong, unique admin API key. This key provides administrative access to your gateway and should be kept secure. ### Test API Access Verify your admin API key is working: ```bash # Test admin endpoint access curl -H "Authorization: Bearer your_secure_admin_key_here" \ http://localhost:3000/ar-io/admin/debug ``` ### Block Your First Content Block a specific transaction ID: ```bash curl -X 'PUT' \ 'http://localhost:3000/ar-io/admin/block-data' \ -H 'accept: */*' \ -H 'Authorization: Bearer your_secure_admin_key_here' \ -H 'Content-Type: application/json' \ -d '{ "id": "3lyxgbgEvqNSvJrTX2J7CfRychUD5KClFhhVLyTPNCQ", "notes": "Content violates our policies", "source": "Manual Review" }' ``` ## Authentication Moderation requests must contain the gateway's `ADMIN_API_KEY` in the request Header, as `Authorization: Bearer`. For example, if a gateway's `ADMIN_API_KEY` is set to `secret`, any request must contain `Authorization: Bearer secret` in the Header. ## Block Data Specific data items can be blocked by a gateway operator by submitting a `PUT` request containing a json object with three keys: - **id**: The Arweave transaction Id of the data item to be blocked. - **notes**: Any note the gateway operator wants to leave themselves as to the reason the content is blocked. - **source**: A note as to where the content was identified as requiring moderation, e.g., a public block list. Requests to block data must be submitted to the gateway's `/ar-io/admin/block-data` endpoint.
```bash {{ title: 'curl' }} curl -X 'PUT' \ 'http://localhost:3000/ar-io/admin/block-data' \ -H 'accept: */*' \ -H 'Authorization: Bearer secret' \ -H 'Content-Type: application/json' \ -d '{ "id": "3lyxgbgEvqNSvJrTX2J7CfRychUD5KClFhhVLyTPNCQ", "notes": "This content is offensive", "source": "Public Block list" }' ``` ### Unblock Data At this time, blocked data items can only be unblocked by manually deleting the corresponding row from the `data/sqlite/moderation.db` database. The Arweave transaction Id of the blocked data item is stored in the database as raw bytes, which sqlite3 accepts as a BLOB (Binary Large OBject), and so it cannot be accessed easily using the original transaction Id, which is in base64url format. Sqlite3 can interact with a hexadecimal representation of the BLOB by using a BLOB literal. To do so, wrap a hexadecimal representation of the Arweave transaction Id in single quotes, and prepend an `X`, e.g., `X'de5cb181b804bea352bc9ad35f627b09f472721503e4a0a51618552f24cf3424'`. Where possible, consider using the `notes` or `source` values to identify rows for deletion rather than the `id`. ```bash {{ title: 'id' }} sqlite3 data/sqlite/moderation.db "DELETE FROM blocked_ids WHERE id=X'de5cb181b804bea352bc9ad35f627b09f472721503e4a0a51618552f24cf3424';" # Note that the id in this command is a BLOB literal using the hexadecimal representation of the Arweave transaction Id, not the transaction Id in its normal base64url format # One way to derive the hex from the base64url Id (assumes python3 is available): # python3 -c "import base64;print(base64.urlsafe_b64decode('3lyxgbgEvqNSvJrTX2J7CfRychUD5KClFhhVLyTPNCQ=').hex())" ``` ```bash {{ title: 'source' }} sqlite3 data/sqlite/moderation.db "DELETE FROM blocked_ids WHERE block_source_id = (SELECT id FROM block_sources WHERE name='Public Block List');" # This command uses a subquery to look up the id in block_sources where name='Public Block List' # This command will unblock ALL data items marked with this source value ``` ## Block ArNS Name ArNS names can be blocked so that a gateway will refuse to serve their associated content even if the name holder updates the Arweave transaction Id that the name points at. This is done via an authenticated `PUT` request to the endpoint `/ar-io/admin/block-name` containing a json object with three keys: - **name**: The ArNS name to be blocked. - **notes**: Any note the gateway operator wants to leave themselves as to the reason the content is blocked. - **source**: A note as to where the content was identified as requiring moderation, e.g., a public block list. ```bash {{ title: 'curl'}} curl -X 'PUT' \ 'http://localhost:3000/ar-io/admin/block-name' \ -H 'accept: */*' \ -H 'Authorization: Bearer secret' \ -H 'Content-Type: application/json' \ -d '{ "name": "i-bought-a-potato", "notes": "Potatoes are offensive", "source": "Public Block list" }' ``` For moderation purposes, each [undername](/learn/arns) of an ArNS name is treated as a separate name and must be moderated separately. ### Unblock ArNS Name Gateway operators can unblock ArNS names that were previously blocked. This is done via an authenticated `PUT` request to the endpoint `/ar-io/admin/unblock-name` containing a json object with a single key: - **name**: The ArNS name to be unblocked ```bash {{title: 'curl'}} curl -X 'PUT' \ 'http://localhost:3000/ar-io/admin/unblock-name' \ -H 'accept: */*' \ -H 'Authorization: Bearer secret' \ -H 'Content-Type: application/json' \ -d '{ "name": "i-bought-a-potato" }' ``` # Environment Variables (/build/run-a-gateway/manage/environment-variables) **Default Values**: Most environment variables have sensible defaults. Only set variables when you need to override the default behavior.
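For example, a minimal `.env` might override just a couple of values from the tables below; these specific values are illustrative, and anything left unset keeps its documented default:

```bash
# .env: override only what you need; unset variables keep their defaults
LOG_LEVEL=debug       # default: info
START_HEIGHT=1000000  # default: 0 (start indexing from the genesis block)
```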
## Core AR.IO Node The main AR.IO Gateway service that handles data retrieval, indexing, and serving. ### Server Configuration | Variable | Type | Default | Description | | --------------------- | ------- | ----------------- | ---------------------------------------- | | `PORT` | number | `4000` | HTTP server port | | `NODE_ENV` | string | `production` | Node.js environment | | `LOG_LEVEL` | string | `info` | Logging level (error, warn, info, debug) | | `LOG_FORMAT` | string | `simple` | Log format (simple, json) | | `LOG_FILTER` | string | `{"always":true}` | Log filtering configuration | | `LOG_ALL_STACKTRACES` | boolean | `false` | Include full stack traces in logs | | `INSTANCE_ID` | string | - | Unique instance identifier | ### Authentication & Security | Variable | Type | Default | Description | | -------------------- | ------ | --------- | ------------------------------------------------------- | | `ADMIN_API_KEY` | String | Generated | API key for admin endpoints (auto-generated if not set) | | `ADMIN_API_KEY_FILE` | String | - | Path to file containing admin API key | ### Network Configuration | Variable | Type | Default | Description | | ------------------------------------- | ------ | ---------------------------- | ------------------------------------ | | `TRUSTED_NODE_URL` | string | `https://arweave.net` | Trusted Arweave node URL | | `TRUSTED_GATEWAY_URL` | string | `https://arweave.net` | Primary trusted gateway URL | | `TRUSTED_GATEWAYS_URLS` | JSON | `{"https://arweave.net": 1}` | Weighted trusted gateway URLs | | `TRUSTED_GATEWAYS_REQUEST_TIMEOUT_MS` | number | `10000` | Request timeout for trusted gateways | | `ARWEAVE_NODE_IGNORE_URLS` | string | - | Comma-separated URLs to ignore | ### Chunk Management | Variable | Type | Default | Description | | ---------------------------------------- | ------ | --------------------------- | ----------------------------------- | | `CHUNK_POST_URLS` | string | `https://arweave.net/chunk` | URLs for posting chunks | | `CHUNK_POST_CONCURRENCY_LIMIT` | number | `2` | Max concurrent chunk posts | | `CHUNK_POST_MIN_SUCCESS_COUNT` | number | `3` | Min successful chunk posts required | | `CHUNK_POST_RESPONSE_TIMEOUT_MS` | number | - | Chunk POST response timeout | | `CHUNK_POST_ABORT_TIMEOUT_MS` | number | - | Chunk POST abort timeout | | `SECONDARY_CHUNK_POST_URLS` | string | - | Secondary chunk POST URLs | | `SECONDARY_CHUNK_POST_CONCURRENCY_LIMIT` | number | `2` | Secondary chunk POST concurrency | | `SECONDARY_CHUNK_POST_MIN_SUCCESS_COUNT` | number | `1` | Secondary chunk POST success count | ### Data Sources | Variable | Type | Default | Description | | ---------------------------- | ------ | ------------------------------------ | -------------------------------------------- | | `ON_DEMAND_RETRIEVAL_ORDER` | string | `s3,trusted-gateways,chunks,tx-data` | On-demand data retrieval priority | | `BACKGROUND_RETRIEVAL_ORDER` | string | `chunks,s3,trusted-gateways,tx-data` | Background data retrieval priority | | `CHUNK_DATA_SOURCE_TYPE` | string | `fs` | Chunk data source type (fs, legacy-s3) | | `CHUNK_METADATA_SOURCE_TYPE` | string | `fs` | Chunk metadata source type (fs, legacy-psql) | ### Indexing & Synchronization | Variable | Type | Default | Description | | -------------------------------- | ------- | ---------- | ---------------------------------- | | `START_WRITERS` | boolean | `true` | Enable indexing processes | | `START_HEIGHT` | number | `0` | Starting block height for indexing | | `STOP_HEIGHT` | number | `Infinity` | 
Stopping block height for indexing | | `SKIP_CACHE` | boolean | `false` | Bypass header cache | | `SIMULATED_REQUEST_FAILURE_RATE` | number | `0` | Rate of simulated request failures | ### ANS-104 Bundle Processing | Variable | Type | Default | Description | | ------------------------- | ------- | ----------------- | ------------------------------------- | | `ANS104_UNBUNDLE_FILTER` | JSON | `{"never": true}` | Filter for bundles to unbundle | | `ANS104_INDEX_FILTER` | JSON | `{"never": true}` | Filter for data items to index | | `ANS104_UNBUNDLE_WORKERS` | number | `1` | Number of unbundling workers | | `ANS104_DOWNLOAD_WORKERS` | number | `5` | Number of download workers | | `FILTER_CHANGE_REPROCESS` | boolean | `false` | Reprocess old bundles with new filter | | `BACKFILL_BUNDLE_RECORDS` | boolean | `false` | Backfill bundle records | ### Data Management | Variable | Type | Default | Description | | --------------------------------------- | ------- | -------- | ---------------------------------- | | `WRITE_ANS104_DATA_ITEM_DB_SIGNATURES` | boolean | `false` | Write data item signatures to DB | | `WRITE_TRANSACTION_DB_SIGNATURES` | boolean | `false` | Write transaction signatures to DB | | `ENABLE_DATA_DB_WAL_CLEANUP` | boolean | `false` | Enable data DB WAL cleanup | | `MAX_DATA_ITEM_QUEUE_SIZE` | number | `100000` | Max data items in queue | | `BUNDLE_DATA_IMPORTER_QUEUE_SIZE` | number | `1000` | Max bundles in import queue | | `VERIFICATION_DATA_IMPORTER_QUEUE_SIZE` | number | `1000` | Max verification items in queue | | `DATA_ITEM_FLUSH_COUNT_THRESHOLD` | number | `1000` | Data items threshold for flushing | | `MAX_FLUSH_INTERVAL_SECONDS` | number | `600` | Max interval between flushes | ### File System Cleanup | Variable | Type | Default | Description | | ------------------------------------------ | ------ | ---------- | ------------------------------------ | | `FS_CLEANUP_WORKER_BATCH_SIZE` | number | `2000` | Files per cleanup batch | | `FS_CLEANUP_WORKER_BATCH_PAUSE_DURATION` | number | `5000` | Pause between cleanup batches (ms) | | `FS_CLEANUP_WORKER_RESTART_PAUSE_DURATION` | number | `14400000` | Pause before restarting cleanup (ms) | ### Background Verification | Variable | Type | Default | Description | | ------------------------------------------------ | ------- | ------- | ----------------------------------- | | `ENABLE_BACKGROUND_DATA_VERIFICATION` | boolean | `false` | Enable background data verification | | `BACKGROUND_DATA_VERIFICATION_INTERVAL_SECONDS` | number | `600` | Verification interval | | `BACKGROUND_DATA_VERIFICATION_WORKER_COUNT` | number | `1` | Number of verification workers | | `BACKGROUND_DATA_VERIFICATION_STREAM_TIMEOUT_MS` | number | `30000` | Stream timeout for verification | ### Bundle Repair | Variable | Type | Default | Description | | -------------------------------------------------- | ------ | ------- | ----------------------------- | | `BUNDLE_REPAIR_RETRY_INTERVAL_SECONDS` | number | `300` | Bundle repair retry interval | | `BUNDLE_REPAIR_UPDATE_TIMESTAMPS_INTERVAL_SECONDS` | number | `300` | Timestamp update interval | | `BUNDLE_REPAIR_BACKFILL_INTERVAL_SECONDS` | number | `900` | Backfill interval | | `BUNDLE_REPAIR_FILTER_REPROCESS_INTERVAL_SECONDS` | number | `300` | Filter reprocess interval | | `BUNDLE_REPAIR_RETRY_BATCH_SIZE` | number | `5000` | Batch size for repair retries | ### ArNS Configuration | Variable | Type | Default | Description | | ------------------------------------ | ------ | ------------------- | 
------------------------------------------------- | | `ARNS_ROOT_HOST` | string | - | Root hostname for ArNS | | `SANDBOX_PROTOCOL` | string | - | Protocol for sandboxing redirects (http or https) | | `AR_IO_SDK_LOG_LEVEL` | string | `none` | AR.IO SDK log level | | `ARNS_CACHE_TYPE` | string | `node` | ArNS cache type | | `ARNS_CACHE_TTL_SECONDS` | number | `86400` | ArNS cache TTL | | `ARNS_CACHE_MAX_KEYS` | number | `10000` | Max ArNS cache keys | | `ARNS_RESOLVER_PRIORITY_ORDER` | string | `gateway,on-demand` | ArNS resolver priority | | `ARNS_COMPOSITE_RESOLVER_TIMEOUT_MS` | number | `3000` | Composite resolver timeout | | `ARNS_NAMES_CACHE_TTL_SECONDS` | number | `3600` | Names cache TTL | | `ARNS_MAX_CONCURRENT_RESOLUTIONS` | number | `1` | Max concurrent resolutions | ### AR.IO Network | Variable | Type | Default | Description | | -------------------------- | ------ | --------------------------------------------- | -------------------------- | | `AR_IO_WALLET` | string | - | Gateway wallet | | `IO_PROCESS_ID` | string | `qNvAoz0TgcH7DMg8BCVn8jF32QH5L6T29VjHxhHqqGE` | AR.IO process ID | | `AR_IO_NODE_RELEASE` | string | `33` | AR.IO node release version | | `APEX_TX_ID` | string | - | Apex transaction ID | | `APEX_ARNS_NAME` | string | - | Apex ArNS name | | `ARNS_NOT_FOUND_TX_ID` | string | - | Not found transaction ID | | `ARNS_NOT_FOUND_ARNS_NAME` | string | `unregistered_arns` | Not found ArNS name | ### Caching | Variable | Type | Default | Description | | ----------------------------------------- | ------- | ------------------------ | --------------------------------------- | | `CHAIN_CACHE_TYPE` | string | `lmdb` | Chain cache type (lmdb, fs, redis) | | `REDIS_CACHE_URL` | string | `redis://localhost:6379` | Redis cache URL | | `REDIS_USE_TLS` | boolean | `false` | Use TLS for Redis | | `REDIS_CACHE_TTL_SECONDS` | number | `28800` | Redis cache TTL | | `ENABLE_FS_HEADER_CACHE_CLEANUP` | boolean | `false` | Enable FS header cache cleanup | | `CONTIGUOUS_DATA_CACHE_CLEANUP_THRESHOLD` | string | - | Contiguous data cache cleanup threshold | ### Webhooks | Variable | Type | Default | Description | | ------------------------ | ------ | ----------------- | -------------------------------------- | | `WEBHOOK_TARGET_SERVERS` | string | - | Comma-separated webhook target servers | | `WEBHOOK_INDEX_FILTER` | JSON | `{"never": true}` | Webhook index filter | | `WEBHOOK_BLOCK_FILTER` | JSON | `{"never": true}` | Webhook block filter | ### Mempool Watcher | Variable | Type | Default | Description | | ----------------------------- | ------- | ------- | ------------------------ | | `ENABLE_MEMPOOL_WATCHER` | boolean | `false` | Enable mempool watcher | | `MEMPOOL_POLLING_INTERVAL_MS` | number | `30000` | Mempool polling interval | ### AWS S3 | Variable | Type | Default | Description | | ------------------------------- | ------ | ------- | ----------------------------- | | `AWS_ACCESS_KEY_ID` | string | - | AWS access key ID | | `AWS_SECRET_ACCESS_KEY` | string | - | AWS secret access key | | `AWS_SESSION_TOKEN` | string | - | AWS session token | | `AWS_REGION` | string | - | AWS region | | `AWS_ENDPOINT` | string | - | AWS endpoint | | `AWS_S3_CONTIGUOUS_DATA_BUCKET` | string | - | S3 bucket for contiguous data | | `AWS_S3_CONTIGUOUS_DATA_PREFIX` | string | - | S3
prefix for contiguous data | ### ClickHouse | Variable | Type | Default | Description | | --------------------- | ------ | ------- | ------------------- | | `CLICKHOUSE_URL` | string | - | ClickHouse URL | | `CLICKHOUSE_USER` | string | - | ClickHouse username | | `CLICKHOUSE_PASSWORD` | string | - | ClickHouse password | ### PostgreSQL (Legacy) | Variable | Type | Default | Description | | ------------------------------------- | ------- | ------- | ----------------------------------- | | `LEGACY_PSQL_CONNECTION_STRING` | string | - | PostgreSQL connection string | | `LEGACY_PSQL_PASSWORD_FILE` | string | - | Path to PostgreSQL password file | | `LEGACY_PSQL_SSL_REJECT_UNAUTHORIZED` | boolean | `true` | Reject unauthorized SSL connections | ### AO (Autonomous Objects) | Variable | Type | Default | Description | | ------------------- | ------ | ------- | ----------------- | | `AO_CU_URL` | string | - | AO CU URL | | `NETWORK_AO_CU_URL` | string | - | Network AO CU URL | | `ANT_AO_CU_URL` | string | - | ANT AO CU URL | | `AO_MU_URL` | string | - | AO MU URL | | `AO_GATEWAY_URL` | string | - | AO Gateway URL | | `AO_GRAPHQL_URL` | string | - | AO GraphQL URL | ### Circuit Breaker | Variable | Type | Default | Description | | ----------------------------------------------------------------- | ------ | --------- | -------------------------- | | `ARIO_PROCESS_DEFAULT_CIRCUIT_BREAKER_TIMEOUT_MS` | number | `60000` | Circuit breaker timeout | | `ARIO_PROCESS_DEFAULT_CIRCUIT_BREAKER_ERROR_THRESHOLD_PERCENTAGE` | number | `30` | Error threshold percentage | | `ARIO_PROCESS_DEFAULT_CIRCUIT_BREAKER_ROLLING_COUNT_TIMEOUT_MS` | number | `600000` | Rolling count timeout | | `ARIO_PROCESS_DEFAULT_CIRCUIT_BREAKER_RESET_TIMEOUT_MS` | number | `1200000` | Reset timeout | ### Performance Tuning | Variable | Type | Default | Description | | ----------------------------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------ | | `NODE_JS_MAX_OLD_SPACE_SIZE` | string | - | Node.js max old space size | | `WEIGHTED_PEERS_TEMPERATURE_DELTA` | number | `2` | Weighted peers temperature delta | | `GATEWAY_PEERS_WEIGHTS_CACHE_DURATION_MS` | number | `5000` | Gateway peers weights cache duration | | `GATEWAY_PEERS_REQUEST_WINDOW_COUNT` | number | `20` | Gateway peers request window count | | `TAG_SELECTIVITY` | JSON | `{"Parent-Folder-Id": 20, "Message": 20, "Drive-Id": 10, "Process": 10, "Recipient": 10, "App-Name": -10, "Content-Type": -10, "Data-Protocol": -10}` | Tag selectivity configuration | ### Data Paths | Variable | Type | Default | Description | | ---------------------- | ------ | ------------------- | ----------------------- | | `CHUNKS_DATA_PATH` | string | `./data/chunks` | Path to chunks data | | `CONTIGUOUS_DATA_PATH` | string | `./data/contiguous` | Path to contiguous data | | `HEADERS_DATA_PATH` | string | `./data/headers` | Path to headers data | | `SQLITE_DATA_PATH` | string | `./data/sqlite` | Path to SQLite data | | `DUCKDB_DATA_PATH` | string | `./data/duckdb` | Path to DuckDB data | | `TEMP_DATA_PATH` | string | `./data/tmp` | Path to temporary data | | `LMDB_DATA_PATH` | string | `./data/lmdb` | Path to LMDB data | | `PARQUET_DATA_PATH` | string | `./data/parquet` | Path to Parquet data | ## Observer Service ### Basic Configuration | Variable | Type | Default | Description | | -------------------- | ------ | ------- | 
-------------------------- | | `PORT` | number | `5050` | Observer service port | | `LOG_LEVEL` | string | - | Observer log level | | `OBSERVER_WALLET` | string | - | Observer wallet | | `IO_PROCESS_ID` | string | - | AR.IO process ID | | `AR_IO_NODE_RELEASE` | string | `33` | AR.IO node release version | ### Observer Operation | Variable | Type | Default | Description | | ------------------------------------- | ------- | ------- | ------------------------------------------ | | `SUBMIT_CONTRACT_INTERACTIONS` | boolean | `true` | Submit contract interactions | | `NUM_ARNS_NAMES_TO_OBSERVE_PER_GROUP` | number | `8` | Number of ArNS names per observation group | | `REPORT_GENERATION_INTERVAL_MS` | string | - | Report generation interval | | `REPORT_DATA_SINK` | string | - | Report data sink | | `TURBO_UPLOAD_SERVICE_URL` | string | - | Turbo upload service URL | | `RUN_OBSERVER` | boolean | `true` | Run observer service | | `MIN_RELEASE_NUMBER` | number | `0` | Minimum release number | ### AO (Autonomous Objects) | Variable | Type | Default | Description | | ------------------- | ------ | ------- | ----------------- | | `AO_CU_URL` | string | - | AO CU URL | | `NETWORK_AO_CU_URL` | string | - | Network AO CU URL | | `AO_MU_URL` | string | - | AO MU URL | | `AO_GATEWAY_URL` | string | - | AO Gateway URL | | `AO_GRAPHQL_URL` | string | - | AO GraphQL URL | ### Data Paths | Variable | Type | Default | Description | | ------------------- | ------ | ---------------- | ---------------------- | | `TEMP_DATA_PATH` | string | `./data/tmp` | Path to temporary data | | `REPORTS_DATA_PATH` | string | `./data/reports` | Path to reports data | | `WALLETS_PATH` | string | `./wallets` | Path to wallets | ## Envoy Proxy ### Basic Configuration | Variable | Type | Default | Description | | --------------------- | ------ | ------------- | --------------- | | `LOG_LEVEL` | string | `info` | Envoy log level | | `TVAL_AR_IO_HOST` | string | `core` | AR.IO host | | `TVAL_AR_IO_PORT` | number | `4000` | AR.IO port | | `TVAL_OBSERVER_HOST` | string | `observer` | Observer host | | `TVAL_OBSERVER_PORT` | number | `5050` | Observer port | | `TVAL_GATEWAY_HOST` | string | `arweave.net` | Gateway host | | `TVAL_GRAPHQL_HOST` | string | `core` |
GraphQL host | | `TVAL_GRAPHQL_PORT` | number | `4000` | GraphQL port | | `TVAL_ARNS_ROOT_HOST` | string | - | ArNS root host | ## Redis Cache ### Basic Configuration | Variable | Type | Default | Description | | ------------------- | ------ | --------------------------- | ----------------- | | `REDIS_IMAGE_TAG` | string | `7` | Redis image tag | | `REDIS_MAX_MEMORY` | string | `256mb` | Redis max memory | | `EXTRA_REDIS_FLAGS` | string | `--save "" --appendonly no` | Extra Redis flags | ### Data Paths | Variable | Type | Default | Description | | ----------------- | ------ | -------------- | ------------------ | | `REDIS_DATA_PATH` | string | `./data/redis` | Path to Redis data | ## ClickHouse ### Basic Configuration | Variable | Type | Default | Description | | ---------------------- | ------ | ------- | -------------------- | | `CLICKHOUSE_IMAGE_TAG` | string | `25.4` | ClickHouse image tag | | `CLICKHOUSE_USER` | string | - | ClickHouse username | | `CLICKHOUSE_PASSWORD` | string | - | ClickHouse password | ### Data Paths | Variable | Type | Default | Description | | ---------------------- | ------ | ------------------- | ----------------------- | | `CLICKHOUSE_DATA_PATH` | string | `./data/clickhouse` | Path to ClickHouse data | | `CLICKHOUSE_LOGS_PATH` | string | `./logs/clickhouse` | Path to ClickHouse logs | ### ClickHouse Auto-Import | Variable | Type | Default | Description | | ------------------------------------------ | ------ | ------------------------------------------ | ------------------------------------------- | | `CLICKHOUSE_AUTO_IMPORT_IMAGE_TAG` | string | `79792e1b549f64edad3e338769949fd9bffa62db` | ClickHouse auto-import image tag | | `CLICKHOUSE_DEBUG` | string | - | ClickHouse debug flag | | `AR_IO_HOST` | string | `core` | AR.IO host | | `AR_IO_PORT` | number | `4000` | AR.IO port | | `ADMIN_API_KEY` | string | - | Admin API key | | `PARQUET_DATA_PATH` | string | `./data/parquet` | Path to Parquet data | | `CLICKHOUSE_HOST` | string | `clickhouse` | ClickHouse host | | `CLICKHOUSE_PORT` | string | - | ClickHouse port (defaults to 9000) | | `CLICKHOUSE_USER` | string | - | ClickHouse username (defaults to 'default') | | `CLICKHOUSE_PASSWORD` | string | - | ClickHouse password (required) | | `CLICKHOUSE_AUTO_IMPORT_SLEEP_INTERVAL` | string | - | Auto-import sleep interval | | `CLICKHOUSE_AUTO_IMPORT_HEIGHT_INTERVAL` | string | - | Auto-import height interval | | `CLICKHOUSE_AUTO_IMPORT_MAX_ROWS_PER_FILE` | string | - | Max rows per file for auto-import | ## Litestream Backup ### S3 Configuration | Variable | Type | Default | Description | | ------------------------------------------ | ------ | ------------------------------------------ | ----------------------------------- | | `LITESTREAM_IMAGE_TAG` | string | `be121fc0ae24a9eb7cdb2b92d01f047039b5f5e8` | Litestream image tag | | `AR_IO_SQLITE_BACKUP_S3_BUCKET_NAME` | string | - | S3 bucket name for SQLite backups | | `AR_IO_SQLITE_BACKUP_S3_BUCKET_REGION` | string | - | S3 bucket region for SQLite backups | | `AR_IO_SQLITE_BACKUP_S3_BUCKET_ACCESS_KEY` | string | - | S3 access key for SQLite backups | | `AR_IO_SQLITE_BACKUP_S3_BUCKET_SECRET_KEY` | string | - | S3 secret key for SQLite backups | | `AR_IO_SQLITE_BACKUP_S3_BUCKET_PREFIX` | string | - | S3 prefix for SQLite backups | ### Data Paths | Variable | Type | Default | Description | | ------------------ | ------ | --------------- | ------------------- | | `SQLITE_DATA_PATH` | string | `./data/sqlite` | Path to SQLite data | ## Autoheal Service ### Configuration 
| Variable | Type | Default | Description | | ------------------------------- | ------- | ---------- | ------------------------------- | | `AUTOHEAL_CONTAINER_LABEL` | string | `autoheal` | Container label for autoheal | | `AUTOHEAL_ONLY_MONITOR_RUNNING` | boolean | `false` | Only monitor running containers | | `RUN_AUTOHEAL` | boolean | `false` | Enable autoheal service | ## OpenTelemetry Tracing ### Basic Configuration | Variable | Type | Default | Description | | --------------------------------- | ------ | ------------ | ---------------------------------- | | `OTEL_SERVICE_NAME` | string | `ar-io-node` | OpenTelemetry service name | | `OTEL_EXPORTER_OTLP_ENDPOINT` | string | - | OTLP exporter endpoint | | `OTEL_EXPORTER_OTLP_HEADERS` | string | - | OTLP exporter headers | | `OTEL_EXPORTER_OTLP_HEADERS_FILE` | string | - | Path to OTLP exporter headers file | ### Performance Tuning | Variable | Type | Default | Description | | ------------------------------------------------ | ------ | ------- | ----------------------------------- | | `OTEL_BATCH_LOG_PROCESSOR_SCHEDULED_DELAY_MS` | number | `5000` | Batch log processor scheduled delay | | `OTEL_BATCH_LOG_PROCESSOR_MAX_EXPORT_BATCH_SIZE` | number | `512` | Max export batch size | | `OTEL_TRACING_SAMPLING_RATE_DENOMINATOR` | number | `1000` | Tracing sampling rate denominator | ## Image Tags ### Service Images | Variable | Type | Default | Description | | ---------------------------------- | ------ | ------------------------------------------ | -------------------------------- | | `ENVOY_IMAGE_TAG` | string | `4789af164fcd3029a65a1d6739f2d9026567206e` | Envoy image tag | | `CORE_IMAGE_TAG` | string | `3a793c6ee06f5e1df56920fc70184b213ceb8c6e` | Core image tag | | `OBSERVER_IMAGE_TAG` | string | `e5f6ae36fd6eea04be5ebba2624f8ecc08db4ea0` | Observer image tag | | `LITESTREAM_IMAGE_TAG` | string | `be121fc0ae24a9eb7cdb2b92d01f047039b5f5e8` | Litestream image tag | | `CLICKHOUSE_AUTO_IMPORT_IMAGE_TAG` | string | `79792e1b549f64edad3e338769949fd9bffa62db` | ClickHouse auto-import image tag | ## Additional Paths ### Data Directories | Variable | Type | Default | Description | | ---------------------- | ------ | ------------------- | ----------------------- | | `CHUNKS_DATA_PATH` | string | `./data/chunks` | Path to chunks data | | `CONTIGUOUS_DATA_PATH` | string | `./data/contiguous` | Path to contiguous data | | `HEADERS_DATA_PATH` | string | `./data/headers` | Path to headers data | | `SQLITE_DATA_PATH` | string | `./data/sqlite` | Path to SQLite data | | `DUCKDB_DATA_PATH` | string | `./data/duckdb` | Path to DuckDB data | | `TEMP_DATA_PATH` | string | `./data/tmp` | Path to temporary data | | `LMDB_DATA_PATH` | string | `./data/lmdb` | Path to LMDB data | | `PARQUET_DATA_PATH` | string | `./data/parquet` | Path to Parquet data | | `REDIS_DATA_PATH` | string | `./data/redis` | Path to Redis data | | `CLICKHOUSE_DATA_PATH` | string | `./data/clickhouse` | Path to ClickHouse data | | `CLICKHOUSE_LOGS_PATH` | string | `./logs/clickhouse` | Path to ClickHouse logs | | `REPORTS_DATA_PATH` | string | `./data/reports` | Path to reports data | | `WALLETS_PATH` | string | `./wallets` | Path to wallets | ## Usage Notes - All environment variables are optional unless otherwise specified - Default values are shown in the "Default" column - Boolean values should be set to `true` or `false` - JSON values should be valid JSON strings - Path values should be absolute or relative to the project root - Some variables are only used in specific deployment scenarios (e.g.,
ClickHouse, Litestream) - Image tags can be updated to use different versions of the services - Data paths can be customized based on your storage requirements ## Configuration Examples ### Basic Gateway Setup ```bash # Core configuration NODE_ENV=production LOG_LEVEL=info PORT=4000 ADMIN_API_KEY=your-admin-key-here # Network configuration TRUSTED_NODE_URL=https://arweave.net TRUSTED_GATEWAY_URL=https://arweave.net # Data paths CHUNKS_DATA_PATH=/data/chunks CONTIGUOUS_DATA_PATH=/data/contiguous SQLITE_DATA_PATH=/data/sqlite ``` ### Advanced Gateway with ClickHouse ```bash # Core configuration NODE_ENV=production LOG_LEVEL=info PORT=4000 ADMIN_API_KEY=your-admin-key-here # ClickHouse configuration CLICKHOUSE_URL=http://clickhouse:8123 CLICKHOUSE_USER=default CLICKHOUSE_PASSWORD=your-password # Bundle processing ANS104_UNBUNDLE_FILTER={"and": [{"equals": {"App-Name": "MyApp-v1.0"}}]} ANS104_INDEX_FILTER={"and": [{"equals": {"App-Name": "MyApp-v1.0"}}]} ANS104_UNBUNDLE_WORKERS=2 ANS104_DOWNLOAD_WORKERS=5 ``` ### Gateway with Redis Caching ```bash # Core configuration NODE_ENV=production LOG_LEVEL=info PORT=4000 ADMIN_API_KEY=your-admin-key-here # Redis configuration CHAIN_CACHE_TYPE=redis REDIS_CACHE_URL=redis://redis:6379 REDIS_USE_TLS=false REDIS_CACHE_TTL_SECONDS=28800 # ArNS configuration ARNS_ROOT_HOST=your-domain.com ARNS_CACHE_TYPE=redis ``` This comprehensive reference should help you configure your AR.IO Gateway with the appropriate environment variables for your specific use case. # Gateway Filters (/build/run-a-gateway/manage/filters) Configure your AR.IO Gateway to efficiently process and index only the data you need. This comprehensive guide covers advanced filtering techniques, performance optimization, and real-world use cases. ## Overview The AR.IO Gateway uses a flexible JSON-based filtering system to control data processing and indexing. The system provides precise control over which bundles are processed and which data items are indexed for querying. ## Understanding the Filtering System The AR.IO Gateway uses two primary filters to control data processing: 1. **ANS104_UNBUNDLE_FILTER** - Controls which bundles are processed and unbundled 2. **ANS104_INDEX_FILTER** - Controls which data items from unbundled bundles are indexed for querying By default, gateways process no bundles and index no data items. You must explicitly configure filters to start processing data. ## Core Environment Variables ### Configure Data Management Optimize data storage and processing: ```bash # Number of new data items before flushing to stable storage DATA_ITEM_FLUSH_COUNT_THRESHOLD=1000 # Maximum time between flushes (in seconds) MAX_FLUSH_INTERVAL_SECONDS=600 # Maximum number of data items to queue for indexing MAX_DATA_ITEM_QUEUE_SIZE=100000 # Enable background verification ENABLE_BACKGROUND_DATA_VERIFICATION=true ``` ### Set Up GraphQL Configuration Choose between local-only or proxied queries: ```bash # For new gateways - proxy to arweave.net for complete index GRAPHQL_HOST=arweave.net GRAPHQL_PORT=443 # For local-only queries (uncomment to use) # GRAPHQL_HOST= ``` ## Filter Construction While the filters below are displayed on multiple lines for readability, they must be stored in the `.env` file as a single line for proper processing. ### Basic Filters The simplest filters you can use are `"always"` and `"never"` filters. The `"never"` filter is the default behavior and will match nothing, while the `"always"` filter matches everything. 
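Before walking through the filter syntax itself, here is what the single-line storage rule mentioned above looks like in practice, using an illustrative `App-Name` filter:

```bash
# In .env, each filter must be valid JSON collapsed onto a single line
ANS104_UNBUNDLE_FILTER={"tags":[{"name":"App-Name","value":"MyApp-v1.0"}]}
ANS104_INDEX_FILTER={"tags":[{"name":"App-Name","value":"MyApp-v1.0"}]}
```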
```json {{title: "Never Match"}} { "never": true } ``` ```json {{title: "Always Match"}} { "always": true } ``` ### Tag Filters Tag filters allow you to match items based on their tags in three different ways. You can match exact tag values, check for the presence of a tag regardless of its value, or match tags whose values start with specific text. All tag values are automatically base64url-decoded before matching. ```json {{title: "Exact Match"}} { "tags": [ { "name": "Content-Type", "value": "image/jpeg" } ] } ``` ```json {{title: "Match Tag Name Only"}} { "tags": [ { "name": "App-Name" } ] } ``` ```json {{title: "Starts With Match"}} { "tags": [ { "name": "Protocol", "valueStartsWith": "AO" } ] } ``` ### Attribute Filters Attribute filtering allows you to match items based on their metadata properties. The system automatically handles owner public key to address conversion, making it easy to filter by owner address. You can combine multiple attributes in a single filter: ```json {{title: "Basic Attributes"}} { "attributes": { "owner_address": "xyz123...", "data_size": 1000 } } ``` ### Nested Bundle Filter The `isNestedBundle` filter is a specialized filter that checks whether a data item is part of a nested bundle structure. It's particularly useful when you need to identify or process data items in bundles that are contained within other bundles. ```json {{title: "Basic Nested Bundle"}} { "isNestedBundle": true } ``` **Note**: When processing nested bundles, be sure to include filters that match the nested bundles in both `ANS104_UNBUNDLE_FILTER` and `ANS104_INDEX_FILTER`. The bundle data items (nested bundles) need to be indexed to be matched by the unbundle filter. ### Complex Filters Using Logical Operators For more complex scenarios, the system provides logical operators (AND, OR, NOT) that can be combined to create sophisticated filtering patterns.
These operators can be nested to any depth: ```json {{title: "AND Operation"}} { "and": [ { "tags": [ { "name": "App-Name", "value": "ArDrive-App" } ] }, { "tags": [ { "name": "Content-Type", "valueStartsWith": "image/" } ] } ] } ``` ```json {{title: "OR Operation"}} { "or": [ { "tags": [ { "name": "App-Name", "value": "ArDrive-App" } ] }, { "attributes": { "data_size": 1000 } } ] } ``` ```json {{title: "NOT Operation"}} { "not": { "tags": [ { "name": "Content-Type", "value": "application/json" } ] } } ``` ## Filter Configuration Strategies ### Process Everything ```json { "always": true } ``` ### Process Nothing (Default) ```json { "never": true } ``` ### Process Specific App Data ```json { "tags": [ { "name": "App-Name", "valueStartsWith": "MyApp" } ] } ``` ### Single Application ```json { "tags": [ { "name": "App-Name", "value": "MyApp-v1.0" } ] } ``` ### Multiple Applications ```json { "or": [ { "tags": [ { "name": "App-Name", "value": "MyApp-v1.0" } ] }, { "tags": [ { "name": "App-Name", "value": "AnotherApp-v2.1" } ] } ] } ``` ### Application with Version Range ```json { "tags": [ { "name": "App-Name", "valueStartsWith": "MyApp" } ] } ``` ### Content Type Filtering ```json { "tags": [ { "name": "Content-Type", "valueStartsWith": "image/" } ] } ``` ### Specific File Types ```json { "or": [ { "tags": [ { "name": "Content-Type", "value": "application/json" } ] }, { "tags": [ { "name": "Content-Type", "value": "text/plain" } ] } ] } ``` ### File Size Filtering ```json { "attributes": { "data_size": 1000000 } } ``` ### Single Owner ```json { "attributes": { "owner_address": "YOUR_WALLET_ADDRESS" } } ``` ### Multiple Owners ```json { "or": [ { "attributes": { "owner_address": "WALLET_ADDRESS_1" } }, { "attributes": { "owner_address": "WALLET_ADDRESS_2" } } ] } ``` ### Exclude Specific Owners ```json { "not": { "attributes": { "owner_address": "UNWANTED_WALLET_ADDRESS" } } } ``` ### Complex Multi-Condition Filter ```json { "and": [ { "tags": [ { "name": "App-Name", "valueStartsWith": "MyApp" } ] }, { "attributes": { "owner_address": "YOUR_WALLET_ADDRESS" } }, { "not": { "tags": [ { "name": "Content-Type", "value": "application/octet-stream" } ] } } ] } ``` ### Exclude Common Bundlers ```json { "and": [ { "not": { "or": [ { "tags": [ { "name": "Bundler-App-Name", "value": "Warp" } ] }, { "tags": [ { "name": "Bundler-App-Name", "value": "Redstone" } ] }, { "attributes": { "owner_address": "-OXcT1sVRSA5eGwt2k6Yuz8-3e3g9WJi5uSE99CWqsBs" } } ] } }, { "tags": [ { "name": "App-Name", "valueStartsWith": "MyApp" } ] } ] } ``` ## Real-World Use Cases ### Personal Data Gateway Perfect for individuals who want to process only their own data: **Unbundle Filter:** ```json { "and": [ { "not": { "or": [ { "tags": [ { "name": "Bundler-App-Name", "value": "Warp" } ] }, { "tags": [ { "name": "Bundler-App-Name", "value": "Redstone" } ] } ] } }, { "tags": [ { "name": "App-Name", "valueStartsWith": "MyApp" } ] } ] } ``` **Index Filter:** ```json { "attributes": { "owner_address": "YOUR_WALLET_ADDRESS" } } ``` ### Application-Specific Service Ideal for building services around specific applications: **Unbundle Filter:** ```json { "tags": [ { "name": "App-Name", "valueStartsWith": "MyApp" } ] } ``` **Index Filter:** ```json { "or": [ { "tags": [ { "name": "ArFS", "value": "0.10" } ] }, { "tags": [ { "name": "ArFS", "value": "0.11" } ] }, { "tags": [ { "name": "ArFS", "value": "0.12" } ] } ] } ``` ### Content-Type Focused Gateway For gateways specializing in specific content types: **Unbundle Filter:** ```json { 
"tags": [ { "name": "Content-Type", "valueStartsWith": "image/" } ] } ``` **Index Filter:** ```json { "and": [ { "tags": [ { "name": "Content-Type", "valueStartsWith": "image/" } ] }, { "attributes": { "data_size": 100000 } } ] } ``` ## Performance Optimization ### Worker Configuration ### Understanding Default Worker Settings The gateway uses sensible defaults that work well for most users: ```bash # Default values (no need to set unless customizing) # ANS104_UNBUNDLE_WORKERS=1 (default: 0, or 1 if filters are set) # ANS104_DOWNLOAD_WORKERS=5 (default: 5) # Only adjust if you have specific hardware requirements # or want to optimize for your system's capabilities ``` **When to Adjust Workers:** Only modify worker counts if you have high-performance hardware and want to maximize throughput, or if you're experiencing resource constraints and need to reduce load. ### Optimize Data Flushing Balance between memory usage and database performance: ```bash # For high-memory systems, increase threshold DATA_ITEM_FLUSH_COUNT_THRESHOLD=2000 # For low-memory systems, decrease threshold DATA_ITEM_FLUSH_COUNT_THRESHOLD=500 # Adjust flush interval based on data volume MAX_FLUSH_INTERVAL_SECONDS=300 ``` ### Enable Background Processing ```bash # Enable background verification ENABLE_BACKGROUND_DATA_VERIFICATION=true # Enable WAL cleanup for better performance ENABLE_DATA_DB_WAL_CLEANUP=true ``` ## Webhook Filters There are also two filters available that are used to trigger webhooks. When a transaction is processed that matches one of the webhook filters, the gateway will send a webhook to the specified `WEBHOOK_TARGET_SERVERS` urls containing the transaction data. ```bash WEBHOOK_INDEX_FILTER="" WEBHOOK_BLOCK_FILTER="" ``` The `WEBHOOK_INDEX_FILTER` is used to trigger a webhook when a transaction is indexed. The `WEBHOOK_BLOCK_FILTER` is used to trigger a webhook when a block is processed. ## Important Notes - All tag names and values are base64url-decoded before matching - Owner addresses are automatically converted from owner public keys - Empty or undefined filters default to "never match" - Tag matching requires all specified tags to match - Attribute matching requires all specified attributes to match - The filter system supports nested logical operations to any depth, allowing for very precise control over what data gets processed ## Best Practices ### Filter Design 1. **Start Simple** - Begin with basic filters and gradually add complexity 2. **Test Thoroughly** - Use `FILTER_CHANGE_REPROCESS=true` when changing filters 3. **Monitor Performance** - Watch system resources during processing 4. **Document Changes** - Keep track of filter modifications and their effects ### Maintenance 1. **Regular Monitoring** - Check gateway logs for errors and warnings 2. **Resource Cleanup** - Periodically clean up old data and logs 3. **Filter Optimization** - Refine filters based on actual data patterns 4. 
**Backup Configuration** - Keep copies of working filter configurations ### Troubleshooting If your gateway stops processing data after changing filters, check: - Filter syntax is valid JSON - Required environment variables are set - Gateway has been restarted after changes - System has sufficient resources ## Next Steps Now that you understand gateway filtering, continue building your infrastructure: } title="Set Up Monitoring" description="Deploy Grafana to visualize your gateway's performance metrics" href="/build/extensions/grafana" /> } title="Add ClickHouse" description="Improve query performance with ClickHouse and Parquet integration" href="/build/extensions/clickhouse" /> } title="Deploy Bundler" description="Accept data uploads directly through your gateway" href="/build/extensions/bundler" /> } title="Run Compute Unit" description="Execute AO processes locally for maximum efficiency" href="/build/extensions/compute-unit" /> # Importing SQLite Database Snapshots (/build/run-a-gateway/manage/index-snapshots) ## Overview One of the challenges of running an AR.IO Gateway is the initial synchronization time as your gateway builds its local index of the Arweave network. This process can take days or even weeks, depending on your hardware and the amount of data you want to index. To accelerate this process, you can import a pre-synchronized SQLite database snapshot that contains transaction and data item records already indexed. This guide will walk you through the process of importing a database snapshot into your AR.IO Gateway. The instructions below are designed for a Linux environment. Windows and macOS users must adapt them to use the appropriate package manager and command syntax for their platform. Unless otherwise specified, all commands should be run from the root directory of the gateway. ## Quick Start ### Download Database Snapshot Download the latest database snapshot using BitTorrent: ```bash transmission-cli "magnet:?xt=urn:btih:62ca6e05248e6df59fac9e38252e9c71951294ed&dn=2025-04-23-sqlite.tar.gz&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=http%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fopen.demonii.com%3A1337%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Fp4p.arenabg.com%3A1337%2Fannounce&tr=https%3A%2F%2Ftracker.bt4g.com%3A443%2Fannounce" ``` This downloads a 42.8GB snapshot current to April 23, 2025. ### Extract the Snapshot Extract the downloaded tarball: ```bash tar -xzf 2025-04-23-sqlite.tar.gz ``` This creates a directory with the extracted database files. ### Import the Snapshot Replace your existing database with the snapshot: ```bash # Stop the gateway docker compose down # Backup existing database (optional) mkdir sqlite-backup mv data/sqlite/* sqlite-backup/ # Remove old database (skip if you moved the files to a backup above) rm data/sqlite/* # Import new snapshot mv 2025-04-23-sqlite/* data/sqlite/ # Start the gateway docker compose up -d ``` ## Detailed Instructions ### Obtaining a Database Snapshot SQLite database snapshots are very large and not easy to update incrementally. For these reasons, AR.IO is distributing them using BitTorrent. ### Install Torrent Client Install a BitTorrent client.
We recommend [transmission-cli](https://github.com/transmission/transmission): ```bash # Ubuntu/Debian sudo apt-get install transmission-cli # CentOS/RHEL sudo yum install transmission-cli # macOS brew install transmission-cli ``` ### Download Snapshot Download the latest snapshot using the magnet link: ```bash transmission-cli "magnet:?xt=urn:btih:62ca6e05248e6df59fac9e38252e9c71951294ed&dn=2025-04-23-sqlite.tar.gz&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=http%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fopen.demonii.com%3A1337%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Fp4p.arenabg.com%3A1337%2Fannounce&tr=https%3A%2F%2Ftracker.bt4g.com%3A443%2Fannounce" ``` This will download a snapshot, current to April 23, 2025, of an unbundled data set that includes all data items uploaded via an ArDrive product, including Turbo. The file will be named `2025-04-23-sqlite.tar.gz` and be approximately 42.8GB in size. ### Consider Seeding While continuing to seed the torrent after download is not required, it is highly recommended to help ensure the continued availability of the snapshot for others, as well as the integrity of the data. Seeding this file should not cause any issues with your internet service provider. ### Extracting the Database Snapshot Once the file has downloaded, you can verify and extract it using the following steps. ### Verify Download Check that the file downloaded completely: ```bash ls -lh 2025-04-23-sqlite.tar.gz # Should show approximately 42.8GB ``` ### Extract the Archive Extract the tarball: ```bash tar -xzf 2025-04-23-sqlite.tar.gz ``` Be sure to replace the filename with the actual filename of the snapshot you are using, if not using the example above. ### Verify Extraction Check that the extraction was successful: ```bash ls -la 2025-04-23-sqlite/ # Should show SQLite database files ``` This will extract the file into a directory matching the filename, minus the `.tar.gz` extension. ### Importing the Database Snapshot Once you have an extracted database snapshot, you can import it into your AR.IO gateway by replacing the existing SQLite database files. Importing a database snapshot will delete your existing database and replace it with the snapshot you are importing. ### Stop the Gateway Stop your AR.IO gateway: ```bash docker compose down ``` ### Backup Existing Database (Optional) Backup your existing SQLite database files: ```bash mkdir sqlite-backup mv data/sqlite/* sqlite-backup/ ``` ### Remove Old Database Delete the existing SQLite database files: ```bash rm data/sqlite/* ``` ### Import New Snapshot Move the snapshot files into the `data/sqlite` directory: ```bash mv 2025-04-23-sqlite/* data/sqlite/ ``` Be sure to replace `2025-04-23-sqlite` with the actual directory name of the extracted snapshot you are using. ### Start the Gateway Start your AR.IO gateway: ```bash docker compose up -d ``` ### Verifying the Import The simplest way to verify the import is to check the gateway logs to see what block number is being imported. ### Check Gateway Logs View the gateway logs to see the current block height: ```bash docker compose logs -f gateway ``` Look for messages indicating the current block being processed. ### Verify Block Height The 2025-04-23 snapshot was taken at block `1645229`, so the gateway will start importing blocks after this height if the snapshot was imported successfully. You should see logs showing blocks being processed starting from block 1645230 or higher.
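If you'd rather not tail the full log stream, a quick filter can surface height-related lines. This is a sketch that assumes the compose service is named `gateway`, as in the commands above (some deployments name it `core`), and the exact log wording varies by release:

```bash
# Surface recent height-related log lines (service name and log format may vary)
docker compose logs --tail 200 gateway | grep -i "height"
```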
### Use Grafana (Optional) You can also use the [Grafana Extension](/build/extensions/grafana) to view the last block imported in a more human readable format. # Manage your Gateway (/build/run-a-gateway/manage) Master the advanced features and configurations of your AR.IO Gateway. These comprehensive guides cover everything from performance optimization to content moderation, helping you run a professional-grade gateway infrastructure. ## Gateway Management }> Learn how to import pre-synchronized database snapshots to quickly bootstrap your gateway and reduce initial sync time from weeks to hours. }> Step-by-step guide to safely upgrade your AR.IO Gateway to the latest version without losing data or progress. ## Monitoring & Analytics } > Deploy and configure Grafana for comprehensive gateway monitoring, performance analytics, and operational insights. ## Performance Optimization }> Configure advanced filters to efficiently process and index only the data you need, optimizing performance and resource usage. }> Customize your gateway's root domain to serve custom content, project information, or documentation instead of default network info. ## Content Management } > Implement content moderation policies using blocklisting and filtering to control what content your gateway serves. ## Configuration Reference } > Comprehensive reference for all AR.IO Gateway environment variables organized by service component. ## Support & Troubleshooting } > Comprehensive troubleshooting guide and FAQ for common gateway issues, failed epoch guidance, and frequently asked questions. # Setting Apex Domain Content (/build/run-a-gateway/manage/setting-apex-domain) Configure your AR.IO Gateway to serve custom content from the apex domain instead of the default Arweave network information. This allows you to customize your gateway's root domain with useful information, project details, or any content you wish to share. ## Overview Prior to gateway Release 28, the apex domain of a gateway would only display information about the Arweave network. Release 28 introduced two new environment variables that allow a gateway to serve custom content from the apex domain: - `APEX_TX_ID`: Set to serve content from a specific transaction ID - `APEX_ARNS_NAME`: Set to serve content from an ArNS name These variables enable gateway operators to customize their gateway's apex domain with useful information, details about the operator or associated projects, or any other content they wish to share. ## Quick Start ### Choose Your Content Source Decide how you want to serve your content: **Option 1: Direct Transaction ID** - Upload your content to Arweave - Use the transaction ID directly **Option 2: ArNS Name (Recommended)** - Upload your content to Arweave - Assign your content's transaction ID to an ArNS name - Use the ArNS name for easier management ### Upload Your Content Upload your dApp, website, or other content to Arweave using your preferred method: - **ArDrive** - For simple file uploads - **Turbo** - For application bundles - **Direct upload** - For advanced users ### Configure Environment Variable Add one of these variables to your `.env` file: ```bash # Option 1: Direct transaction ID APEX_TX_ID=your-transaction-id # Option 2: ArNS name (recommended) APEX_ARNS_NAME=your-arns-name ``` You cannot set both variables simultaneously. Providing both variables will result in an error. 
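Before restarting, a quick grep of your `.env` (the same check used in the Troubleshooting section later in this guide) confirms that exactly one of the two variables is set:

```bash
# Should print exactly one line: either APEX_TX_ID or APEX_ARNS_NAME
grep -E "^APEX_(TX_ID|ARNS_NAME)=" .env
```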
### Restart Your Gateway Restart your gateway to apply the changes: ```bash docker compose down docker compose up -d ``` ### Verify Configuration Visit your gateway's apex domain to confirm the custom content is being served correctly. ## Configuration Methods ### Using Direct Transaction ID ### Upload Content Upload your content to Arweave and note the transaction ID: ```bash # Example: Upload using ArDrive CLI ardrive upload-file --file-path ./my-website.html # Note the returned transaction ID # Example: abc123...def789 ``` ### Set Environment Variable Add the transaction ID to your `.env` file: ```bash APEX_TX_ID=abc123...def789 ``` ### Restart Gateway Restart your gateway to apply the configuration: ```bash docker compose down docker compose up -d ``` ### Update Content To update your content: 1. Upload new content to Arweave 2. Update `APEX_TX_ID` with the new transaction ID 3. Restart your gateway **Advantages:** - Direct control over content - No additional ArNS setup required - Simple for one-time content **Disadvantages:** - Requires gateway restart for updates - Less flexible for content management ### Using ArNS Name (Recommended) ### Upload Content Upload your content to Arweave: ```bash # Upload your website or dApp ardrive upload-file --file-path ./my-dapp.html # Note the transaction ID: xyz789...abc123 ``` ### Register ArNS Name Register an ArNS name pointing to your content: 1. Visit [ArNS App](https://arns.app) 2. Connect your wallet 3. Choose your desired name (e.g., `my-gateway-content`) 4. Set the transaction ID: `xyz789...abc123` 5. Pay the registration fee ### Configure Environment Variable Add the ArNS name to your `.env` file: ```bash APEX_ARNS_NAME=my-gateway-content ``` ### Restart Gateway Restart your gateway to apply the configuration: ```bash docker compose down docker compose up -d ``` ### Update Content To update your content: 1. Upload new content to Arweave 2. Update the ArNS name to point to the new transaction ID 3. **No gateway restart required!** **Advantages:** - No restart required for content updates - Easy content management - Professional domain naming - Can be updated independently **Disadvantages:** - Requires ArNS setup - Additional cost for ArNS registration ### Advanced Setup Options ### Custom Content Types Configure different types of content: **Static Website:** ```bash # Upload HTML/CSS/JS files APEX_ARNS_NAME=my-gateway-website ``` **Single Page Application:** ```bash # Upload SPA bundle APEX_ARNS_NAME=my-dapp ``` **Documentation Site:** ```bash # Upload documentation APEX_ARNS_NAME=my-gateway-docs ``` ### Content Management Workflow Implement a content management workflow: 1. **Development** - Test content locally 2. **Upload** - Deploy to Arweave 3. **Register** - Create/update ArNS name 4. **Verify** - Check content on gateway 5. 
**Monitor** - Track performance and usage

## Use Cases and Examples

### Display Gateway Service Information

Perfect for showcasing your gateway service:

**Content Ideas:**

- Gateway operator information
- Service capabilities and features
- Contact information
- Status and uptime statistics
- Network participation details

**Example Structure:**

```html
<!DOCTYPE html>
<html>
  <head>
    <title>My AR.IO Gateway</title>
  </head>
  <body>
    <h1>AR.IO Gateway Service</h1>
    <p>Reliable gateway infrastructure for the permanent web</p>
    <ul>
      <li>High availability</li>
      <li>Fast response times</li>
      <li>Global CDN</li>
    </ul>
    <p>Contact: operator@example.com</p>
  </body>
</html>
```

### Showcase Associated Projects

Highlight your projects and services:

**Content Ideas:**

- Project portfolio
- Service offerings
- Recent updates and news
- Links to other projects
- Integration examples

**Example Structure:**

```html
<!DOCTYPE html>
<html>
  <head>
    <title>My Projects - AR.IO Gateway</title>
  </head>
  <body>
    <h1>My Projects</h1>
    <section>
      <h2>Project Alpha</h2>
      <p>Description of project and its features</p>
      <a href="#">Visit Project</a>
    </section>
    <section>
      <h2>Project Beta</h2>
      <p>Another project description</p>
      <a href="#">Visit Project</a>
    </section>
  </body>
</html>
```

### Host Documentation

Provide comprehensive documentation:

**Content Ideas:**

- Gateway setup guides
- API documentation
- Integration tutorials
- Troubleshooting guides
- FAQ sections

**Example Structure:**

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Gateway Documentation</title>
    <style>
      .nav { float: left; width: 200px; }
      .content { margin-left: 220px; }
    </style>
  </head>
  <body>
    <nav class="nav">
      <h2>Navigation</h2>
      <a href="#setup">Setup Guide</a>
      <a href="#api">API Reference</a>
      <a href="#troubleshooting">Troubleshooting</a>
    </nav>
    <main class="content">
      <h1>Gateway Documentation</h1>
      <h2 id="setup">Setup Guide</h2>
      <p>Step-by-step setup instructions...</p>
    </main>
  </body>
</html>
```

### Real-World Examples

Several gateway operators have implemented this feature:

**arweave.tech**

- Serves a custom landing page with gateway service information
- Professional presentation of capabilities

**arlink.xyz**

- Serves the permaDapp for the Arlink project
- Demonstrates integration with existing projects

**frostor.xyz / love4src.com**

- Serves information about the Memetic Block Software Guild
- Showcases community and project information

**vilenarios.com**

- Serves personalized portfolio/link tree information
- Personal branding and contact information

**permagate.io**

- Serves personalized link tree information
- Professional operator presence

These examples demonstrate the flexibility of the apex domain feature and how different operators use it to create unique, personalized experiences for their users.

## Troubleshooting

### Fix Configuration Problems

### Check Environment Variables

Verify your `.env` file configuration:

```bash
# Check if variables are set correctly
grep -E "APEX_(TX_ID|ARNS_NAME)" .env

# Should show only one of:
# APEX_TX_ID=your-transaction-id
# APEX_ARNS_NAME=your-arns-name
```

Ensure you have only ONE of the APEX variables set, not both.

### Verify Gateway Restart

Ensure your gateway has been restarted after configuration changes:

```bash
# Check if gateway is running
docker compose ps

# Restart if needed
docker compose down
docker compose up -d
```

### Check Gateway Logs

Review logs for any error messages:

```bash
docker compose logs ar-io-core | grep -i apex
```

### Resolve Content Issues

### Verify Content Accessibility

Test if your content is accessible:

```bash
# Test transaction ID directly
curl -I https://arweave.net/your-transaction-id

# Test ArNS name resolution
curl -I https://your-arns-name.arweave.net
```

### Check Content Format

Ensure your content is properly formatted:

- **HTML content** should have proper DOCTYPE
- **Text content** should be UTF-8 encoded
- **Binary content** should have appropriate Content-Type headers

### Test Content Rendering

Verify content renders correctly in different browsers:

1. Test in Chrome, Firefox, Safari
2. Check mobile responsiveness
3. Verify all links work correctly
4. Test with different screen sizes

### Fix ArNS Problems

### Verify ArNS Resolution

Check if your ArNS name resolves correctly:

```bash
# Test ArNS resolution
nslookup your-arns-name.arweave.net

# Check if it points to the correct transaction
curl -s https://your-arns-name.arweave.net | head -10
```

### Update ArNS Record

If the ArNS name points to the wrong content:

1. Go to [ArNS App](https://arns.app)
2. Find your ArNS name
3. Update the transaction ID
4. Wait for propagation (usually immediate)

### Check ArNS Status

Verify the ArNS name is active and not expired:

1. Visit the ArNS app
2. Check your name's status
3. Ensure it's not expired
4. Verify payment is up to date

## Best Practices

### Content Design

### Optimize for Performance

- Keep file sizes reasonable
- Use efficient HTML/CSS
- Optimize images and assets
- Minimize external dependencies

### Ensure Accessibility

- Use semantic HTML
- Include alt text for images
- Ensure good color contrast
- Test with screen readers

### Mobile Responsiveness

- Design for mobile-first
- Use responsive CSS
- Test on various devices
- Ensure touch-friendly interfaces

### Content Management

### Version Control

- Keep content in version control
- Document changes and updates
- Test changes before deployment
- Maintain backup copies

### Regular Updates

- Keep information current
- Update contact details
- Refresh project information
- Monitor for broken links

### Backup Strategy

- Backup content regularly
- Keep multiple copies
- Document restoration procedures
- Test backup recovery

## Next Steps

**Ready to customize your gateway?** Start with the Quick Start section above, then explore the different configuration methods and use cases to find what works best for your needs.

### Additional Resources

- **ArNS Documentation** - Learn more about ArNS names and management
- **Content Upload Guides** - Best practices for uploading content to Arweave
- **Gateway Configuration** - Advanced gateway configuration options
- **Community Examples** - See how other operators use this feature

### Getting Help

If you encounter issues:

1. Check the troubleshooting section above
2. Verify your configuration is correct
3. Test content accessibility independently
4. Consult the [AR.IO Discord](https://discord.gg/cuCqBb5v) for community support

# Automating SSL Certificate Renewal (/build/run-a-gateway/manage/ssl-certs)

Secure your AR.IO Gateway with automated SSL certificate renewal using Certbot and DNS challenge validation. This guide covers setup for different DNS providers to automatically renew certificates without manual intervention.
## Overview Using DNS challenge validation with Certbot allows you to: - Automatically renew SSL certificates - Support wildcard certificates - Avoid manual certificate management - Ensure continuous gateway security ## Prerequisites - A running AR.IO Gateway - Domain name configured with your DNS provider - Administrative access to your server - API access to your DNS provider ## DNS Provider Setup ### Cloudflare Configuration ### Create Cloudflare API Token Navigate to **Cloudflare → My Profile → API Tokens → Create Token** Configure the token with these permissions: - **Zone → Zone → Read** - **Zone → DNS → Edit** ![Cloudflare API Token Configuration](https://arweave.net/GMzqNXNCQMSLqyt7SV7FrGOgCuGBeaO5qjRWibFkVBE) ### Install Certbot and Cloudflare Plugin ```bash apt update apt install certbot python3-certbot-dns-cloudflare -y ``` ### Configure API Credentials Create the credentials file: ```bash nano /etc/letsencrypt/cloudflare.ini ``` Add your API token: ```ini dns_cloudflare_api_token = your_api_token_here ``` Secure the file: ```bash chmod 600 /etc/letsencrypt/cloudflare.ini ``` ### Generate SSL Certificate Request the certificate with wildcard support: ```bash certbot certonly --dns-cloudflare \ --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \ -d example.com -d *.example.com ``` **Expected output:** ```bash Successfully received certificate. Certificate is saved at: /etc/letsencrypt/live/example.com/fullchain.pem Key is saved at: /etc/letsencrypt/live/example.com/privkey.pem ``` ### Test Automatic Renewal Perform a dry run to validate the renewal process: ```bash certbot renew --dry-run ``` **Expected output:** ```bash Saving debug log to /var/log/letsencrypt/letsencrypt.log - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Processing /etc/letsencrypt/renewal/example.com.conf - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Account registered. Simulating renewal of an existing certificate for example.com and *.example.com Waiting 10 seconds for DNS changes to propagate - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Congratulations, all simulated renewals succeeded: /etc/letsencrypt/live/example.com/fullchain.pem (success) ``` ### Verify Automatic Renewal Timer Check that the certbot timer is active: ```bash systemctl list-timers | grep certbot ``` **Expected output:** ```bash Tue 2024-11-05 02:22:10 UTC 3h 21min Mon 2024-11-04 17:16:51 UTC 5h 43min ago certbot.timer certbot.service ``` ### Namecheap Configuration **API Requirements:** Namecheap requires specific conditions to create API keys: - At least 20 domains under your account - Minimum $50 account balance - At least $50 spent within the last 2 years If you don't meet these requirements, contact Namecheap support for a waiver. 
### Create Namecheap API Key Navigate to **Namecheap → Profile → Tools → Manage API Access Keys** Create your API credentials and note: - Your username - Your API key ### Install Certbot and Dependencies ```bash apt update apt install certbot python3-pip -y ``` Install the Namecheap DNS plugin: ```bash pip install certbot-dns-namecheap ``` ### Configure API Credentials Create the credentials file: ```bash nano /etc/letsencrypt/namecheap.ini ``` Add your API credentials: ```ini dns_namecheap_username = your_username dns_namecheap_api_key = your_api_key ``` Secure the file: ```bash chmod 600 /etc/letsencrypt/namecheap.ini ``` ### Generate SSL Certificate Request the certificate with wildcard support: ```bash certbot certonly --dns-namecheap \ --dns-namecheap-credentials /etc/letsencrypt/namecheap.ini \ -d example.com -d *.example.com ``` **Expected output:** ```bash Successfully received certificate. Certificate is saved at: /etc/letsencrypt/live/example.com/fullchain.pem Key is saved at: /etc/letsencrypt/live/example.com/privkey.pem ``` ### Test Automatic Renewal Perform a dry run to validate the renewal process: ```bash certbot renew --dry-run ``` **Expected output:** ```bash Saving debug log to /var/log/letsencrypt/letsencrypt.log - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Processing /etc/letsencrypt/renewal/example.com.conf - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Account registered. Simulating renewal of an existing certificate for example.com and *.example.com Waiting 10 seconds for DNS changes to propagate - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Congratulations, all simulated renewals succeeded: /etc/letsencrypt/live/example.com/fullchain.pem (success) ``` ### Verify Automatic Renewal Timer Check that the certbot timer is active: ```bash systemctl list-timers | grep certbot ``` **Expected output:** ```bash Tue 2024-11-05 02:22:10 UTC 3h 21min Mon 2024-11-04 17:16:51 UTC 5h 43min ago certbot.timer certbot.service ``` ## Post-Installation Steps After successfully setting up automatic SSL renewal: ### Update Gateway Configuration Configure your AR.IO Gateway to use the new certificates. Update your gateway's SSL configuration to point to: - **Certificate:** `/etc/letsencrypt/live/your-domain.com/fullchain.pem` - **Private Key:** `/etc/letsencrypt/live/your-domain.com/privkey.pem` ### Reload Web Server (Optional) If you're using nginx or another web server, reload it to apply the new certificates: ```bash systemctl reload nginx ``` ### Monitor Renewal Process Certbot automatically sets up a systemd timer for renewal. Certificates will be renewed when they have 30 days or less remaining. 
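For a renewed certificate to take effect, your web server needs to reload it. Certbot supports a deploy hook that runs after each successful renewal; a minimal sketch, assuming nginx is your reverse proxy:

```bash
# Run once; recent Certbot versions save the hook to the renewal configuration
sudo certbot renew --deploy-hook "systemctl reload nginx"
```

Alternatively, an executable script placed in `/etc/letsencrypt/renewal-hooks/deploy/` will run after every successful renewal.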
To manually check renewal status:

```bash
certbot certificates
```

## Troubleshooting

### Common Issues

- **DNS propagation delays:** Wait 5-10 minutes for DNS changes to propagate
- **API rate limits:** Check your DNS provider's API rate limits
- **Permission errors:** Ensure credential files have correct permissions (600)

### Logs and Debugging

Check certbot logs for detailed error information:

```bash
tail -f /var/log/letsencrypt/letsencrypt.log
```

## Next Steps

With SSL certificates automated, consider:

- [Setting up monitoring](/build/extensions/grafana) to track certificate expiration
- [Configuring gateway filters](/build/run-a-gateway/manage/filters) for optimal performance
- [Implementing content moderation](/build/run-a-gateway/manage/content-moderation) policies

# Troubleshooting (/build/run-a-gateway/manage/troubleshooting)

Welcome to the comprehensive troubleshooting and FAQ resource for AR.IO Gateway operators. Use the quick lookup table below for fast answers, or browse the detailed sections for in-depth guidance.

## Quick Lookup

Below is a quick summary of what you should check when troubleshooting your gateway. Find more detailed information in the sections below.

| Issue | What to Check |
| --- | --- |
| My release number is wrong | Pull the latest GitHub updates and make sure you are on the `main` branch |
| Gateway appears offline on Viewblock or https://gateways.ar.io | Probably fine, but verify that your gateway is still running. |
| '/ar-io/observer/reports/current' just says "report pending" | Normal behavior, wait for the report to complete. |
| Observer error "Cannot read properties of undefined" | Normal behavior, Observer is checking for data not implemented yet. |
| Observing my gateway shows failures | Check `AR_IO_WALLET` and `ARNS_ROOT_HOST` settings. |
| Updated .env settings not reflected on gateway | Rebuild your gateway after editing your .env file. |
| Out of disk space error | Check for inode exhaustion and delete files if necessary. |
| Can't load ArNS names | Check the `ARNS_ROOT_HOST` setting in your .env file, and DNS records. |
| "Your connection is not private" error | Generate or renew SSL certificates. |
| 404/Nginx error when accessing domain | Check Nginx settings and restart Nginx if necessary. |
| 502 error from Nginx | Check for errors in your gateway. |
| Trouble generating SSL certificates | Ensure TXT records have propagated and follow certbot instructions. |

## General Troubleshooting

### My Gateway Seems to be Running but...

If the release number shown at `/ar-io/info` is lower than the current release, you simply need to upgrade your gateway to reach the latest version.

If your release number includes the suffix "-pre", you are running your gateway from the development branch of the GitHub repository instead of the main branch. The development branch is used to stage work the engineering team has in progress, so it can be much less stable than the main branch used for production and can cause significant issues.
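Before pulling updates, you can confirm which branch and commit your gateway is currently running from. A quick check, run from inside your `ar-io-node` directory:

```bash
git branch --show-current   # should print "main" for production gateways
git log --oneline -1        # the commit your gateway code is currently on
```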
Ensure that you are running the latest release, from the main branch, by running the below commands in your terminal:

```console
sudo docker compose down --rmi all
git checkout main
git pull
sudo docker compose up -d
```

If this doesn't resolve the issue, you can also try a more extreme method of clearing out the incorrect docker images:

```console
sudo docker compose down
sudo docker system prune -a
sudo docker compose up -d
```

Viewblock and https://gateways.ar.io use a very simple ping method for determining if a gateway is "up". There are plenty of reasons why this ping may fail while the gateway is running perfectly, so showing as down is not cause for concern. Just verify that your gateway is still running, and wait. Your gateway will show as up again soon.

This is normal. Your Observer is working to generate a report, and that report will be displayed once it is complete. This is not an issue with your observer.

The short explanation is that your Observer is looking for tasks assigned to it by the AR.IO network contract, but there isn't anything there. You can safely ignore this error message.

When observing a gateway, there are two main pass/fail tests: "Ownership" and "ArNS Assessment".

- Ownership: This tests whether the value set in your gateway's `AR_IO_WALLET` value (in .env) matches the wallet used to join the AR.IO Network. If they don't match, update the value in your .env file and restart your gateway.
- ArNS Assessment: This tests whether a gateway is able to resolve ArNS names correctly. The first thing you should check is whether you have the `ARNS_ROOT_HOST` value set in your .env file. If not, set the value and restart your gateway. If this value is set, check to make sure you have current DNS records and SSL certificates for wildcard subdomains on your gateway.

Once you edit your .env file, you need to "rebuild" your gateway for the changes to take effect. As of release 3, every time you start your gateway with `docker-compose` it is automatically rebuilt. So all you need to do is shut your gateway down and restart it.

The most likely cause of this is inode exhaustion. Test this by running the command:

```
df -i
```

If one of the lines in the output says 100%, you have run out of inodes, and so your filesystem is not capable of creating new files even if you have available space. The solution is to delete files from your `data` folder in order to free up inodes. This was a common issue prior to release #3, when Redis caching was introduced to reduce the number of small files created. If you are using an older version of the gateway, consider upgrading to mitigate the risk of inode exhaustion.

The first thing you should check if your gateway is not resolving ArNS names is that you have `ARNS_ROOT_HOST` set in your .env file. If not, set it to the domain name used for the gateway. For example, `ARNS_ROOT_HOST=arweave.dev`. Once this value is set, restart your gateway for the changes to take effect.

If that doesn't resolve the issue, check your DNS records. You need to have a wildcard subdomain ( `*.` ) record set with your domain registrar so that ArNS names will actually point at your gateway. You can set this record, and generate an SSL certificate for it, in the same way you set the records for your primary domain.

This error message means that your SSL certificates have expired.
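You can confirm the expiration date of the certificate your gateway is currently serving from the command line. A minimal sketch, replacing `<your-domain>` with your gateway's domain:

```bash
# Print the notBefore/notAfter dates of the certificate served on port 443
echo | openssl s_client -connect <your-domain>:443 -servername <your-domain> 2>/dev/null \
  | openssl x509 -noout -dates
```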
You need to renew your certificates by running the same certbot command you used when you initially started your gateway:

```
sudo certbot certonly --manual --preferred-challenges dns --email <your-email> -d <your-domain>.com -d '*.<your-domain>.com'
```

Certbot SSL certificates expire after 90 days, and you will need to rerun this command to renew them each time. If you provide an email address, you will receive an email letting you know when it is time to renew.

If you navigate to your domain and see a 404 error from Nginx (the reverse proxy server used in the setup guide), it means that your domain is correctly pointed at the machine running your gateway, but you have not properly configured your Nginx settings (or your gateway is not running). The [Set up Networking](./linux-setup.md#set-up-networking) section of the setup guide has detailed instructions on configuring your Nginx server.

If all else fails, try restarting Nginx; that usually clears any issues with the server clinging to old configurations.

```
sudo service nginx restart
```

A 502 error from Nginx means that Nginx is working correctly, but it is receiving an error from your gateway when it tries to forward traffic.

When using the manual certbot command provided in the setup guide:

```
sudo certbot certonly --manual --preferred-challenges dns --email <your-email> -d <your-domain>.com -d '*.<your-domain>.com'
```

You need to be sure that you are waiting after creating your TXT records for them to completely propagate. You can check propagation using a tool like [dnschecker.org](https://dnschecker.org). If you continue to have issues, you can check the [official certbot instructions guide](https://certbot.eff.org/instructions).

- Visit your gateway in a browser and see if your SSL certs are expired. This is the most common issue causing sudden stops in proper operation.
- Try restarting nginx; it sometimes has trouble picking up new certs without a restart.
- Make sure `ARNS_ROOT_HOST` is properly set in your `.env` file. Updating this requires restarting your gateway.
- Make sure you have a DNS record set for `*.<your-domain>.com`. Since ArNS names are served as subdomains, you need to make sure all subdomains are pointed at your gateway.
- If your gateway is attempting to resolve the name, but times out, it's most likely a CU issue.
- AR.IO gateways are very robust; they can handle temporary errors gracefully without affecting normal operation. You should only be concerned if the error is consistent or it is causing your gateway to not function properly.
- Observers generate and submit their reports at specific times throughout the epoch. This is to ensure a healthy network throughout the entire epoch, not just at the start.
- Your observer wallet must match the observer wallet associated with your gateway in the AR.IO contract. You can check this by navigating to your gateway in https://gateways.ar.io.
- This happens when a request to a CU fails, and your gateway receives an HTML failure message instead of the expected JSON response. This will normally clear up on its own after congestion on that CU dies down, but if it is persistent, try switching to a different CU.
- This is normal. It means you have reached the current Arweave block and need to wait for more before you can index them.
- This is normal. If a gateway fails to resolve an ArNS name within 3 seconds, it will fall back to a trusted gateway (arweave.net by default) to help resolve the name.
- There are many reasons a gateway could fail an epoch.
Following these steps is usually enough to identify and correct the issue:

- Try to visit your gateway in a browser and see if your SSL certs are bad
- Try to resolve an ArNS name on your gateway. If it fails to resolve, check the console and your gateway logs for errors
- Look at the observation reports that failed your gateway; they will list the reason for failure

## Troubleshooting Failed Epochs

### Overview

The ARIO Network provides several tools to help troubleshoot problems with a gateway. The most powerful among these is the [Observer](/learn/oip). The Observer, which is a component of every gateway joined to the ARIO Network, checks all gateways in the network to ensure that they are functioning properly and returning the correct data. The Observer then creates a report of the results of these checks, including the reasons why a gateway might have failed the checks.

If a gateway fails the checks from more than half of the prescribed observers, the gateway is marked as failed for the epoch, and does not receive any rewards for that epoch.

The first step in troubleshooting a failed gateway is always to attempt to resolve data on that gateway in a browser, but if that does not make the issue clear, the Observer report can be used to diagnose the problem.

### Manual Observation

Manual observations may be run on a gateway at any time by using the [Network Portal](https://gateways.ar.io). This allows operators (or anyone with an interest in the gateway's performance) to check the gateway's performance on demand.

To run a manual observation:

1. Navigate to the [Network Portal](https://gateways.ar.io)
2. Select the gateway you are interested in from the list of gateways
3. Click on the "Observe" button in the top right corner of the page.

![Diagram](https://arweave.net/0G52dTWe65abQ6qDGvI99ERAaGU7DHR9srimJXnYRGA)

4. Click on the "Run Observation" button in the bottom right corner of the page.

![Diagram](https://arweave.net/A_B_58rufQ0Pj4ri0AKuC0DJn61u5ayO5ONWpkMerQw)

Two randomly selected ArNS names will be entered automatically in the "ArNS names" field to the left of the "Run Observation" button. These can be changed, or additional ArNS names can be added to the list, before running the observation.

The manual observation will run the same checks as the observer, and will display the results on the right side of the page.

![Diagram](https://arweave.net/vgRXfbx4fa47qGDpjndq128VCHl1wajKaq464KeA0Qg)

### Accessing the Observer Report

The simplest way to access an observer report is via the [Network Portal](https://gateways.ar.io), following the steps below:

1. Navigate to the [Network Portal](https://gateways.ar.io)
2. Select the gateway you are interested in from the list of gateways
3. In the Observation window, select the epoch you are interested in. This will display a list of the observers that failed the gateway for that epoch.
4. Click on the "View Report" button to the right of any observer on that list. This will display the entire report that observer generated.

![Diagram](https://arweave.net/ynbxYU_8xLRaU1D6a_LMoUq00roWwsMKgr-xrsDE0Sk)

5. Locate the gateway you are interested in within the report, and click on that row. This will display the report for that gateway.

### Understanding the Observer Report

The observer report will display a list of checked ArNS names, and a reason if the gateway failed to return the correct data for that name. There are several reasons why a gateway might fail to return the correct data for an ArNS name.
Below is a list of the most common reasons, and how to resolve them.

#### Timeout awaiting 'socket', or Timeout awaiting 'connect'

![Diagram](https://arweave.net/_GupbMa-EW_wiCD201MuOkDQrLT0MXTfxDXhLSDmyh4)

![Diagram](https://arweave.net/0WkzxdyN-9hJfv0pSiTs0Ozg_wqFvE3-OWlgzYPimtU)

This failure means that the observer was unable to connect to the gateway when it tried to check the ArNS name. There are lots of reasons why this might happen, many of them unrelated to the gateway itself. If an observer report has a small number of these failures among a larger number of successful checks, it is unlikely to be an issue with the gateway.

If this failure occurs persistently for a large number of, or all, ArNS names checked, it likely means that the observer is having trouble connecting to the gateway at all. You can verify this by:

- Attempting to connect to the gateway in a browser
- Running manual observations on the gateway using the [Network Portal](https://gateways.ar.io)
- Using tools like `curl` or `ping` to check the gateway's connectivity

If these methods consistently fail to connect to the gateway, it is likely that the gateway is misconfigured or not running. If this is the case:

- Check Docker and the gateway's logs to see if the gateway is on.
- Ensure that the SSL certificates are valid for the gateway's domain.
- Check DNS records for the gateway's domain; misconfigured or conflicting DNS records can cause connectivity issues.

Some gateway operators who run their gateways on their personal home networks have also reported issues with their ISP blocking, throttling, or otherwise delaying traffic to a gateway. If none of the above steps resolve the issue, it may be worth checking with your ISP to see if they are blocking or throttling traffic to the gateway.

Using [Grafana](/build/extensions/grafana) can also provide a visual representation of the gateway's ArNS resolution times. If this is consistently high (above 10 seconds), it is likely that the gateway is not properly configured to resolve ArNS names. Ensure that the gateway is operating on the latest Release.

#### Cert has expired

This failure means that the gateway's SSL certificate has expired. Obtaining a new SSL certificate and updating the gateway's reverse proxy (nginx, etc.) configuration to use the new certificate is the only solution to this issue.

#### dataHashDigest mismatch

![Diagram](https://arweave.net/xXe0bHne--0JJv-HRf5HT9R1V1UbzaOh2AxvAdQZhjg)

This failure means that the gateway did respond to a resolution request, but the data it returned did not match the data that was expected. This could be due to a number of reasons, including:

- Cached data was returned by the gateway that doesn't match the most current data on the network.
- The gateway is configured to operate on testnet or devnet. Gateways joined to the ARIO Network MUST operate on mainnet in order to pass observation checks.
- The gateway is intentionally returning fraudulent data.

A gateway will not return fraudulent data unless the operator intentionally rewrote the gateway's code to do so, and a major purpose of the Observation and Incentive Protocol is to catch and prevent this behavior.

A gateway may return mistaken data on occasion, usually due to a cache mismatch between the gateway and the observer's authority (usually arweave.net). This is a relatively rare occurrence, and should only be considered an issue if it occurs persistently.
If most or all of the ArNS names checked are failing for this reason, it is likely that the gateway is not operating on mainnet.

#### Response code 502 (Bad Gateway)

![Diagram](https://arweave.net/NBQsYUKP6IZt_rYg77QXgzwUUPvimFGXCQqtesbW1_I)

This failure means that the observer was able to connect to the gateway's network, but the reverse proxy returned a 502 error. This is almost always a reverse proxy issue. Ensure that the gateway's reverse proxy is running, and that it is configured to forward requests to the gateway.

Testing the validity of the reverse proxy's configuration file (`sudo nginx -t` on Nginx) may provide more information about the issue, and reloading the reverse proxy (`sudo nginx -s reload`) often resolves the issue if there are no problems with the configuration file.

It is also possible that the gateway itself is not running at all. Check Docker and the gateway's logs to see if the gateway is on.

#### Response code 503 (Service Unavailable)

![Diagram](https://arweave.net/7eFKSm-cs81-aJ_H4xkolR2nSlxl5tYWXJdFTei8Dbs)

This failure means that the observer was able to connect to the gateway's network, but the reverse proxy was unable to forward the request to the gateway. It differs from the 502 error in that the reverse proxy is likely able to see that the gateway is running, but is unable to communicate with it.

This is often a temporary issue, caused by the gateway not being able to handle a heavy load of requests, or the gateway being in the process of restarting. If this failure occurs once or twice in a report, it is likely a temporary issue and should not be considered an issue with the gateway.

However, when this failure occurs persistently, particularly for every ArNS name checked on the report, it is likely that the gateway has crashed. Manually restarting the gateway will usually resolve the issue.

#### connect EHOSTUNREACH

![Diagram](https://arweave.net/O-uG-yGm5bNxjw2ADH_yBjOcGo-ZEiFym8GeFZZNueY)

This failure means that the observer was unable to connect to the gateway at all. The connection was either refused, or the observer could not find a route to the gateway based on the domain name's DNS records. This is almost always an issue with DNS records or local network configuration.

Ensure that the gateway domain has correct DNS records, and that the local network is set up to allow connections. Checking logs from the local network's reverse proxy (nginx, etc.) may provide more information about the issue.

#### getaddrinfo ENOTFOUND

![Diagram](https://arweave.net/WJDwW0NM29uKC-9puvhXK_n75vgFXLpa6VKFVMhRsLQ)

This is another DNS-related issue. Likely, the gateway does not have a valid DNS record either for the top level domain or the required wildcard subdomain.

Having this failure occur once or twice in a report could mean that the DNS server being used by the observer is having temporary issues, and should not be considered an issue with the gateway. However, when this failure occurs persistently, particularly for every ArNS name checked on the report, it is likely that the gateway's DNS records are not set, or are misconfigured.

#### Hostname/IP does not match certificate's altnames: Host: \<arns-name\>.\<your-domain\> is not in the cert's altnames: DNS:\<your-domain\>

![Diagram](https://arweave.net/HfOfpAYm811dWFPNQC7bANEvjGVK4ch3kO7K7qMN9qs)

This failure means that the gateway's SSL certificate does not match the gateway's domain name.
This most likely occurred because the gateway's operator did not update the gateway's SSL certificate when the gateway's domain name was changed. Obtaining a new SSL certificate and updating the gateway's reverse proxy configuration to use the new certificate is the only solution to this issue.

#### write EPROTO \:error:\:SSL routines:ssl3_read_bytes:tlsv1 unrecognized name:\:SSL alert number 112

![Diagram](https://arweave.net/Hbip_ZmqmFN8-uXijw1aylyYp1YllwgyZTNAcsPCxSg)

This failure almost always means that the gateway operator did not properly obtain SSL certificates for the gateway's wildcard subdomain. Obtaining a new SSL certificate and updating the gateway's reverse proxy configuration to use the new certificate is the only solution to this issue.

## FAQ

- Gateway protocol rewards are calculated as 0.1% of the protocol balance (0.05% after August 2025) split between all gateways in the network. A change in the protocol balance or the number of gateways in the network between epochs will result in the reward for an individual gateway changing.
- Observer rewards are separate from protocol rewards, and if your gateway is selected as an observer for an epoch, assuming it performs its duties well, it will receive additional rewards.

The observer selection process uses a weighted random selection method that considers multiple factors beyond just stake:

- **Stake Weight (SW)**: Ratio of your total staked ARIO tokens (including delegated stake) to the network minimum
- **Tenure Weight (TW)**: How long your gateway has been part of the network (capped at 4 after 2 years)
- **Gateway Performance Ratio Weight (GPRW)**: Ratio of epochs where you correctly resolved names vs total participation
- **Observer Performance Ratio Weight (OPRW)**: Ratio of epochs where you successfully submitted reports vs total observer periods

A composite weight (CW) is calculated as: CW = SW × TW × GPRW × OPRW

Up to 50 gateways are chosen as observers per epoch. If there are more than 50 gateways, selection is randomized based on these normalized weights. Even with a high stake, other factors like performance and tenure affect your chances of being selected.

- There is a 90-day locking period when withdrawing stake, either from delegated stake or operator stake on your gateway. This locking period can be skipped for a fee. The fee starts at 50% of the withdrawal amount, and goes down over time. If you selected instant withdrawal, you paid the fee to skip the locking period.
- The minimum operator stake for gateways (10,000 ARIO) cannot be instantly withdrawn; it is subject to the full 90-day locking period, and withdrawal can only be started by removing your gateway from the network.
- If possible, leave your original server running while you prepare the new one
- Set up the new server following the same steps you used to set up the original server
- This includes setting up SSL certificates for the new server
- You must use the same gateway wallet when setting up the new server
- The observer wallet may be changed at any point, but requires extra steps.
  It is recommended that you use the original observer wallet as well.
- Once the new server is set up, change your DNS A records to point at the new server
- After your DNS records are set and you have verified your gateway is operating correctly, shut down the original server
- No changes need to be made in the network contract or on https://gateways.ar.io
- Yes
- Configure your new domain to point at your gateway, including setting up SSL certificates
- Update your NGINX (or other reverse proxy) server to recognize the new domain. This usually requires a restart of NGINX
- Update the `ARNS_ROOT_HOST` variable in your `.env` and restart the gateway
- Using https://gateways.ar.io, update your gateway settings to change the FQDN in the contract
- Your gateway is now using the new domain name for normal operation.

## Getting Help

If you encounter any issues during the troubleshooting process, please seek assistance from the [AR.IO community](https://discord.gg/cuCqBb5v).

**Ready to get back to building?** Once your gateway is running smoothly, check out [Manage your Gateway](/build/run-a-gateway/manage) for guides on optimization, monitoring, and more.

# Upgrading your Gateway (/build/run-a-gateway/manage/upgrading-a-gateway)

To ensure the optimal performance and security of your AR.IO Gateway, it's essential to regularly upgrade to the latest version. Notably, indexed data resides separately from Docker. As a result, neither upgrading the Gateway nor pruning Docker will erase your data or progress. Here's how you can perform the upgrade:

## Prerequisites

- Your Gateway should have been cloned using git. If you haven't, follow the [installation instructions](/build/run-a-gateway/join-the-network).

## Checking your Release Number

Effective with release 3, you can view the currently implemented release on any gateway by visiting `https://<your-domain>/ar-io/info` in a browser. Be sure to replace `<your-domain>` with the domain of the gateway you are checking.

If the release number displayed includes `-pre`, it means that your gateway is using the `develop` branch of the GitHub repo for the gateway code. Follow the steps in our [troubleshooting guide](/build/run-a-gateway/manage/troubleshooting) to switch over to the more stable `main` branch.

Announcements for each new release will be made in our [discord server](https://discord.gg/cuCqBb5v).

## Quick Start

### Pull Latest Changes

Navigate to your cloned repository directory and execute:

```bash
git pull
```

### Shut Down Docker

Stop your gateway:

```bash
sudo docker compose down -v
```

```bash
docker compose down -v
```

### Restart Gateway

Start your gateway with the new version:

```bash
sudo docker compose up -d
```

```bash
docker compose up -d
```

Effective with Release #3, it is no longer required to include the `--build` flag when starting your gateway. Docker will automatically build using the image specified in the `docker-compose.yaml` file.

## Detailed Upgrade Process

### Full Upgrade Process

### Pull Latest Changes

Navigate to your cloned repository directory and execute:

```bash
git pull
```

### Shut Down Docker

Stop your gateway:

```bash
sudo docker compose down -v
```

```bash
docker compose down -v
```

### Check for New Environment Variables

Read the update release change logs and community announcements to see if the new version includes any new environment variables that you should set before restarting your gateway.
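One quick way to spot newly introduced variables is to diff the repository's `.env.example` file against the version you were previously running. A sketch, where `<previous-commit>` is a placeholder for the commit you were on before running `git pull`:

```bash
# Show changes to the example environment file since your previous version
git diff <previous-commit>..HEAD -- .env.example
```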
### Restart the Gateway

Start your gateway with the new version:

```bash
sudo docker compose up -d
```

```bash
docker compose up -d
```

Effective with Release #3, it is no longer required to include the `--build` flag when starting your gateway. Docker will automatically build using the image specified in the `docker-compose.yaml` file.

### Docker Pruning (Optional)

It's a good practice to clean up unused Docker resources after shutting down your gateway. This will erase all inactive docker containers on your machine, so if you use Docker for anything beyond running a gateway, be extremely careful using this command.

### Shut Down Gateway

First, stop your gateway:

```bash
sudo docker compose down -v
```

```bash
docker compose down -v
```

### Prune Docker System

Clean up unused Docker resources:

```bash
sudo docker system prune
```

```bash
docker system prune
```

### Restart Gateway

Start your gateway:

```bash
sudo docker compose up -d
```

```bash
docker compose up -d
```

### Checking for New Environment Variables

New gateway releases may introduce new environment variables that you need to configure.

### Review Release Notes

Check the release notes and community announcements for any new environment variables:

- Review the [GitHub releases](https://github.com/ar-io/ar-io-node/releases)
- Check the [AR.IO Discord](https://discord.gg/7zUPfN4D6g) for announcements
- Look for changes in the `.env.example` file

### Update Your .env File

Add any new environment variables to your `.env` file:

```bash
# Example: Add new environment variables
NEW_FEATURE_ENABLED=true
NEW_CONFIG_VALUE=default_value
```

### Restart Gateway

Restart your gateway to apply the new environment variables:

```bash
sudo docker compose up -d
```

```bash
docker compose up -d
```

That's it! Your AR.IO Gateway is now upgraded to the latest version. Be sure to test and verify that everything is functioning as expected. If you encounter any issues, reach out to the [AR.IO community](https://discord.gg/7zUPfN4D6g) for assistance.

# Installation & Setup (/build/run-a-gateway/quick-start)

New to AR.IO gateways? Learn more about what they are and how they work at [AR.IO Gateways](/learn/gateways).

Get your AR.IO gateway running in **30 seconds** with Docker. No configuration needed - just run and test.

## Quickstart

```bash
# Start AR.IO gateway with Docker
docker run -p 4000:4000 ghcr.io/ar-io/ar-io-core:latest
```

**Test that it's working:**

```bash
# Fetch a test transaction
curl localhost:4000/4jBV3ofWh41KhuTs2pFvj-KBZWUkbrbCYlJH0vLA6LM

# Expected output: test
```

That's it! Your gateway is now serving Arweave data at `localhost:4000`.

## Production Setup with Custom Domain

Ready to run a gateway with your own domain name and SSL certificates? Follow these comprehensive steps:

### System Requirements

**Minimum requirements:**

- 4 core CPU
- 4 GB RAM
- 500 GB storage (SSD recommended)
- Stable 50 Mbps internet connection

**Recommended:**

- 12 core CPU
- 32 GB RAM
- 2 TB SSD storage
- Stable 1 Gbps internet connection

External storage devices should be formatted as ext4.
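You can quickly confirm that a machine meets these requirements with standard Linux utilities; a minimal check:

```bash
nproc        # number of CPU cores
free -h      # total and available RAM
df -h .      # free disk space on the current volume
```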
### Install Required Packages

**Quick install all packages:**

```bash
sudo apt update -y && sudo apt upgrade -y && sudo apt install -y curl openssh-server git certbot nginx sqlite3 build-essential && sudo systemctl enable ssh && curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash && source ~/.bashrc && sudo ufw allow 22 80 443 && sudo ufw enable
```

**Install Docker:**

```bash
# Add Docker's official GPG key
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install Docker
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

**Install Node.js and Yarn:**

```bash
nvm install 20.11.1 && nvm use 20.11.1 && npm install -g yarn@1.22.22
```

### Install the Node

**Clone the repository:**

```bash
git clone -b main https://github.com/ar-io/ar-io-node
cd ar-io-node
```

**Note:** Your indexing databases will be created in the project directory (not your Docker environment) unless otherwise specified in your .env file. So, if you are using an external hard drive, you should install the node directly to that external drive.

**Create environment file:**

```bash
nano .env
```

**Add configuration:**

```bash
GRAPHQL_HOST=arweave.net
GRAPHQL_PORT=443
START_HEIGHT=1000000
RUN_OBSERVER=true
ARNS_ROOT_HOST=
AR_IO_WALLET=
OBSERVER_WALLET=
```

**Supply Observer Wallet Keyfile:**

Save your wallet keyfile as `<observer-wallet-address>.json` in the `wallets` directory.

By default, the Observer will use [Turbo Credits](https://docs.ardrive.io/docs/turbo/credits) to pay for uploading reports to Arweave. This allows reports under 100kb to be uploaded for free, but larger reports will fail if the Observer wallet does not contain Credits. Including `REPORT_DATA_SINK=arweave` in your `.env` file will configure the Observer to use AR tokens instead of Turbo Credits, without any free limit.

**Start the Docker container:**

```bash
sudo docker compose up -d
```

### Set Up Networking

**Register a Domain Name:**

Choose a domain registrar (e.g., [Namecheap](https://namecheap.com)) to register a domain name.

**Point Domain at Your Home Network:**

- Get your public IP address: `curl ifconfig.me`
- Create A records for your domain and wildcard subdomains (`*.yourdomain.com`)

**Set up Port Forwarding:**

- Get local IP: `ip addr show | grep -w inet | awk '{print $2}' | awk -F'/' '{print $1}'`
- Configure router to forward ports 80 and 443 to your local machine

**Create SSL Certificates:**

```bash
sudo certbot certonly --manual --preferred-challenges dns -d <your-domain>.com -d '*.<your-domain>.com'
```

Previous versions of these instructions advised providing an email address to Certbot. As of June 2025, LetsEncrypt (the certificate authority used by Certbot) no longer supports email notifications.

**Important:** Wildcard subdomain (*.\<your-domain\>.com) certificates cannot auto-renew without obtaining an API key from your domain registrar. Not all registrars offer this. Certbot certificates expire every 90 days.
Be sure to consult with your chosen registrar to see if they offer an API for this purpose, or run the above command again to renew your certificates.

### Configure nginx

**Open nginx configuration:**

```bash
sudo nano /etc/nginx/sites-available/default
```

**Replace with this configuration:**

```nginx
# Force redirects from HTTP to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name <your-domain>.com *.<your-domain>.com;

    location / {
        return 301 https://$host$request_uri;
    }
}

# Forward traffic to your node and provide SSL certificates
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name <your-domain>.com *.<your-domain>.com;
    ssl_certificate /etc/letsencrypt/live/<your-domain>.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<your-domain>.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;

        # Forward AR.IO headers if present in the request
        proxy_set_header X-AR-IO-Origin $http_x_ar_io_origin;
        proxy_set_header X-AR-IO-Origin-Node-Release $http_x_ar_io_origin_node_release;
        proxy_set_header X-AR-IO-Hops $http_x_ar_io_hops;
    }
}
```

**Test and restart nginx:**

```bash
sudo nginx -t
sudo service nginx restart
```

**Note:** Previous versions of these instructions advised checking a gateway's ability to fetch content using `localhost`. Subsequent security updates prevent this without first unsetting `ARNS_ROOT_HOST` in your `.env`.

### Test Your Gateway

**Verify it's working:**

```bash
curl https://<your-domain>/3lyxgbgEvqNSvJrTX2J7CfRychUD5KClFhhVLyTPNCQ
```

**Expected output:**

```
1984
```

If you see `1984`, your gateway is working perfectly!

### System Requirements

**Minimum requirements:**

- 4 core CPU
- 4 GB RAM
- 500 GB storage (SSD recommended)
- Stable 50 Mbps internet connection

**Recommended:**

- 12 core CPU
- 32 GB RAM
- 2 TB SSD storage
- Stable 1 Gbps internet connection

External storage devices should be formatted as ext4.

### Install Required Software

**Install Docker Desktop:**

- Download from [Docker Desktop for Windows](https://www.docker.com/products/docker-desktop/)
- Run installer and follow prompts
- Select WSL (Windows Subsystem for Linux) during installation
- Restart your PC
- Update WSL:

```cmd
wsl --update
wsl --shutdown
```

- Restart Docker Desktop

**Install Git:**

- Download from [Git for Windows](https://git-scm.com/download/win)
- Run installer with default settings

### Clone the Repository

**Open Command Prompt:**

- Press `Windows Key + R`
- Type `cmd` and press `Enter`

**Navigate to desired directory:**

```cmd
cd Documents
```

**Clone the repository:**

```cmd
git clone -b main https://github.com/ar-io/ar-io-node
```

**Note:** Your database will be created in the project directory, not Docker. If using an external hard drive, install directly to that drive.

### Create Environment File

**Open a text editor (e.g., Notepad):**

- Press `Windows Key` and search for "Notepad"

**Create .env file with this content:**

```bash
GRAPHQL_HOST=arweave.net
GRAPHQL_PORT=443
START_HEIGHT=0
RUN_OBSERVER=true
ARNS_ROOT_HOST=
AR_IO_WALLET=
OBSERVER_WALLET=
```

**Save as `.env`** (select "All Files" as file type)

**Supply Observer Wallet Keyfile:**

Save your wallet keyfile as `<observer-wallet-address>.json` in the `wallets` directory.
### Start Docker Containers

**Navigate to project directory:**

```cmd
cd Documents\ar-io-node
```

**Start the container:**

```cmd
docker compose up -d
```

**Explanation of flags:**

- `up`: Start the Docker containers
- `-d`: Run containers as background processes (detached mode)

**Shutdown command:**

```cmd
docker compose down
```

### Set Up Router Port Forwarding

**Obtain a Domain Name:**

Choose a domain registrar (e.g., [Namecheap](https://namecheap.com)) and purchase a domain name.

**Point Domain at Your Home Network:**

- Visit https://www.whatsmyip.org/ to get your public IP address
- Access your domain registrar's settings
- Create A records for your domain and wildcard subdomains (`*.yourdomain.com`)

**Get Local IP Address:**

```cmd
ipconfig
```

Look for IPv4 Address (format: `192.168.X.X` or `10.X.X.X`)

**Set Up Router Port Forwarding:**

- Access router settings (usually `192.168.0.1`)
- Navigate to port forwarding settings
- Forward ports 80 and 443 to your local machine's IP address

### Install and Configure NGINX Docker

**Clone NGINX Docker repository:**

```cmd
cd Documents
git clone -b main https://github.com/bobinstein/dockerized-nginx
```

**Follow the repository instructions** for setting up NGINX Docker.

**Important:** When configuring your nginx setup, ensure that your nginx configuration includes the following AR.IO headers in the proxy configuration:

```nginx
# Forward AR.IO headers if present in the request
proxy_set_header X-AR-IO-Origin $http_x_ar_io_origin;
proxy_set_header X-AR-IO-Origin-Node-Release $http_x_ar_io_origin_node_release;
proxy_set_header X-AR-IO-Hops $http_x_ar_io_hops;
```

These headers are essential for proper AR.IO network functionality.

### Test Your Gateway

**Verify it's working:**

Visit `https://<your-domain>/3lyxgbgEvqNSvJrTX2J7CfRychUD5KClFhhVLyTPNCQ` in your browser.

**Expected output:**

```
1984
```

If you see `1984`, your gateway is working perfectly!

## Useful Docker Commands

Monitor and manage your AR.IO gateway with these commands:

```bash
# View all running services
docker ps

# Run services in background daemon
docker compose up -d

# Turn off all services
docker compose down

# Pull latest images
docker compose pull

# Follow the logs of the core service
docker compose logs core -f -n 10

# Follow the logs of the core and observer services
docker compose logs core observer -f -n 10
```

## What's Next?

Your gateway is running! Now you can:

**Need more context?** Learn [What is an AR.IO Gateway](/learn/gateways) to understand the full capabilities.

# Advanced Uploading with Turbo (/build/upload/advanced-uploading-with-turbo)

Learn how to upload data to Arweave using the **Turbo SDK** for a streamlined upload experience with multiple payment options and authentication methods.

## What You'll Learn

- How to install and authenticate with the Turbo SDK
- Different authentication methods (Arweave, Ethereum, Solana, etc.)
- How to purchase Turbo Credits
- How to upload files, strings, binary data, and entire folders to Arweave
- Browser and Node.js implementation examples
- Using the versatile `upload` method for all data types

## Prerequisites

- Node.js environment or modern web browser
- Wallet for authentication (Arweave, Ethereum, Solana, etc.)
- Basic understanding of JavaScript/TypeScript

## Quick Start

### Install the Turbo SDK

```bash
# For Node.js
npm install @ardrive/turbo-sdk

# For Yarn users
yarn add @ardrive/turbo-sdk
```

### Authenticate with Your Wallet

Choose your preferred authentication method:

```typescript
import fs from 'node:fs'
import { TurboFactory } from '@ardrive/turbo-sdk'

// Load your Arweave JWK file
const jwk = JSON.parse(fs.readFileSync('wallet.json', 'utf-8'))

const turbo = await TurboFactory.authenticated({
  privateKey: jwk, // ArweaveJWK type
  token: 'arweave', // Default token type
})
```

```typescript
// Your Ethereum private key (with 0x prefix)
const privateKey = '0x1234...' // EthPrivateKey type

// Create an Ethereum signer instance
const signer = new EthereumSigner(privateKey)

const turbo = await TurboFactory.authenticated({
  signer,
  token: 'ethereum',
})
```

```typescript
// Your Solana secret key (as Uint8Array)
const secretKey = new Uint8Array([...]) // SolSecretKey type

const turbo = await TurboFactory.authenticated({
  privateKey: bs58.encode(secretKey),
  token: 'solana'
})
```

```typescript
// Your Polygon private key (with 0x prefix)
const privateKey = '0x1234...' // EthPrivateKey type

// Create an Ethereum signer instance for Polygon
const signer = new EthereumSigner(privateKey)

const turbo = await TurboFactory.authenticated({
  signer,
  token: 'matic', // or 'pol'
})
```

```typescript
// Your KYVE private key (hexadecimal)
const privateKey = '0x1234...' // KyvePrivateKey type

const turbo = await TurboFactory.authenticated({
  privateKey,
  token: 'kyve',
})
```

```typescript
async function initializeTurbo() {
  await window.arweaveWallet.connect([
    'ACCESS_ADDRESS',
    'ACCESS_PUBLIC_KEY',
    'SIGN_TRANSACTIONS',
    'SIGN_MESSAGE',
    'SIGNATURE',
  ])
  const turbo = await TurboFactory.authenticated({
    signer: new ArConnectSigner(window.arweaveWallet),
  })
}
```

```typescript
// Global variables for Wagmi config and connector
let config = null
let connector = null
let turboInstance = null

// Function to set up Wagmi configuration
function setWagmiConfig(wagmiConfig, wagmiConnector) {
  config = wagmiConfig
  connector = wagmiConnector
}

// Function to initialize Turbo with Wagmi
// (function name is illustrative; the original wrapper was elided)
async function initializeTurboWithWagmi() {
  try {
    if (!config || !connector) {
      throw new Error(
        'Wagmi config and connector not set. Call setWagmiConfig first.',
      )
    }

    console.log('Initializing Turbo client...')

    // Create a provider that uses wagmi's signMessage
    const provider = {
      getSigner: () => ({
        signMessage: async (message) => {
          const arg = message instanceof String ? message : { raw: message }
          const ethAccount = getAccount(config)
          return await signMessage(config, {
            message: arg,
            account: ethAccount.address,
            connector: connector,
          })
        },
      }),
    }

    // Create the Turbo signer
    const signer = new InjectedEthereumSigner(provider)

    // Set up the public key
    signer.setPublicKey = async () => {
      const message = 'Sign this message to connect to Turbo'
      const ethAccount = getAccount(config)
      const signature = await signMessage(config, {
        message: message,
        account: ethAccount.address,
        connector: connector,
      })
      const hash = await hashMessage(message)
      const recoveredKey = await recoverPublicKey({
        hash,
        signature,
      })
      signer.publicKey = Buffer.from(toBytes(recoveredKey))
    }

    // Initialize the signer
    await signer.setPublicKey()

    turboInstance = await TurboFactory.authenticated({
      signer: signer,
      token: 'base-eth', // Can be changed to 'ethereum' or 'matic', etc.
    })

    console.log('Turbo client initialized successfully')
    return turboInstance
  } catch (error) {
    console.error('Error initializing Turbo client:', error)
    turboInstance = null
    throw error
  }
}
```

```typescript
async function connectMetaMask() {
  if (!window.ethereum) {
    throw new Error('Please install MetaMask to use this application')
  }

  try {
    const accounts = await window.ethereum.request({
      method: 'eth_requestAccounts',
    })

    const metaMaskProvider = window.ethereum.providers?.find(
      (p) => p.isMetaMask,
    )
    const provider = new BrowserProvider(metaMaskProvider ?? window.ethereum)
    const signer = await provider.getSigner()

    const turbo = TurboFactory.authenticated({
      signer: new InjectedEthereumSigner({ getSigner: () => signer }),
      token: 'ethereum',
    })

    return { turbo, address: accounts[0] }
  } catch (error) {
    console.error('Connection failed:', error)
    throw error
  }
}
```

```typescript
async function connectPhantom() {
  try {
    // Check if Phantom is installed
    if (window.solana) {
      const provider = window.solana
      const publicKey = new PublicKey((await provider.connect()).publicKey)

      const wallet: SolanaWalletAdapter = {
        publicKey,
        signMessage: async (message: Uint8Array) => {
          // Call Phantom's signMessage method
          const { signature } = await provider.signMessage(message)
          return signature
        },
      }

      solanaTurboInstance = TurboFactory.authenticated({
        token: 'solana',
        walletAdapter: wallet,
      })
    }
  } catch (err) {
    console.error(err)
  }
}
```

### Purchase Turbo Credits

[Turbo Credits](/build/upload/turbo-credits) are the payment medium used by the Turbo Upload Service. Each Credit has the same upload purchasing power as the Arweave native token (AR), at a 1:1 conversion.

- **Fiat Currency**: Credit/debit cards via the [Turbo Top Up App](https://turbo-topup.com/)
- **Cryptocurrencies**: AR, ETH, SOL, MATIC, ARIO, USDC, KYVE, ETH (BASE)
- **Multiple Wallets**: Ethereum, Solana, and Arweave wallets supported

```typescript
// Initialize authenticated client
const turbo = await TurboFactory.authenticated({ privateKey: jwk })

// Top up with AR tokens
const topUpResult = await turbo.topUpWithTokens({
  tokenAmount: WinstonToTokenAmount(100_000_000), // 0.0001 AR
})
```

```typescript
// Initialize authenticated client
const turbo = await TurboFactory.authenticated({
  signer: new EthereumSigner(privateKey),
  token: 'ethereum',
})

// Top up with ETH tokens
const topUpResult = await turbo.topUpWithTokens({
  tokenAmount: 0.001, // 0.001 ETH
})
```

```typescript
// Initialize authenticated client
const turbo = await TurboFactory.authenticated({
  privateKey: bs58.encode(secretKey),
  token: 'solana'
})

// Top up with SOL tokens
const topUpResult = await turbo.topUpWithTokens({
  tokenAmount: 0.1, // 0.1 SOL
})
```

```typescript
// Initialize authenticated client
const turbo = await TurboFactory.authenticated({
  signer: new EthereumSigner(privateKey),
  token: 'matic',
})

// Top up with MATIC tokens
const topUpResult = await turbo.topUpWithTokens({
  tokenAmount: 1.0, // 1.0 MATIC
})
```

```typescript
// Initialize authenticated client
const turbo = await TurboFactory.authenticated({
  privateKey,
  token: 'kyve',
})

// Top up with KYVE tokens
const topUpResult = await turbo.topUpWithTokens({
  tokenAmount: 100, // 100 KYVE
})
```

### Upload Your First File

```typescript
// Upload a single file using the versatile upload method
const result = await turbo.upload({
  data: file, // Can be File, Blob, Buffer, Uint8Array, ArrayBuffer, or string
  dataItemOpts: {
    tags: [
      { name: "Content-Type", value: file.type || "application/octet-stream" },
      { name: "App-Name", value: "MyApp-v1.0" },
    ],
  },
});

console.log("File uploaded!", {
  id: result.id,
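  // id is the data item's transaction ID, used in the gateway URL below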
  url: `https://arweave.net/${result.id}`,
  owner: result.owner,
  dataCaches: result.dataCaches,
});
```

## Uploading Files

### Basic File Upload

```typescript
// Upload a single file using the versatile upload method
const result = await turbo.upload({
  data: file, // Can be File, Blob, Buffer, Uint8Array, ArrayBuffer, or string
  dataItemOpts: {
    tags: [
      { name: "Content-Type", value: file.type || "application/octet-stream" },
      { name: "App-Name", value: "MyApp-v1.0" },
    ],
  },
});

console.log("File uploaded!", {
  id: result.id,
  url: `https://arweave.net/${result.id}`,
  owner: result.owner,
  dataCaches: result.dataCaches,
});
```

### Upload with Custom Tags

```typescript
const result = await turbo.upload({
  data: file,
  dataItemOpts: {
    tags: [
      { name: "Content-Type", value: "application/json" },
      { name: "App-Name", value: "MyApp-v1.0" },
      { name: "App-Version", value: "1.0.0" },
      { name: "Description", value: "My application data" },
    ],
  },
});
```

### Upload Strings

```typescript
// Upload a string
const stringResult = await turbo.upload({
  data: "Hello, Arweave!",
  dataItemOpts: {
    tags: [
      { name: "Content-Type", value: "text/plain" },
      { name: "App-Name", value: "MyApp-v1.0" },
    ],
  },
});
```

### Upload JSON Data

```typescript
// Upload a JSON object
const jsonData = { message: "Hello", timestamp: Date.now() };
const jsonResult = await turbo.upload({
  data: JSON.stringify(jsonData),
  dataItemOpts: {
    tags: [
      { name: "Content-Type", value: "application/json" },
      { name: "App-Name", value: "MyApp-v1.0" },
    ],
  },
});
```

### Upload Binary Data

```typescript
// Upload binary data
const binaryData = new Uint8Array([1, 2, 3, 4, 5]);
const binaryResult = await turbo.upload({
  data: binaryData,
  dataItemOpts: {
    tags: [
      { name: "Content-Type", value: "application/octet-stream" },
      { name: "App-Name", value: "MyApp-v1.0" },
    ],
  },
});
```

### Upload Multiple Files

```typescript
const files = [file1, file2, file3];

const uploadPromises = files.map((file) =>
  turbo.upload({
    data: file,
    dataItemOpts: {
      tags: [
        {
          name: "Content-Type",
          value: file.type || "application/octet-stream",
        },
        { name: "App-Name", value: "MyApp-v1.0" },
      ],
    },
  })
);

const results = await Promise.all(uploadPromises);
console.log("All files uploaded!", results);
```

### Upload an Entire Folder (Node.js)

```typescript
const folderResult = await turbo.uploadFolder({
  folderPath: "./my-website",
  dataItemOpts: {
    tags: [
      { name: "App-Name", value: "MyWebsite-v1.0" },
      { name: "Content-Type", value: "application/x.arweave-manifest+json" },
    ],
  },
  manifestOptions: {
    indexFile: "index.html",
    fallbackFile: "404.html",
  },
});

console.log("Folder uploaded!", {
  manifestId: folderResult.manifestResponse?.id,
  fileCount: folderResult.fileResponses.length,
  manifest: folderResult.manifest,
});
```

### Upload Multiple Files as a Folder (Browser)

```typescript
const files = [file1, file2, file3];

const webFolderResult = await turbo.uploadFolder({
  files: files,
  dataItemOpts: {
    tags: [{ name: "App-Name", value: "MyWebsite-v1.0" }],
  },
  manifestOptions: {
    indexFile: "index.html",
  },
});
```

## Browser Implementation Examples

### File Input with Drag & Drop

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Turbo Upload Example</title>
    <style>
      .drop-zone {
        border: 2px dashed #ccc;
        border-radius: 10px;
        padding: 20px;
        text-align: center;
        margin: 20px 0;
      }
      .drag-over {
        border-color: #007bff;
        background-color: #f8f9fa;
      }
    </style>
  </head>
  <body>
    <div id="drop-zone" class="drop-zone">
      Drag and drop files here or click to select
    </div>
    <input type="file" id="file-input" multiple />

    <script type="module">
      // Your Turbo initialization code here
      // ... (authentication code from above)

      const fileInput = document.getElementById("file-input");
      const dropZone = document.getElementById("drop-zone");

      // File input handler
      fileInput.addEventListener("change", async (event) => {
        const files = Array.from(event.target.files);
        for (const file of files) {
          await uploadFile(file);
        }
      });

      // Drag and drop handlers
      dropZone.addEventListener("dragover", (e) => {
        e.preventDefault();
        e.stopPropagation();
        dropZone.classList.add("drag-over");
      });

      dropZone.addEventListener("dragleave", (e) => {
        e.preventDefault();
        e.stopPropagation();
        dropZone.classList.remove("drag-over");
      });

      dropZone.addEventListener("drop", async (e) => {
        e.preventDefault();
        e.stopPropagation();
        dropZone.classList.remove("drag-over");
        const files = Array.from(e.dataTransfer.files);
        for (const file of files) {
          await uploadFile(file);
        }
      });

      async function uploadFile(file) {
        try {
          const result = await turbo.upload({
            data: file,
            dataItemOpts: {
              tags: [
                {
                  name: "Content-Type",
                  value: file.type || "application/octet-stream",
                },
                { name: "App-Name", value: "MyApp-v1.0" },
              ],
            },
          });

          console.log("File uploaded!", {
            id: result.id,
            url: `https://arweave.net/${result.id}`,
            name: file.name,
            size: file.size,
          });
        } catch (error) {
          console.error("Upload failed:", error);
        }
      }
    </script>
  </body>
</html>
```

## Advanced Features

### Check Upload Costs

```typescript
// Get upload cost for specific file size
const costs = await turbo.getUploadCosts({
  bytes: [file.size],
});

console.log(`Upload cost: ${costs[0].winc} Winston Credits`);
console.log(`USD cost: $${costs[0].usd}`);
```

### Check Balance

```typescript
// Get current balance
const balance = await turbo.getBalance();
console.log(`Available credits: ${balance.controlledWinc} Winston Credits`);
```

### Upload with Progress Tracking

```typescript
const result = await turbo.upload({
  data: file,
  dataItemOpts: {
    tags: [
      { name: "Content-Type", value: file.type || "application/octet-stream" },
      { name: "App-Name", value: "MyApp-v1.0" },
    ],
  },
  events: {
    onUploadProgress: (progress) => {
      console.log(
        `Upload progress: ${Math.round((progress.processedBytes / progress.totalBytes) * 100)}%`
      );
    },
    onSigningProgress: (progress) => {
      console.log(
        `Signing progress: ${Math.round((progress.processedBytes / progress.totalBytes) * 100)}%`
      );
    },
  },
});
```

### Upload with Error Handling

```typescript
try {
  const result = await turbo.upload({
    data: file,
    dataItemOpts: {
      tags: [
        { name: "Content-Type", value: file.type || "application/octet-stream" },
        { name: "App-Name", value: "MyApp-v1.0" },
      ],
    },
    events: {
      onUploadError: (error) => {
        console.error("Upload failed:", error);
      },
      onSigningError: (error) => {
        console.error("Signing failed:", error);
      },
    },
  });

  console.log("Upload successful:", result);
} catch (error) {
  console.error("Upload error:", error);
  // Handle error appropriately
}
```

## Benefits of Using Turbo

- **Versatile upload method** - Upload files, strings, binary data, or entire folders with a single method
- **Multiple payment options** - Pay with fiat, crypto, or AR tokens
- **Easy integration** - Simple SDK for both Node.js and browsers
- **Automatic retry** - Built-in retry logic for failed uploads
- **Cost transparency** - See upload costs before confirming
- **Fast uploads** - Optimized for speed and reliability
- **Folder support** - Upload entire directories with automatic manifest generation

## Ready to Upload?
} arrow > Buy Turbo Credits with fiat or crypto }> Discover best practices for organizing your data } > Learn how to organize files with manifests

# Getting Started with Turbo (/build/upload/bundling-services)

Upload data to Arweave using **Turbo** - the most reliable way to get your data onto the network. Turbo provides enterprise-grade infrastructure with flexible payment options and optimized performance.

## What is Turbo?

Turbo is an ultra-high-throughput permaweb service that streamlines the funding, indexing, and transmission of data to and from Arweave. It provides graphical and programmatic interfaces for paying in fiat currency with credit or debit cards, as well as in cryptocurrencies such as ETH, SOL, USDC, and AR. It integrates two key components: a service that bundles uploads for efficiency and ease, and a payment system designed for straightforward transactions.

Turbo Credits, which users can purchase within the ArDrive web app, the [Turbo Top Up App](https://turbo-topup.com/), or by using the [Turbo SDK/CLI](/sdks/turbo-sdk), have the same storage purchasing power as AR tokens, along with the additional benefits provided by Turbo. These credits are meticulously calibrated, with the Winston Credit (winc) representing the smallest unit, ensuring users have precise control over their storage needs. As an open-source technology, Turbo encourages community engagement, allowing developers to contribute to its continuous enhancement.

## Get Started

### Install the SDK

```bash
npm install @ardrive/turbo-sdk
```

### Set Up Your Wallet

Create a new wallet or use an existing one:

```bash
# Create a new wallet (easy way)
npx permaweb/wallet > key.json
```

Then load it in your code:

```js
import fs from "node:fs";
import { ArweaveSigner, TurboFactory } from "@ardrive/turbo-sdk";

// Load your wallet
const jwk = JSON.parse(fs.readFileSync("./key.json", "utf-8"));
const signer = new ArweaveSigner(jwk);

// Initialize Turbo
const turbo = TurboFactory.authenticated({ signer });
```

### Get Turbo Credits

Purchase Turbo Credits to pay for uploads. When you upload, credits are automatically used and Turbo handles the payment to Arweave.
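Before topping up, you can check what an upload will cost using the SDK's `getUploadCosts` method (covered in the advanced guide). A minimal sketch; the 1 MiB size here is just an example:

```js
// Estimate the cost (in Winston Credits) of a 1 MiB upload
const [estimate] = await turbo.getUploadCosts({ bytes: [1024 * 1024] });
console.log(`A 1 MiB upload costs about ${estimate.winc} winc`);
```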
**Option 1: Via the Web Interface**

- Go to [turbo-topup.com](https://turbo-topup.com)
- Pay with fiat currencies (credit cards) or crypto tokens (ARIO, USDC, SOL, MATIC, AR)

**Option 2: Via the SDK**

```js
// Purchase credits programmatically
const fundResult = await turbo.topUpWithTokens({
  tokenAmount: TOKEN_AMOUNT,
  tokenType: "solana", // or 'ethereum', 'matic', 'arweave'
});
```

**Check Your Balance**

```js
const balance = await turbo.getBalance();
console.log(`Balance: ${balance.winc} Winston Credits`);
```

### Upload Your Data

```js tab="Upload File"
const fileData = fs.readFileSync("./myfile.jpg");
const result = await turbo.upload({
  data: fileData,
  dataItemOpts: {
    tags: [
      { name: "Content-Type", value: "image/jpeg" },
      { name: "Title", value: "My Image" },
    ],
  },
});

console.log("Upload ID:", result.id);
console.log("Owner:", result.owner);
```

```js tab="Upload Folder"
const folderResult = await turbo.uploadFolder({
  folderPath: "./my-folder",
  dataItemOpts: {
    tags: [
      { name: "Bundle-Format", value: "binary" },
      { name: "Bundle-Version", value: "2.0.0" },
    ],
  },
});

console.log("Manifest ID:", folderResult.manifestResponse?.id);
console.log("Files uploaded:", folderResult.fileResponses.length);
```

```js tab="Upload Raw Data"
const data = JSON.stringify({
  message: "Hello Arweave!",
  timestamp: Date.now(),
});

const result = await turbo.upload({
  data: data,
  dataItemOpts: {
    tags: [
      { name: "Content-Type", value: "application/json" },
      { name: "App-Name", value: "MyApp" },
    ],
  },
});

console.log("Upload ID:", result.id);
console.log("Owner:", result.owner);
```

## Advanced Features

### Turbo Credits System

Learn about our flexible payment system that supports multiple currencies and payment methods.

→ [Understanding Turbo Credits](/build/upload/turbo-credits)

### Data Organization

- [**Tagging**](/build/upload/tagging) - Organize your data with metadata
- [**Manifests**](/build/upload/manifests) - Create folder structures and bundles
- [**Encryption**](/build/upload/encryption) - Secure your sensitive data
- [**ArFS**](/build/advanced/arfs) - File system protocol for structured storage

## Production Ready

Turbo implements the **[ANS-104 bundling specification](https://github.com/ArweaveTeam/arweave-standards/blob/master/ans/ANS-104.md)**, providing enterprise-grade infrastructure for permanent data storage.

**Perfect for:** Developers, production applications, high-volume uploads, and any project needing reliable permanent storage with flexible payment options.

| Feature              | Turbo Bundling                   | Alternative Options          |
| -------------------- | -------------------------------- | ---------------------------- |
| **Payment Options**  | Fiat, ARIO, USDC, SOL, MATIC, AR | AR tokens only               |
| **Implementation**   | Simple SDK integration           | Manual transaction handling  |
| **Performance**      | Optimized bundling & retry logic | Depends on implementation    |
| **Reliability**      | Built-in redundancy              | Manual error handling        |
| **Cost**             | Optimized for large uploads      | Higher per-transaction costs |
| **Setup Complexity** | Easy with SDK                    | Complex protocol knowledge   |

**Need help deciding?** Most developers should use Turbo for its simplicity and payment flexibility. Only consider alternatives for specialized use cases requiring maximum control.

## Ready to Get Started?

} arrow > Start building with Turbo's powerful bundling service. }> Explore the full SDK documentation and examples. } > Organize your data with metadata and tags.
# Encryption (/build/upload/encryption)

**Arweave has no built-in encryption.** All encryption and decryption must be handled client-side before uploading data to the network. Arweave is completely data-agnostic - it stores whatever data you provide without any knowledge of whether it's encrypted or not.

## How Encryption Works on Arweave

**Critical Points:**

- **No native encryption**: Arweave provides no encryption services whatsoever
- **Client-side only**: You must encrypt data before uploading
- **Data-agnostic storage**: Arweave stores any data type, including encrypted data
- **Your responsibility**: You handle all encryption, key management, and decryption
- **Permanent security**: Once encrypted and stored, data remains secure forever

## Encryption Options

### Manual Client-Side Encryption

Encrypt your data before uploading with Turbo:

```js
import CryptoJS from "crypto-js";

// Encrypt sensitive data
const data = "Sensitive information";
const secretKey = "your-secret-key";
const encryptedData = CryptoJS.AES.encrypt(data, secretKey).toString();

// Upload encrypted data
const result = await turbo.upload({
  data: encryptedData,
  dataItemOpts: {
    tags: [
      { name: "Content-Type", value: "application/octet-stream" },
      { name: "Encrypted", value: "true" },
      { name: "Cipher", value: "AES-256-GCM" },
      { name: "Cipher-IV", value: "YWJjZGVmZ2hpams=" }, // 12 byte initialization vector as Base64
    ],
  },
});
```

## Encryption Standards

### Encryption Methods

- **AES-256-GCM**: Authenticated encryption (recommended)
- **AES-256-CTR**: Stream cipher for large files
- **Any encryption method**: Arweave supports any encryption you choose (must be indicated in the `Cipher` tag for ArFS compliance)

### Required Tags

When uploading encrypted data, include these tags:

```js
{ name: "Content-Type", value: "application/octet-stream" }, // Required for encrypted data
{ name: "Cipher", value: "AES-256-GCM" }, // Specify encryption method
{ name: "Cipher-IV", value: "base64-encoded-iv" } // Initialization vector
```

## ArFS Protocol (Optional Standardization)

The [Arweave File System (ArFS)](/build/advanced/arfs) protocol provides optional standardization for encrypted storage:

- **Private Drives**: Encrypt entire file systems
- **File-level encryption**: Each file has its own encryption key
- **Selective sharing**: Share individual files without exposing the entire drive
- **Key derivation**: Uses HKDF-SHA256 with wallet signatures
- **Completely optional**: You can use any encryption method you prefer

**ArDrive Web App:** Data uploaded through the ArDrive web app to Private Drives is encrypted for you using the standards set in the ArFS protocol. ArDrive is simply a web application that implements ArFS - there is no separate "ArDrive Encryption Service."

**ArFS Privacy:** To learn more about ArFS encryption schema, key derivation, and private drive management, see our detailed [ArFS Privacy & Encryption documentation](/build/advanced/arfs/privacy).
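If you'd rather use AES-256-GCM (the recommended, authenticated mode above) without a third-party library, the built-in Web Crypto API works in browsers and modern Node.js. A minimal sketch; key storage and the Turbo upload call are left to you:

```js
// Generate a 256-bit AES-GCM key and a random 12-byte IV
// (crypto is a global in browsers and recent Node.js versions)
const key = await crypto.subtle.generateKey(
  { name: "AES-GCM", length: 256 },
  true,
  ["encrypt", "decrypt"],
);
const iv = crypto.getRandomValues(new Uint8Array(12));

// Encrypt the plaintext
const ciphertext = await crypto.subtle.encrypt(
  { name: "AES-GCM", iv },
  key,
  new TextEncoder().encode("Sensitive information"),
);

// Base64-encode the IV for use in the Cipher-IV tag
const cipherIv = btoa(String.fromCharCode(...iv));
```

Upload the resulting `ciphertext` with the `Cipher` and `Cipher-IV` tags shown above, and export and store the key securely so the data can be decrypted later.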
## Getting Started

For most users, the ArDrive web app provides the easiest way to encrypt and store data using ArFS standards:

1. **Create a private drive** in the ArDrive web app
2. **Set a strong password** for your drive
3. **Upload files** - they're automatically encrypted using ArFS
4. **Access files** using your password and wallet

For developers who need custom encryption:

1. **Choose an encryption library** (Crypto-JS, Web Crypto API)
2. **Encrypt your data** before uploading
3. **Add proper tags** to indicate encryption
4. **Store keys securely** for decryption

## Security Considerations

**Important:** Never store encryption keys in your code or public repositories. Use secure key management practices and consider hardware security modules for production applications.

**Best Practices:**

- Use strong, randomly generated keys
- Implement proper key rotation
- Store keys securely (not in code)
- Use authenticated encryption (AES-GCM)
- Validate data integrity after decryption

## Next Steps

} arrow > Use the ArDrive web app for easy encrypted file storage using ArFS. }> Explore the Arweave File System protocol for structured storage. } > Purchase credits for programmatic uploads.

# Upload Data (/build/upload)

import { CreditCard, Upload, Code, Tag, Shield, FolderOpen, Zap, Check, Image, } from "lucide-react";

Arweave enables **permanent data storage** with a single payment. Unlike traditional cloud storage that requires ongoing fees, your data is preserved forever.

## Upload Methods

There are multiple ways to upload data to Arweave. Each has its own attributes and characteristics to help you decide which is best for your use case.

Turbo{" "} Recommended } description={ Production-ready bundling service with enterprise features Credit cards, AR, ETH, SOL, MATIC Free uploads under 100 KiB Automatic retry & confirmation } href="/build/upload/bundling-services" icon={} /> No-code solution for personal and business files Drag-and-drop interface End-to-end encryption Powered by Turbo } href="https://app.ardrive.io" icon={} /> Raw protocol access for advanced use cases • AR tokens required • Manual transaction handling • Complex setup needed } href="https://arweave.org" icon={} />

## Why Developers Choose Turbo

**Cost Effective**

- Pay per byte, not empty chunks - only pay for actual data uploaded
- Free uploads under 100 KiB - subsidized small file uploads
- No failed upload charges - automatic retry without extra costs

**Developer Experience**

- TypeScript & CLI support - choose your preferred tools
- Simple 3-line integration - get started in minutes
- Comprehensive documentation - extensive guides and examples

**Enterprise Ready**

- High Availability - reliable service for production apps
- Handles millions of uploads daily - battle-tested infrastructure used by ArDrive
- Open source infrastructure - fully auditable and transparent

## Get Started in Minutes

With Turbo, uploading to Arweave is as simple as using any cloud storage API:

```typescript
const turbo = TurboFactory.authenticated({ privateKey });
const uploadResult = await turbo.uploadFile({
  fileStreamFactory: () => fs.createReadStream("./my-file.pdf"),
  fileSizeFactory: () => fs.statSync("./my-file.pdf").size,
});
// Your file is now permanently stored!
```

## Organize Your Data

Before uploading, learn best practices for structuring and tagging your data for optimal retrieval and organization.

## Additional Resources

# Manifests (/build/upload/manifests)

Manifests enable friendly-path-name routing for data on Arweave, greatly improving the programmability of data relationships.
Instead of accessing data with complex transaction IDs, manifests allow you to organize files with readable paths and relative links.

## What are Manifests?

Manifests, also known as "Path Manifests" or "Arweave Manifests," are JSON objects that connect various Arweave data items and define relational paths for easy navigation. A common use case is permanently hosting websites on Arweave by linking all necessary files together.

### The Problem Manifests Solve

Without manifests, accessing data on Arweave looks like this:

```
https://<gateway>/cG7Hdi_iTQPoEYgQJFqJ8NMpN4KoZ-vH_j7pG4iP7NI (txID of a website's index.html)
https://<gateway>/3zFsd7bkCAUtXUKBQ4XiPiQvpLVKfZ6kiLNt2XVSfoV (txID of its js/style.css)
https://<gateway>/or0_fRYFcQYWh-QsozygI5Zoamw_fUsYu2w8_X1RkYZ (txID of its assets/img/logo.png)
```

With manifests, the same data becomes:

```
https://<gateway>/<manifestID> (resolves to the txID of index.html)
https://<gateway>/<manifestID>/js/style.css
https://<gateway>/<manifestID>/assets/img/logo.png
```

## Manifest Structure

Manifests are JSON objects that define how data items are connected and accessed through friendly paths.

### Sample Manifest

```json
{
  "manifest": "arweave/paths",
  "version": "0.2.0",
  "index": {
    "path": "index.html"
  },
  "fallback": {
    "id": "iXo3LSfVKVtXUKBzfZ4d7bkCAp6kiLNt2XVUFsPiQvQ"
  },
  "paths": {
    "index.html": {
      "id": "cG7Hdi_iTQPoEYgQJFqJ8NMpN4KoZ-vH_j7pG4iP7NI"
    },
    "404.html": {
      "id": "iXo3LSfVKVtXUKBzfZ4d7bkCAp6kiLNt2XVUFsPiQvQ"
    },
    "js/style.css": {
      "id": "3zFsd7bkCAUtXUKBQ4XiPiQvpLVKfZ6kiLNt2XVSfoV"
    },
    "css/style.css": {
      "id": "sPiQvpAUXLVK3zF6iXSfo7bkCVQkiLNt24dVtXUKBfZ"
    },
    "css/mobile.css": {
      "id": "fZ4d7bkCAUiXSfo3zFsPiQvpLVKVtXUKB6kiLNt2XVQ"
    },
    "assets/img/logo.png": {
      "id": "or0_fRYFcQYWh-QsozygI5Zoamw_fUsYu2w8_X1RkYZ"
    },
    "assets/img/icon.png": {
      "id": "0543SMRGYuGKTaqLzmpOyK4AxAB96Fra2guHzYxjRGo"
    }
  }
}
```

### How it Works

A resolver, typically an AR.IO gateway, resolves URLs requesting content based on a manifest transaction ID to the corresponding path key in the `paths` object. The URL schema for this type of request is `https://<gateway>/<manifestID>/<path>`.

### Example Usage

Assume the manifest above is uploaded to Arweave with the transaction ID `UyC5P5qKPZaltMmmZAWdakhlDXsBF6qmyrbWYFchRTk`. The table below shows HTTPS requests to the AR.IO gateway `arweave.net`:

| Request Path                                                                 | Manifest Path | Data served from txID                       |
| ---------------------------------------------------------------------------- | ------------- | ------------------------------------------- |
| https://arweave.net/UyC5P5qKPZaltMmmZAWdakhlDXsBF6qmyrbWYFchRTk              | index         | cG7Hdi_iTQPoEYgQJFqJ8NMpN4KoZ-vH_j7pG4iP7NI |
| https://arweave.net/UyC5P5qKPZaltMmmZAWdakhlDXsBF6qmyrbWYFchRTk/index.html   | index.html    | cG7Hdi_iTQPoEYgQJFqJ8NMpN4KoZ-vH_j7pG4iP7NI |
| https://arweave.net/UyC5P5qKPZaltMmmZAWdakhlDXsBF6qmyrbWYFchRTk/js/style.css | js/style.css  | 3zFsd7bkCAUtXUKBQ4XiPiQvpLVKfZ6kiLNt2XVSfoV |
| https://arweave.net/UyC5P5qKPZaltMmmZAWdakhlDXsBF6qmyrbWYFchRTk/foobar       | fallback      | iXo3LSfVKVtXUKBzfZ4d7bkCAp6kiLNt2XVUFsPiQvQ |

## Creating Manifests with Turbo

Turbo makes it easy to create manifests automatically when uploading folders, or you can create custom manifests manually.
### Folder Upload with Manifest

```js
const folderResult = await turbo.uploadFolder({
  folderPath: "./my-website",
  dataItemOpts: {
    tags: [
      { name: "Bundle-Format", value: "binary" },
      { name: "Bundle-Version", value: "2.0.0" },
      { name: "App-Name", value: "Website" },
    ],
  },
});

console.log("Manifest ID:", folderResult.manifestResponse?.id);
console.log("Files uploaded:", folderResult.fileResponses.length);
```

### Custom Manifest Creation

```js
const manifest = {
  manifest: "arweave/paths",
  version: "0.2.0",
  index: {
    path: "index.html",
  },
  fallback: {
    id: "fallback-tx-id",
  },
  paths: {
    "index.html": {
      id: "abc123...def789",
    },
    "css/style.css": {
      id: "def456...ghi012",
    },
  },
};

const manifestResult = await turbo.upload({
  data: JSON.stringify(manifest),
  dataItemOpts: {
    tags: [
      { name: "Content-Type", value: "application/x.arweave-manifest+json" },
      { name: "App-Name", value: "CustomManifest" },
    ],
  },
});
```

## Manifest Specifications

### Required Transaction Tags

Manifests must be uploaded with specific tags so that AR.IO gateways can recognize and properly resolve them:

```json
{
  "name": "Content-Type",
  "value": "application/x.arweave-manifest+json"
}
```

**Important:** This tag must be attached to the upload transaction, NOT placed inside the JSON object. Failure to provide this tag will result in resolvers not recognizing the manifest.

### Required JSON Attributes

#### manifest

```json
"manifest": "arweave/paths"
```

Must have the value `arweave/paths` for gateways to resolve the manifest.

#### version

```json
"version": "0.2.0"
```

Defines the version of manifest schema being used.

#### index

```json
"index": {
  "path": "index.html"
}
```

or

```json
"index": {
  "id": "cG7Hdi_iTQPoEYgQJFqJ8NMpN4KoZ-vH_j7pG4iP7NI"
}
```

Defines the base or 'starting' data item. Accepts either `path` (key in paths object) or `id` (specific transaction ID). If both are defined, `id` overrides `path`.

#### fallback

```json
"fallback": {
  "id": "iXo3LSfVKVtXUKBzfZ4d7bkCAp6kiLNt2XVUFsPiQvQ"
}
```

Defines a fallback data item for when requested paths don't exist (like a 404 page).

#### paths

```json
"paths": {
  "index.html": {
    "id": "cG7Hdi_iTQPoEYgQJFqJ8NMpN4KoZ-vH_j7pG4iP7NI"
  },
  "css/style.css": {
    "id": "3zFsd7bkCAUtXUKBQ4XiPiQvpLVKfZ6kiLNt2XVSfoV"
  }
}
```

Defines the URL paths that a manifest can resolve to. Each path maps to a specific Arweave transaction ID.

## Relative Path Routing

AR.IO gateways support relative path routing, making it easy to develop and maintain websites hosted on Arweave.

Instead of using fully qualified URLs:

```html
<link rel="stylesheet" href="https://arweave.net/<manifestID>/css/style.css" />
```

You can use relative paths:

```html
<link rel="stylesheet" href="./css/style.css" />
```

This makes HTML more readable and ensures links remain valid even if the hosting domain changes.

## Best Practices

### File Organization

- Use descriptive file paths
- Organize files in logical folders
- Keep manifest files small
- Use consistent naming conventions

### Performance Considerations

- Minimize manifest size
- Use relative paths
- Avoid deep nesting
- Consider file size limits

## Next Steps

} > Secure your sensitive data with encryption. } > Advanced file organization with ArFS. } > Complete upload guide with Turbo.

# Tagging (/build/upload/tagging)

Tags are key-value pairs that provide metadata about your uploaded data on Arweave. They enable discoverability, proper content serving, and integration with various protocols.
## Essential Tags Every upload should include these tags: - **Content-Type**: Required - tells gateways how to serve your data - **App-Name**: Best practice - identifies your application for discoverability ```js const result = await turbo.upload({ data: fileData, dataItemOpts: { tags: [ { name: "Content-Type", value: "image/jpeg" }, { name: "App-Name", value: "MyApp-v1.0" }, { name: "Title", value: "My Image" }, ], }, }); ``` ## Common Tag Types ### Content Types - `image/jpeg`, `image/png` - Images - `application/json` - JSON data - `text/html` - HTML pages - `video/mp4` - Videos - `application/pdf` - Documents ### App-Specific Tags - `App-Name` - Your application identifier (e.g., "MyApp-v1.0", "PhotoGallery-2024") - `Title` - Human-readable title - `Description` - Content description - `Author` - Content creator - `Version` - Application version ### Protocol Tags - `License` - Universal Data License (UDL) transaction ID - `License-Fee` - Fee for UDL licensing **UDL Integration:** Learn about the [Universal Data License](https://mirror.xyz/0x64eA438bd2784F2C52a9095Ec0F6158f847182d9/AjNBmiD4A4Sw-ouV9YtCO6RCq0uXXcGwVJMB5cdfbhE) for monetizing your data. ## Advanced Tagging ### Folder Uploads ```js const folderResult = await turbo.uploadFolder({ folderPath: "./my-website", dataItemOpts: { tags: [ { name: "Bundle-Format", value: "binary" }, { name: "Bundle-Version", value: "2.0.0" }, { name: "App-Name", value: "MyWebsite-v2.1" }, { name: "Version", value: "2.1.0" }, ], }, }); ``` ### Licensed Content ```js const licensedTags = [ { name: "Content-Type", value: "image/jpeg" }, { name: "App-Name", value: "ArtGallery-v3.2" }, { name: "Version", value: "3.2.1" }, { name: "License", value: "udl-tx-id-here" }, { name: "License-Fee", value: "1000000" }, // Fee in Winston ]; ``` ## App-Name Best Practices ### Naming Convention Use descriptive, versioned App-Name values for better organization: - **Include version**: `MyApp-v1.0`, `PhotoGallery-2024` - **Be specific**: `EcommerceStore-v2.1` instead of just `Store` - **Use consistent format**: `ProjectName-vMajor.Minor` - **Include year for time-based apps**: `YearlyReport-2024` ## Tag Limitations - **4KB total** for bundled data items (Turbo) - **2KB total** for direct L1 uploads - **No maximum number of tags** (limited by total size) - Tag names are case-sensitive - No duplicate tag names allowed **Important:** Total tag size is limited to 4KB (bundled) or 2KB (L1). For larger metadata, store it in the data payload instead. ## Querying Data by Tags Once you've tagged your data, you can use GraphQL to search and filter based on those tags. This enables powerful discovery and retrieval of your stored content. ## Next Steps } > Organize files with manifests for better structure. }> Secure your sensitive data with encryption. } > Advanced file organization with ArFS. # Paying for Uploads (/build/upload/turbo-credits) import { CreditCard, Upload, Code, FileText, Tag, FolderOpen, } from "lucide-react"; Turbo Credits are the payment medium used by Turbo's upload service, providing a 1:1 conversion from Arweave's native AR token with additional benefits and flexible payment options. ## What are Turbo Credits? Turbo Credits represent upload power on the Arweave network, divisible down to 1 trillionth of a credit (Winston Credit). Unlike traditional crypto tokens, Turbo Credits cannot be traded, transferred, or exchanged - they exist solely for uploading data to Arweave. 
**Important:** Turbo Credits are non-refundable and cannot be withdrawn or exchanged for other cryptocurrencies. ## How to Purchase Credits ### Payment Methods - **Fiat Currency**: Credit/debit cards via Stripe - **Crypto Tokens**: AR, ETH, SOL, MATIC, ARIO, USDC, KYVE, ETH (BASE) - **Multiple Wallets**: Ethereum, Solana, and Arweave wallets supported ### Supported Tokens & Purchase Methods | Payment Method | Turbo SDK | Turbo CLI | Turbo API | Top Up App | ArDrive App | | ---------------------------- | --------- | --------- | --------- | ---------- | ----------- | | **Fiat (credit/debit card)** | ✅ | ✅ | ✅ | ✅ | ✅ | | **AR** | ✅ | ✅ | ✅ | ✅ | ❌ | | **ETH** | ✅ | ✅ | ✅ | ✅ | ❌ | | **SOL** | ✅ | ✅ | ✅ | ✅ | ❌ | | **MATIC** | ✅ | ✅ | ✅ | ❌ | ❌ | | **KYVE** | ✅ | ✅ | ✅ | ❌ | ❌ | | **ETH (BASE)** | ✅ | ✅ | ✅ | ❌ | ❌ | | **ARIO** | ✅ | ✅ | ✅ | ❌ | ❌ | | **USDC** | ❌ | ✅ | ✅ | ❌ | ❌ | **Wallet Compatibility:** When purchasing with cryptocurrencies, credits are deposited into the corresponding wallet type. You cannot top up an ETH wallet by paying with AR. ### Purchase Options **Option 1: Turbo Top Up App** - Visit [turbo-topup.com](https://turbo-topup.com) - Purchase with USD or AR tokens - Credits can be purchased into Ethereum or Solana wallets **Option 2: ArDrive App** - Use [ArDrive](https://app.ardrive.io) for simple purchases - Buy credits with USD directly in the app - Perfect for occasional users **Option 3: Turbo SDK/CLI** ```js // Purchase credits programmatically const fundResult = await turbo.topUpWithTokens({ tokenAmount: TOKEN_AMOUNT, tokenType: "solana", // or 'ethereum', 'matic', 'arweave' }); // Check your balance const balance = await turbo.getBalance(); console.log(`Balance: ${balance.winc} Winston Credits`); ``` ## Credit Sharing Turbo Credits can be shared with other users without transferring ownership, perfect for organizations and teams. ### How Credit Sharing Works - **Authorize Users**: Grant specific wallets access to your credits - **Set Limits**: Control how much each user can spend - **Time Limits**: Set expiration dates for access - **Revoke Anytime**: Regain control of shared credits instantly ### Use Cases - **Organizational Funds**: Central wallet shares credits with employees - **Onboarding**: Give new users free upload power for trials - **Collaboration**: Share credits with project contributors - **Educational Programs**: Provide students with controlled access **Perfect for:** Companies, teams, educational institutions, and any organization needing controlled access to upload power. ## Pricing & Fees - **23.4% fee** applied to credit purchases (covers infrastructure and benefits) - **No additional fees** when using credits for uploads - **Same value per GiB** as AR regardless of price fluctuations - **Subsidized uploads** under 100 KiB are completely free ## Getting Started Ready to start using Turbo Credits? Choose your path: } arrow > Buy credits instantly with credit cards or crypto } > Learn how to upload data with your new credits } > Integrate credit sharing and advanced features ## Next Steps } > Complete upload guide with Turbo. }> Organize with metadata and tags. } > Create folder structures with manifests. # Glossary (/glossary) ## AO Computer AO (Actor Oriented) is a hyper-parallel computing platform built on Arweave that enables decentralized applications to run with unlimited computational capacity. AO provides the compute layer for the AR.IO Network's smart contracts and token operations. 
## Public Key

A cryptographic key that can be shared publicly and is used to verify digital signatures or encrypt data. In the AR.IO context, public keys are used to identify wallet addresses and verify transactions.

## Native Address

An address format that uses the raw public key bytes directly, without additional encoding or transformation. This is the most basic form of an Arweave address.

## Normalized Address

A standardized address format that ensures consistent representation across different systems and contexts. Normalized addresses help prevent issues with address matching and lookup operations.

## Optimistic Indexing

A data indexing strategy where new data is immediately made available for queries while verification processes continue in the background. This approach improves performance while maintaining data integrity through eventual consistency.

# What are Bundles? (/learn/(introduction)/ans-104-bundles)

ANS-104 is a **data packaging standard** that efficiently bundles multiple data items and submits them to Arweave as single transactions, reducing transaction overhead and improving network efficiency.

## The Problem ANS-104 Solves

**Individual Arweave transactions have inherent limitations:**

- **Transaction overhead** - Each transaction requires separate processing and storage
- **Network inefficiency** - Multiple small transactions consume more network resources
- **Indexing complexity** - Individual transactions are harder to organize and query
- **Storage fragmentation** - Related data items are stored separately

**ANS-104 provides:**

- **Reduced transaction overhead** by batching multiple data items
- **Improved network efficiency** through consolidated transactions
- **Better indexing capabilities** with structured data item format
- **Standardized data format** for interoperability across applications

## How ANS-104 Bundling Works

### The ANS-104 Standard

ANS-104 is the [official specification](https://github.com/ArweaveTeam/arweave-standards/blob/master/ans/ANS-104.md) for bundling data on Arweave:

- **Data Items** - Individual pieces of data with standardized binary format
- **Bundle** - Single Arweave transaction containing multiple data items
- **Binary Serialization** - Consistent format for data item structure
- **Standardized Format** - Ensures interoperability across applications

### How ANS-104 Works

1. **Data Item Creation** - Create individual data items with ANS-104 format
2. **Bundle Assembly** - Combine multiple data items into a single bundle
3. **Transaction Creation** - Submit bundle as one Arweave transaction
4. **Network Processing** - Miners process the single bundle transaction
5.
**Data Retrieval** - Individual data items can be extracted and indexed ## Key Benefits of ANS-104 **Reduced Overhead** - Bundle multiple data items into a single transaction to reduce processing overhead **Network Efficiency** - Consolidate multiple uploads into fewer network transactions **Standardized Format** - Consistent binary serialization ensures interoperability across applications **Better Indexing** - Structured data item format enables more efficient data retrieval and querying ## Why ANS-104 Matters for the Permaweb ANS-104 bundles are essential for building scalable applications on the permaweb because they: - **Enable efficient data storage** by reducing transaction overhead for multiple data items - **Improve network performance** through consolidated transactions - **Support better data organization** with standardized data item formats - **Enable scalable applications** that need to store many related data items efficiently ## Explore Bundling } title="Upload Data" description="Learn how to upload data to Arweave using bundling" href="/build/upload" /> } title="Run a Bundler" description="Deploy your own bundling infrastructure" href="/build/extensions/bundler" /> } title="Gateway Extensions" description="Integrate bundling with your AR.IO gateway" href="/build/extensions" /> } title="Turbo SDK" description="Use Turbo SDK for easy data uploads and bundling" href="/sdks/turbo-sdk" /> # Introduction (/learn/(introduction)) import { BookOpen, Wrench, Package, Code, Server, Globe, ArrowRight, Zap, Shield, Infinity, } from "lucide-react"; AR.IO is the decentralized gateway protocol for Arweave. Deploy gateways, register permanent names, and build applications with enterprise-grade infrastructure that lasts forever. **For AI and LLM users**: Access the complete documentation in plain text format at [llms-full.txt](/llms-full.txt) for easy consumption by AI agents and language models. ## Explore the Documentation } title="What is AR.IO?" description="Learn about the decentralized gateway protocol and how it powers the permanent web" href="/learn/what-is-ario" /> } title="Build" description="Get started building applications, running gateways, and uploading data" href="/build" /> } title="SDKs" description="Integrate AR.IO services into your applications with our developer SDKs" href="/sdks" /> } title="API Reference" description="Complete API documentation for AR.IO Node and Turbo services" href="/apis" /> ## Quick Start Guides } title="Upload Data to Arweave" description="Learn how to permanently store files and data using Turbo SDK" href="/build/upload" /> } title="Run a Gateway" description="Deploy your own AR.IO gateway and participate in the network" href="/build/run-a-gateway" /> } title="Register ArNS Names" description="Get human-readable names for your permanent applications" href="/learn/arns" /> ## AR.IO Ecosystem } title="ArNS Registry" description="Register and manage permanent names for your applications" href="https://arns.app" /> } title="Network Portal" description="Monitor gateway performance and network statistics" href="https://gateways.ar.io" /> } title="ArDrive" description="User-friendly permanent storage for files and folders" href="https://ardrive.io" /> ## Join the Community Connect with developers, gateway operators, and the AR.IO team. Get help, share ideas, and stay updated on the latest developments. - [Join Discord](https://discord.gg/cuCqBb5v) - [View Guides](/build/guides) # What is the Permaweb? 
(/learn/(introduction)/permaweb)

The **permaweb** is a decentralized, permanent layer of the internet where data, applications, and websites are stored forever and remain accessible through a global network of gateways.

## How the Permaweb Works

Unlike the traditional web where data can disappear when servers go offline, the permaweb creates a **permanent archive of human knowledge** through a multi-layer architecture:

- **Foundation**: Arweave blockchain provides immutable storage
- **Computation**: AO and other platforms enable smart contracts and processing
- **Access**: AR.IO gateway network makes everything accessible globally
- **Users**: Developers and users interact through familiar web interfaces

This architecture ensures that once something is published to the permaweb, it remains accessible forever, creating a truly permanent internet.

## Permaweb Network Architecture

The permaweb operates through a layered system architecture where each component provides specialized services to create a permanent, decentralized internet. From top to bottom, the stack is:

- **Applications & Users** - web apps, dApps, developers, end users
- **AO** - decentralized compute
- **AR.IO** - decentralized access
- **Arweave** - permanent storage foundation

## Permaweb Architecture: How It All Connects

The permaweb operates through a **four-layer architecture** where each layer serves a specific purpose in creating permanent, accessible internet infrastructure:

### Layer 1: Permanent Storage (Arweave)

**Core Responsibility: Forever Data Preservation**

Arweave's primary job is to **store data permanently** with mathematical guarantees:

- **Immutable storage** - Once written, data cannot be changed or deleted
- **Economic sustainability** - Endowment model ensures miners are paid to store data forever
- **Cryptographic verification** - Proof that data exists and hasn't been tampered with
- **Decentralized replication** - Data survives even if most miners go offline

### Layer 2: Decentralized Compute (AO)

**Core Responsibility: Smart Contract Execution**

AO's primary job is to **run programs that work with permanent data**:

- **Process execution** - Runs smart contracts and applications on permanent data
- **Message routing** - Enables communication between different processes
- **State management** - Maintains application state using permanent storage
- **Parallel computation** - Scales processing across multiple nodes

### Layer 3: Decentralized Access (AR.IO)

**Core Responsibility: Data Access & Discovery**

AR.IO's primary job is to **make permanent data fast and accessible**:

- **Data retrieval** - Fetches and serves content from Arweave storage
- **Content indexing** - Organizes and catalogs data for search and discovery
- **ArNS resolution** - Converts human-readable names to Arweave transaction IDs
- **Performance optimization** - Caches popular content for faster access
- **Quality assurance** - Validates data integrity and provides reliable access

### Layer 4: Applications & Users

**Core Responsibility: User Interface & Experience**

Applications and users are responsible for **interacting with the permanent web**:

- **User interfaces** - Create familiar web experiences backed by permanent data
- **Data submission** - Upload new content and applications to the permaweb
- **Application logic** - Build decentralized apps using permanent storage and compute
- **Content consumption** - Browse, search, and interact with permanent web content

## The Vision of the Permaweb

The permaweb represents a fundamental shift from the ephemeral nature of today's internet to a permanent,
censorship-resistant foundation for human knowledge and applications. By combining Arweave's immutable storage with AO's decentralized compute and AR.IO's accessible gateway network, the permaweb creates an internet where data never disappears, applications run without central points of failure, and users maintain true ownership of their digital assets.

This architecture enables a new generation of applications that can operate indefinitely without relying on traditional hosting services, where digital artifacts become truly permanent, and where the collective knowledge of humanity is preserved for future generations. The permaweb isn't just about storing data forever—it's about building a more resilient, equitable, and permanent foundation for the digital world.

## Explore the Permaweb

# What is AR.IO? (/learn/(introduction)/what-is-ario)

The AR.IO Network is the first permanent cloud network: a decentralized infrastructure layer built on Arweave and AO that makes permanent data accessible, discoverable, and usable. Think of it as the gateway to Arweave's permaweb, turning its tamper-proof storage into a fully functional, user-friendly ecosystem for apps, websites, and data.

## Features of The AR.IO Network

### Gateways

AR.IO operates a network of [gateways](/learn/gateways) — nodes that serve as entry points to Arweave's data. These gateways fetch and deliver data quickly, supporting everything from static files to dynamic web apps.

### Arweave Name System (ArNS)

The [Arweave Name System (ArNS)](/learn/arns) is a decentralized naming system for Arweave. It allows users to register and resolve human-readable names to Arweave transaction IDs.

### Data Access

AR.IO offers a range of tools for accessing and querying data on Arweave, including:

- [HTTP Requests](/build/access/fetch-data) via gateways
- [GraphQL Queries](/build/access/find-data) for finding data by tags and metadata
- [ArNS](/learn/arns) for human-readable URLs
- [Wayfinder](/learn/wayfinder) for decentralized content discovery

## The Problem

Arweave stores data forever, but accessing and organizing that data isn't always straightforward. Without efficient tools, retrieving files, serving websites, or finding specific content on Arweave's blockweave can be slow or complex, limiting its potential for developers and users.

## The Solution

The AR.IO Network builds on Arweave's permanent storage to create a decentralized, scalable access layer. It provides gateways, domain names, and indexing services, making it easy to interact with permaweb content as seamlessly as the traditional web.

### How It Works

- **Decentralized Gateways**: AR.IO operates a network of gateways—nodes that serve as entry points to Arweave's data. These gateways fetch and deliver data quickly, supporting everything from static files to dynamic web apps.
- **ArNS (Arweave Name System)**: AR.IO introduces decentralized domain names (e.g., yourname.arweave), mapping human-readable names to Arweave's data IDs. This makes content easy to find and share, like URLs on the traditional web.
- **Indexing and Querying**: AR.IO enables efficient data indexing, allowing developers to search and retrieve specific content from Arweave's vast storage without scanning the entire blockweave.
- **Routing and verification**: AR.IO's ar:// Wayfinder protocol intelligently routes requests to available gateways in the network and verifies the data's authenticity.
- **Observation and incentives**: AR.IO's [Observation and Incentive Protocol (OIP)](/learn/oip) ensures gateway operators are serving the right data and rewards them with the protocol's native token, $ARIO, to create a secure and self-sustaining network.

**In Simple Terms**: Imagine Arweave as a massive, unerasable library. The AR.IO Network is the librarian who organizes the shelves, provides a catalog, and hands you the books you need—fast.

## Why It Matters

- **Accessible**: Gateways make permaweb content load as quickly as traditional websites.
- **Discoverable**: ArNS provides user-friendly domain names, simplifying navigation.
- **Scalable**: Supports growing permaweb usage, from small apps to global platforms.
- **Decentralized**: No single entity controls access, ensuring censorship resistance.

## Building on The AR.IO Network

The AR.IO Network empowers developers to create permaweb apps with tools for hosting, querying, and monetizing content, all while leveraging Arweave's permanent storage.

**In Simple Terms**: Arweave locks data forever; the AR.IO Network makes it ready for the world to use.

## Ready to Dive Deeper?

The AR.IO Network transforms Arweave into a vibrant permaweb ecosystem. Ready to start building? Explore our comprehensive guides and start creating on the permanent web.

## Explore AR.IO

# What is Arweave? (/learn/(introduction)/what-is-arweave)

Arweave is a decentralized storage network that ensures data is **permanent**, **affordable**, and **scalable**. Think of it as a global, tamper-proof hard drive where your files—photos, documents, or apps—stay accessible forever. It's the foundation for the [AR.IO Network](https://ar.io), powering a "permaweb" where data never disappears. Below, we break down Arweave's core features in a simple, beginner-friendly way.

## A Datachain for Permanent Storage

Arweave is like Bitcoin, but for data. It solves one problem really well: **storing data permanently**. Once uploaded, your data—whether a tweet, NFT, or website—is immutable and preserved indefinitely.

### How Does It Work?

- **Blockweave Architecture**: Unlike a blockchain's single chain, Arweave's blockweave links each new data block to the previous one and a random older block. Data is split into 256 KiB chunks in a secure Merkle tree, ensuring miners keep all data to add new blocks.
- **Succinct Proofs of Random Access (SPoRA)**: Miners prove they store multiple data copies by accessing random chunks, verified efficiently with Verifiable Delay Functions (VDFs). This combines proof-of-work and proof-of-storage, making data loss nearly impossible.

**In Simple Terms**: Picture a library where new books reference older ones, and librarians must keep every book to add more. SPoRA ensures they prove they've got the books, keeping your data safe forever.

## Pay Once, Store Forever: No Recurring Fees

Pay a one-time fee to upload data, and it's stored "forever"—no subscriptions or renewals.

### How Does It Work?

- **Endowment Fund**: Your fee, based on 200 years of storage for 20 replicas, goes mostly into a fund that slowly pays miners in AR tokens to maintain data. It assumes storage costs drop over time, making the fund sustainable.

**In Simple Terms**: It's a "forever stamp" for data. Your payment funds a pot that keeps paying storage keepers, lasting longer as tech gets cheaper.

## Practically Unlimited Storage

Arweave can practically store unlimited data, from small files to entire digital archives, without hitting a ceiling.
The theoretical limit is 2^256 bytes, which, for scale, is a number comparable to the count of atoms in the observable universe.

### How Does It Work?

- **Layer 1 Transactions**: Data is stored as 256 KiB chunks on the blockweave, replicated across many nodes. As more nodes join with standard hardware, storage capacity grows limitlessly.
- **Bundling with [AR.IO Network](https://ar.io) and Turbo**: Bundling packs multiple files into one transaction, reducing costs and congestion. AR.IO Network and Turbo (a Layer 2 tool) optimize this, enabling fast, cheap uploads of large datasets like websites.

## What Doesn't Arweave Solve Well? Access

Arweave solves one problem and solves it well: storing your data for a very long time. It doesn't, however, incentivize the indexing of, or access to, that data.

## Ready to Dive Deeper?

Arweave powers a permaweb where apps, websites and data live forever. For AR.IO Network developers, it's the bedrock for unstoppable decentralized applications. Learn more in the next section: [What is AR.IO Network](/learn/what-is-ario).

## Explore Arweave

# Arweave Name Tokens (ANTs) (/learn/arns/ants)

To establish ownership of a record in the ArNS Registry, each record contains both a friendly name and a reference to an Arweave Name Token (ANT). Name Tokens are unique AO Computer-based tokens/processes that give their owners the ability to update the Arweave Transaction IDs that their associated friendly names point to.

## What is an ANT?

The ANT smart contract process is a standardized contract that implements the specific Arweave Name Process specification required by the AR.IO gateways that resolve ArNS names and their Arweave Transaction IDs. It also contains other basic functionality to establish ownership and the ability to transfer ownership and update the Arweave Transaction ID.

Name Tokens have an owner, who can transfer the token and control its modifiable settings. These settings include modifying the address resolution time to live (TTL) for each name contained in the ANT, and other settings like the ANT Name, Ticker, and an ANT Controller.

## Ownership and Control

The controller can only manage the ANT and set and update records, name, and the ticker, but cannot transfer the ANT. Note that ANTs are initially created in accordance with network standards by an end user, who then has the ability to transfer its ownership or assign a controller as they see fit.

Owners of names should ensure their ANT supports evolvability if future modifications are desired. Loss of a private key for a permanently purchased name can result in the name being "bricked".

### Under_name Ownership

Undernames can have an `owner` set on them. This owner is empowered to set that undername as their primary name, can remove that undername as their primary name, and has full control over that undername's metadata, such as:

- Transaction Id - the data the record resolves to.
- TTL Seconds - the Time To Live in seconds the data is cached for by clients.
- Owner - the owner of the record.
- Description - the description of the record.
- Display Name - the display name for the owner of the record.
- Keywords - the keywords for the record.
- Logo - the logo of the record.

They do *NOT* have control over the `priority` of the undername, which is restricted to the ANT Controllers and Owner.
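To make the ownership model concrete, here is a minimal sketch of reading and updating an ANT with the AR.IO SDK (`@ar.io/sdk`). The process ID, wallet path, and target transaction ID are placeholders, and method shapes may vary between SDK versions:

```typescript
import { ANT, ArweaveSigner } from '@ar.io/sdk'
import fs from 'node:fs'

// Read-only: anyone can inspect an ANT's current records
const ant = ANT.init({ processId: 'YOUR_ANT_PROCESS_ID' }) // placeholder process ID
console.log(await ant.getRecords())

// Authenticated: the Owner or a Controller can update records
const jwk = JSON.parse(fs.readFileSync('./wallet.json', 'utf-8'))
const authedAnt = ANT.init({
  processId: 'YOUR_ANT_PROCESS_ID',
  signer: new ArweaveSigner(jwk),
})
await authedAnt.setRecord({
  undername: '@', // '@' is the root record of the name
  transactionId: 'NEW_TARGET_TX_ID', // placeholder target
  ttlSeconds: 3600,
})
```

Note that only the Owner can transfer the ANT itself; Controllers are limited to record, name, and ticker management, as the table below summarizes.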
## ANT Interactions

The table below indicates some of the possible interactions with the ArNS registry, corresponding ANTs, and who can perform them:

| Type                                     | ANT Owner | ANT Controller | Undername Owner | Any ARIO Token Holder |
| ---------------------------------------- | --------- | -------------- | --------------- | --------------------- |
| Transfer ownership                       | ✔         |                |                 |                       |
| Add / remove controllers                 | ✔         |                |                 |                       |
| Approve/Remove Primary name              | ✔         |                | ✔               |                       |
| Reassign name to new ANT process         | ✔         |                |                 |                       |
| Return a permanent name                  | ✔         |                |                 |                       |
| Set records (pointers, record metadata)  | ✔         | ✔              | ✔               |                       |
| Update records, name, ticker             | ✔         | ✔              |                 |                       |
| Update descriptions and keywords         | ✔         | ✔              |                 |                       |
| Create and assign undernames             | ✔         | ✔              |                 |                       |
| Extend / renew lease                     | ✔         | ✔              | ✔               | ✔                     |
| Increase undernames                      | ✔         | ✔              | ✔               | ✔                     |
| Convert lease to permanent               | ✔         | ✔              | ✔               | ✔                     |

## Under_names

ANT owners and controllers can configure multiple subdomains for their registered ArNS name, known as "under_names" or, more easily written, "undernames". These undernames are assigned individually at the time of registration or can be added on to any registered name at any time. Under_names use an underscore ("_") in place of the more typically used dot (".") to separate the subdomain from the main ArNS domain.

## Secondary Markets

Secondary markets could be created by ecosystem partners that facilitate the trading of Name Tokens. Additionally, tertiary markets could be created that support the leasing of these friendly names to other users. Such markets, if any, would be created by third parties unrelated to and outside of the scope of this documentation or the control of the Foundation.

## Next Steps

Ready to understand how pricing works? Learn about the [Pricing Model](/learn/arns/pricing-model) to see how costs are calculated dynamically, or go back to [Name Registration](/learn/arns/name-registration) to review the registration process.

# Arweave Name System (ArNS) (/learn/arns)

## What is ArNS?

Arweave URLs and transaction IDs are long, difficult to remember, and occasionally categorized as spam. The Arweave Name System (ArNS) aims to resolve these problems in a decentralized manner.

ArNS is a **censorship-resistant naming system** stored on Arweave, powered by [ARIO tokens](/learn/token), enabled through [AR.IO gateway](/learn/gateways) domains, and used to connect friendly domain names to permaweb apps, web pages, data, and identities. It's an open, permissionless, domain name registrar that doesn't rely on a single TLD.

## How ArNS Works

This system works similarly to traditional DNS services, where users can purchase a name in a registry and DNS Name servers resolve these names to IP addresses. The system is flexible and allows users to purchase names permanently or lease them for a defined duration based on their use case.

With ArNS, the registry is stored permanently on Arweave via [AO](/glossary), making it immutable and globally resilient. This also means that apps and infrastructure cannot just read the latest state of the registry but can also check any point in time in the past, creating a "Wayback Machine" of permanent data.

```mermaid
graph TB
    subgraph "AR.IO Smart Contract"
        Registry[ArNS Registry]
        Registry --> Name1[ardrive]
        Registry --> Name2[ao]
        Registry --> Name3[gateways]
        Registry --> NameN[...]
    end

    subgraph "Arweave Name Token"
        Name1 -.->|owned by| ANT1[Owner: 0x242424...<br/>Record: @<br/>Target: TxID_123]
        Name2 -.->|owned by| ANT2[Owner: Zjgamagh...<br/>Record: @<br/>Target: TxID_456]
        Name3 -.->|owned by| ANT3[Owner: Hboaalf...<br/>Record: @<br/>Target: TxID_789]
    end

    subgraph "Arweave"
        ANT1 -.->|points to| TX1[TxID_123<br/>ArDrive App]
        ANT2 -.->|points to| TX2[TxID_456<br/>Email App]
        ANT3 -.->|points to| TX3[TxID_789<br/>Pages App]
    end

    style Registry fill:#f9f,stroke:#333,stroke-width:3px,color:#333
    style ANT1 fill:#b3d9ff,stroke:#333,stroke-width:2px,color:#333
    style ANT2 fill:#b3d9ff,stroke:#333,stroke-width:2px
    style ANT3 fill:#b3d9ff,stroke:#333,stroke-width:2px
    style TX1 fill:#bfb,stroke:#333,stroke-width:2px,color:#333
    style TX2 fill:#bfb,stroke:#333,stroke-width:2px,color:#333
    style TX3 fill:#bfb,stroke:#333,stroke-width:2px,color:#333
```

## Name Resolution

Users can register a name, like `ardrive`, within the ArNS Registry. Before owning a name, they must create an Arweave Name Token (ANT), an AO Computer based token and open-source protocol used by ArNS to track ownership and control of the name. ANTs allow the owner to set a mutable pointer to any type of permaweb data, like a page, app, or file, via its Arweave transaction ID.

Each AR.IO gateway acts as an ArNS name resolver. They fetch the latest state of both the ArNS Registry and its associated ANTs from an AO compute unit (CU) and serve this information rapidly for apps and users. AR.IO gateways will also resolve that name as one of their own subdomains, e.g., `https://ardrive.arweave.net`, and proxy all requests to the associated Arweave transaction ID. This means that ANTs work across all AR.IO gateways that support them: `https://ardrive.ar-io.dev`, `https://ardrive.g8way.io/`, etc. Users can easily reference these friendly names in their browsers, and other applications and infrastructure can build rich solutions on top of these ArNS primitives.

```mermaid
sequenceDiagram
    participant User
    participant Gateway as AR.IO Gateway
    participant Registry as ArNS Registry
    participant ANT as Arweave Name Token
    participant Arweave

    User->>Gateway: Request ardrive.arweave.net
    Gateway->>Registry: Query "ardrive" record
    Registry-->>Gateway: Returns ANT address
    Gateway->>ANT: Get target TxID
    ANT-->>Gateway: Returns TxID (abc123...)
    Gateway->>Gateway: Check cache for TxID
    alt TxID not in cache
        Gateway->>Arweave: Fetch data from TxID
        Arweave-->>Gateway: Returns permaweb content
    end
    Gateway-->>User: Serves content
```

## Key Benefits

- **Human-readable URLs** instead of complex transaction IDs
- **Censorship-resistant** and decentralized
- **Permanent storage** on Arweave
- **Cross-gateway compatibility** - works on all AR.IO gateways
- **Historical data access** - check any point in time
- **Flexible ownership** - permanent or leased names

## Explore ArNS

} /> } /> } /> } />

# Name Registration (/learn/arns/name-registration)

There are two different types of name registrations that can be utilized based upon the needs of the user:

## Registration Types

### Lease Registration

A name may be **leased on a yearly basis**. A leased name can have its lease extended or renewed, but only up to a maximum active lease of **five (5) years** at any time.

### Permanent Registration (Permabuy)

A name may be **purchased for an indefinite duration** with no expiration date.

Registering a name requires spending ARIO tokens corresponding to the name's character length and purchase type.
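As a concrete example, a name can be registered programmatically. The following is a hedged sketch using the `@ar.io/sdk` `buyRecord` method; the exact signature and the `wallet.json` path are assumptions to verify against the current SDK documentation:

```typescript
// Hedged sketch: registering an ArNS name with the @ar.io/sdk.
import { readFileSync } from "node:fs";
import { ARIO, ArweaveSigner } from "@ar.io/sdk";

// An Arweave key file (hypothetical path).
const jwk = JSON.parse(readFileSync("./wallet.json", "utf-8"));

const ario = ARIO.init({ signer: new ArweaveSigner(jwk) });

// Lease for one year (extendable later, up to a five-year maximum active lease)...
await ario.buyRecord({ name: "my-app", type: "lease", years: 1 });

// ...or purchase permanently, with no expiration date:
// await ario.buyRecord({ name: "my-app", type: "permabuy" });
```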
## Name Registry

The ArNS Registry is a list of all registered names and their associated ANT Process IDs. Key rules embedded within the smart contract include:

- **Genesis Prices**: Set within the contract as starting conditions
- **Dynamic Pricing**: Varies based on name length, purchase type (lease vs. buy), lease duration, and current Demand Factor
- **Name Records**: Include a pointer to the Arweave Name Token process identifier, lease end time (if applicable), and undername allocation
- **Reassignment**: Name registrations can be reassigned from one ANT to another
- **Lease Extension**: Anyone with available ARIO tokens can extend any name's active lease
- **Lease to Permanent Buy**: Anyone with available ARIO tokens can convert a name's lease to a permanent buy
- **Undername Capacity**: Additional undername capacity can be purchased for any actively registered name
- **Name Removal**: Name records can only be removed from the registry if a lease expires or a permanent name is returned to the protocol

## Name Validation Rules

All names registered must meet the following criteria (see the validator sketch below):

1. **Valid characters**: Only numbers 0-9, characters a-z, and dashes
2. **Dash placement**: Dashes cannot be leading or trailing characters
3. **Single character domains**: Dashes cannot be used in single character domains
4. **Length limits**: 1 character minimum, 51 characters maximum
5. **Reserved names**: Cannot be a reserved name predesignated to prevent unintentional use or abuse, such as `www`

## Lease Management

### Lease Expirations

When a lease term ends, there is a **grace period of two (2) weeks** during which the lease can be renewed before it fully expires. If this grace period elapses, the name is considered expired and returns to the protocol for public registration. Once expired, a name's associated undername registrations and capacity also expire.

A recently expired name's registration shall be priced subject to the "Returned Name Premium" mechanics.

### Lease to Permabuy Conversions

An actively leased name may be converted to a permanent registration. The price for this conversion shall be treated as if it were a new permanent name purchase.

This functionality allows users to transition from leasing to permanent ownership based on changing needs and available resources. It generates additional protocol revenue through conversion fees, contributing to the ecosystem's financial health and reward system.

### Permanent Name Return

Users have the option to "return" their permanently registered names back to the protocol. This process allows users to relinquish their ownership, returning the name to the protocol for public re-registration. Only the Owner of a name can initiate a name return.

When a permanent name is returned, the name is subject to a "Returned Name Premium", similar to expired leases. A key difference is that if the name is repurchased during the premium window, the proceeds are split between the returning owner and the protocol balance.

## Primary Names

The Arweave Name System (ArNS) supports the designation of a "Primary Name" for users, simplifying how Arweave addresses are displayed across applications. A Primary Name is a user-friendly alias that replaces complex wallet addresses, making interactions and profiles easier to manage and identify.

Users can set one of their owned ArNS names as their Primary Name, subject to a small fee. This allows applications to use a single, human-readable identifier for a wallet, improving user experience across the network.
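The name validation rules above translate directly into a small validator. A minimal sketch (the reserved-name list here is illustrative; the contract maintains the authoritative list):

```typescript
// A direct translation of the validation rules above. The reserved-name
// list is illustrative; the smart contract holds the authoritative list.
const RESERVED_NAMES = new Set(["www"]);

function isValidArNSName(name: string): boolean {
  if (name.length < 1 || name.length > 51) return false;        // rule 4
  if (!/^[a-z0-9-]+$/.test(name)) return false;                 // rule 1
  if (name.startsWith("-") || name.endsWith("-")) return false; // rules 2 & 3
  if (RESERVED_NAMES.has(name)) return false;                   // rule 5
  return true;
}

isValidArNSName("ardrive"); // true
isValidArNSName("-bad");    // false: leading dash
isValidArNSName("www");     // false: reserved
```

Note that rule 3 is covered by the dash-placement check: the only single-character name containing a dash is "-" itself, which is rejected as a leading dash.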
## Next Steps

Now that you understand name registration, learn about [Arweave Name Tokens (ANTs)](/learn/arns/ants) to see how ownership and control work, or explore the [Pricing Model](/learn/arns/pricing-model) to understand how costs are calculated.

# Pricing Model (/learn/arns/pricing-model)

## Addressing Variable Market Conditions

The future market landscape is unpredictable, and the AR.IO Network smart contract is designed to be immutable, operating without governance or manual intervention. Using a pricing oracle to fix name prices relative to a stable currency is not viable due to the infancy of available solutions and their reliance on external dependencies. To address these challenges, ArNS is self-contained and adaptive, with name prices reflecting network activity and market conditions over time.

To achieve this, ArNS incorporates:

1. A **dynamic pricing model** that adjusts fees using a "Demand Factor" based on ArNS purchase activity
2. A **Returned Name Premium (RNP)** system that applies a timed, descending multiplier to registration prices for names that have recently expired or been returned to the protocol

This approach ensures that name valuations adapt to market conditions within the constraints of an immutable, maintenance-free smart contract framework.

You can view current live pricing at [ArNS.app](https://arns.ar.io/#/prices) to see these formulas in action.

## Key Definitions

- **Protocol Revenue:** Accumulated ARIO tokens from name registrations, lease extensions, and under_name sales
- **Period (P):** The time unit for DF adjustments, equivalent to one (1) day, denoted in milliseconds
- **n:** The current period indicator
- **Price:** The cost for a permabuy or lease of a name
- **Under_names:** Subdomain equivalents, denoted by an underscore "_" prefixing the base domain

## Dynamic Pricing Model

ArNS employs an adaptive pricing model to balance market demand with pricing fairness for name registration within the network. This model integrates static and dynamic elements, adjusting prices based on name length and purchase options like leasing, permanent acquisition, and undername amounts.

### Core Pricing Components

#### Base Registration Fee (BRF)

The fundamental price for names, varying by character length, adjusted periodically.

#### Genesis Registration Fee (GRF)

The starting price for name registrations, varying by character length. This is superseded by Base Registration Fees as the protocol evolves.

**Table: Genesis Registration Fees**

| Name Length | Fee (ARIO) |
| ----------- | ---------- |
| 1           | 1,000,000  |
| 2           | 200,000    |
| 3           | 20,000     |
| 4           | 10,000     |
| 5           | 2,500      |
| 6           | 1,500      |
| 7           | 800        |
| 8           | 500        |
| 9           | 400        |
| 10          | 350        |
| 11          | 300        |
| 12          | 250        |
| 13-51       | 200        |

#### Demand Factor (DF)

A global price multiplier, reflecting namespace demand, adjusted each period based on revenue trends.

**DF Mechanics:**

- **Intent:** The Demand Factor adjusts based on comparing protocol revenue to the Revenue Moving Average (RMA)
- **Increase DF:** When recent revenue is greater than or equal to the RMA (and non-zero), the DF increases by 5.0%
- **Decrease DF:** When recent revenue is less than the RMA, or both are zero, the DF decreases by 1.5%
- **Maximum DF Value:** Unbounded
- **Minimum DF Value:** 0.5
- **Starting Demand Factor:** 1 (initial value at network launch)

#### Revenue Moving Average (RMA)

The average of protocol revenue from the past seven (7) periods.
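To make the mechanics concrete, here is a minimal sketch of the per-period DF adjustment described above (whether the 5.0% and 1.5% steps are applied multiplicatively or additively is a contract implementation detail; multiplicative is assumed here):

```typescript
// Sketch of the per-period Demand Factor update (illustrative only;
// multiplicative adjustment is an assumption, not confirmed contract behavior).
function adjustDemandFactor(df: number, revenue: number, rma: number): number {
  const demandUp = revenue > 0 && revenue >= rma; // non-zero and >= RMA
  const next = demandUp ? df * 1.05 : df * 0.985; // +5.0% or -1.5%
  return Math.max(next, 0.5); // DF has a floor of 0.5 and no ceiling
}

let df = 1; // starting Demand Factor at network launch
df = adjustDemandFactor(df, 1200, 1000); // revenue above RMA -> 1.05
df = adjustDemandFactor(df, 800, 1000);  // revenue below RMA -> ~1.034
```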
### Pricing Formulas

#### Adjusted Registration Fee (ARF)

```math
ARF = BRF × DF
```

#### Annual Fee

```math
Annual~Fee = ARF × 20%
```

#### Lease Pricing

- **Lease Registration Price:**

```math
Lease~Price = ARF + (Annual~Fee × Years)
```

- **Lease Extension/Renewal Price:**

```math
Lease~Renewal~Price = Annual~Fee × Years~(max~5~years)
```

- **Grace period:** Two (2) weeks

#### Permanent Purchases

- **Permabuy Price:**

```math
Permabuy~Price = ARF + (Annual~Fee × 20~years)
```

- **Lease to Permabuy Price:** Same as above

#### Under_name Fees

- **Initial Allocation:** 10 `under_names` are included with each name registration
- **For Leases:**

```math
Lease~Under~Name~Fee = BRF × DF × 0.1%
```

- **For Permabuys:**

```math
Permabuy~Under~Name~Fee = BRF × DF × 0.5%
```

#### Primary Name Fee

Set or change primary name: the fee is equal to the fee for a single `under_name` purchase on a 51-character name of the same purchase type as the new primary name, regardless of the new primary name's actual length.

### Step Pricing Mechanics

- Synchronizes the BRF (Base Registration Fee) with the ARF (Adjusted Registration Fee) after seven (7) consecutive periods at the minimum DF value
- Resets DF to 1 following a step pricing adjustment

## Returned Name Premiums (RNP)

ArNS applies a **Returned Name Premium (RNP)** to names that re-enter the market after expiration or permanent return. This premium starts at a maximum value and decreases linearly over a predefined window, ensuring fair and transparent pricing for re-registered names.

### RNP Mechanics

#### Intent

The premium starts at its maximum and decreases linearly until the name is purchased. If the name is not purchased before the premium window closes, it reverts to standard pricing and is no longer classified as "recently returned."

#### RNP Window

- **Duration:** Fourteen (14) periods

#### Returned Name Premium Formula

The premium multiplier follows a linearly declining function:

```math
RNP = 50 - (49 / 14) × t
```

Where:

- **RNP:** The Returned Name Premium multiplier applied to the purchased name price
- **t:** Amount of time (or time-intervals) elapsed since the start of the return window

#### RNP Registration Price

```math
Price = RNP × Lease~or~Permabuy~Registration~Price
```

#### Permanent Name Return Proceeds Split

- **50%** goes to the returning name owner
- **50%** goes to the protocol balance

The RNP multiplier is applied to the registration price of both permanently purchased and leased names.

## Gateway Operator ArNS Discount

Gateway operators who demonstrate consistent, healthy participation in the network are eligible for a **20% discount** on certain ArNS interactions.

### Qualification Requirements

To qualify for the discount:

- The gateway must maintain a "Gateway Performance Ratio Weight" (GPRW) of **0.9 or higher**
- The gateway must have a "Tenure Weight" (TW) of **1.0 or greater**
- A gateway marked as "Leaving" shall not be eligible for this discount

### Eligible Discounted Interactions

- Purchasing a name
- Extending a lease
- Upgrading a lease to permabuy
- Increasing undername capacity

## Next Steps

Congratulations! You now understand the complete ArNS pricing system. Ready to get started?

}> See current ArNS pricing in real-time with the live pricing chart. }> Visit ArNS.app to register your first name and explore the pricing in action. }> Learn about AR.IO gateways and how they integrate with ArNS. }> Start building applications that leverage ArNS for decentralized naming.
# Architecture (/learn/gateways/architecture)

AR.IO gateways are sophisticated data access layers built on top of the Arweave network. They transform the raw Arweave blockchain into a performant, reliable, and developer-friendly platform for storing and retrieving data. Gateways act as bridges between applications and the permanent storage capabilities of Arweave.

```mermaid
graph TB
    subgraph Gateway ["AR.IO Gateway"]
        ENVOY[Envoy Proxy<br/>Load Balancer & Routing]
        API[Core Service<br/>Gateway API]

        subgraph "Data Layer"
            DB1[(Chain Index<br/>SQLite)]
            DB2[(Bundle Index<br/>SQLite)]
            DB3[(Data Index<br/>SQLite)]
            DB4[(Config & Metadata<br/>SQLite)]
            REDIS[(Redis Cache<br/>High-Speed Layer)]
        end

        subgraph "Storage Layer"
            FS[File System Storage<br/>Local Cache]
        end
    end

    subgraph External ["External Network"]
        MEMPOOLS[Mempools<br/>Transaction Pool]
        ARWEAVE[(Arweave Nodes<br/>Blockchain Network)]
        MEMPOOLS --> ARWEAVE
    end

    ENVOY --> API
    API --> DB1
    API --> DB2
    API --> DB3
    API --> DB4
    API --> REDIS
    API --> FS
    API --> ARWEAVE
    API --> MEMPOOLS

    classDef database fill:#3b82f6,stroke:#1d4ed8,stroke-width:2px,color:#fff
    classDef service fill:#10b981,stroke:#059669,stroke-width:2px,color:#fff
    classDef proxy fill:#f59e0b,stroke:#d97706,stroke-width:2px,color:#fff
    classDef external fill:#8b5cf6,stroke:#7c3aed,stroke-width:2px,color:#fff
    classDef storage fill:#ef4444,stroke:#dc2626,stroke-width:2px,color:#fff

    class DB1,DB2,DB3,DB4,REDIS database
    class API service
    class ENVOY proxy
    class ARWEAVE,MEMPOOLS external
    class FS storage
```

## Core Technology Stack

AR.IO gateways are built using modern, scalable technologies designed for high-performance data operations:

### Runtime and Language

- **Node.js**: The primary runtime environment for all gateway services
- **TypeScript**: Core services written with flexible interfaces for customization
- **Event-driven architecture**: Enables efficient handling of concurrent operations

### Data Storage

- **SQLite**: Four specialized databases handle different aspects of gateway operations:
  - Chain data indexing
  - Bundle transaction processing
  - Data item management
  - Configuration and metadata
- **Redis**: High-speed caching layer for frequently accessed data
- **File system storage**: Local caching for frequently accessed data

### Processing Model

- **Worker-based concurrency**: Specialized workers handle different background tasks
- **Event-driven processing**: Loosely coupled components communicate via events
- **Streaming data handling**: Minimizes memory overhead for large data operations

## Key Architectural Decisions

Several important design decisions shape how AR.IO gateways operate:

### Data Retrieval Strategy

AR.IO gateways use a sophisticated **hierarchical fallback system** for data retrieval:

1. **Trusted gateways**: Prioritize data from verified, high-performance peers
2. **AR.IO network**: Leverage the broader network of AR.IO gateways
3. **Chunk data**: Reconstruct data from individual chunks when needed
4. **Transaction data**: Fall back to raw Arweave transaction data

This approach ensures data availability while optimizing for speed and reliability.
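The fallback order above can be sketched as a simple loop over prioritized sources. Types and names here are illustrative, not the gateway's actual TypeScript interfaces:

```typescript
// Sketch of hierarchical fallback: try each source in order and return
// the first result that passes verification (names are illustrative).
type DataSource = (id: string) => Promise<Uint8Array | null>;

async function retrieveData(
  id: string,
  sources: DataSource[], // e.g. [trustedGateways, arioNetwork, chunks, txData]
  verify: (data: Uint8Array) => Promise<boolean>,
): Promise<Uint8Array> {
  for (const source of sources) {
    try {
      const data = await source(id);
      if (data && (await verify(data))) return data; // first verified hit wins
    } catch {
      // fall through to the next source on any failure
    }
  }
  throw new Error(`data ${id} unavailable from all sources`);
}
```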
### Verification and Trust Model - **Multi-level cryptographic verification**: Data integrity is verified at multiple points - **Trust hierarchy**: Cached verified data → trusted cached data → network streams - **Self-healing mechanisms**: Automatic recovery and re-verification of corrupted data - **Verification headers**: HTTP headers indicate the verification status of returned data ### Worker Specialization Different background workers handle specific responsibilities: - **Block synchronization workers**: Keep the gateway synchronized with Arweave blocks - **Bundle processing workers**: Handle Layer 2 bundled data items (ANS-104) - **Data verification workers**: Continuously verify stored data integrity - **Maintenance workers**: Perform cleanup and optimization tasks ## Scalability and Configuration AR.IO gateways are designed to scale from small personal deployments to large enterprise installations: ### Modular Architecture Gateway services can be independently configured or disabled based on operator needs: - **Data serving**: Serve cached data to applications - **Data indexing**: Index and process new Arweave data - **Bundle processing**: Handle Layer 2 bundled transactions - **ArNS routing**: Provide Arweave Name System resolution ## Core Philosophy: Builder Independence A fundamental principle of AR.IO gateway architecture is **empowering builders to do the things they care about** without relying on any centralized resource to leverage Arweave. This philosophy manifests in several key ways: ### Extensibility Through Modularity Gateways are designed as extensible platforms that operators can customize through **[Extensions](/build/extensions/)**, sidecar services, and plugin architectures for specialized functionality. ### Data Sovereignty Operators maintain complete control through **[Data Retrieval](/learn/gateways/data-retrieval)** strategies and **[Data Verification](/learn/gateways/data-verification)** systems that ensure independence from trusted intermediaries. ### Network Resilience The modular design creates a resilient ecosystem where distributed infrastructure and customizable trust models prevent single points of failure. This architecture ensures that builders can create powerful applications on Arweave while maintaining independence from any centralized infrastructure or service provider. ## Explore Gateway Capabilities } /> } /> } /> } /> # Data Retrieval (/learn/gateways/data-retrieval) AR.IO gateways use a sophisticated multi-tier architecture to retrieve and serve Arweave data. This system ensures high availability, fast response times, and data integrity by leveraging multiple data sources with automatic fallback mechanisms. 
## How Gateways Retrieve Data

When a gateway needs to serve data, it follows a hierarchical retrieval pattern, trying each source in order until the data is successfully retrieved:

```mermaid
graph TD
    REQUEST[Data Request] --> CACHE{Local Cache?}
    CACHE -->|Hit| SERVE[Serve Data]
    CACHE -->|Miss| SOURCES[Try Data Sources]

    SOURCES --> TRUSTED[Trusted Gateways]
    SOURCES --> NETWORK[AR.IO Network]
    SOURCES --> CHUNKS[Chunk Assembly]
    SOURCES --> ARWEAVE[Arweave Nodes]

    TRUSTED -->|Success| VALIDATE
    NETWORK -->|Success| VALIDATE
    CHUNKS -->|Success| VALIDATE
    ARWEAVE -->|Success| VALIDATE

    TRUSTED -->|Fail| NETWORK
    NETWORK -->|Fail| CHUNKS
    CHUNKS -->|Fail| ARWEAVE

    VALIDATE{Valid?} -->|Yes| STORE[Cache & Serve]
    VALIDATE -->|No| NEXT[Try Next Source]

    classDef source fill:#2563eb,stroke:#1d4ed8,stroke-width:2px,color:#fff
    classDef process fill:#16a34a,stroke:#15803d,stroke-width:2px,color:#fff

    class TRUSTED,NETWORK,CHUNKS,ARWEAVE source
    class VALIDATE,SERVE,STORE process
```

## Data Sources

AR.IO gateways can retrieve data from multiple sources, each with different characteristics:

### 1. Trusted Gateways

- **Purpose**: Peer-to-peer data sharing between verified AR.IO gateways
- **Benefits**: Distributed redundancy, load balancing, network resilience
- **Trust Mechanism**: Performance-based trust scores and reciprocity monitoring
- **Selection**: Prioritized based on established trust relationships

### 2. AR.IO Network (Untrusted Peers)

- **Purpose**: Broader network of AR.IO gateways without established trust
- **Benefits**: Geographic distribution, expanded data availability
- **Selection**: Weighted random selection based on performance metrics
- **Validation**: Enhanced verification required due to untrusted nature

### 3. Chunk Assembly

- **Purpose**: Direct reconstruction from Arweave chunks via known offsets
- **Benefits**: Data integrity guarantee, no intermediary trust required
- **Process**: Fetches individual chunks efficiently and assembles them into complete data
- **Optimization**: Uses offset awareness for faster chunk retrieval

### 4. TX Data

- **Purpose**: Direct access to transaction data from Arweave nodes
- **Benefits**: Authoritative data source, complete historical access
- **Trade-off**: Higher latency but guaranteed availability
- **Use Case**: Final fallback when other sources fail

## Retrieval Strategies

Gateways employ different strategies based on the use case:

### On-Demand Retrieval

Optimized for user requests with emphasis on speed:

1. **Priority order**: Trusted Gateways → Untrusted Peers (AR.IO Network) → Chunk Assembly → Arweave
2. **Aggressive timeouts**: Quick fallback to next source
3. **Parallel attempts**: May query multiple sources simultaneously
4. **Response streaming**: Begin serving data as soon as available

### Background Retrieval

Used specifically for unbundling and verification processes:

1. **Unbundling operations**: Extracting individual data items from ANS-104 bundles
2. **Data verification**: Comprehensive validation of retrieved data integrity
3. **Integrity focus**: Prefers authoritative sources for accurate processing
4. **Relaxed timeouts**: Allows for slower but reliable retrieval during verification
5.
**Verification priority**: Extensive validation before caching verified data ## Trust and Validation ### Peer Trust Management Gateways maintain sophisticated trust relationships: ```mermaid graph TD PEER[Peer Gateway] --> METRICS[Performance Metrics] METRICS --> LATENCY[Response Time] METRICS --> SUCCESS[Success Rate] METRICS --> VALIDITY[Data Validity] LATENCY --> SCORE[Trust Score] SUCCESS --> SCORE VALIDITY --> SCORE SCORE --> SELECTION{Peer Selection} SELECTION -->|High Trust| PREFER[Preferred] SELECTION -->|Medium Trust| NORMAL[Normal] SELECTION -->|Low Trust| AVOID[Avoided] classDef metric fill:#7c3aed,stroke:#6d28d9,stroke-width:2px,color:#fff class LATENCY,SUCCESS,VALIDITY metric ``` Trust factors include: - **Response performance**: Latency and throughput metrics - **Success rates**: Percentage of successful requests - **Data validity**: Cryptographic verification results - **Reciprocity**: Mutual data sharing behavior ### Data Validation Process Every piece of retrieved data undergoes validation: 1. **Hash Verification**: Computed hash must match expected value 2. **Merkle Proof Validation**: Chunks proven against transaction root 3. **Signature Verification**: Transaction signatures validated 4. **Size Confirmation**: Data size matches header declaration ## Why Multi-Source Retrieval Matters ### For Gateway Operators - **Reduced infrastructure costs**: Leverage peer resources - **Improved reliability**: Multiple fallback options - **Better performance**: Optimal source selection - **Network effects**: Benefit from collective infrastructure ### For Users - **Faster access**: Data served from optimal source - **High availability**: Multiple paths to data - **Geographic optimization**: Nearby sources preferred - **Consistent experience**: Transparent source selection --- The data retrieval system is fundamental to AR.IO's mission of providing reliable, performant access to the permaweb. This sophisticated architecture ensures that Arweave's permanent data remains accessible through a resilient, distributed gateway network. ## Related Gateway Concepts } /> } /> } /> } /> # Data Verification (/learn/gateways/data-verification) AR.IO gateways continuously verify that data chunks are correctly stored and retrievable from Arweave. This ensures users receive authentic, uncorrupted data with cryptographic proof of integrity. The verification system is what makes AR.IO gateways trustworthy data providers for the permaweb. ## How Gateways Verify Data Data verification is an ongoing process that uses Merkle tree cryptography to provide mathematical proof of data integrity. 
The process involves multiple specialized components working together to ensure cached data matches what's stored on Arweave: ```mermaid sequenceDiagram participant Scheduler participant Worker as DataVerificationWorker participant ContigIndex as ContiguousDataIndex participant RootTxIndex as DataItemRootTxIndex participant DataRootComp as DataRootComputer participant DataSource as ContiguousDataSource participant Importer as DataImporter participant Bundler as BundleQueue Note over Scheduler,Bundler: Data Discovery Phase Scheduler->>Worker: Triggers queueRootTx() periodically Worker->>ContigIndex: getVerifiableDataIds() ContigIndex-->>Worker: Returns list of data IDs loop For each dataId Worker->>RootTxIndex: getRootTxId(dataId) RootTxIndex-->>Worker: Returns rootTxId Worker->>Worker: Enqueue rootTxId if not processed end Note over Scheduler,Bundler: Verification Phase Worker->>ContigIndex: getDataAttributes(rootTxId) ContigIndex-->>Worker: Returns attributes (indexedDataRoot, hash) alt indexedDataRoot is present Worker->>DataRootComp: computeDataRoot(rootTxId) DataRootComp->>DataSource: getData(rootTxId) DataSource-->>DataRootComp: Returns data stream DataRootComp-->>Worker: Returns computedDataRoot alt computedDataRoot matches indexedDataRoot Worker->>ContigIndex: saveVerificationStatus(rootTxId) ContigIndex-->>Worker: ✓ Verification Success Note over Worker: Data is now verified and cached for serving else computedDataRoot does NOT match Worker->>Importer: queueItem({id: rootTxId}, priority=true) Importer-->>Worker: Queued for re-import from Arweave end else indexedDataRoot is MISSING Worker->>Bundler: queueBundle({id: rootTxId}) Bundler-->>Worker: Queued for bundle unbundling end ``` **The Verification Workflow:** Gateways achieve verification through a systematic five-phase process orchestrated by the DataVerificationWorker. This process ensures that every piece of cached data cryptographically matches its original form on Arweave, providing mathematical proof of integrity before serving data to users. **1. Discovery Phase** - Periodically scan for unverified data items - Priority-based queue management (higher priority items first) - Track retry attempts for failed verifications **2. Data Retrieval** - Fetch data attributes from gateway storage - Retrieve the complete data stream - Gather metadata needed for verification **3. Cryptographic Computation** - Calculate Merkle data root from actual data stream - Generate cryptographic proofs using the same algorithm as Arweave - Create verifiable hash chains **4. Root Comparison** - Compare computed root against indexed root in database - Verify data hasn't been corrupted or altered - Validate chunk integrity against Merkle proofs **5. 
Action Based on Results** - **Success**: Mark data as verified with timestamp - **Failure**: Trigger re-import from Arweave or unbundle from parent - **Error**: Increment retry counter and requeue for later ## Verification Types AR.IO gateways handle different types of data verification based on the data's origin: ### Transaction Data Verification For individual Arweave transactions: - **Direct root validation** against transaction data roots stored on-chain - **Complete data reconstruction** from chunks to ensure availability - **Cryptographic proof** that data matches what was originally stored ### Bundle Data Verification For ANS-104 data bundles (collections of data items): - **Bundle integrity checks** to verify the container is valid - **Individual item verification** within each bundle - **Recursive unbundling** when verification fails to re-extract items - **Nested bundle support** for bundles containing other bundles ### Chunk-Level Validation At the most granular level: - **Merkle proof validation** for individual data chunks - **Sequential integrity** ensuring chunks form complete data - **Parallel verification** of multiple chunks for performance ## Why Verification Matters ### Cryptographic Trust Foundation - **Mathematical Proof**: Merkle tree cryptography provides irrefutable proof of data integrity - **Independent Validation**: Multiple gateways verify the same data independently - **Network Consensus**: Distributed verification creates trust without central authority ### Data Integrity Guarantees - **Tamper Detection**: Any alteration to data is immediately detectable - **Corruption Recovery**: Automatic healing of corrupted data through re-import - **Permanent Storage Validation**: Ensures Arweave's permanence promise is maintained ### Gateway Reliability - **Continuous Monitoring**: Ongoing verification catches issues before users encounter them - **Self-Healing System**: Automatic recovery mechanisms maintain data availability - **Transparent Operations**: Verification status and timestamps provide audit trails ## Explore Gateway Systems } /> } /> } /> } /> # Gateway Registry (/learn/gateways/gateway-registry) ## Overview The AR.IO Network consists of [AR.IO gateway nodes](/learn/what-is-ario), which are identified by their registered Arweave wallet addresses and either their IP addresses or hostnames, as stored in the network's [smart contract](/learn/token) Gateway Address Registry (GAR). Any gateway operator that wishes to join the AR.IO Network must register their node in the AR.IO smart contract's Gateway Address Registry. Registration involves staking a minimum amount of ARIO tokens and providing additional metadata describing the gateway service offered. These nodes adhere to the AR.IO Network's protocols, creating a collaborative environment of gateway nodes that vary in scale and specialization. The network promotes a fundamental level of service quality and trust minimization among its participants. 
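As an illustration of reading the GAR, a hedged sketch using the `@ar.io/sdk` (method and result shapes are assumed from the SDK's documented interface; verify against the current SDK docs):

```typescript
import { ARIO } from "@ar.io/sdk";

// Read-only access to the GAR needs no signer.
const ario = ARIO.init();

// Paginated result shape is assumed here; see the SDK docs for specifics.
const { items: gateways } = await ario.getGateways();

// Rank by operator stake, as an app might when choosing a gateway.
const topGateways = gateways
  .sort((a, b) => b.operatorStake - a.operatorStake)
  .slice(0, 5)
  .map((g) => g.settings.fqdn);

console.log(topGateways);
```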
The gateways.ar.io portal displays all gateways currently in the network, showing their stakes, performance scores, and operational metrics.

### Benefits of Joining the Network

Being part of the network grants AR.IO gateways an array of advantages:

- Simplified advertising of services and discovery by end users via the Gateway Address Registry
- More rapid bootstrapping of key gateway operational data due to prioritized data request fulfillment among gateways joined to the network
- Sharing of data processing results
- Auditability and transparency through the use of the AGPL-3.0 license, which mandates public disclosure of any software changes, thereby reinforcing the network's integrity and reliability
- Improved network reliability and performance through an incentive protocol, which uses a system of evaluations and rewards to encourage high-quality service from gateways
- Eligibility to accept delegated staking, improving a gateway's discoverability and reward opportunities
- **Eligibility to receive distributions from the protocol balance** - Gateways that have joined the network are eligible to receive token distributions based on their performance and contributions to the network

### How the GAR Works

After joining the network, the operator's gateway can be easily discovered by permaweb apps, its health can be observed, and it can participate in data sharing protocols. A gateway becomes eligible to participate in the network's incentive protocol in the epoch following the one in which it joined.

The GAR advertises the specific attributes of each gateway, including its stake, delegates, settings, and services. This enables permaweb apps and users to discover which gateways are currently available and meet their needs. Apps that read the GAR can sort and filter it using the gateway metadata, for example, ranking gateways with the highest stake, reward performance, or feature set at the top of the list. This allows users to prefer higher staked, more rewarded gateways with certain capabilities over lower staked, less rewarded gateways.

## Token Incentives and Network Monitoring

The AR.IO network uses a sophisticated incentive system to ensure gateway quality and reliability:

- **Token Incentives**: Learn more about how gateways earn rewards and participate in the network economy in the [Token section](/learn/token/)
- **Observer Protocol**: The network employs an Observer system that monitors gateway performance and ensures quality of service. Learn more about the [Observer & Incentive Protocol](/learn/oip) and how it maintains network integrity

## Recap

The Gateway Registry is the foundation of the AR.IO network's decentralized infrastructure.
Key takeaways: - **Network Participation**: Gateways must register and stake ARIO tokens to join the network - **Protocol Distributions**: Registered gateways are eligible to receive token distributions from the protocol balance - **Observer Monitoring**: The network employs an [Observer and Incentives Protocol](/learn/oip) that monitors gateway performance and ensures quality of service - **Staking & Rewards**: Gateways earn rewards based on performance through a sophisticated [staking system](/learn/token/staking) that includes delegation opportunities - **Discoverability**: The GAR enables apps and users to find suitable gateways based on their needs - **Performance-Based Selection**: Gateway metadata allows for intelligent routing based on stake, performance, and capabilities - **Transparent Ecosystem**: All gateway information is publicly accessible through the smart contract and at gateways.ar.io By joining the network, gateways become part of a collaborative ecosystem that rewards quality service and ensures reliable access to the permaweb. ## Explore the Gateway Ecosystem } /> } /> } /> } /> # AR.IO Gateways (/learn/gateways) ## What Are AR.IO Gateways? AR.IO gateways are specialized infrastructure nodes that serve as bridges between the Arweave network and applications. They transform raw Arweave blockchain data into a fast, reliable, and developer-friendly platform for storing and retrieving permanent data. ## Core Responsibilities AR.IO gateways handle three fundamental responsibilities: ### Data Writing & Proxying - **Transaction relay**: Forward transaction headers to Arweave miners for mempool inclusion - **Chunk distribution**: Proxy data chunks to Arweave nodes for storage and replication - **Bundle processing**: Receive and bundle ANS-104 data items into base layer transactions ### Data Retrieval & Serving - **Fast access**: Serve cached data with optimized performance and reliability - **Multi-source fallback**: Retrieve data from trusted gateways, network peers, or directly from Arweave - **Content delivery**: Stream complete transactions, individual chunks, or bundled data items ### Data Discovery & Indexing - **Structured queries**: Enable efficient searches across transactions, bundles, and wallet data - **Real-time indexing**: Process incoming data streams and maintain searchable databases - **ArNS routing**: Provide human-readable name resolution for Arweave content ## Key Features ### Modular Architecture Gateways are built with interchangeable components that operators can customize: - **Configurable services**: Enable or disable features based on specific needs - **Scalable storage**: From SQLite for small deployments to cloud databases for enterprise scale - **Flexible infrastructure**: Adaptable to different operational environments and requirements ### Network Connectivity - **Decentralized network**: Connect to other AR.IO gateways for data sharing and redundancy - **Trust-minimized access**: Cryptographically verify data integrity without relying on central authorities - **Performance optimization**: Intelligent caching and content delivery strategies ### Developer Experience - **HTTP APIs**: Standard web interfaces for all gateway functionality - **Monitoring & telemetry**: Built-in observability for operational insights - **Content moderation**: Configurable policies for community and compliance needs ## What Gateways Are Not It's important to understand the boundaries of what AR.IO gateways do and don't provide: ### Not Storage Providers - **Don't enforce 
Arweave protocol**: Gateways don't validate consensus or mining rules - **Don't guarantee permanence**: Storage permanence comes from Arweave itself, not gateways - **Don't replicate all data**: Gateways cache popular content but aren't full blockchain replicas ### Not Compute Platforms - **Don't depend on AO**: Gateways operate independently of any compute layer - **Don't execute smart contracts**: Computation happens on AO or other platforms, not gateways - **Don't process application logic**: Gateways focus purely on data access and delivery ### Not Centralized Services - **Don't control data**: Content ownership and control remain with original creators - **Don't gate access**: Anyone can run a gateway and access Arweave data - **Don't create vendor lock-in**: Gateway APIs and protocols are open and interoperable ## Explore Gateways } /> } /> } /> } /> # Observation & Incentive Protocol (/learn/oip) ## Overview The Observation and Incentive Protocol ensures network quality through peer monitoring and performance-based rewards. Gateways are incentivized to maintain high performance while also serving as "observers" that evaluate their peers' ArNS resolution capabilities and data integrity verification. The protocol operates on **24-hour epochs** where up to 50 gateways are selected as observers to test other gateways against ArNS name resolution criteria and chunk/offset validation. This creates a self-regulating ecosystem with transparent, consensus-based performance evaluation. ## Architecture Overview The Observer Protocol operates through a systematic process where selected gateways monitor their peers and report findings to maintain network quality: ```mermaid sequenceDiagram participant SC as AR.IO Smart Contract participant OBS as Observer Gateway participant GW as Target Gateway participant AR as Arweave Network Note over SC,AR: Epoch Start SC->>SC: Select up to 50 observers(weighted random) SC->>SC: Generate 2 prescribed ArNS names SC->>OBS: Notify selection & provide names Note over SC,AR: Observation Phase OBS->>OBS: Choose 8 additional ArNS names OBS->>OBS: Select subset for chunk validation loop For each gateway to test OBS->>GW: Test ArNS resolution (10 names) GW-->>OBS: Response data OBS->>OBS: Score: Pass/Fail end loop For selected gateways OBS->>GW: Test chunk/offset validation GW-->>OBS: Chunk data + Merkle proof OBS->>OBS: Verify cryptographic proof end Note over SC,AR: Reporting Phase OBS->>AR: Upload detailed JSON report AR-->>OBS: Confirm storage OBS->>SC: Submit failed gateways list SC-->>OBS: Confirm interaction Note over SC,AR: Evaluation & Rewards SC->>SC: Tally all observer votes SC->>SC: Determine gateway status(≥50% pass = functional) SC->>SC: Calculate reward distribution SC->>OBS: Distribute observer rewards SC->>GW: Distribute gateway rewards(if functional) ``` ## Epoch Cycle and Responsibilities Each 24-hour epoch follows a structured process with specific responsibilities for gateways and observers: ### Epoch Start - **Smart Contract**: Selects up to 50 observers using weighted random selection - **Smart Contract**: Generates 2 prescribed ArNS names for all observers to test - **Selected Observers**: Receive notification of selection and prescribed names ### Observation Phase - **Observers**: Choose 8 additional ArNS names to test (total of 10 names per gateway) - **Observers**: Select subset of gateways for chunk/offset validation based on sampling rate - **Observers**: Test assigned gateways for ArNS resolution, wallet ownership, content hashes, and 
response times - **Observers**: Validate chunk/offset data integrity using cryptographic Merkle proofs - **Target Gateways**: Respond to resolution requests, serve content, and provide chunk data with proofs ### Reporting Phase - **Observers**: Upload detailed JSON reports to Arweave for transparency - **Observers**: Submit failed gateway lists to the AR.IO Smart Contract for consensus voting ### Evaluation and Distribution - **Smart Contract**: Tallies all observer votes (≥50% pass = functional gateway) - **Smart Contract**: Distributes rewards at epoch end based on performance - **Functional Gateways/Observers**: Receive ARIO token rewards automatically ## Key Features - **Decentralized Monitoring**: Peer-to-peer evaluation ensures no single point of failure - **Consensus-Based Scoring**: Majority rule (≥50% pass votes) determines gateway functionality - **Performance Incentives**: Only functional gateways and observers receive ARIO token rewards - **Data Integrity Validation**: Cryptographic verification of chunk/offset data using Merkle proofs - **Transparent Accountability**: All reports permanently stored on Arweave and viewable at [gateways.ar.io](https://gateways.ar.io) - **Sustainable Funding**: Protocol balance funded by ArNS name purchases, aligning rewards with network usage ## Chunk/Offset Validation The protocol includes advanced data integrity verification through chunk/offset observation. Observers validate that gateways can correctly serve and verify Arweave data chunks using cryptographic proofs: ### Validation Process - **Sampling**: A subset of gateways is selected for chunk validation each epoch - **Offset Testing**: Random offsets within the stable weave range are tested - **Merkle Proof Verification**: Cryptographic validation ensures chunk authenticity - **Binary Search Optimization**: Efficient transaction lookup using cached metadata ### Technical Implementation - **Chunk Retrieval**: `GET /chunk/{offset}` returns chunk data and Merkle proof - **Proof Validation**: Uses Arweave's `validatePath()` function for cryptographic verification - **Performance Optimization**: LRU caching for blocks, transactions, and metadata - **Early Stopping**: Tests stop immediately upon first successful validation --- **View Live Data**: See current observers and performance metrics at [gateways.ar.io](https://gateways.ar.io) ## Explore the Protocol } /> } /> } /> } /> # Observer Selection (/learn/oip/observer-selection) ## Epochs and Selection Timeline The AR.IO network operates on **24-hour epochs**, during which the observer selection and evaluation process takes place. At the start of each epoch: - **50 observers** are selected to monitor the network - **2 prescribed ArNS names** are chosen for all observers to test - **8 additional names** are selected by each observer individually - **Gateway subset** is selected for chunk/offset validation based on sampling rate This creates a consistent evaluation framework where all observers test the same baseline names while having flexibility to choose additional targets for comprehensive network monitoring, plus advanced data integrity verification. ## Selection Process Up to **fifty (50)** gateways are selected as observers per epoch using a sophisticated weighted random selection system. The selection uses **hashchain entropy** from previous AR.IO contract state messages to ensure unpredictable and tamper-resistant selection. 
The hashchain-based entropy provides cryptographic randomness for selecting:

- **Observer Gateways**: The 50 gateways chosen to perform observations
- **Prescribed ArNS Names**: The 2 common names all observers must evaluate

This approach prevents manipulation while maintaining weighted probabilities based on gateway performance and commitment.

![Current epoch observers showing their observation chance (normalized composite weight) and report status](/observers.png)

gateways.ar.io/#/observers shows the current epoch's prescribed observers and ArNS names, as well as their report submission status.

## Weighted Selection Criteria

Observer selection is based on **normalized composite weights** that combine multiple performance and commitment factors. These weights determine each gateway's probability of being selected as an observer for the epoch.

The selection considers four key factors that are multiplied together to create a composite weight (CW):

- **Stake Weight (SW)**: Financial commitment to the network
- **Tenure Weight (TW)**: Length of network participation
- **Gateway Performance Ratio Weight (GPRW)**: Historical gateway performance
- **Observer Performance Ratio Weight (OPRW)**: Historical observer performance

These weights are then normalized across all eligible gateways to create selection probabilities. For detailed weight calculations and formulas, see [Performance Evaluation](/learn/oip/performance-evaluation).

## Hashchain Random Selection

The selection process uses **hashchain entropy** from previous AR.IO contract state messages to achieve cryptographically secure randomness:

### How Hashchain Selection Works

1. **Entropy Source**: Random numbers are generated from the hashchain of previous contract state messages
2. **Weighted Mapping**: Each random number maps to the normalized weight ranges of eligible gateways
3. **Observer Selection**: The gateway whose weight range contains each random number is selected
4. **Prescribed Names**: The same entropy selects the 2 ArNS names that all observers must test

This creates tamper-resistant selection where higher-weighted gateways have proportionally better chances of selection, while maintaining true randomness that cannot be predicted or manipulated.

## Chunk/Offset Sampling

In addition to observer selection, the protocol includes a separate sampling process for chunk/offset validation:

### Gateway Selection for Chunk Validation

- **Deterministic Selection**: Uses a PRNG seeded with observation entropy to select the gateway subset (see the sketch below)
- **Sampling Rate**: Configurable percentage of gateways tested per observation (default: 1%)
- **Minimum Guarantee**: At least 1 gateway is always selected for testing
- **Offset Selection**: Random offsets within the stable weave range are chosen for each selected gateway

**Initial Implementation**: During the initial rollout phase, only a very small portion of gateways will be checked for chunk/offset validation, and the current validation criteria are extremely lenient to ensure smooth network operation.
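The deterministic sampling above can be sketched as a seeded PRNG draw, so that any party holding the same entropy derives the same subset. The PRNG choice and names here are illustrative:

```typescript
// Sketch of deterministic chunk-validation sampling (illustrative only).
function sampleGatewaysForChunkChecks(
  gateways: string[],
  seed: number,        // derived from the epoch's observation entropy
  samplingRate = 0.01, // default sampling rate of 1%
): string[] {
  // Simple seeded LCG so every observer derives the same subset.
  let state = seed >>> 0;
  const rand = () => ((state = (state * 1664525 + 1013904223) >>> 0) / 2 ** 32);

  const count = Math.max(1, Math.floor(gateways.length * samplingRate)); // min 1
  const pool = [...gateways];
  const selected: string[] = [];
  while (selected.length < count && pool.length > 0) {
    selected.push(pool.splice(Math.floor(rand() * pool.length), 1)[0]);
  }
  return selected;
}
```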
### Validation Process

- **Chunk Retrieval**: Observers request chunk data using `GET /chunk/{offset}`
- **Merkle Proof Verification**: Cryptographic validation ensures data integrity
- **Early Stopping**: Tests stop immediately upon the first successful validation
- **Performance Optimization**: Uses LRU caching for efficient transaction lookup

## Fairness and Meritocracy

This system ensures:

- **Meritocratic Selection**: Higher-performing gateways have better selection odds
- **Fair Opportunity**: All gateways maintain a non-zero selection probability
- **Tamper Resistance**: Hashchain entropy prevents manipulation
- **Consistent Standards**: Prescribed names create a common evaluation baseline

The selection is saved in the contract state at epoch start to ensure that activities during the epoch do not affect selection or reward distribution.

---

## Next Steps

Ready to understand how performance is evaluated? Learn about [Performance Evaluation](/learn/oip/performance-evaluation) to see how gateways are scored, or explore [Reward Distribution](/learn/oip/reward-distribution) to understand how rewards are calculated and distributed.

# Performance and Weights (/learn/oip/performance-evaluation)

## Gateway Classifications

Consider the following classifications:

- **Functional or Passed Gateways**: Gateways that meet or surpass the network's performance and quality standards, including ArNS resolution and chunk/offset validation (if selected).
- **Deficient or Failed Gateways**: Gateways that fall short of the network's performance expectations, including failures in ArNS resolution or chunk/offset validation.
- **Functional or Submitted Observers**: Selected observers who diligently perform their duties and submit observation reports and contract interactions.
- **Deficient or Failed Observers**: Selected observers who do not fulfill their duty of submitting observation reports and contract interactions.

## Evaluation Process

At the end of an epoch, the AR.IO Smart Contract processes observer submissions to determine gateway performance through a consensus-based vote tallying system. This evaluation transforms individual observer reports into network-wide performance assessments.
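At its core, the evaluation reduces to a majority rule, detailed in the subsections below. In miniature (a sketch, not the contract's actual data structures):

```typescript
// A gateway is Functional when at least half of the submitted observer
// interactions mark it as passing (sketch; contract internals differ).
function isFunctional(votes: boolean[]): boolean {
  if (votes.length === 0) return false; // no observations submitted
  const passes = votes.filter(Boolean).length;
  return passes / votes.length >= 0.5; // ≥50% pass = Functional
}

isFunctional([true, true, false]);         // true  (2/3 pass)
isFunctional([true, false, false, false]); // false (1/4 pass)
```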
### Vote Tallying and Gateway Classification After observers submit their detailed reports (see [Reporting](/learn/oip/reporting) for submission details), the smart contract performs consensus calculation: **Vote Processing:** - **Data Collection**: All observer contract interactions for each gateway are collected - **Vote Counting**: Each observer submission contributes either a PASS or FAIL vote - **Majority Determination**: If ≥50% of submitted observer interactions indicate PASS, the gateway is considered Functional - **Binary Classification**: Gateways are classified as either Functional (eligible for rewards) or Deficient (ineligible for rewards) **Consensus Mechanism:** - Multiple observers evaluate each gateway independently, ensuring reliable assessment - The 50% threshold requires majority agreement for positive performance determination - Binary scoring provides clear, unambiguous performance classification - Vote tallying occurs after the 40-minute confirmation period to ensure all interactions are finalized ## Chunk/Offset Validation Criteria For gateways selected for chunk/offset validation, additional performance criteria are evaluated: ### Validation Requirements - **Chunk Retrieval**: Gateway must successfully respond to `GET /chunk/{offset}` requests - **Data Integrity**: Chunk data must be non-empty and within reasonable size limits (<1MB) - **Merkle Proof Validation**: Cryptographic proof must decode correctly and validate against transaction data_root - **Performance Standards**: Response times must meet network expectations **Initial Implementation**: During the initial rollout phase, only a very small portion of gateways will be checked for chunk/offset validation, and the current validation criteria are extremely lenient to ensure smooth network operation. 
### Assessment Process

- **Binary Scoring**: Each offset test results in a pass/fail determination
- **Consensus Integration**: Chunk/offset results are integrated into the overall gateway assessment
- **Performance Tracking**: Individual offset assessments are tracked and reported

## Weight Impact on Gateway Performance

Gateway performance directly affects multiple weighted factors that influence future observer selection and overall network participation:

### Gateway Performance Ratio Weight (GPRW)

A gateway's evaluation results directly impact its Gateway Performance Ratio Weight, which affects its likelihood of being selected as an observer in future epochs:

```math
GPRW = \frac{1 + \text{Passed Epochs}}{1 + \text{Participated Epochs}}
```

**Impact:**

- **Functional Gateways**: Increase their passed epochs count, improving their GPRW
- **Deficient Gateways**: See their GPRW decrease as participated epochs increase without corresponding passes
- **Observer Selection**: Higher GPRW increases the chances of being selected as an observer

### Observer Performance Ratio Weight (OPRW)

For gateways selected as observers, their performance in submitting reports affects future selection:

```math
OPRW = \frac{1 + \text{Submitted Epochs}}{1 + \text{Selected Epochs}}
```

**Impact:**

- **Functional Observers**: Submitting reports increases their OPRW
- **Deficient Observers**: Failing to submit reports decreases their OPRW
- **Future Selection**: Higher OPRW improves the chances of future observer selection

### Composite Weight Calculation

All performance factors combine to determine overall network influence:

```math
CW = SW \times TW \times GPRW \times OPRW
```

Where:

- **SW** = Stake Weight (financial commitment)
- **TW** = Tenure Weight (network longevity)
- **GPRW** = Gateway Performance Ratio Weight
- **OPRW** = Observer Performance Ratio Weight

**Long-term Effects:**

- Consistently functional gateways accumulate higher composite weights
- Poor performers see diminishing influence and selection chances
- Performance history creates compounding effects on network participation

## Evaluation Timeline

Rewards are distributed **at the end of each epoch** directly by the AR.IO Smart Contract, based on the tallied observer votes. The smart contract processes all observer submissions and automatically distributes rewards to functional gateways and observers based on their performance during the epoch.

## Key Features

- **Majority Rule**: Gateway performance is determined by a majority vote from observers
- **Binary Scoring**: Simple pass/fail system for clear performance assessment
- **Network Confirmation**: A delay ensures all votes are confirmed before evaluation
- **Transparent Process**: All evaluations are based on onchain data

## Consequences of Performance

### Functional Gateways

- Eligible for gateway rewards
- Maintain good standing in the network
- Continue to be considered for observer selection

### Deficient Gateways

- Ineligible for gateway rewards
- Risk being marked as deficient for multiple epochs
- May face additional penalties for prolonged poor performance

### Observer Performance

- Functional observers receive observer rewards
- Deficient observers forfeit observer rewards
- Deficient observers who are also functional gateways have their gateway reward reduced by 25%

---

## Next Steps

Ready to understand how rewards are distributed?
Learn about [Reward Distribution](/learn/oip/reward-distribution) to see the formulas and mechanics, or go back to [Observer Selection](/learn/oip/observer-selection) to review the selection process. # Reporting (/learn/oip/reporting) ## Observer Responsibilities Selected observers have specific duties each epoch: test gateways, document results, and submit findings through two channels. Proper completion of these responsibilities determines observer rewards and future selection chances. ## Dual Submission Process Observers must submit their findings through both channels to fulfill their duties: ### 1. Detailed Reports to Arweave - **Format**: Comprehensive JSON reports with full evaluation data - **Purpose**: Permanent audit trail and transparency - **Content**: Complete test results, timing data, and failure details ### 2. Contract Interactions to AR.IO Smart Contract - **Format**: List of failed gateways - **Purpose**: Efficient vote tallying for consensus - **Content**: Binary pass/fail determinations for each gateway tested ## Observer Evaluations Observers test assigned gateways against 10 ArNS names (2 prescribed + 8 chosen) and document their findings: ![Observer Report Overview showing multiple gateway evaluations](/observer-report.png) ### Evaluation Results Passing Report: Gateway successfully resolves ArNS names with correct status codes (200), transaction IDs, and data hashes. Failing Report: Gateway fails ArNS resolution tests due to ownership issues, timeouts (5000ms), or missing content. Observers evaluate gateways based on: - **Gateway Wallet Ownership**: Verifies correct wallet address - **ArNS Resolution**: Tests successful name-to-transaction resolution - **Content Hash Verification**: Ensures data integrity - **Response Times**: Measures performance within limits - **Chunk/Offset Validation**: Cryptographic verification of data chunks (for selected gateways) ## Chunk/Offset Assessment Reporting For gateways selected for chunk/offset validation, observers perform additional testing and reporting: ### Validation Process - **Offset Selection**: Random offsets within the stable weave range are chosen for testing - **Chunk Retrieval**: Observers request chunk data using `GET /chunk/{offset}` endpoint - **Merkle Proof Verification**: Cryptographic validation ensures chunk authenticity - **Binary Search**: Efficient transaction lookup using cached metadata for proof validation ### Reporting Details - **Individual Assessments**: Each offset test is tracked with pass/fail/skipped status - **Enforcement Status**: Reports include whether chunk/offset failures affect gateway status - **Performance Metrics**: Response times and validation results are documented - **Early Stopping**: Tests stop immediately upon first successful validation ### Report Structure ```json { "offsetAssessments": { "plannedOffsets": [12345, 67890, ...], "actualAssessments": [...], "validatedOffset": 12345, "pass": true, "enforcementEnabled": true } } ``` **Initial Implementation**: During the initial rollout phase, only a very small portion of gateways will be checked for chunk/offset validation, and the current validation criteria are extremely lenient to ensure smooth network operation. 
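Putting the pieces together, a single chunk/offset check might look like the following hedged sketch. The `/chunk/{offset}` response shape follows the Arweave node HTTP API, and `validatePath` is the arweave-js Merkle utility named above; its exact signature and the offset bookkeeping here are assumptions to verify:

```typescript
// Hedged sketch of one chunk/offset check (shapes and signatures assumed).
import { validatePath } from "arweave/node/lib/merkle";
import { b64UrlToBuffer } from "arweave/node/lib/utils";

async function checkChunk(
  gatewayUrl: string,   // e.g. "https://example-gateway.net" (hypothetical)
  offset: number,       // a random offset within the stable weave range
  dataRoot: Uint8Array, // data_root of the transaction containing the offset
  txStart: number,      // weave offset where that transaction's data starts
  txEnd: number,        // weave offset where that transaction's data ends
): Promise<boolean> {
  const res = await fetch(`${gatewayUrl}/chunk/${offset}`);
  if (!res.ok) return false; // gateway could not serve the chunk

  const { data_path } = await res.json();
  // Verify the chunk's Merkle proof against the transaction's data_root.
  const valid = await validatePath(
    dataRoot,
    offset - txStart, // offset relative to the transaction's data
    0,
    txEnd - txStart,
    b64UrlToBuffer(data_path),
  );
  return valid !== false;
}
```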
## Observer Rewards and Penalties

Observer performance directly impacts rewards and future participation:

### Successful Observer Performance

- **Observer Reward**: Observers who submit both reports and contract interactions receive the Observer Reward
- **Future Selection**: Successful reporting improves the Observer Performance Ratio Weight (OPRW)
- **Increased Chances**: A higher OPRW increases the likelihood of future observer selection and more reward opportunities

### Failed Observer Performance

- **No Observer Reward**: Observers who fail to submit required reports forfeit their Observer Reward
- **Gateway Penalty**: If the deficient observer is also a functional gateway, their gateway reward is **reduced by 25%**
- **Reduced Selection**: Failed submissions decrease OPRW, diminishing future observer selection chances
- **Lost Opportunities**: A lower selection probability means fewer chances to earn Observer Rewards

## Observer Accountability

The system tracks observer performance to ensure network quality:

- **Submission Tracking**: Both Arweave reports and contract interactions must be submitted
- **Performance History**: An observer's submission record affects its future selection probability
- **Reward Impact**: Consistent reporting builds credibility and increases earning potential

---

## Next Steps

Ready to understand how these reports are processed?

Learn about [Performance Evaluation](/learn/oip/performance-evaluation) to see how reports become votes and determine gateway rewards, or explore [Reward Distribution](/learn/oip/reward-distribution) to understand the complete incentive structure.

# Distributions (/learn/oip/reward-distribution)

## Protocol Balance and Funding

The AR.IO network maintains a protocol balance that funds all gateway and observer rewards. This balance is primarily funded through ArNS name purchases, ensuring sustainable network incentives aligned with usage.

## Epoch Allocation

Each epoch, a portion of the protocol balance is earmarked for distribution as rewards. This value shall begin at 0.1% per epoch for the first year of operation, then decline linearly over the following 6 months to stabilize at 0.05%.

### Funding Sources

- **ArNS Name Purchases**: The primary funding mechanism, consisting of fees from ArNS name registrations and renewals
- **Network Genesis Allocation**: Initial ARIO tokens allocated at network launch
- **Undistributed Rewards**: Rewards not claimed due to poor performance roll forward to future epochs

From this allocation, two distinct reward categories are derived:

## Base Rewards

### Base Gateway Reward (BGR)

This is the portion of the reward allocated to each Functional Gateway within the network and is calculated as:

```math
BGR = \frac{\text{Epoch Reward Allocation} \times 0.9}{\text{Total Gateways in the Network}}
```

### Base Observer Reward (BOR)

Observers, due to their additional responsibilities, have a separate reward calculated as:

```math
BOR = \frac{\text{Epoch Reward Allocation} \times 0.1}{\text{Total Selected Observers for the Epoch}}
```

## Distribution Based on Performance

The reward distribution is contingent on the performance classifications derived from the Performance Evaluation:

- **Functional Gateways**: Gateways that meet the performance criteria receive the Base Gateway Reward.
- **Deficient Gateways**: Gateways falling short in performance do not receive any gateway rewards.
- **Functional Observers**: Observers that fulfilled their duty receive the Base Observer Reward.
- **Deficient Observers**: Observers failing to meet their responsibilities do not receive observer rewards. Furthermore, if they are also Functional Gateways, their gateway reward is reduced by **25%** for that epoch as a consequence of not performing their observation duty.
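A quick worked example of the split, using an assumed epoch allocation of 1,000,000 ARIO, 300 registered gateways, and 50 selected observers (illustrative numbers, not protocol constants):

```typescript
// Illustrative numbers only; the real allocation is derived from the protocol balance.
const epochAllocation = 1_000_000; // ARIO earmarked for this epoch
const totalGateways = 300;
const selectedObservers = 50;

// 90% of the allocation is split across all gateways, 10% across observers.
const baseGatewayReward = (epochAllocation * 0.9) / totalGateways;      // 3,000 ARIO
const baseObserverReward = (epochAllocation * 0.1) / selectedObservers; // 2,000 ARIO

// A functional gateway that was selected as an observer but failed to report
// forfeits the observer reward and loses 25% of its gateway reward:
const penalizedGatewayReward = baseGatewayReward * 0.75; // 2,250 ARIO
```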
![Epoch reward distributions showing eligible vs distributed ARIO tokens](/epoch-distributions.png)

Epoch reward distributions showing the relationship between eligible rewards (total available) and distributed rewards (actually paid out) across epochs. The difference represents rewards not distributed due to gateway or observer deficiencies.

## Auto-Staking

Gateways shall be given the option to have their reward tokens "auto-staked" onto their existing stake or sent to their wallet as unlocked tokens. The default setting shall be "auto-staked".

## Distribution to Delegates

The protocol will automatically distribute a Functional Gateway's shared rewards to its delegates. The distribution will consider the gateway's total reward for the period (including observation rewards), the gateway's "Delegate Reward Share Ratio", and each delegate's stake as a proportion of the total delegated stake. Each individual delegate reward is calculated as:

```math
DR_i = \text{Total Rewards} \times \text{Reward Share Ratio} \times \frac{\text{Delegate's Stake}}{\text{Total Delegated Stake}}
```

Unlike gateway rewards, token rewards distributed to delegated stakers are always "auto-staked": they are automatically added to the delegate's existing stake associated with the rewarded gateway. The delegated staker is then free to withdraw their staked rewards at any time (subject to withdrawal delays).

## Undistributed Rewards

In cases where rewards are not distributed, whether due to the inactivity or deficiency of gateways or observers, the allocated tokens shall remain in the protocol balance and carry forward to the next epoch. This mechanism is in place to discourage observers from frivolously marking their peers as offline in hopes of attaining a higher portion of the reward pool.

Note that if a gateway (and its delegates) leaves the network, or a delegate fully withdraws stake from a gateway, they become ineligible to receive rewards for the corresponding epoch, and the earmarked rewards will not be distributed.

## Handling Deficient Gateways

To maintain network efficiency and reduce contract state bloat, gateways that are marked as deficient, and thus fail to receive rewards, for **thirty (30)** consecutive epochs will automatically trigger a "Network Leave" action and be subject to the associated stake withdrawal durations for both gateway stake and any delegated stake. In addition, the gateway shall have its **minimum network-join stake slashed by 100%**. The slashed stake shall be immediately sent to the protocol balance.

---

## Next Steps

Congratulations! You now understand the complete OIP system. Ready to learn more?

- **Explore Gateways** → [Gateway Documentation](/learn/gateways/) for technical details
- **Learn about ArNS** → [ArNS Documentation](/learn/arns/) for naming system details
- **Back to Introduction** → [OIP Introduction](/learn/oip/) to review the basics

# Add to Wander (/learn/token/add-to-wander)

## Adding ARIO Token to Wander

Wander (formerly ArConnect) is the primary wallet for the Arweave ecosystem and provides native support for AO tokens like ARIO. Follow this guide to add ARIO to your wallet and start viewing your token balance.
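If you are a developer and would rather confirm a balance programmatically than in a wallet UI, the AR.IO SDK exposes a read API for this. The sketch below assumes the `@ar.io/sdk` `getBalance` method and an mARIO-denominated return value (1 ARIO = 1,000,000 mARIO); check the SDK reference for the exact shape:

```typescript
import { ARIO } from '@ar.io/sdk';

// Read-only client against the mainnet ARIO process; no wallet needed.
const ario = ARIO.mainnet();

// Assumed API shape: getBalance returns the balance in mARIO,
// the token's smallest unit (1 ARIO = 1,000,000 mARIO).
const mario = await ario.getBalance({
  address: 'HTTn8F92tR32N8wuo-NIDkjmqPknrbl10JWo5MZ9x2k', // any wallet address
});
console.log(`Balance: ${mario / 1_000_000} ARIO`);
```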
## Prerequisites Before adding ARIO to your Wander wallet, ensure you have: - **Wander Wallet Installed**: Download from [wander.app](https://wander.app) for desktop or mobile - **Wallet Setup Complete**: Your wallet should be created and secured with a backup phrase - **Active Internet Connection**: Required for token import and balance queries ## Step-by-Step Workflow ### Open Wander Wallet Launch your Wander wallet application: - **Desktop**: Open the Wander desktop application - **Mobile**: Tap the Wander app icon on your device ### Access Settings Menu Navigate to the settings section of your wallet: **For Mobile Users:** 1. Tap the **3 vertical dots** (⋮) in the top right corner of the screen 2. Select **"Settings"** from the dropdown menu **For Desktop Users:** 1. Click the **hamburger menu icon** (☰) in the bottom right corner 2. Navigate to the settings section ### Navigate to Token Management 1. In the Settings menu, select **"Tokens"** 2. This opens the token management interface where you can view and add tokens ### Import New Token 1. Click the **"Import Token"** button 2. You'll see a form for adding new token details ### Configure Token Type (Desktop Only) **For Desktop Users:** - Ensure the **"Asset/Collectible"** dropdown is set to **"Asset"** - This tells Wander that you're adding a fungible token, not an NFT ### Enter ARIO Token Details 1. In the **Process ID** field, enter the ARIO token process ID: ``` qNvAoz0TgcH7DMg8BCVn8jF32QH5L6T29VjHxhHqqGE ``` 2. Once you enter the Process ID, Wander will automatically populate: - **Token Ticker**: "ARIO" - **Token Name**: "AR.IO Network" 3. Verify that the auto-populated information is correct ### Complete the Import 1. Click **"Add Token"** to complete the import process 2. Wander will add ARIO to your token list and begin querying your balance ### Verify Token Addition After successful import, you should see: - ARIO listed in your wallet's token section - Your current ARIO balance (if you hold any tokens) - The ARIO token logo and ticker ## Viewing Your ARIO Balance Once ARIO is added to your Wander wallet: ### Main Wallet View - Your total ARIO balance appears alongside other tokens - Balances update automatically when you receive or send tokens - Tap/click on ARIO to view detailed transaction history ### Token Details - **Balance**: Current ARIO token holdings - **Value**: Estimated value (if price data is available) - **Transactions**: Recent ARIO transaction history - **Actions**: Send, receive, and manage tokens ## Managing ARIO Tokens ### Sending ARIO 1. Select ARIO from your token list 2. Click **"Send"** 3. Enter recipient address and amount 4. Confirm transaction details and send ### Receiving ARIO 1. Select ARIO from your token list 2. Click **"Receive"** 3. Share your wallet address or QR code 4. 
Incoming tokens will appear automatically ### Transaction History - View all ARIO transactions in the token detail view - Check transaction status and confirmations - Access transaction IDs for verification ## Troubleshooting ### Token Not Appearing If ARIO doesn't appear after import: - **Refresh**: Try refreshing the wallet or restarting the app - **Process ID**: Verify you entered the correct process ID - **Network**: Check your internet connection - **Support**: Contact Wander support if issues persist ### Balance Not Updating If your balance isn't showing correctly: - **Sync**: Allow time for the wallet to sync with the network - **Manual Refresh**: Use the refresh option in the token list - **Network Status**: Check if there are known network issues ### Import Errors If you encounter errors during import: - **Format Check**: Ensure the process ID is correctly formatted - **Network Connection**: Verify stable internet connectivity - **Wallet Version**: Update to the latest version of Wander - **Try Again**: Sometimes retrying the import process works ## Next Steps After successfully adding ARIO to Wander: 1. **Buy an ArNS Name**: Purchase an ArNS name directly in Wander and set it as your primary name for easy identification 2. **Join the Network**: Visit [https://gateways.ar.io](https://gateways.ar.io) to join as a gateway operator or delegate your tokens to existing operators 3. **Stay Connected**: Join the [Discord community](https://discord.gg/cuCqBb5v) to learn more about network updates and participate in discussions Your Wander wallet is now configured to manage ARIO tokens, giving you full access to the AR.IO ecosystem's financial features and services. # Architecture (/learn/token/architecture) ## ARIO Contract Architecture The $ARIO token operates through a smart contract built on AO Computer. The system is composed of several interconnected components that work together to provide a comprehensive network infrastructure. 
```mermaid
graph LR
    subgraph ARIO["ARIO Smart Contract"]
        direction LR
        BAL[Balances]
        GW_REG[Gateway Registry]
        VAULTS[Vaults]
        ARNS_REG[ArNS Registry]
    end

    subgraph EXT["External AO Processes"]
        direction TB
        ANT_REGISTRY[ANT Registry]
        ANT1[ANT Process: alice]
    end

    %% Invisible positioning link
    ARIO --- EXT
    linkStyle 0 opacity:0;

    ANT1 -.-> |ownership changes| ANT_REGISTRY
    ARNS_REG -.->|alice| ANT1

    classDef smartContract fill:#e3f2fd
    classDef antProcess fill:#f3e5f5
    classDef registry fill:#e8f5e8
    classDef hidden display:none;
    classDef dashedGroup stroke-dasharray: 5 5, fill: transparent;

    class BAL,GW_REG,ARNS_REG,VAULTS smartContract
    class ANT1 antProcess
    class ANT_REGISTRY registry
    class EXT dashedGroup
```

## Core Components

### Balances

The Balances component manages the fundamental token accounting for the ARIO ecosystem:

- **Token Holdings**: Tracks ARIO token balances for all network participants
- **Transfer Logic**: Handles secure token transfers between addresses
- **Paginated Queries**: Provides efficient balance lookups with cursor-based pagination
- **Integration Layer**: Connects with all other components for balance updates

### Gateway Registry

The Gateway Registry manages the network's infrastructure providers and all delegation relationships:

- **Gateway Management**: Handles gateway registration, settings updates, and network participation
- **Operator Stakes**: Manages gateway operator stakes and minimum staking requirements
- **Delegated Stakes**: Coordinates delegated stake from token holders to gateway operators
- **Performance Tracking**: Monitors gateway performance metrics and eligibility for rewards

### ArNS Registry

The ArNS (Arweave Name System) Registry provides decentralized domain name services:

- **Name Registration**: Manages the purchase and registration of friendly names
- **Lease Management**: Handles name renewals and lease extensions
- **Primary Names**: Allows users to set primary names for their addresses
- **ANT Integration**: Links registered names to their corresponding ANT processes

### Vaults

The Vaults component implements token time-locking mechanisms for various ecosystem purposes:

- **Multi-Purpose Locking**: Locks tokens for RFPs, bug bounties, investors, and core team members
- **Flexible Terms**: Supports various lock periods and amounts based on purpose and requirements
- **Extension Options**: Allows participants to extend vault lock periods when needed
- **Withdrawal Logic**: Manages secure token release after lock expiration or completion of terms

## System Processes

### ANT Registry Process

A utility process that facilitates ANT discovery and management:

- **Discovery Service**: Makes it easy to find ANTs owned by specific wallet addresses
- **Ownership Tracking**: Provides efficient lookup of ANT ownership relationships
- **Integration Support**: Connects with wallets and dApps for seamless ANT management
- **Query Interface**: Enables paginated queries for ANT discovery

### ArNS Name Tokens (ANTs)

Transferable token processes that represent ownership and control of ArNS names:

- **Name Ownership**: Each ANT process controls a specific ArNS name
- **Record Management**: ANT holders manage DNS-like records for their names
- **Undername Control**: Support for creating and managing subdomains (undernames)
- **Transferable Rights**: ANTs can be bought, sold, and transferred as independent tokens
- **Process-Based**: Each ANT is its own AO process with autonomous functionality

## Security Model

The architecture implements multiple layers of security:
### Economic Security

- **Stake Requirements**: Minimum stakes ensure operator commitment and skin in the game
- **Performance-Based Removal**: Gateways that fail observation for 30 consecutive epochs are removed from the network
- **Complete Stake Slashing**: When a gateway is removed for poor performance, its minimum network-join stake is slashed in full and sent to the protocol balance
- **Observation Consensus**: Peer-to-peer monitoring ensures no single point of failure in performance evaluation

### Technical Security

- **AO Computer**: Leverages Arweave's permanent and decentralized compute layer
- **Process Isolation**: Separate processes for different system functions
- **Cryptographic Verification**: All transactions and state changes are cryptographically secured

### Governance Security

- **Current Ownership**: Currently owned by a multisig, with the intention of making ownership immutable
- **Path to Immutability**: Plans to transition to a fully immutable protocol without governance control
- **Transparent Operations**: All system state is publicly verifiable on Arweave
- **Consensus-Based Evaluation**: Gateway performance is determined by peer consensus rather than a centralized authority

# Get the Token (/learn/token/get-the-token)

## Acquiring ARIO Tokens

There are several ways to acquire ARIO tokens, depending on your needs and preferences. Here are the primary methods available:

## Exchanges and Trading Platforms

### Centralized Exchanges

ARIO tokens are available on centralized exchanges:

- **[Gate.io](https://gate.io)**: Trade ARIO with various cryptocurrency pairs
- Verify exchange security and reputation before trading
- Consider factors like trading fees, liquidity, and withdrawal limits

### Decentralized Exchanges (DEXs)

Trade ARIO on decentralized platforms within the AO ecosystem:

- **[Dexi](https://dexi.ar.io)**: Native AO-based decentralized exchange
- **[Botega](https://botega.ar.io/#/swap?from=0syT13r0s0tgPmIed95bJnuSqaD29HQNN8D3ElLSrsc&to=qNvAoz0TgcH7DMg8BCVn8jF32QH5L6T29VjHxhHqqGE)**: AO ecosystem trading platform
- **[Vento](https://ventoswap.com/?tab=swap)**: Decentralized exchange for AO tokens

### Wallet Integration

- **[Wander App](https://wander.app)**: Mobile wallet with built-in ARIO exchange and swap functionality
- **[Wander Extension](https://wander.app)**: Browser extension wallet for seamless ARIO transactions

**Recommended:** Wander offers the easiest ARIO acquisition experience, providing the most user-friendly way to acquire, store, and manage your ARIO tokens with integrated exchange functionality.

## Network Participation

### Gateway Operation

Earn ARIO tokens by operating network infrastructure:

1. **Set up a Gateway**: Deploy and maintain an AR.IO gateway
2. **Stake Initial Tokens**: Meet minimum staking requirements
3. **Provide Services**: Offer reliable data storage and retrieval
4. **Earn Rewards**: Receive ARIO tokens for network participation

### Token Delegation

Earn rewards by supporting existing gateway operators:

1. **Choose an Operator**: Research and select a trusted gateway
2. **Delegate Tokens**: Stake your ARIO with the chosen operator
3. **Earn Passively**: Receive a portion of the operator's rewards
4.
**Maintain Flexibility**: Undelegate tokens when needed ## Community Programs ### Grants and Bounties Participate in ecosystem development programs: - **Developer Grants**: Build applications and tools for the AR.IO ecosystem - **Bug Bounties**: Help secure the network by finding and reporting vulnerabilities - **Community Initiatives**: Contribute to documentation, education, and outreach ### Ecosystem Participation Earn tokens through various community activities: - **Governance Participation**: Engage in network decision-making processes - **Content Creation**: Produce educational content and tutorials - **Community Building**: Help grow and support the AR.IO community Remember that cryptocurrency investments carry risk, and you should only invest what you can afford to lose. Always do your own research and consider consulting with financial advisors when making investment decisions. # Token (/learn/token) ## Overview ARIO is the multifunction [AO Computer](/glossary) based token that powers the AR.IO Network and its suite of permanent cloud applications. Built on AO, ARIO leverages the computational power and permanence of the Arweave ecosystem to create a robust, decentralized infrastructure token. ### Key Features - **Native AO Token**: ARIO is built directly on AO Computer, utilizing its decentralized compute capabilities for smart contract execution and network operations - **Staking-Based Infrastructure**: The network operates on a staking-based incentive system where gateway operators secure infrastructure services through token commitment - **Multi-utility Design**: ARIO serves multiple functions within the ecosystem, from network participation to governance and payments ## Incentive Mechanisms ARIO's design creates powerful incentives for network participants through multiple reward streams: ### Gateway Operator Incentives - **Staking Rewards**: Gateway operators earn rewards for maintaining network infrastructure and providing reliable data services - **Performance Bonuses**: Additional rewards for gateways that demonstrate high uptime and fast response times - **Network Growth Rewards**: Operators benefit as the network scales and generates more activity ### Delegator Rewards - **Delegated Staking**: Token holders can delegate their ARIO to trusted gateway operators to participate in network operations - **Increased Operator Stake**: Delegation enhances gateway operator visibility and influence within the network - **Shared Risk and Rewards**: Delegators participate in the risk and rewards of gateway operations - **Withdrawal Flexibility**: Delegated tokens can be withdrawn following the same vault lock period rules ### Ecosystem Participation - **ArNS Revenue Sharing**: Name registration fees flow back to network participants through the reward distribution mechanism - **Protocol Growth**: As network usage increases, token utility and value proposition strengthen - **Community Incentives**: Active participation in governance and ecosystem development is rewarded ## Staking Architecture The AR.IO Network implements a robust staking-based incentive system that ensures infrastructure reliability and network participation: ### Gateway Operator Requirements - Gateway operators must stake a minimum amount of ARIO tokens to join the network - Staking demonstrates commitment to network objectives and promotes infrastructure reliability - Only gateways that pass [Observation and Incentive Protocol](/learn/oip) evaluations receive rewards - Staked tokens remain locked until withdrawal 
is initiated or the vault's lock period expires

### Network Quality Assurance

- **Non-Inflationary Design**: Fixed supply of 1 billion ARIO tokens with no minting mechanism
- **Immutable Protocol**: No governance control or special write access for upgrades
- **Infrastructure Focus**: Staking secures gateway infrastructure for permanent cloud services
- **Peer Monitoring**: Gateways serve as observers, testing and evaluating each other's performance

### Economic Incentives

- Gateway operators earn rewards only when they pass observation evaluations (≥50% consensus)
- Staking creates an opportunity cost that aligns operator incentives with network health
- Observer gateways receive additional rewards for monitoring network quality
- Delegated staking allows broader community participation in successful gateway operations

## Built on AO Computer

ARIO's foundation on AO Computer provides unique advantages:

### Computational Permanence

- All token operations and smart contracts benefit from Arweave's permanent storage
- Network history and token transactions are immutably recorded
- Computational results are verifiable and permanent

### Decentralized Execution

- Token logic runs across distributed AO processes, eliminating single points of failure
- Smart contract upgrades follow community governance processes
- Network operations scale with AO's computational capacity

### Ecosystem Integration

- Native compatibility with other AO-based applications and tokens
- Seamless interaction with Arweave's data storage layer
- Built-in interoperability with the broader permaweb ecosystem

## Explore the Token

# Staking (/learn/token/staking)

## Overview

Staking tokens within the AR.IO Network serves a dual purpose: it signifies a public commitment by gateway operators, and it qualifies them and their delegates for reward distributions.

In the AR.IO ecosystem, "staking" refers to the process of locking a specified amount of ARIO tokens into a protocol-controlled vault. This act carries an opportunity cost for the staker, acting both as a motivator and as a public pledge to uphold the network's collective interests. Once staked, tokens remain locked until the staker initiates an 'unstake / withdraw' action or reaches the end of the vault's lock period.

It is important to note that the ARIO Token is non-inflationary, distinguishing the AR.IO Network's staking mechanism from the yield-generation tools found in other protocols. Staking in this context is about eligibility for potential rewards rather than direct token yield. By staking tokens, gateway operators (and their delegates) demonstrate their commitment to the network, thereby gaining eligibility for protocol-driven rewards and access to the network's shared resources.

## Gateway Staking

A gateway operator must stake tokens to join their gateway to the network; this not only makes them eligible for protocol rewards but also promotes network reliability. The staking requirement reassures users and developers of the gateway's commitment to the network's objectives, and gateways that meet or surpass network performance standards become eligible for these rewards.

Gateway operators may increase their stake above the minimum; this is known as excess stake. Changes from adding or removing excess stake take effect on a gateway's total stake in the following epoch.

## Delegated Staking

To promote participation from a wider audience, the network allows anyone with available ARIO tokens to partake in delegated staking.
Users can choose to take part in the risk and rewards of gateway operations by staking their tokens with an active gateway (or multiple gateways) through an act known as delegating. Delegators can select which gateways to stake with at [gateways.ar.io](https://gateways.ar.io), maximizing their potential rewards based on operator performance, stakes, and weights.

### How Delegated Staking Works

**Delegated staking allows you to participate in the AR.IO Network's reward system without running your own gateway.** By staking your ARIO tokens on existing gateways, you can earn rewards while supporting network infrastructure.

When you delegate stake to a gateway, you're essentially lending your tokens to increase that gateway's total stake. This increases the gateway's chances of being selected as an observer, which means more potential rewards for both the gateway operator and you as a delegator.

### Benefits

- **Passive Income**: Earn rewards without running infrastructure
- **Network Participation**: Support the AR.IO Network's growth
- **Flexibility**: Redelegate to different gateways as conditions change
- **Low Barrier to Entry**: No technical expertise required
- **Transparent Rewards**: Clear visibility into reward distribution

### Getting Started

**Get ARIO Tokens**

You'll need ARIO tokens to delegate. See our comprehensive guide on [How to Get ARIO Tokens](/learn/token/get-the-token) for detailed information about acquiring tokens through exchanges, swaps, and network participation.

**Choose a Gateway**

Research gateways on the [Gateway Portal](https://gateways.ar.io/#/staking) to find one that matches your preferences for reward sharing and performance. Look for gateways with strong uptime, competitive reward sharing percentages, and a reliable operation history.

**Delegate Your Stake**

Use the [Gateway Portal](https://gateways.ar.io/#/staking) to delegate your tokens. The process is straightforward and your tokens remain secure throughout.

**Monitor Your Rewards**

Track your delegation performance and rewards through the portal's dashboard. You receive your share of the gateway's rewards based on the percentage set by the gateway operator.

### Important Considerations

- **Gateway Performance**: Your rewards depend on the gateway's performance and observer selection
- **Reward Sharing**: Gateway operators set the percentage of rewards shared with delegators
- **Redelegation**: You can move your stake between gateways as network conditions change
- **Withdrawal Delays**: There may be delays when withdrawing your delegated stake

## Stake Redelegation

This feature enables existing stakers to reallocate their staked tokens between gateways, known as redelegation. Both delegated stakers and gateway operators with excess stake (stake above the minimum network-join requirement) can take advantage of this feature. Redelegation is intended to offer users flexibility and the ability to respond to changing network conditions.

## Redeeming Delegated Stake for ArNS

Staked tokens generally have restricted liquidity to maintain a healthy degree of stability in the network. However, an exception to these restrictions allows delegated stakers to use their staked tokens for specific ArNS-related services. By leveraging their staking rewards, delegates can further engage with ArNS, strengthening the name system's utilization and impact across the network.
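For developers, the delegation flow described above can also be scripted. The sketch below is illustrative and makes assumptions about the `@ar.io/sdk` surface (a paginated `getGateways` read and a `delegateStake` write denominated in mARIO); check the SDK reference for the exact signatures before relying on them:

```typescript
import fs from 'node:fs';
import { ARIO, ARIOToken, ArweaveSigner } from '@ar.io/sdk';

// Research step: list gateways and pick a target (read-only, no wallet needed).
// Assumes a paginated getGateways read that returns { items, ... }.
const reader = ARIO.mainnet();
const { items: gateways } = await reader.getGateways({ limit: 10 });
const target = gateways[0].gatewayAddress; // apply your own selection criteria here

// Delegation step: signing requires a funded Arweave wallet.
const jwk = JSON.parse(fs.readFileSync('/path/to/wallet.json', 'utf-8'));
const ario = ARIO.init({ signer: new ArweaveSigner(jwk) });

// Assumed write signature: the stake quantity is denominated in mARIO.
const { id } = await ario.delegateStake({
  target,
  stakeQty: new ARIOToken(100).toMARIO().valueOf(),
});
console.log(`Delegated 100 ARIO to ${target} in interaction ${id}`);
```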
## Expedited Withdrawal Fees

Gateway operators and delegated stakers can shorten the standard withdrawal delay period after initiating a withdrawal (or being placed into an automatic withdrawal by protocol mechanisms); this action is subject to a dynamic fee. At any point during the delay, users can choose to expedite access to their pending withdrawal tokens by paying a fee to the protocol balance, calculated based on how much sooner they want to receive their funds. Once triggered, the tokens are returned immediately to the user's wallet.

## Explore Staking

# Wayfinder Protocol (/learn/wayfinder)

## The Problem: Centralized Gateway Reliance

Today, most Arweave content is accessed through a single gateway: `arweave.net`. This creates a critical centralization risk:

- **Single point of failure** - If arweave.net goes down, content becomes inaccessible
- **Censorship vulnerability** - A single gateway can block or filter content
- **Performance bottlenecks** - All traffic flows through one gateway
- **No content verification** - Users must trust the gateway to serve authentic content

## What is Wayfinder?

The Wayfinder protocol solves these problems by enabling **decentralized access** to Arweave content through any gateway in the AR.IO network. It's a [URI scheme](https://wikipedia.org/wiki/Uniform_Resource_Identifier) that transforms centralized URLs like `https://arweave.net/txid` into decentralized `ar://` URLs that can be resolved by any participating gateway.

Key capabilities:

- **Multi-gateway routing** - Access content through any AR.IO gateway
- **Built-in verification** - Verify content authenticity regardless of which gateway serves it
- **Automatic failover** - If one gateway is down, requests route to another
- **User control** - Choose routing strategies based on speed, trust, or randomization

## How Wayfinder Works

The Wayfinder protocol consists of three core components that work together to resolve and serve Arweave content:

```mermaid
graph TB
    subgraph "Centralized Access"
        U1[Users] --> AW[arweave.net]
        AW --> AR1[Arweave]
        style AW fill:#ffcccc,stroke:#ff0000,stroke-width:3px,color:#333
        style U1 fill:#fff,stroke:#333,stroke-width:2px,color:#333
        style AR1 fill:#fff,stroke:#333,stroke-width:2px,color:#333
    end
```

vs.

```mermaid
graph TB
    subgraph "Decentralized Access"
        U2[Users] --> WF[ar:// Protocol]
        WF --> G1[Gateway 1]
        WF --> G2[Gateway 2]
        WF --> G3[Gateway 3]
        WF --> GN[Gateway N]
        G1 --> AR2[Arweave]
        G2 --> AR2
        G3 --> AR2
        GN --> AR2
    end
    style U2 fill:#fff,stroke:#333,stroke-width:2px,color:#333
    style AR2 fill:#fff,stroke:#333,stroke-width:2px,color:#333
    style WF fill:#b3d9ff,stroke:#333,stroke-width:3px,color:#333
    style G1 fill:#b3d9ff,stroke:#333,stroke-width:2px,color:#333
    style G2 fill:#b3d9ff,stroke:#333,stroke-width:2px,color:#333
    style G3 fill:#b3d9ff,stroke:#333,stroke-width:2px,color:#333
    style GN fill:#b3d9ff,stroke:#333,stroke-width:2px,color:#333
```

Wayfinder enables:

1. **Decentralized Routing**: Select from multiple gateways instead of relying on arweave.net
2. **Redundant Retrieval**: If one gateway fails, automatically fail over to another
3. **Trust-minimized Verification**: Verify content authenticity regardless of which gateway serves it
### Transaction ID Resolution

To access content tied to an Arweave Transaction ID (TxId), simply append the TxId to `ar://`:

```
ar://qI19W6spw-kzOGl4qUMNp2gwFH2EBfDXOFsjkcNyK9A
```

Entering this into a Wayfinder-equipped browser routes your request through an appropriate AR.IO gateway, translating it according to your `Routing Method` settings.

### ArNS Name Resolution

Fetching content via an Arweave Name System (ArNS) name is straightforward. Attach the ArNS name to `ar://`:

```
ar://good-morning
```

The Wayfinder protocol, along with the Wayfinder app, discerns between TxIds and ArNS names. Once the suitable `https://` request is formulated, the chosen gateway resolves the ArNS name based on the ArNS contract on AO.
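The split between the two forms is purely syntactic: an Arweave transaction ID is exactly 43 base64url characters, and anything else is treated as an ArNS name, which gateways serve as a subdomain. A minimal sketch of that dispatch, illustrative rather than Wayfinder's actual resolver code:

```typescript
// Illustrative sketch; not the protocol's actual implementation.
const TX_ID_PATTERN = /^[A-Za-z0-9_-]{43}$/; // Arweave IDs are 43 chars of base64url

function resolveArUrl(arUrl: string, gatewayHost: string): string {
  const identifier = arUrl.replace(/^ar:\/\//, '');
  if (TX_ID_PATTERN.test(identifier)) {
    // Transaction ID: served from the gateway's data path.
    return `https://${gatewayHost}/${identifier}`;
  }
  // ArNS name: resolved as a subdomain of the gateway.
  return `https://${identifier}.${gatewayHost}`;
}

resolveArUrl('ar://qI19W6spw-kzOGl4qUMNp2gwFH2EBfDXOFsjkcNyK9A', 'arweave.net');
// -> https://arweave.net/qI19W6spw-kzOGl4qUMNp2gwFH2EBfDXOFsjkcNyK9A
resolveArUrl('ar://good-morning', 'arweave.net');
// -> https://good-morning.arweave.net
```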
## Detailed Flow

```mermaid
sequenceDiagram
    participant User
    participant Wayfinder
    participant Gateway as AR.IO Gateway
    participant Arweave

    User->>Wayfinder: ar://ardrive or ar://txid
    Wayfinder->>Wayfinder: Select a gateway
    alt ArNS Name (ar://ardrive)
        Wayfinder->>Gateway: Request content via ArNS name
        Gateway->>Gateway: Resolve ArNS to TxID
        Gateway->>Gateway: Check cache for content
        alt Content not cached
            Gateway->>Arweave: Fetch from network
            Arweave-->>Gateway: Return content
        end
        Gateway-->>Wayfinder: Return content
    else Transaction ID (ar://txid)
        Wayfinder->>Gateway: Request content via TxID
        Gateway->>Gateway: Check cache for content
        alt Content not cached
            Gateway->>Arweave: Fetch from network
            Arweave-->>Gateway: Return content
        end
        Gateway-->>Wayfinder: Return content
    end
    Wayfinder->>Wayfinder: Verify content integrity
    Wayfinder-->>User: Deliver verified content
```

## Why Decentralized Access Matters

### Resilience Against Censorship

With centralized gateways like arweave.net, content can be blocked or filtered at a single point. Wayfinder distributes access across multiple independent gateways, making censorship significantly more difficult.

### Always-Available Content

When arweave.net experiences downtime or congestion, all content becomes inaccessible. Wayfinder automatically routes around failed gateways, ensuring your content remains available.

### Trust Through Verification

Centralized gateways require blind trust: you can't verify whether the content served matches what's stored on Arweave. Wayfinder includes built-in verification capabilities, allowing clients to cryptographically verify content authenticity from any gateway.

### Performance Through Competition

Multiple gateways create a competitive ecosystem where gateways optimize for speed and reliability. Users benefit from automatic routing to the fastest available gateway.

## Verification: Trust but Verify

Wayfinder supports content verification at multiple levels:

1. **Transaction verification** - Verify that content matches the requested transaction ID
2. **Data integrity checks** - Ensure content hasn't been tampered with during transmission
3. **Manifest validation** - For bundled content, verify all components are authentic
4. **ArNS resolution verification** - Confirm ArNS names resolve to the correct transaction IDs

This verification happens transparently, giving users confidence that they're receiving authentic Arweave content regardless of which gateway serves it.

## Explore Wayfinder

# Integration (/learn/wayfinder/integration)

## Getting Started

**Get the Extension**

The easiest way to use Wayfinder is the [Wayfinder Extension](https://chromewebstore.google.com/detail/ario-wayfinder/hnhmeknhajanolcoihhkkaaimapnmgil?hl=en-US), available in the Chrome Web Store.

### Wayfinder Extension

The wayfinder-extension is a simple Chrome extension that supports the ar:// routing protocol and allows you to:

- **Navigate ar:// URLs directly** in your browser
- **Configure routing strategies** - Choose how requests are routed to gateways
- **Set verification preferences** - Control content verification levels
- **Monitor gateway performance** - See which gateways are serving your requests

No coding required: just install the extension and start browsing ar:// URLs!

## Developer Integration Options

For developers who want to integrate Wayfinder into their applications:

### Wayfinder Core

The [wayfinder-core](/sdks/wayfinder/wayfinder-core) library is the core protocol implementation that accepts various configuration options for setting up Wayfinder. It provides:

- **Gateway selection strategies** - Choose how to route requests
- **Content verification** - Optionally verify content authenticity
- **Telemetry collection** - Understand performance vs arweave.net
- **Custom configurations** - Fine-tune behavior for your use case

### Wayfinder React

Web developers will likely be interested in [wayfinder-react](/sdks/wayfinder/wayfinder-react), which provides:

- **React Context Provider** - Easy integration with React apps
- **Custom hooks** - Simplified data fetching and state management
- **Component library** - Pre-built UI components for common patterns
- **TypeScript support** - Full type safety out of the box

## Common Integration Pattern: Preferred with Fallback

For most builds, teams use the "preferred with fallback" pattern. This routing strategy prioritizes your preferred gateway but automatically falls back to other gateways in the network if needed:

```typescript
const wayfinder = createWayfinderClient({
  ario: ARIO.mainnet(),
  verification: 'hash',
  routing: 'preferred',
  preferredGateway: 'arweave.net',
});
```

This pattern ensures:

1. **Primary traffic** goes to your preferred gateway (one you run or are delegated to)
2. **Automatic failover** if your gateway can't serve the data
3. **Network resilience** by finding another gateway that can serve the content

For detailed routing strategy options, see the [routing strategies documentation](/sdks/wayfinder/wayfinder-core/preferredwithfallbackroutingstrategy).

## Verification: Optional but Encouraged

While verification is optional, it's strongly encouraged when fetching from gateways:

```typescript
const wayfinder = createWayfinderClient({
  // ...other settings,
  verification: 'hash', // hash-based verification
});
```

Verification ensures you're receiving authentic content regardless of which gateway serves it.

## Telemetry for Performance Insights

Enable telemetry to understand how Wayfinder performs relative to arweave.net:

```typescript
const wayfinder = createWayfinderClient({
  // ...other settings,
  telemetry: {
    enabled: true,
    sampleRate: 0.1, // sample 10% of requests
    apiKey: 'your-api-key', // optional
    clientName: 'my-app',
    clientVersion: '1.0.0',
  },
});
```

This helps teams make data-driven decisions about gateway selection and optimization.
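Putting routing and verification together, a typical fetch looks something like the sketch below. It assumes `createWayfinderClient` is exported by the wayfinder-core package and that the client exposes a fetch-style `request` method for `ar://` URLs; treat both as assumptions and defer to the wayfinder-core reference:

```typescript
import { ARIO } from '@ar.io/sdk';
import { createWayfinderClient } from '@ar.io/wayfinder-core'; // assumed package export

const wayfinder = createWayfinderClient({
  ario: ARIO.mainnet(),
  routing: 'preferred',
  preferredGateway: 'arweave.net',
  verification: 'hash',
});

// Assumed API shape: request() resolves the ar:// URL against a selected
// gateway, verifies the payload, and returns a standard Response.
const response = await wayfinder.request('ar://good-morning');
const body = await response.text();
console.log(`Fetched ${body.length} characters of verified content`);
```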
## React Integration Example

Here's a complete example using wayfinder-react:

```tsx
import { ARIO } from '@ar.io/sdk';
import {
  NetworkGatewaysProvider,
  FastestPingRoutingStrategy,
} from '@ar.io/wayfinder-core';
import { WayfinderProvider, useWayfinderUrl } from '@ar.io/wayfinder-react';

// Configure Wayfinder
const wayfinderConfig = {
  gatewaysProvider: new NetworkGatewaysProvider({
    ario: ARIO.mainnet(),
  }),
  routingSettings: {
    // use the fastest-pinging strategy to select the fastest gateway for requests
    strategy: new FastestPingRoutingStrategy({
      timeoutMs: 1000,
    }),
  },
  verificationSettings: {
    enabled: false,
  },
};

// Wrap your app
function App() {
  // The provider makes the configured Wayfinder client available via context.
  return (
    <WayfinderProvider {...wayfinderConfig}>
      <WayfinderImage txId="qI19W6spw-kzOGl4qUMNp2gwFH2EBfDXOFsjkcNyK9A" />
    </WayfinderProvider>
  );
}

// Use in components
function WayfinderImage({ txId }: { txId: string }) {
  const { resolvedUrl, isLoading, error } = useWayfinderUrl({ txId });

  if (error) {
    return <div>Error resolving URL: {error.message}</div>;
  }

  if (isLoading) {
    return <div>Resolving URL...</div>;
  }

  return <img src={resolvedUrl ?? undefined} alt={txId} />;
}
```

# Use Cases (/learn/wayfinder/use-cases)

## Decentralized Web Hosting with Flexible Access

With Wayfinder, not only can websites be hosted on the Arweave network, but their accessibility is also enhanced. By using the Wayfinder Protocol, web developers can ensure that if a specific AR.IO Gateway is down, the content can still be accessed through another gateway, offering a more reliable and resilient user experience.

This is particularly valuable for:

- **Personal websites** that need to remain accessible
- **Documentation sites** that must be always available
- **Portfolio sites** for professionals and creators

## Digital Archives and Preservation with Enhanced Sharing

Digitally archiving public domain works, especially in light of events like ["banned books week"](https://www.youtube.com/watch?v=eMSCHXklULQ), becomes more efficient with Wayfinder. Historical institutions or enthusiasts can easily share specific Wayfinder links to documents or media. Unlike hardcoded links, which might break if a specific gateway goes offline, Wayfinder ensures that the content remains consistently accessible.

This is ideal for:

- **Historical documents** and public domain works
- **Academic research** and scholarly articles
- **Cultural preservation** projects
- **Legal documents** that need permanent access

## Media Sharing Platforms with Consistent Content Delivery

For platforms hosting user-generated content, the Wayfinder Protocol provides not just decentralized hosting but also a guarantee of content delivery. Even if a content piece becomes viral and one gateway gets congested, Wayfinder ensures that users can still access the content through another gateway, providing a seamless experience.

Perfect for:

- **Social media platforms** with user-generated content
- **Video sharing sites** with viral content
- **Image galleries** and art platforms
- **Podcast hosting** and audio content

## Decentralized Applications (DApps) with Reliable Front-End Accessibility

DApps, while benefiting from Arweave's permanent hosting, can further ensure their front-end remains consistently accessible to users by using Wayfinder. If a DApp's front-end is accessed frequently, causing strain on one gateway, Wayfinder can help ensure the load is distributed, and the DApp remains online and functional.

This is essential for:

- **DeFi applications** that need high availability
- **NFT marketplaces** with high traffic
- **Gaming platforms** with real-time requirements
- **Collaborative tools** and productivity apps

## Branded Content Access

Companies and individuals can brand their permaweb content, making it accessible through their domain, enhancing brand visibility and user trust. This is achieved through DNS TXT records that link domain names to Arweave content.
## Dynamic Content Updates

Domain owners can easily update what Permaweb content their `ar://` URL resolves to, which is ideal for frequently updated resources like documents, blogs, and application interfaces.

## Educational and Informational Resources

Educational institutions and information providers can make their resources permanently available on the permaweb, accessible through simple, memorable URLs.

## Next Steps

Ready to get started with Wayfinder? Explore [Integration Methods](/learn/wayfinder/integration) to see how to implement Wayfinder in your applications, or go back to the [Overview](/learn/wayfinder) to review the basics.

# Husky (Developers Only) (/sdks/(clis)/ardrive-cli/(build-and-run-from-source)/husky-developers-only)

We use husky 6.x to manage the git commit hooks that help to improve the quality of our commits. Please run:

```shell
yarn husky install
```

to enable git hooks for your local checkout. Without doing so, you risk committing non-compliant code to the repository.

# Install Yarn 3 (/sdks/(clis)/ardrive-cli/(build-and-run-from-source)/install-yarn-3)

Both the ArDrive CLI and ArDrive Core JS use Yarn 3 to manage dependencies and initiate workflows, so follow the [yarn installation instructions][yarn-install] in order to get the latest version. In most cases:

```shell
brew install yarn
npm install -g yarn
```

# Installing and Starting the CLI From Source (/sdks/(clis)/ardrive-cli/(build-and-run-from-source)/installing-and-starting-the-cli-from-source)

Now that your runtime and/or development environment is set up, to install the package simply run:

```shell
yarn && yarn build
```

And then start the CLI (always from the root of this repository):

```shell
yarn ardrive
```

For convenience in the **non-developer case**, you can install the CLI globally on your system by performing the following steps:

```shell
yarn pack
npm install -g /path/to/package.tgz
ardrive
```

# Recommended Visual Studio Code extensions (Developers Only) (/sdks/(clis)/ardrive-cli/(build-and-run-from-source)/recommended-visual-studio-code-extensions-developers-only)

To ensure your environment is compatible, we also recommend the following VSCode extensions:

- [ES-Lint][eslint-vscode]
- [Editor-Config][editor-config-vscode]
- [Prettier][prettier-vscode]
- [ZipFS][zipfs-vscode]

# Using a custom ArDrive-Core-JS (Optional) (/sdks/(clis)/ardrive-cli/(build-and-run-from-source)/using-a-custom-ardrive-core-js-optional)

To test with a custom version of the `ardrive-core-js` library on your local system, change the `"ardrive-core-js"` line in `package.json` to point at the root of your local `ardrive-core-js` repo:

```diff
- "ardrive-core-js": "1.0.0"
+ "ardrive-core-js": "../ardrive-core-js/"
```

# Dealing With Network Congestion (/sdks/(clis)/ardrive-cli/(other-utility-operations)/dealing-with-network-congestion)

Currently, Arweave blocks hold up to 1000 transactions each. The "mempool", where pending transactions reside until they've been included into a block, will only hold a transaction for 50 blocks (~100-150 minutes) before it's discarded by the network, resulting in no fees or data being transacted. During periods of network congestion (i.e. those where the mempool contains 1000 or more pending transactions), it may make sense to either:

a) wait for congestion to dissipate before attempting your transactions, or

b) apply the fee boost multiplier to your transaction rewards with the `--boost` parameter during write operations in order to front-run some of the congestion.
#### Check for network congestion before uploading

```shell
ardrive get-mempool
ardrive get-mempool | jq 'length'
```

#### Front-run Congestion By Boosting Miner Rewards

```shell
ardrive upload-file --wallet-file /path/to/my/wallet.json --parent-folder-id "f0c58c11-430c-4383-8e54-4d864cc7e927" --local-path ./helloworld.txt --boost 1.5
```

#### Send AR Transactions From a Cold Wallet

The best cold wallet storage never exposes your seed phrase and/or private keys to the Internet or a compromised system interface. You can use the ArDrive CLI to facilitate cold storage and transfer of AR.

If you need a new cold AR wallet, generate one from an air-gapped machine capable of running the ArDrive CLI by following the instructions in the [Wallet Operations](#wallet-operations) section. Fund your cold wallet from whatever external sources you'd like. NOTE: Your cold wallet won't appear on chain until it has received AR.

The workflow to send the AR out from your cold wallet requires you to generate a signed transaction with your cold wallet on your air-gapped machine via the ArDrive CLI, and then to transfer the signed transaction (e.g. by a file on a clean thumb drive) to an Internet-connected machine and send the transaction to the network via the ArDrive CLI.

You'll need two inputs from the Internet-connected machine:

- the last transaction sent OUT from the cold wallet (or an empty string if none has ever been sent out)
- the base fee for an Arweave transaction (i.e. a zero-byte transaction). Note that this value could change if a sufficient amount of time passes between the time you fetch this value, create the transaction, and send the transaction.

To get the last transaction sent from your cold wallet, use the `last-tx` command and specify your wallet address e.g.:

```
ardrive last-tx -a { your cold wallet address }
```

To get the base transaction reward required for an AR transaction, use the `base-reward` function, optionally applying a reward boost multiple if you're looking to front-run network congestion:

```
ardrive base-reward --boost 1.5
```

Write down or securely copy the values you derived from the Internet-connected machine and run the following commands on the air-gapped machine, piping the outputted signed transaction data to a file in the process, e.g. `sendme.json` (if that's your signed transaction transfer medium preference):

```
ardrive create-tx -w /path/to/wallet/file.json -d { destination address } -a { AR amount } --last-tx { last tx value } --reward "{ base reward value }" > sendme.json
```

Transport your signed transaction to the Internet-connected machine and run the following command to send your transaction to the Arweave network:

```
ardrive send-tx -x /path/to/sendme.json
```

# Monitoring Transactions (/sdks/(clis)/ardrive-cli/(other-utility-operations)/monitoring-transactions)

Block time on Arweave is typically between 2-3 minutes in duration, so transactions can be mined within that time frame when [network congestion](#dealing-with-network-congestion) is low.
Transactions, in the general case, proceed through the following set of states:

- Pending: the transaction is waiting in the "mempool" to be mined
- Confirming: the transaction was mined on an Arweave Node, but has not yet been confirmed by at least 15 total nodes on the network
- Confirmed: the transaction was mined on an Arweave Node and confirmed by at least 15 total nodes on the network
- Not Found: the transaction is not available for any of the following reasons:
  - Insufficient reward to join the mempool
  - Insufficient reward to be mined within 50 blocks during a period of network congestion
  - Transaction is transitioning between states
  - Transaction ID is invalid

Monitor any Arweave transaction's status via its transaction ID by performing:

```shell
ardrive tx-status -t "ekSMckikdRJ8RGIkFa-X3xq3427tvM7J9adv8HP3Bzs"
```

Example output:

```shell
ekSMckikdRJ8RGIkFa-X3xq3427tvM7J9adv8HP3Bzs: Mined at block height 775810 with 22439 confirmations
```

You can also poll the status on an interval, e.g.:

```shell
watch -n 10 ardrive tx-status -t "ekSMckikdRJ8RGIkFa-X3xq3427tvM7J9adv8HP3Bzs"
```

# Persistent Caching of ArFS Entity Metadata (/sdks/(clis)/ardrive-cli/(other-utility-operations)/persistent-caching-of-arfs-entity-metadata)

To avoid redundant requests to the Arweave network for immutable ArFS entity metadata, a persistent file cache is created and maintained at:

```
Windows: { os.homedir() }/ardrive-caches/metadata
Non-Windows: { os.homedir() }/.ardrive/caches/metadata
```

The `XDG_CACHE_HOME` environment variable is honored, where applicable, and will be used in place of `os.homedir()` in the scenarios described above.

Metadata cache logging to stderr can be enabled by setting the `ARDRIVE_CACHE_LOG` environment variable to `1`.

Cache performance is UNDEFINED for multi-process scenarios, but is presumed to be generally usable. The cache can be safely cleared manually at any time that any integrating app is not in operation.
```shell
 █████╗ ██████╗ ██████╗ ██████╗ ██╗██╗   ██╗███████╗
██╔══██╗██╔══██╗██╔══██╗██╔══██╗██║██║   ██║██╔════╝
███████║██████╔╝██║  ██║██████╔╝██║██║   ██║█████╗
██╔══██║██╔══██╗██║  ██║██╔══██╗██║╚██╗ ██╔╝██╔══╝
██║  ██║██║  ██║██████╔╝██║  ██║██║ ╚████╔╝ ███████╗
╚═╝  ╚═╝╚═╝  ╚═╝╚═════╝ ╚═╝  ╚═╝╚═╝  ╚═══╝  ╚══════╝
 ██████╗██╗     ██╗
██╔════╝██║     ██║
██║     ██║     ██║
██║     ██║     ██║
╚██████╗███████╗██║
 ╚═════╝╚══════╝╚═╝

Write ArFS
===========
create-drive
create-folder
upload-file
create-manifest
move-file
move-folder
retry-tx

Read ArFS
===========
file-info
folder-info
drive-info
list-folder
list-drive
list-all-drives
download-file
download-folder
download-drive

Wallet Ops
===========
generate-seedphrase
generate-wallet
get-address
get-balance
send-ar
get-drive-key
get-file-key
last-tx

Arweave Ops
===========
base-reward
get-mempool
create-tx
send-tx
tx-status

ardrive { command } --help
```

[ArDrive Community Discord][ardrive-discord]

[ardrive]: https://ardrive.io
[arweave]: https://ardrive.io/what-is-arweave/
[ardrive-github]: https://github.com/ardriveapp/
[arfs]: https://ardrive.atlassian.net/l/c/m6P1vJDo
[ardrive-web-app]: https://app.ardrive.io
[ardrive-core]: https://github.com/ardriveapp/ardrive-core-js
[yarn-install]: https://yarnpkg.com/getting-started/install
[nvm-install]: https://github.com/nvm-sh/nvm#installing-and-updating
[wsl-install]: https://code.visualstudio.com/docs/remote/wsl
[editor-config-vscode]: https://marketplace.visualstudio.com/items?itemName=EditorConfig.EditorConfig
[prettier-vscode]: https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode
[zipfs-vscode]: https://marketplace.visualstudio.com/items?itemName=arcanis.vscode-zipfs
[eslint-vscode]: https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint
[viewblock blockchain explorer]: https://viewblock.io/arweave/
[ardrive-discord]: https://discord.com/invite/ya4hf2H
[arconnect]: https://arconnect.io/
[kb-wallets]: https://ardrive.atlassian.net/l/c/FpK8FuoQ
[arweave-manifests]: https://github.com/ArweaveTeam/arweave/wiki/Path-Manifests
[example-manifest-webpage]: https://arweave.net/qozq9YIUPEHfZhoTp9DkBpJuA_KNULBnfLiMroj5pZI
[arlocal]: https://github.com/textury/arlocal
[mozilla-mime-types]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types
[viewblock]: https://viewblock.io/arweave/
[tx_anchors]: https://docs.arweave.org/developers/server/http-api#field-definitions
[gql-guide]: https://gql-guide.vercel.app/#owners
[ardrive-turbo]: https://ardrive.io/turbo/

# Using a Custom Arweave Gateway (/sdks/(clis)/ardrive-cli/(other-utility-operations)/using-a-custom-arweave-gateway)

On each command that uses a gateway, it is possible to supply your own custom Arweave gateway using the flag `--gateway` or by setting an environment variable named `ARWEAVE_GATEWAY`. For example, you could test out that your ArFS transactions are working as expected on a local test network such as [ArLocal] with this flow:

```shell
npx arlocal
curl http://localhost:1984/mint/{ your public wallet address }/99999999999999
ardrive create-drive --gateway http://127.0.0.1:1984 -w /path/to/wallet -n 'my-test-drive'
curl "$ARWEAVE_GATEWAY/mine"
ardrive upload-file -F { root folder id from create drive } -l /path/to/file -w /path/to/wallet
curl "$ARWEAVE_GATEWAY/mine"
ardrive list-drive -d { drive id from create drive }
ardrive download-file -f { file id from upload file }
```

# Git (/sdks/(clis)/ardrive-cli/(prerequisites)/git)

Some of ArDrive's dependencies are transitively installed via Git.
Install it, if necessary, and ensure that it's available within your terminal environment: [Download Git](https://git-scm.com/downloads) # NVM (Optional - Recommended) (/sdks/(clis)/ardrive-cli/(prerequisites)/nvm-optional-recommended) This project uses the Node Version Manager (NVM) and an `.nvmrc` file to lock the recommended Node version used by the latest version of `ardrive-core-js`. **Note for Windows: We recommend using WSL for setting up NVM on Windows using the [instructions described here][wsl-install]** Follow these steps to get NVM up and running on your system: 1. Install NVM using [these installation instructions][nvm-install]. 2. Navigate to this project's root directory 3. Ensure that the correct version of Node is installed by performing: `nvm install` 4. Use the correct version of Node, by performing: `nvm use` **IT IS STRONGLY RECOMMENDED THAT YOU AVOID GENERATING WALLETS VIA SEED PHRASE WITH THE CLI USING ANY NODE VERSION OTHER THAN THE ONE SPECIFIED IN `.nvmrc`.** # Creating Drives (/sdks/(clis)/ardrive-cli/(working-with-drives)/creating-drives) ```shell ardrive create-drive --wallet-file /path/to/my/wallet.json --drive-name "My Public Archive" ardrive create-drive --wallet-file /path/to/my/wallet.json --drive-name "Teenage Love Poetry" -P ``` # List Drive Pipeline Examples (/sdks/(clis)/ardrive-cli/(working-with-drives)/list-drive-pipeline-examples) You can utilize `jq` and the list commands to reshape the commands' output data into useful forms and stats for many use cases. Here are a few examples: ```shell ardrive list-drive -d a44482fd-592e-45fa-a08a-e526c31b87f1 | jq '.[] | select(.entityType == "file") | "https://app.ardrive.io/#/file/" + .entityId + "/view"' ``` Example output: ```shell "https://app.ardrive.io/#/file/1337babe-f000-dead-beef-ffffffffffff/view" "https://app.ardrive.io/#/file/cdbc9ddd-1cab-41d9-acbd-fd4328929de3/view" "https://app.ardrive.io/#/file/f19bc712-b57a-4e0d-8e5c-b7f1786b34a1/view" "https://app.ardrive.io/#/file/4f8e081b-42f2-442d-be41-57f6f906e1c8/view" "https://app.ardrive.io/#/file/0e02d254-c853-4ff0-9b6e-c4d23d2a95f5/view" "https://app.ardrive.io/#/file/c098b869-29d1-4a86-960f-a9e10433f0b0/view" "https://app.ardrive.io/#/file/4afc8cdf-4d27-408a-bfb9-0a2ec21eebf8/view" "https://app.ardrive.io/#/file/85fe488d-fcf7-48ca-9df8-2b39958bbf15/view" ... ``` ```shell ardrive list-drive -d 13c3c232-6687-4d11-8ac1-35284102c7db | jq ' map(select(.entityType == "file") | .size) | add' ``` ```shell ardrive list-drive -d 01ea6ba3-9e58-42e7-899d-622fd110211c | jq '[ .[] | select(.entityType == "file") ] | length' ``` # Listing Drives for an Address (/sdks/(clis)/ardrive-cli/(working-with-drives)/listing-drives-for-an-address) You can list all the drives associated with any Arweave wallet address, though the details of private drives will be obfuscated from you unless you provide the necessary decryption data. ```shell ardrive list-all-drives -w /path/to/my/wallet.json -P ardrive list-all-drives --address "HTTn8F92tR32N8wuo-NIDkjmqPknrbl10JWo5MZ9x2k" ``` # Listing Every Entity in a Drive (/sdks/(clis)/ardrive-cli/(working-with-drives)/listing-every-entity-in-a-drive) Useful notes on listing the contents of drives: - Listing a drive is effectively the same as listing its root folder. - You can control the tree depth of the data returned. 
# Listing Every Entity in a Drive (/sdks/(clis)/ardrive-cli/(working-with-drives)/listing-every-entity-in-a-drive)

Useful notes on listing the contents of drives:

- Listing a drive is effectively the same as listing its root folder.
- You can control the tree depth of the data returned.
- The `path`, `txIdPath`, and `entityIdPath` properties on entities can provide useful handholds for other forms of data navigation.

```shell
# List every entity in a private drive
ardrive list-drive -d "c7f87712-b54e-4491-bc96-1c5fa7b1da50" -w /path/to/my/wallet.json -P

# List every entity in a private drive, including the keys in the output
ardrive list-drive -d "c7f87712-b54e-4491-bc96-1c5fa7b1da50" -w /path/to/my/wallet.json -P --with-keys

# List the entities of a public drive down to a depth of 2
ardrive list-drive -d "c7f87712-b54e-4491-bc96-1c5fa7b1da50" --max-depth 2
```

# Managing Drive Passwords (/sdks/(clis)/ardrive-cli/(working-with-drives)/managing-drive-passwords)

The ArDrive CLI's private drive and folder functions all require either a drive password OR a drive key. Private file functions require either the drive password or the file key. **Keys and passwords are sensitive data, so manage the entry, display, storage, and transmission of them very carefully.**

Drive passwords are the most portable, and fundamental, encryption facet, so a few options are available during private drive operations for supplying them:

- Environment Variable
- STDIN
- Secure Prompt

#### Supplying Your Password: Environment Variable

```shell
# Read the password into TMP_ARDRIVE_PW without echoing it to the terminal
read -rs TMP_ARDRIVE_PW
ardrive { some private ardrive command } \
  -w /path/to/wallet.json -P
```

#### Supplying Your Password: STDIN

```shell
cat /path/to/my/drive/password.txt | ardrive { some private ardrive command } \
  -w /path/to/wallet.json -P
```

#### Supplying Your Password: Secure Prompt

```shell
ardrive { some private ardrive command } \
  -w /path/to/wallet.json -P
? Enter drive password: › ********
```

# Understanding Drive and File Keys (/sdks/(clis)/ardrive-cli/(working-with-drives)/understanding-drive-and-file-keys)

Private Drives achieve privacy via end-to-end encryption facilitated by hash-derived "Keys". Drive Keys encrypt/decrypt Drive and Folder data, and File Keys encrypt/decrypt File Data.

The relationships among your data and their keys are as follows:

- Drive Key = functionOf(Wallet Signature, Randomly Generated Drive ID, User-specified Drive Password)
- File Key = functionOf(Randomly Generated File ID, Drive Key)

When you create private entities, the returned JSON data from the ArDrive CLI will contain the keys needed to decrypt the encrypted representation of your entity that is now securely and permanently stored on the blockweave.

To derive the drive key again for a drive, perform the following:

```shell
ardrive get-drive-key -w /path/to/my/wallet.json -d "6939b9e0-cc98-42cb-bae0-5888eca78885" -P
```

To derive the file key again for a file, perform the following:

```shell
ardrive get-file-key --file-id "bd2ce978-6ede-4b0d-8f79-2d7bc235a0e0" --drive-id "6939b9e0-cc98-42cb-bae0-5888eca78885" --drive-key "yHdCjpCK3EcuhQcKNx2d/NN5ReEjoKfZVqKunlCnPEo"
```
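A derived drive key can then stand in for the drive password on later private operations. A sketch, assuming `list-drive` accepts the same `--drive-key` flag that `get-file-key` consumes above:

```shell
# List a private drive using the derived drive key instead of the password prompt
ardrive list-drive -d "6939b9e0-cc98-42cb-bae0-5888eca78885" --drive-key "yHdCjpCK3EcuhQcKNx2d/NN5ReEjoKfZVqKunlCnPEo"
```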
# Understanding Drive Hierarchies (/sdks/(clis)/ardrive-cli/(working-with-drives)/understanding-drive-hierarchies)

At the root of every data tree is a "Drive" entity. When a drive is created, a Root Folder is also created for it. The entity IDs for both are generated and returned when you create a new drive:

```shell
ardrive create-drive --wallet-file /path/to/my/wallet.json --drive-name "Teenage Love Poetry" | tee created_drive.json | jq '[.created[] | del(.metadataTxId, .entityName, .bundledIn)]'
[
  {
    "type": "drive",
    "entityId": "6939b9e0-cc98-42cb-bae0-5888eca78885"
  },
  {
    "type": "folder",
    "entityId": "d1535126-fded-4990-809f-83a06f2a1118"
  }
]
```

The relationship between the drive and its root folder is clearly visible when retrieving the drive's info:

```shell
ardrive drive-info -d "6939b9e0-cc98-42cb-bae0-5888eca78885" | jq '{driveId, rootFolderId}'
{
  "driveId": "6939b9e0-cc98-42cb-bae0-5888eca78885",
  "rootFolderId": "d1535126-fded-4990-809f-83a06f2a1118"
}
```

All file and folder entities in the drive will be anchored to it by a "Drive-ID" GQL Tag. And they'll each be anchored to a parent folder ID, tracked via the "Parent-Folder-ID" GQL tag, forming a tree structure whose base terminates at the Root Folder.

# Dry Run (/sdks/(clis)/ardrive-cli/(working-with-entities)/dry-run)

An important feature of the ArDrive CLI is the `--dry-run` flag. On each command that would write an ArFS entity, there is the option to run it as a "dry run". This will run all of the steps and print the outputs of a regular ArFS write, but will skip sending the actual transaction:

```shell
ardrive { any ArFS write command } \
  --dry-run
```

This can be very useful for gathering price estimations or to confirm that you've copy-pasted your entity IDs correctly before committing to an upload.

# Uploading to Turbo (BETA) (/sdks/(clis)/ardrive-cli/(working-with-entities)/uploading-to-turbo-beta)

Users can optionally choose to send each ArFS entity created to [ArDrive Turbo][ardrive-turbo] using the `--turbo` flag. Instead of using AR from an Arweave wallet, you can use Turbo Credits or take advantage of free/discounted upload promotions.

```shell
ardrive { any ArFS write command } \
  --turbo
```

This flag will skip any balance check on the CLI side. Turbo will check a user's balance and accept/reject a data item at the time of upload.

The `--turbo` flag by default will send your files to `upload.ardrive.io` to be bundled. To change the Turbo destination, users can use the `--turbo-url` flag.

# Download a Single file (BETA) (/sdks/(clis)/ardrive-cli/(working-with-files)/download-a-single-file-beta)

By using the `download-file` command you can download a file stored on-chain to a folder in your local storage specified by `--local-path` (or to your current working directory if not specified):

```shell
ardrive download-file -w /path/to/wallet.json --file-id "ff450770-a9cb-46a5-9234-89cbd9796610" --local-path /my_ardrive_downloads/
```

Specify a filename in the `--local-path` if you'd like to use a different name than the one that's used in your drive:

```shell
ardrive download-file -w /path/to/wallet.json --file-id "ff450770-a9cb-46a5-9234-89cbd9796610" --local-path /my_ardrive_downloads/my_pic.png
```

# Downloading a Drive (/sdks/(clis)/ardrive-cli/(working-with-files)/downloading-a-drive)

To download the whole drive you can use the `download-drive` command.

```shell
ardrive download-drive -d "c0c8ba1c-efc5-420d-a07c-a755dc67f6b2"
```

This is equivalent to running the `download-folder` command against the root folder of the drive.

# Downloading a Folder with Files (/sdks/(clis)/ardrive-cli/(working-with-files)/downloading-a-folder-with-files)

You can download a folder from ArDrive to your local machine with the `download-folder` command.
In the following examples, assume that a folder with ID "47f5bde9-61ba-49c7-b409-1aa4a9e250f6" exists in your drive and is named "MyArDriveFolder".

```shell
ardrive download-folder -f "47f5bde9-61ba-49c7-b409-1aa4a9e250f6"
```

By specifying the `--local-path` option, you can choose the local parent folder into which the on-chain folder will be downloaded. When the parameter is omitted, its value defaults to the current working directory (i.e. `./`).

```shell
ardrive download-folder -f "47f5bde9-61ba-49c7-b409-1aa4a9e250f6" --local-path /my_ardrive_downloads/
```

The `--max-depth` parameter lets you choose a custom folder depth to download. When omitted, the entire subtree of the folder will be downloaded. In the following example, only the immediate children of the folder will be downloaded:

```shell
ardrive download-folder -f "47f5bde9-61ba-49c7-b409-1aa4a9e250f6" --max-depth 0
```

The behaviors of `--local-path` are similar to those of `cp` and `mv` in Unix systems, e.g.:

```shell
# Downloads the folder as "/existing_folder/MyArDriveFolder"
ardrive download-folder -f "47f5bde9-61ba-49c7-b409-1aa4a9e250f6" --local-path "/existing_folder"

# Downloads the folder's contents into the pre-existing "/existing_folder/MyArDriveFolder"
ardrive download-folder -f "47f5bde9-61ba-49c7-b409-1aa4a9e250f6" --local-path "/existing_folder/MyArDriveFolder"

# Downloads the folder as "/existing_folder/non_existent_folder"
ardrive download-folder -f "47f5bde9-61ba-49c7-b409-1aa4a9e250f6" --local-path "/existing_folder/non_existent_folder"

# Fails: the parent path "/non_existent_folder_1" does not exist
ardrive download-folder -f "47f5bde9-61ba-49c7-b409-1aa4a9e250f6" --local-path "/non_existent_folder_1/non_existent_folder_2"
```

# Fetching the Metadata of a File Entity (/sdks/(clis)/ardrive-cli/(working-with-files)/fetching-the-metadata-of-a-file-entity)

Simply perform the `file-info` command to retrieve the metadata of a file:

```shell
ardrive file-info --file-id "e5ebc14c-5b2d-4462-8f59-7f4a62e7770f"
```

Example output:

```shell
{
  "appName": "ArDrive-Web",
  "appVersion": "0.1.0",
  "arFS": "0.11",
  "contentType": "application/json",
  "driveId": "51062487-2e8b-4af7-bd81-4345dc28ea5d",
  "entityType": "file",
  "name": "2_depth.png",
  "txId": "CZKdjqwnmxbWchGA1hjSO5ZH--4OYodIGWzI-FmX28U",
  "unixTime": 1633625081,
  "size": 41946,
  "lastModifiedDate": 1605157729000,
  "parentFolderId": "a2c8a0cb-0ca7-4dbb-8bf8-93f75f308e63",
  "entityId": "e5ebc14c-5b2d-4462-8f59-7f4a62e7770f",
  "fileId": "e5ebc14c-5b2d-4462-8f59-7f4a62e7770f",
  "dataTxId": "Jz0WsWyAGVc0aE3UzACo-YJqG8OPrN3UucmDdt8Fbjc",
  "dataContentType": "image/png"
}
```

# IPFS CID Tagging (/sdks/(clis)/ardrive-cli/(working-with-files)/ipfs-cid-tagging)

Certain nodes on the Arweave network may be running the [IPFS+Arweave bridge](https://arweave.medium.com/arweave-ipfs-persistence-for-the-interplanetary-file-system-9f12981c36c3). Tagging your file upload transaction with its IPFS v1 CID value in the 'IPFS-Add' tag may allow you to take advantage of this system. It can also be helpful for finding data on Arweave via GQL based on its CID, as sketched below.

To include the CID tag on your **PUBLIC** file uploads, you may use the '--add-ipfs-tag' flag:

```shell
ardrive upload-file --add-ipfs-tag --local-path /path/to/file.txt --parent-folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" -w /path/to/wallet.json
```
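As a sketch of the GQL lookup mentioned above, assuming the standard `/graphql` endpoint on an Arweave gateway and the `IPFS-Add` tag name this flag applies:

```shell
# Find transaction IDs tagged with a given IPFS v1 CID via gateway GraphQL
curl -s -X POST "https://arweave.net/graphql" \
  -H "Content-Type: application/json" \
  -d '{"query": "query { transactions(tags: [{ name: \"IPFS-Add\", values: [\"{ your IPFS v1 CID }\"] }]) { edges { node { id } } } }"}'
```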
# Moving Files (/sdks/(clis)/ardrive-cli/(working-with-files)/moving-files)

Files can be moved from one folder to another within the same drive. Moving a file is simply the process of uploading a new file metadata revision with an updated Parent-Folder-ID relationship. The following command will move a file from its current location in a public drive to a new parent folder in that drive:

```shell
ardrive move-file --file-id "e5ebc14c-5b2d-4462-8f59-7f4a62e7770f" --parent-folder-id "a2c8a0cb-0ca7-4dbb-8bf8-93f75f308e63"
```

# Name Conflict Resolution on Upload (/sdks/(clis)/ardrive-cli/(working-with-files)/name-conflict-resolution-on-upload)

By default, the `upload-file` command will use the upsert behavior if existing entities are encountered in the destination folder tree that would cause naming conflicts. Expect the behaviors from the following table for each of these resolution settings:

| Source Type | Conflict at Dest | `skip` | `replace` | `upsert` (default) |
| ----------- | ---------------- | ------ | --------- | ------------------ |
| File        | None             | Insert | Insert    | Insert             |
| File        | Matching File    | Skip   | Update    | Skip               |
| File        | Different File   | Skip   | Update    | Update             |
| File        | Folder           | Skip   | Fail      | Fail               |
| Folder      | None             | Insert | Insert    | Insert             |
| Folder      | File             | Skip   | Fail      | Fail               |
| Folder      | Folder           | Re-use | Re-use    | Re-use             |

The default upsert behavior will check the destination folder for a file with a conflicting name. If no conflicts are found, it will insert (upload) the file. In the case that there is a FILE to FILE name conflict found, it will only update it if necessary. To determine if an update is necessary, upsert will compare the last modified dates of the conflicting file and the file being uploaded. When they are matching, the upload will be skipped. Otherwise the file will be updated as a new revision.

To override the upsert behavior, use the `--replace` option to always make new revisions of a file or the `--skip` option to always skip the upload on name conflicts:

```shell
ardrive upload-file --replace --local-path /path/to/file.txt --parent-folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" -w /path/to/wallet.json
```

```shell
ardrive upload-file --skip --local-path /path/to/file.txt --parent-folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" -w /path/to/wallet.json
```

Alternatively, the `upload-file` command now also supports the `--ask` conflict resolution option. This setting will always provide an interactive prompt on name conflicts that allows users to decide how to resolve each conflict found:

```shell
ardrive upload-file --ask --local-file-path /path/to/file.txt --parent-folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" -w /path/to/wallet.json

Destination folder has a file to file name conflict!

File name: 2.png
File ID: efbc0370-b69f-44d9-812c-0d272b019027
This file has a DIFFERENT last modified date

Please select how to proceed:
› - Use arrow-keys. Return to submit.
❯   Replace as new file revision
    Upload with a different file name
    Skip this file upload
```

# Progress Logging of Transaction Uploads (/sdks/(clis)/ardrive-cli/(working-with-files)/progress-logging-of-transaction-uploads)

Progress logging of transaction uploads to stderr can be enabled by setting the `ARDRIVE_PROGRESS_LOG` environment variable to `1`:

```shell
Uploading file transaction 1 of total 2 transactions...
Transaction _GKQasQX194a364Hph8Oe-oku1AdfHwxWOw9_JC1yjc Upload Progress: 0%
Transaction _GKQasQX194a364Hph8Oe-oku1AdfHwxWOw9_JC1yjc Upload Progress: 35%
Transaction _GKQasQX194a364Hph8Oe-oku1AdfHwxWOw9_JC1yjc Upload Progress: 66%
Transaction _GKQasQX194a364Hph8Oe-oku1AdfHwxWOw9_JC1yjc Upload Progress: 100%
Uploading file transaction 2 of total 2 transactions...
Transaction nA1stCdTkuf290k0qsqvmJ78isEC0bwgrAi3D8Cl1LU Upload Progress: 0%
Transaction nA1stCdTkuf290k0qsqvmJ78isEC0bwgrAi3D8Cl1LU Upload Progress: 13%
Transaction nA1stCdTkuf290k0qsqvmJ78isEC0bwgrAi3D8Cl1LU Upload Progress: 28%
Transaction nA1stCdTkuf290k0qsqvmJ78isEC0bwgrAi3D8Cl1LU Upload Progress: 42%
Transaction nA1stCdTkuf290k0qsqvmJ78isEC0bwgrAi3D8Cl1LU Upload Progress: 60%
Transaction nA1stCdTkuf290k0qsqvmJ78isEC0bwgrAi3D8Cl1LU Upload Progress: 76%
Transaction nA1stCdTkuf290k0qsqvmJ78isEC0bwgrAi3D8Cl1LU Upload Progress: 91%
Transaction nA1stCdTkuf290k0qsqvmJ78isEC0bwgrAi3D8Cl1LU Upload Progress: 100%
```

# Rename a Single File (/sdks/(clis)/ardrive-cli/(working-with-files)/rename-a-single-file)

To rename an on-chain file you can make use of the `rename-file` command. The required parameters are the file ID and the new name, as well as the owner wallet or seed phrase.

```shell
ardrive rename-file --file-id "290a3f9a-37b2-4f0f-a899-6fac983833b3" --file-name "My custom file name.txt" --wallet-file "wallet.json"
```

# Retrying a Failed File Data Transaction (Public Unbundled Files Only) (/sdks/(clis)/ardrive-cli/(working-with-files)/retrying-a-failed-file-data-transaction-public-unbundled-files-only)

Arweave data upload transactions are split into two phases: transaction posting and chunks uploading. Once the transaction post phase has been completed, you've effectively "paid" the network for storage of the data chunks that you'll send in the next stage. If your system encounters an error while posting the transaction, you can retry posting the transaction for as long as your tx_anchor is valid ([learn more about tx_anchors here][tx_anchors]). You may retry and/or resume posting chunks at any time after your transaction has posted. The ArDrive CLI allows you to take advantage of this Arweave protocol capability.

Using the CLI, when the transaction post has succeeded but the chunk upload step fails, the data transaction's ID could be lost. There are a few options to recover this ID. If the failed transaction is the most recent one sent from a wallet, the transaction ID can be recovered with the `ardrive last-tx -w /path/to/wallet` command AFTER the transaction's headers have been mined (it can take 5-10 minutes for the tx-id to become available with the last-tx approach). Other options for finding the partially uploaded transaction's ID include:

- Using an Arweave gateway GQL http endpoint to search for transactions that belong to the wallet. See this [Arweave GQL Guide][gql-guide] for more info.
- Browsing the recent transactions associated with the wallet via a block explorer tool like [ViewBlock][viewblock].

In order to re-seed the chunks for an unbundled ArFS data transaction, a user must have the data transaction ID, the original file data, and either a destination folder ID or a valid file ID for the file. Supply that information to the `retry-tx` command like so:

```shell
ardrive retry-tx --tx-id { Data Transaction ID } --parent-folder-id { Destination Folder ID } --local-path /path/to/file --wallet-file /path/to/wallet
```

**Note: The retry feature is currently only available for PUBLIC unbundled file transactions.
It is also perfectly safe to mistakenly re-seed the chunks of a healthy transaction; the transaction will remain stable and the wallet balance will not be affected.**

# Understanding Bundled Transactions (/sdks/(clis)/ardrive-cli/(working-with-files)/understanding-bundled-transactions)

The ArDrive CLI currently uses two different methods for uploading transactions to the Arweave network: standard transactions and Direct to Network (D2N) bundled transactions. By default, the CLI will send a D2N bundled transaction for any action that would result in multiple transactions. This bundling functionality is currently used on the `upload-file` and `create-drive` commands.

D2N bundled transactions come with several benefits and implications:

- Bundling saves AR and enhances ArFS reliability by sending associated ArFS transactions up as one atomic bundle.
- Bundled transactions are treated as a single data transaction by the Arweave network, but can be presented as separate transactions by the Arweave Gateway once they have been "unbundled".
- Un-bundling can take anywhere from a few minutes up to an hour. During that time, the files in the bundle will neither appear in `list` commands nor be downloadable. Similarly, they will not appear in the web app after syncs until un-bundling is complete. **This can negatively affect the accuracy of upsert operations**, so it's best to wait before retrying bulk uploads.
- Bundling reliability on the gateway side degrades once bundles reach either 500 data items (or ~250 files) or 500 MiB, so the CLI will create and upload multiple bundles as necessary, or will send files that are simply too large for reliable bundling as unbundled txs.

# Uploading a Custom Manifest (/sdks/(clis)/ardrive-cli/(working-with-files)/uploading-a-custom-manifest)

Using the custom content type feature, it is possible for users to upload their own custom manifests. The Arweave gateways use this special content type in order to identify an uploaded file as a manifest:

```shell
application/x.arweave-manifest+json
```

In addition to this content type, the manifest must also adhere to the [correct JSON structure](#manifest-json) of an Arweave manifest. A user can create their own manifest from scratch, or start by piping a generated manifest to a JSON file and editing it to their specifications:

```shell
ardrive create-manifest -w /path/to/wallet -f "6c312b3e-4778-4a18-8243-f2b346f5e7cb" --dry-run | jq '{manifest}.manifest' > my-custom-manifest.json
```

After editing the generated manifest, simply perform an `upload-file` command with the custom Arweave manifest content type to any PUBLIC folder:

```shell
ardrive upload-file --content-type "application/x.arweave-manifest+json" --local-path my-custom-manifest.json --parent-folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" -w /path/to/wallet.json
```

The returned `dataTxId` field on the created `file` entity will be the endpoint that the manifest can be found on Arweave, just as explained in the [manifest sections](#uploading-manifests) above:

```shell
https://arweave.net/{dataTxId}
https://arweave.net/{dataTxId}/custom-file-1
https://arweave.net/{dataTxId}/custom-file-2
```
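Once the manifest has been unbundled and indexed by a gateway, a quick way to sanity-check it is to request one of its mapped paths. A sketch, substituting the real `dataTxId` returned above:

```shell
# Should return the file data mapped to this path by the manifest
curl -s "https://arweave.net/{dataTxId}/custom-file-1"
```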
# Uploading a Folder with Files (Bulk Upload) (/sdks/(clis)/ardrive-cli/(working-with-files)/uploading-a-folder-with-files-bulk-upload)

Users can perform a bulk upload by using the `upload-file` command on a target folder. The command will reconstruct the local folder hierarchy as ArFS folders on the permaweb and upload each file into its corresponding folder:

```shell
ardrive upload-file --local-path /path/to/folder --parent-folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" -w /path/to/wallet.json
```

# Uploading a Non-Bundled Transaction (NOT RECOMMENDED) (/sdks/(clis)/ardrive-cli/(working-with-files)/uploading-a-non-bundled-transaction-not-recommended)

While not recommended, the CLI does provide the option to forcibly send all transactions as standard transactions rather than attempting to bundle them together. To do this, simply add the `--no-bundle` flag to the `upload-file` or `create-drive` command:

```shell
ardrive upload-file --no-bundle --local-path /path/to/file --parent-folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" -w /path/to/wallet.json
```

# Uploading a Single File (/sdks/(clis)/ardrive-cli/(working-with-files)/uploading-a-single-file)

To upload a file, you'll need a parent folder ID, the path to the file you want to upload, and the path to your wallet:

```shell
ardrive upload-file --local-path /path/to/file.txt --parent-folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" -w /path/to/wallet.json
```

Example output:

```shell
{
  "created": [
    {
      "type": "file",
      "entityName": "file.txt",
      "entityId": "6613395a-cf19-4420-846a-f88b7b765c05",
      "dataTxId": "l4iNWyBapfAIj7OU-nB8z9XrBhawyqzs5O9qhk-3EnI",
      "metadataTxId": "YfdDXUyerPCpBbGTm_gv_x5hR3tu5fnz8bM-jPL__JE",
      "bundledIn": "1zwdfZAIV8E26YjBs2ZQ4xjjP_1ewalvRgD_GyYw7f8",
      "sourceUri": "file:///path/to/file.txt"
    },
    {
      "type": "bundle",
      "bundleTxId": "1zwdfZAIV8E26YjBs2ZQ4xjjP_1ewalvRgD_GyYw7f8"
    }
  ],
  "tips": [
    {
      "txId": "1zwdfZAIV8E26YjBs2ZQ4xjjP_1ewalvRgD_GyYw7f8",
      "recipient": {
        "address": "3mxGJ4xLcQQNv6_TiKx0F0d5XVE0mNvONQI5GZXJXkt"
      },
      "winston": "10000000"
    }
  ],
  "fees": {
    "1zwdfZAIV8E26YjBs2ZQ4xjjP_1ewalvRgD_GyYw7f8": 42819829
  }
}
```

NOTE: To upload to the root of a drive, specify its root folder ID as the parent folder ID for the upload destination. You can retrieve it like so:

```shell
ardrive drive-info -d "c7f87712-b54e-4491-bc96-1c5fa7b1da50" | jq -r '.rootFolderId'
```

# Uploading Files with Custom MetaData (/sdks/(clis)/ardrive-cli/(working-with-files)/uploading-files-with-custom-metadata)

The ArDrive CLI has the capability of attaching custom metadata to ArFS File and Folder MetaData Transactions during the `upload-file` command. This metadata can be applied to either the GQL tags on the MetaData Transaction and/or into the MetaData Transaction's Data JSON.

All custom metadata applied must ultimately adhere to the following JSON shapes:

```ts
// GQL Tags
type CustomMetaDataGqlTags = Record<string, string | string[]>;

// Data JSON Fields
type CustomMetaDataJsonFields = Record<string, JsonSerializable>;

type JsonSerializable =
  | string
  | number
  | boolean
  | null
  | { [member: string]: JsonSerializable }
  | JsonSerializable[];
```

e.g:

```shell
{ IPFS-Add: 'MY_HASH' }
{ 'Custom Name': ['Val 1', 'Val 2'] }
```

When the custom metadata is attached to the MetaData Transaction's GQL tags, it will become visible on any Arweave GQL gateway and also third-party tools that read GQL data. When these tags are added to the MetaData Transaction's Data JSON, they can be read by downloading the JSON data directly from `https://arweave.net/METADATA_TX_ID`.
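For example, a metadata transaction's Data JSON can be fetched and pretty-printed straight from a gateway; a small sketch, substituting the real metadata transaction ID:

```shell
curl -s "https://arweave.net/{ metadata tx id }" | jq
```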
To add this custom metadata to your file metadata transactions, CLI users can pass custom metadata with these parameters:

- `--metadata-file path/to/json/schema`
- `--metadata-json '{"key": "val", "key-2": true, "key-3": 420, "key-4": ["more", 1337]}'`
- `--metadata-gql-tags "Tag-Name" "Tag Val"`

The `--metadata-file` parameter will accept a file path to a JSON file containing custom metadata:

```shell
ardrive upload-file --metadata-file path/to/metadata/json # ...
```

This JSON schema object must contain instructions on where to put this metadata, using the `metaDataJson` and `metaDataGqlTags` keys. e.g:

```json
{
  "metaDataJson": { "Tag-Name": ["Value-1", "Value-2"] },
  "metaDataGqlTags": { "GQL Tag Name": "Tag Value" }
}
```

The `--metadata-gql-tags` parameter accepts an array of string values to be applied to the MetaData Tx GQL Tags. This method of CLI input does not support multiple tag values for a given tag name, and the input must be an EVEN number of string values. (Known bug: String values starting with the `"-"` character are currently not supported. Use the `--metadata-file` parameter instead.) e.g:

```shell
upload-file --metadata-gql-tags "Custom Tag Name" "Custom Value" # ...
```

And the `--metadata-json` parameter will accept a stringified JSON input. It will apply all declared JSON fields directly to the MetaData Tx's Data JSON. e.g:

```shell
upload-file --metadata-json '{ "json field": "value", "another fields": false }' # ...
```

Custom metadata applied to files and/or folders during the `upload-file` command will be read back through all existing read commands. e.g:

```shell
ardrive file-info -f 067c4008-9cbe-422e-b697-05442f73da2b
{
  "appName": "ArDrive-CLI",
  "appVersion": "1.17.0",
  "arFS": "0.11",
  "contentType": "application/json",
  "driveId": "967215ca-a489-494b-97ec-0dd428d7be34",
  "entityType": "file",
  "name": "unique-name-9718",
  "txId": "sxg8bNu6_bbaHkJTxAINVVoz_F-LiFe6s7OnxzoJJk4",
  "unixTime": 1657655070,
  "size": 262148,
  "lastModifiedDate": 1655409872705,
  "dataTxId": "ublZcIff77ejl3m0uEA8lXEfnTWmSBOFoz-HibqKeyk",
  "dataContentType": "text/plain",
  "parentFolderId": "97bc4fb5-aca4-4ffe-938f-1285153d98ca",
  "entityId": "067c4008-9cbe-422e-b697-05442f73da2b",
  "fileId": "067c4008-9cbe-422e-b697-05442f73da2b",
  "IPFS-Add": "MY_HASH",
  "Tag-1": "Val",
  "Tag-2": "Val",
  "Tag-3": "Val",
  "Boost": "1.05"
}
```

#### Applying Unique Custom MetaData During Bulk Workflows

With some custom scripting and the `--metadata-file` parameter, the ArDrive CLI can be used to apply custom metadata to each file individually in a bulk workflow. For example, if you choose a numbered file naming pattern you can make use of a `for` loop:

```shell
for i in {1..5}
do
  ardrive upload-file -F f0c58c11-430c-4383-8e54-4d864cc7e927 --local-path "../uploads/test-file-$i.txt" -w "/path/to/wallet.json" --metadata-file "../custom/metadata-$i.json" --dry-run > "file-result-$i.json"
done
```

# Uploading From a Remote URL (/sdks/(clis)/ardrive-cli/(working-with-files)/uploading-from-a-remote-url)

You can upload a file from an existing URL using the `--remote-path` flag. This must be used in conjunction with `--dest-file-name`. You can set a custom content type using the `--content-type` flag, but if this isn't used, the app will use the content type from the response header of the request for the remote data.
```shell
ardrive upload-file --remote-path "https://url/to/file" --parent-folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" -d "example.jpg" -w /path/to/wallet.json
```

# Uploading Manifests (/sdks/(clis)/ardrive-cli/(working-with-files)/uploading-manifests)

[Arweave Path Manifests][arweave-manifests] are special `.json` files that instruct Arweave Gateways to map file data associated with specific, unique transaction IDs to customized, hosted paths relative to that of the manifest file itself. So if, for example, your manifest file had an arweave.net URL like:

```shell
https://arweave.net/{manifest tx id}
```

Then, all the mapped transactions and paths in the manifest file would be addressable at URLs like:

```shell
https://arweave.net/{manifest tx id}/foo.txt
https://arweave.net/{manifest tx id}/bar/baz.png
```

ArDrive supports the creation of these Arweave manifests using any of your PUBLIC folders. The generated manifest paths will be links to each of the file entities within the specified folder. The manifest file entity will be created at the root of the folder.

To create a manifest of an entire public drive, specify the root folder of that drive:

```shell
ardrive create-manifest -f "bc9af866-6421-40f1-ac89-202bddb5c487" -w "/path/to/wallet"
```

You can also create a manifest of a folder's file entities at a custom depth by using the `--max-depth` option:

```shell
ardrive create-manifest --max-depth 0 -f "867228d8-4413-4c0e-a499-e1decbf2ea38" -w "/path/to/wallet"
```

Creating a `.json` file of your manifest links output can be accomplished here with some `jq` parsing and piping to a file:

```shell
ardrive create-manifest -w /path/to/wallet -f "6c312b3e-4778-4a18-8243-f2b346f5e7cb" | jq '{links}' > links.json
```

If you'd like to preview the contents of your manifest before uploading, you can perform a dry run and do some lightweight post processing to isolate the data:

```shell
ardrive create-manifest -w /path/to/wallet -f "6c312b3e-4778-4a18-8243-f2b346f5e7cb" --dry-run | jq '{manifest}.manifest'
```

```json
{
  "manifest": "arweave/paths",
  "version": "0.1.0",
  "index": { "path": "index.html" },
  "paths": {
    "hello_world.txt": { "id": "Y7GFF8r9y0MEU_oi1aZeD87vrmai97JdRQ2L0cbGJ68" },
    "index.html": { "id": "pELonjVebHyBsdxVymvxbGTmHD96v9PuuUXj8GUHGoY" }
  }
}
```

The manifest data transaction is tagged with a unique content-type, `application/x.arweave-manifest+json`, which tells the gateway to treat this file as a manifest. The manifest file itself is a `.json` file that holds the paths (the data transaction ids) to each file within the specified folder.

When your folder is later changed by adding files or updating them with new revisions, the original manifest will NOT be updated on its own. A manifest is a permanent record of your files in their current state. However, creating a subsequent manifest with the same manifest name will create a new revision of that manifest in its new current state. Manifests follow the same name conflict resolution as outlined for files above (upsert by default).

#### Hosting a Webpage with Manifest

When creating a manifest, it is possible to host a webpage or web app. You can do this by creating a manifest on a folder that has an `index.html` file in its root. Using generated build folders from popular frameworks works as well.

One requirement to note here is that the `href=` paths from your generated `index.html` file must not have a leading `/`.
This means that the manifest will not resolve a path of `/dist/index.js` but it will resolve `dist/index.js` or `./dist/index.js`.

As an example, here is a flow of creating a React app and hosting it with an ArDrive Manifest. First, generate a React app:

```shell
yarn create react-app my-app
```

Next, add this field to the generated `package.json` so that the paths will resolve correctly:

```json
"homepage": ".",
```

Then, create an optimized production build from within the app's directory:

```shell
yarn build
```

Now, we can create and upload that produced build folder on ArDrive to any of your existing ArFS folder entities:

```shell
ardrive upload-file -l "/build" -w "/path/to/wallet" --parent-folder-id "bc9af866-6421-40f1-ac89-202bddb5c487"
```

And finally, create the manifest using the generated Folder ID from the build folder creation:

```shell
ardrive create-manifest -f "41759f05-614d-45ad-846b-63f3767504a4" -w "/path/to/wallet"
```

In the return output, the top link will be a link to the deployed web app:

```shell
"links": [
  "https://arweave.net/0MK68J8TqGhaaOpPe713Zn0jdpczMt2NGS2CtRYiuAg",
  "https://arweave.net/0MK68J8TqGhaaOpPe713Zn0jdpczMt2NGS2CtRYiuAg/asset-manifest.json",
  "https://arweave.net/0MK68J8TqGhaaOpPe713Zn0jdpczMt2NGS2CtRYiuAg/favicon.ico",
  "https://arweave.net/0MK68J8TqGhaaOpPe713Zn0jdpczMt2NGS2CtRYiuAg/index.html",
  # ...
```

This is effectively hosting a web app with ArDrive. Check out the ArDrive Price Calculator React App hosted as an [ArDrive Manifest][example-manifest-webpage].

# Uploading Multiple Files (/sdks/(clis)/ardrive-cli/(working-with-files)/uploading-multiple-files)

To upload an arbitrary number of files or folders, pass a space-separated list of paths to `--local-paths`:

```shell
ardrive upload-file -w wallet.json -F "6939b9e0-cc98-42cb-bae0-5888eca78885" --local-paths ./image.png ~/backups/ ../another_file.txt
ardrive upload-file -w wallet.json -F "6939b9e0-cc98-42cb-bae0-5888eca78885" --local-paths ./*.json
```

# Uploading With a Custom Content Type (/sdks/(clis)/ardrive-cli/(working-with-files)/uploading-with-a-custom-content-type)

Each file uploaded to the Arweave network receives a `"Content-Type"` GraphQL tag that contains the MIME type for the file. The gateway will use this content type to determine how to serve that file's data transaction at the `arweave.net/{data tx id}` endpoint.

By default, the CLI will attempt to derive this content type from the file extension of the provided file. In most cases, the content type that is derived will be correct and the gateway will properly serve the file.

The CLI also provides the option for users to upload files with a custom content type using the `--content-type` flag:

```shell
ardrive upload-file --content-type "application/json" --local-path /path/to/file --parent-folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" -w /path/to/wallet.json
```

It is currently possible to set this value to any given string, but the gateway will still only serve valid content types. Check out this list of commonly used MIME types to ensure you're providing a valid content type: [Common MIME types][mozilla-mime-types].

Note: In the case of multi-file uploads or recursive folder uploads, setting this `--content-type` flag will set the provided custom content type on EVERY file entity within a given upload.
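One way to confirm what the gateway will actually serve for an uploaded file is to inspect the response headers for its data transaction. A sketch, substituting the real data transaction ID:

```shell
curl -sI "https://arweave.net/{ data tx id }" | grep -i "content-type"
```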
# Creating Folders (/sdks/(clis)/ardrive-cli/(working-with-folders)/creating-folders)

Creating folders manually is straightforward:

```shell
ardrive create-folder --parent-folder-id "63153bb3-2ca9-4d42-9106-0ce82e793321" --folder-name "My Awesome Folder" -w /path/to/wallet.json
```

Example output:

```shell
{
  "created": [
    {
      "type": "folder",
      "metadataTxId": "AYFMBVmwqhbg9y5Fbj3Iasy5oxUqhauOW7PcS1sl4Dk",
      "entityId": "d1b7c514-fb12-4603-aad8-002cf63015d3",
      "key": "yHdCjpCKD2cuhQcKNx2d/XF5ReEjoKfZVqKunlCnPEk",
      "entityName": "My Awesome Folder"
    }
  ],
  "tips": [],
  "fees": {
    "AYFMBVmwqhbg9y5Fbj3Iasy5oxUqhauOW7PcS1sl4Dk": 1378052
  }
}
```

Note: Folders can also be created by supplying a folder as the `--local-path` of an `upload-file` command. In this case, the folder hierarchy on the local disk will be reconstructed on chain during the course of the recursive bulk upload.

# Listing Contents of a Folder (/sdks/(clis)/ardrive-cli/(working-with-folders)/listing-contents-of-a-folder)

Similar to drives, the `list-folder` command can be used to fetch the metadata of each entity within a folder. But by default, the command will fetch only the immediate children of that folder (`--max-depth 0`):

```shell
ardrive list-folder --parent-folder-id "29850ab7-56d4-4e1f-a5be-cb86d5513940"
```

Example output:

```shell
[
  {
    "appName": "ArDrive-CLI",
    "appVersion": "2.0",
    "arFS": "0.11",
    "contentType": "application/json",
    "driveId": "01ea6ba3-9e58-42e7-899d-622fd110211a",
    "entityType": "folder",
    "name": "mytestfolder",
    "txId": "HYiKyfLwY7PT9NleTQoTiM_-qPVUwf4ClDhx1sjUAEU",
    "unixTime": 1635102772,
    "parentFolderId": "29850ab7-56d4-4e1f-a5be-cb86d5513940",
    "entityId": "03df2929-1440-4ab4-bbf0-9dc776e1ed96",
    "path": "/My Public Folder/mytestfolder",
    "txIdPath": "/09_x0X2eZ3flXXLS72WdTDq6uaa5g2LjsT-QH1m0zhU/HYiKyfLwY7PT9NleTQoTiM_-qPVUwf4ClDhx1sjUAEU",
    "entityIdPath": "/29850ab7-56d4-4e1f-a5be-cb86d5513940/03df2929-1440-4ab4-bbf0-9dc776e1ed96"
  },
  {
    "appName": "ArDrive-CLI",
    "appVersion": "2.0",
    "arFS": "0.11",
    "contentType": "application/json",
    "driveId": "01ea6ba3-9e58-42e7-899d-622fd110211a",
    "entityType": "folder",
    "name": "Super sonic public folder",
    "txId": "VUk1B_vo1va2-EHLtqjsotzy0Rdn6lU4hQo3RD2xoTI",
    "unixTime": 1631283259,
    "parentFolderId": "29850ab7-56d4-4e1f-a5be-cb86d5513940",
    "entityId": "452c6aec-43dc-4015-9abd-20083068d432",
    "path": "/My Public Folder/Super sonic sub folder",
    "txIdPath": "/09_x0X2eZ3flXXLS72WdTDq6uaa5g2LjsT-QH1m0zhU/VUk1B_vo1va2-EHLtqjsotzy0Rdn6lU4hQo3RD2xoTI",
    "entityIdPath": "/29850ab7-56d4-4e1f-a5be-cb86d5513940/452c6aec-43dc-4015-9abd-20083068d432"
  },
  {
    "appName": "ArDrive-CLI",
    "appVersion": "2.0",
    "arFS": "0.11",
    "contentType": "application/json",
    "driveId": "01ea6ba3-9e58-42e7-899d-622fd110211a",
    "entityType": "file",
    "name": "test-number-twelve.txt",
    "txId": "429zBqnd7ZBNzgukaix26RYz3g5SeXCCo_oIY6CPZLg",
    "unixTime": 1631722234,
    "size": 47,
    "lastModifiedDate": 1631722217028,
    "dataTxId": "vA-BxAS7I6n90cH4Fzsk4cWS3EOPb1KOhj8yeI88dj0",
    "dataContentType": "text/plain",
    "parentFolderId": "29850ab7-56d4-4e1f-a5be-cb86d5513940",
    "entityId": "e5948327-d6de-4acf-a6fe-e091ecf78d71",
    "path": "/My Public Folder/test-number-twelve.txt",
    "txIdPath": "/09_x0X2eZ3flXXLS72WdTDq6uaa5g2LjsT-QH1m0zhU/429zBqnd7ZBNzgukaix26RYz3g5SeXCCo_oIY6CPZLg",
    "entityIdPath": "/29850ab7-56d4-4e1f-a5be-cb86d5513940/e5948327-d6de-4acf-a6fe-e091ecf78d71"
  },
  {
    "appName": "ArDrive-CLI",
    "appVersion": "2.0",
    "arFS": "0.11",
    "contentType": "application/json",
    "driveId": "01ea6ba3-9e58-42e7-899d-622fd110211a",
    "entityType": "file",
"name": "wonderful-test-file.txt", "txId": "6CokwlzB81Fx7dq-lB654VM0XQykdU6eYohDmEJ2gk4", "unixTime": 1631671275, "size": 23, "lastModifiedDate": 1631283389232, "dataTxId": "UP8THwA_1gvyRqNRqYmTpWvU4-UzNWBN7SiX_AIihg4", "dataContentType": "text/plain", "parentFolderId": "29850ab7-56d4-4e1f-a5be-cb86d5513940", "entityId": "3274dae9-3487-41eb-94d5-8d5d3d8bc343", "path": "/My Public Folder/wonderful-test-file.txt", "txIdPath": "/09_x0X2eZ3flXXLS72WdTDq6uaa5g2LjsT-QH1m0zhU/6CokwlzB81Fx7dq-lB654VM0XQykdU6eYohDmEJ2gk4", "entityIdPath": "/29850ab7-56d4-4e1f-a5be-cb86d5513940/3274dae9-3487-41eb-94d5-8d5d3d8bc343" } ] ``` To list further than the immediate children, you can make use of the flags: `--all` and `--max-depth`. ```shell ardrive list-folder --parent-folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" --all ardrive list-folder --parent-folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" --max-depth 2 ``` In the case of private entitites, the `--with-keys` flag will make the command to include the keys in the output. ```shell ardrive list-folder --parent-folder-id "1b027047-4cfc-4eee-88a8-9af694f660c0" -w /my/wallet.json --with-keys ``` # Moving Folders (/sdks/(clis)/ardrive-cli/(working-with-folders)/moving-folders) Moving a folder is as simple as supplying a new parent folder ID. Note that naming collisions among entities within a folder are not allowed. ```shell ardrive move-folder --folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" --parent-folder-id "29850ab7-56d4-4e1f-a5be-cb86d5513921" -w /path/to/wallet.json ``` # Renaming Folders (/sdks/(clis)/ardrive-cli/(working-with-folders)/renaming-folders) In order to rename a folder you must provide a name different from its current one, and it must not create naming conflicts with its sibling entities. ```shell ardrive rename-folder --folder-id "568d5eba-dbf3-4a49-8129-1c58f7fd35bc" --folder-name "Folder with cool stuff" -w "./wallet.json" ``` # Viewing Folder Metadata (/sdks/(clis)/ardrive-cli/(working-with-folders)/viewing-folder-metadata) To view the metadata of a folder, users can use the `folder-info` command: ```shell ardrive folder-info --folder-id "9af694f6-4cfc-4eee-88a8-1b02704760c0" ``` # ArFS (/sdks/(clis)/ardrive-cli/arfs) [ArFS] is a data modeling, storage, and retrieval protocol designed to emulate common file system operations and to provide aspects of mutability to your data hierarchy on [Arweave]'s otherwise permanent, immutable data storage blockweave. # CLI Help (/sdks/(clis)/ardrive-cli/cli-help) Learn to use any command: ```shell ardrive --help ``` # CLI Version (/sdks/(clis)/ardrive-cli/cli-version) You can print out the version by running any of: ```shell ardrive --version ardrive -V ``` # Data Portability (/sdks/(clis)/ardrive-cli/data-portability) Data uploaded via the ArDrive CLI, once indexed by Arweave's Gateways and sufficiently seeded across enough nodes on the network, can be accessed via all other ArDrive applications including the [ArDrive Web application][ardrive-web-app] at https://app.ardrive.io. All transactions successfully executed by ArDrive can always be inspected in the [Viewblock blockchain explorer]. # ArDrive CLI (/sdks/(clis)/ardrive-cli) **For AI and LLM users**: Access the complete ArDrive CLI documentation in plain text format at [llm.txt](/sdks/(clis)/llm.txt) for easy consumption by AI agents and language models. # ArDrive CLI Please refer to the [source code](https://github.com/ardriveapp/ardrive-cli) for SDK details. 
# Intended Audience (/sdks/(clis)/ardrive-cli/intended-audience)

This tool is intended for use by:

- ArDrive power users with advanced workflows and resource efficiency in mind: bulk uploaders, those with larger storage demand, game developers, NFT creators, storage/db admins, etc.
- Automation tools
- Services
- Terminal aficionados
- Extant and aspiring cypherpunks

For deeper integrations with the [ArDrive] platform, consider using the [ArDrive Core][ardrive-core] (Node) library's configurable and intuitive class interfaces directly within your application.

To simply install the latest version of the CLI to your local system and get started, follow the [Quick Start](#quick-start) instructions. To build and/or develop the CLI from source, follow the [Build and Run from Source](#build-and-run-from-source) instructions. In either case, be sure to satisfy the requirements in the [Prerequisites](#prerequisites) section.

# Limitations (/sdks/(clis)/ardrive-cli/limitations)

**Number of files in a bulk upload:** Theoretically unlimited

**Max individual file size**: 2GB (Node.js limitation)

**Max file name length**: 255 bytes

**Max ANS-104 bundled transaction size:** 500 MiB per bundle. App will handle creating multiple bundles.

**Max ANS-104 data item counts per bundled transaction:** 250 Files per bundle (500 Data Items).

# Using the CLI

# Wallet Operations (/sdks/(clis)/ardrive-cli/wallet-operations)

Browsing of ArDrive public data is possible without the need for an [Arweave wallet][kb-wallets]. However, for all write operations, or read operations without encryption/decryption keys, you'll need a wallet. As you utilize the CLI, you can use either your wallet file or your seed phrase interchangeably. Consider the security implications of each approach for your particular use case carefully.

If at any time you'd like to generate a new wallet altogether, start by generating a new seed phrase. And if you'd like to use that seed phrase in the form of a wallet file, or if you'd like to recover an existing wallet via its seed phrase, use either or both of the following commands:

```shell
ardrive generate-seedphrase
"this is an example twelve word seed phrase that you could use"

ardrive generate-wallet -s "this is an example twelve word seed phrase that you could use" > /path/to/wallet/file.json
```

Public attributes of Arweave wallets can be retrieved via their 43-character Arweave wallet address. You can retrieve the wallet address associated with [your wallet file or 12-word seed phrase][kb-wallets] (e.g. wallets generated by [ArConnect][arconnect]) like so:

```shell
ardrive get-address -w /path/to/wallet/file.json
ardrive get-address -s "this is an example twelve word seed phrase that you could use"
HTTn8F92tR32N8wuo-NIDkjmqPknrbl10JWo5MZ9x2k
```

You'll need AR in your wallet for any write operations you perform in ArDrive. You can always check your wallet balance (in both AR and Winston units) by performing:

```shell
ardrive get-balance -w /path/to/wallet/file.json
ardrive get-balance -a "HTTn8F92tR32N8wuo-NIDkjmqPknrbl10JWo5MZ9x2k"
1500000000000 Winston
1.5 AR
```

If, at any time, you need to send AR out of your wallet to another wallet address, you may perform:

```shell
ardrive send-ar -w /path/to/wallet/file.json --dest-address "HTTn8F92tR32N8wuo-NIDkjmqPknrbl10JWo5MZ9x2k" --ar-amount 2.12345
```

# ARIO Integrations (/sdks/ar-io-sdk/(ant-contracts)/ario-integrations)

#### releaseName()

Releases a name from the current owner and makes it available for purchase on the ARIO contract.
The name must be permanently owned by the releasing wallet. If the name is purchased within the recently returned name period (14 epochs), 50% of the purchase amount will be distributed to the ANT owner at the time of release. If there are no purchases within the recently returned name period, the name can be re-registered by anyone for the normal fee.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.releaseName({
  name: 'permalink',
  arioProcessId: ARIO_MAINNET_PROCESS_ID, // releases the name owned by the ANT and sends it to recently returned names on the ARIO contract
});
```

#### reassignName()

Reassigns a name to a new ANT. This can only be done by the current owner of the ANT.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.reassignName({
  name: 'ardrive',
  arioProcessId: ARIO_MAINNET_PROCESS_ID,
  antProcessId: NEW_ANT_PROCESS_ID, // the new ANT process id that will take over ownership of the name
});
```

#### approvePrimaryNameRequest()

Approves a primary name request for a given name or address.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.approvePrimaryNameRequest({
  name: 'arns',
  address: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3', // must match the request initiator address
  arioProcessId: ARIO_MAINNET_PROCESS_ID, // the ARIO process id to use for the request
});
```

#### removePrimaryNames()

Removes primary names from the ANT process.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.removePrimaryNames({
  names: ['arns', 'test_arns'], // any primary names associated with a base name controlled by this ANT will be removed
  arioProcessId: ARIO_MAINNET_PROCESS_ID,
  notifyOwners: true, // if true, the owners of the removed names will be sent AO messages to notify them of the removal
});
```

# Balances (/sdks/ar-io-sdk/(ant-contracts)/balances)

#### getBalances()

Returns all token balances for the ANT.

```typescript
const balances = await ant.getBalances();
```

**Output:**

```json
{
  "ccp3blG__gKUvG3hsGC2u06aDmqv4CuhuDJGOIg0jw4": 1,
  "aGzM_yjralacHIUo8_nQXMbh9l1cy0aksiL_x9M359f": 0
}
```

#### getBalance()

Returns the balance of a specific address.

```typescript
const balance = await ant.getBalance({
  address: 'ccp3blG__gKUvG3hsGC2u06aDmqv4CuhuDJGOIg0jw4',
});
```

**Output:**

```json
1
```

# Controllers (/sdks/ar-io-sdk/(ant-contracts)/controllers)

#### addController()

Adds a new controller to the list of approved controllers on the ANT. Controllers can set records and change the ticker and name of the ANT process.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.addController(
  { controller: 'aGzM_yjralacHIUo8_nQXMbh9l1cy0aksiL_x9M359f' },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```
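After mutating the controller list, the read method documented later in this guide can confirm the result; a small sketch, assuming the same initialized `ant` client:

```typescript
// Confirm the controller list after an add/remove (sketch; see getControllers() below)
const controllers = await ant.getControllers();
console.log(controllers); // e.g. ["aGzM_yjralacHIUo8_nQXMbh9l1cy0aksiL_x9M359f"]
```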
#### removeController()

Removes a controller from the list of approved controllers on the ANT.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.removeController(
  { controller: 'aGzM_yjralacHIUo8_nQXMbh9l1cy0aksiL_x9M359f' },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

# Initialize (/sdks/ar-io-sdk/(ant-contracts)/initialize)

#### init()

Factory function that creates a read-only or writeable client. By providing a `signer`, additional write APIs that require signing, like `setRecord` and `transfer`, are available. By default, a read-only client is returned and no write APIs are available.

```typescript
// in a browser environment with ArConnect
const ant = ANT.init({
  signer: new ArConnectSigner(window.arweaveWallet, Arweave.init({})),
  processId: 'bh9l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM'
});

// in a node environment
const ant = ANT.init({
  signer: new ArweaveSigner(JWK),
  processId: 'bh9l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM'
});
```

# Metadata (/sdks/ar-io-sdk/(ant-contracts)/metadata)

#### setName()

Sets the name of the ANT process.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.setName(
  { name: 'My ANT' },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

#### setTicker()

Sets the ticker of the ANT process.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.setTicker(
  { ticker: 'ANT-NEW-TICKER' },
  // optional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

#### setDescription()

Sets the description of the ANT process.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.setDescription(
  { description: 'A friendly description of this ANT' },
  // optional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

#### setKeywords()

Sets the keywords of the ANT process.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.setKeywords(
  { keywords: ['Game', 'FPS', 'AO'] },
  // optional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

#### getLogo()

Returns the TX ID of the logo set for the ANT.

```typescript
const logoTxId = await ant.getLogo();
```

#### setLogo()

Sets the Logo of the ANT - logo should be an Arweave transaction ID.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.setLogo(
  { txId: 'U7RXcpaVShG4u9nIcPVmm2FJSM5Gru9gQCIiRaIPV7f' },
  // optional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```
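The metadata written by these setters can be read back with the state APIs covered later in this guide; a small sketch using `getInfo()`, assuming the same initialized `ant` client:

```typescript
// Read back the metadata fields set above (fields match the getInfo() output shown later)
const info = await ant.getInfo();
console.log(info.name, info.ticker, info.description, info.keywords);
```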
# Records (/sdks/ar-io-sdk/(ant-contracts)/records)

#### setBaseNameRecord()

Adds or updates the base name record for the ANT. This is the top-level name of the ANT (e.g. ardrive.ar.io). Supports undername ownership delegation and metadata.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
// get the ant for the base name
const arnsRecord = await ario.getArNSRecord({ name: 'ardrive' });
const ant = await ANT.init({ processId: arnsRecord.processId });

// Basic usage
const { id: txId } = await ant.setBaseNameRecord({
  transactionId: '432l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM',
  ttlSeconds: 3600,
});

// With ownership delegation and metadata
const { id: txId } = await ant.setBaseNameRecord({
  transactionId: '432l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM',
  ttlSeconds: 3600,
  owner: 'user-wallet-address-123...', // delegate ownership to another address
  displayName: 'ArDrive', // display name
  logo: 'logo-tx-id-123...', // logo transaction ID
  description: 'Decentralized storage application',
  keywords: ['storage', 'decentralized', 'web3'],
});

// ardrive.ar.io will now resolve to the provided transaction id and include metadata
```

#### setUndernameRecord()

Adds or updates an undername record for the ANT. An undername is appended to the base name of the ANT (e.g. dapp_ardrive.ar.io). Supports undername ownership delegation and metadata.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

> Records, or `undernames`, are configured with the `transactionId` - the arweave transaction id the record resolves to - and `ttlSeconds`, the Time To Live in the cache of client applications.

```typescript
const arnsRecord = await ario.getArNSRecord({ name: 'ardrive' });
const ant = await ANT.init({ processId: arnsRecord.processId });

// Basic usage
const { id: txId } = await ant.setUndernameRecord(
  {
    undername: 'dapp',
    transactionId: '432l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM',
    ttlSeconds: 900,
  },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);

// With ownership delegation and metadata
const { id: txId } = await ant.setUndernameRecord(
  {
    undername: 'alice',
    transactionId: '432l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM',
    ttlSeconds: 900,
    owner: 'alice-wallet-address-123...', // delegate ownership to Alice
    displayName: "Alice's Site", // display name
    logo: 'avatar-tx-id-123...', // avatar/logo transaction ID
    description: 'Personal portfolio and blog',
    keywords: ['portfolio', 'personal', 'blog'],
  },
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);

// dapp_ardrive.ar.io will now resolve to the provided transaction id
// alice_ardrive.ar.io will be owned by Alice and include metadata
```

#### removeUndernameRecord()

Removes an undername record from the ANT process.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.removeUndernameRecord(
  { undername: 'dapp' },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);

// dapp_ardrive.ar.io will no longer resolve to the provided transaction id
```

#### setRecord()

Deprecated: Use `setBaseNameRecord` or `setUndernameRecord` instead.

Adds or updates a record for the ANT process. The `undername` parameter is used to specify the record name. Use `@` for the base name record.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

> Records, or `undernames`, are configured with the `transactionId` - the arweave transaction id the record resolves to - and `ttlSeconds`, the Time To Live in the cache of client applications.
```typescript
const { id: txId } = await ant.setRecord(
  {
    undername: '@',
    transactionId: '432l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM',
    ttlSeconds: 3600,
  },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

#### removeRecord()

Deprecated: Use `removeUndernameRecord` instead.

Removes a record from the ANT process.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const arnsRecord = await ario.getArNSRecord({ name: 'ardrive' });
const ant = await ANT.init({ processId: arnsRecord.processId });

const { id: txId } = await ant.removeRecord(
  { undername: 'dapp' },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);

// dapp_ardrive.ar.io will no longer resolve to the provided transaction id
```

# Spawn (/sdks/ar-io-sdk/(ant-contracts)/spawn)

#### spawn()

Spawns a new ANT (Arweave Name Token) process. This static function creates a new ANT process on the AO network and returns the process ID.

_Note: Requires `signer` to be provided to sign the spawn transaction._

```typescript
const processId = await ANT.spawn({
  signer: new ArweaveSigner(jwk),
  state: {
    name: 'My ANT',
    ticker: 'MYANT',
    description: 'My custom ANT token',
  },
});

// Using a custom module ID
const processId = await ANT.spawn({
  signer: new ArweaveSigner(jwk),
  module: 'FKtQtOOtlcWCW2pXrwWFiCSlnuewMZOHCzhulVkyqBE', // Custom module ID
  state: {
    name: 'My Custom Module ANT',
    ticker: 'CUSTOM',
    description: 'ANT using a specific module version',
  },
});
```

**CLI Usage:**

```bash
ar.io spawn-ant --wallet-file wallet.json --name "My ANT" --ticker "MYANT"

ar.io spawn-ant --wallet-file wallet.json --module FKtQtOOtlcWCW2pXrwWFiCSlnuewMZOHCzhulVkyqBE --name "My Custom ANT" --ticker "CUSTOM"
```

**Parameters:**

- `signer: AoSigner` - The signer used to authenticate the spawn transaction
- `module?: string` - Optional module ID to use; if not provided, gets latest from ANT registry
- `ao?: AoClient` - Optional AO client instance (defaults to legacy mode connection)
- `scheduler?: string` - Optional scheduler ID
- `state?: SpawnANTState` - Optional initial state for the ANT including name, ticker, description, etc.
- `antRegistryId?: string` - Optional ANT registry ID
- `logger?: Logger` - Optional logger instance
- `authority?: string` - Optional authority

**Returns:** `Promise<string>` - The process ID of the newly spawned ANT

# State (/sdks/ar-io-sdk/(ant-contracts)/state)

#### getInfo()

Retrieves the information of the ANT process.

```typescript
const info = await ant.getInfo();
```

**Output:**

```json
{
  "name": "ArDrive",
  "ticker": "ANT-ARDRIVE",
  "description": "This is the ANT for the ArDrive decentralized web app.",
  "keywords": ["File-sharing", "Publishing", "dApp"],
  "owner": "QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ"
}
```

#### getHandlers()

Retrieves the handlers supported on the ANT.

```typescript
const handlers = await ant.getHandlers();
```

**Output:**

```json
[
  "_eval",
  "_default",
  "transfer",
  "balance",
  "balances",
  "totalSupply",
  "info",
  "addController",
  "removeController",
  "controllers",
  "setRecord",
  "removeRecord",
  "record",
  "records",
  "setName",
  "setTicker",
  "initializeState",
  "state"
]
```

#### getState()

Retrieves the state of the ANT process.
```typescript
const state = await ant.getState();
```

**Output:**

```json
{
  "TotalSupply": 1,
  "Balances": {
    "98O1_xqDLrBKRfQPWjF5p7xZ4Jx6GM8P5PeJn26xwUY": 1
  },
  "Controllers": [],
  "Records": {
    "v1-0-0_whitepaper": {
      "transactionId": "lNjWn3LpyhKC95Kqe-x8X2qgju0j98MhucdDKK85vc4",
      "ttlSeconds": 900
    },
    "@": {
      "transactionId": "2rMLb2uHAyEt7jSu6bXtKx8e-jOfIf7E-DOgQnm8EtU",
      "ttlSeconds": 3600
    },
    "alice": {
      "transactionId": "kMk95k_3R8x_7d3wB9tEOiL5v6n8QhR_VnFCh3aeE3f",
      "ttlSeconds": 900,
      "owner": "alice-wallet-address-123...",
      "displayName": "Alice's Portfolio",
      "logo": "avatar-tx-id-456...",
      "description": "Personal portfolio and blog",
      "keywords": ["portfolio", "personal", "blog"]
    },
    "whitepaper": {
      "transactionId": "lNjWn3LpyhKC95Kqe-x8X2qgju0j98MhucdDKK85vc4",
      "ttlSeconds": 900
    }
  },
  "Initialized": true,
  "Ticker": "ANT-AR-IO",
  "Description": "A friendly description for this ANT.",
  "Keywords": ["keyword1", "keyword2", "keyword3"],
  "Logo": "Sie_26dvgyok0PZD_-iQAFOhOd5YxDTkczOLoqTTL_A",
  "Denomination": 0,
  "Name": "AR.IO Foundation",
  "Owner": "98O1_xqDLrBKRfQPWjF5p7xZ4Jx6GM8P5PeJn26xwUY"
}
```

#### getOwner()

Returns the owner of the configured ANT process.

```typescript
const owner = await ant.getOwner();
```

**Output:**

```json
"ccp3blG__gKUvG3hsGC2u06aDmqv4CuhuDJGOIg0jw4"
```

#### getName()

Returns the name of the ANT (not the same as ArNS name).

```typescript
const name = await ant.getName();
```

**Output:**

```json
"ArDrive"
```

#### getTicker()

Returns the ticker symbol of the ANT.

```typescript
const ticker = await ant.getTicker();
```

**Output:**

```json
"ANT-ARDRIVE"
```

#### getControllers()

Returns the controllers of the configured ANT process.

```typescript
const controllers = await ant.getControllers();
```

**Output:**

```json
["ccp3blG__gKUvG3hsGC2u06aDmqv4CuhuDJGOIg0jw4"]
```

#### getRecords()

Returns all records on the configured ANT process, including the required `@` record that resolves connected ArNS names.

```typescript
const records = await ant.getRecords();
```

**Output:**

```json
{
  "@": {
    "transactionId": "UyC5P5qKPZaltMmmZAWdakhlDXsBF6qmyrbWYFchRTk",
    "ttlSeconds": 3600
  },
  "alice": {
    "transactionId": "kMk95k_3R8x_7d3wB9tEOiL5v6n8QhR_VnFCh3aeE3f",
    "ttlSeconds": 900,
    "owner": "alice-wallet-address-123...",
    "displayName": "Alice's Portfolio",
    "logo": "avatar-tx-id-456...",
    "description": "Personal portfolio and blog",
    "keywords": ["portfolio", "personal", "blog"]
  },
  "zed": {
    "transactionId": "-k7t8xMoB8hW482609Z9F4bTFMC3MnuW8bTvTyT8pFI",
    "ttlSeconds": 900
  },
  "ardrive": {
    "transactionId": "-cucucachoodwedwedoiwepodiwpodiwpoidpwoiedp",
    "ttlSeconds": 900
  }
}
```

#### getRecord()

Returns a specific record by its undername.

```typescript
const record = await ant.getRecord({ undername: 'dapp' });
```

**Output:**

```json
{
  "transactionId": "432l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM",
  "ttlSeconds": 900,
  "owner": "alice-wallet-address-123...",
  "displayName": "Alice's Site",
  "logo": "avatar-tx-id-456...",
  "description": "Personal portfolio and blog",
  "keywords": ["portfolio", "personal", "blog"]
}
```
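As a usage sketch, the record shapes above are enough to resolve a name to a direct gateway URL, assuming the required `@` record is present as in the output shown:

```typescript
// Resolve the ANT's base name record (`@`) to a direct gateway URL (sketch)
const records = await ant.getRecords();
console.log(`https://arweave.net/${records['@'].transactionId}`);
```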
```typescript
const newProcessId = await ANT.fork({
  signer: new ArweaveSigner(jwk),
  antProcessId: 'existing-ant-process-id',
  // Optional: specify a specific module ID, defaults to latest from registry
  module: 'new-module-id',
  onSigningProgress: (event, payload) => {
    console.log(`Fork progress: ${event}`);
  },
});

console.log(`Forked ANT to new process: ${newProcessId}`);
```

#### ANT.upgrade()

Static method to upgrade an ANT by forking it to the latest version and reassigning names.

```typescript
// Upgrade and reassign all affiliated names
const result = await ANT.upgrade({
  signer: new ArweaveSigner(jwk),
  antProcessId: 'existing-ant-process-id',
  reassignAffiliatedNames: true,
  arioProcessId: ARIO_MAINNET_PROCESS_ID,
});

// Upgrade and reassign specific names
const result = await ANT.upgrade({
  signer: new ArweaveSigner(jwk),
  antProcessId: 'existing-ant-process-id',
  names: ['ardrive', 'example'],
  reassignAffiliatedNames: false,
  arioProcessId: ARIO_MAINNET_PROCESS_ID,
});

console.log(`Upgraded to process: ${result.forkedProcessId}`);
console.log(`Successfully reassigned names: ${Object.keys(result.reassignedNames)}`);
console.log(`Failed reassignments: ${Object.keys(result.failedReassignedNames)}`);
```

# Transfer (/sdks/ar-io-sdk/(ant-contracts)/transfer)

#### transfer()

Transfers ownership of the ANT to a new target address. The target MUST be an Arweave address.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.transfer(
  { target: 'aGzM_yjralacHIUo8_nQXMbh9l1cy0aksiL_x9M359f' },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

# Undername Ownership (/sdks/ar-io-sdk/(ant-contracts)/undername-ownership)

ANTs support ownership of undernames:

1. **ANT Owner** - Has full control over the ANT and all records
2. **Controllers** - Can manage records but cannot transfer ANT ownership
3. **Record Owners** - Can only update their specific delegated records

When a record owner updates their own record, they **MUST** include their own address in the `owner` field. If the `owner` field is omitted or set to a different address, the record ownership will be transferred or renounced.

#### transferRecord()

Transfers ownership of a specific record (undername) to another address. This enables delegation of control for individual records within an ANT while maintaining the ANT owner's ultimate authority. The current record owner or the ANT owner/controllers can transfer ownership.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._

```typescript
const { id: txId } = await ant.transferRecord({
  undername: 'alice', // the subdomain/record to transfer
  recipient: 'new-owner-address-123...', // address of the new owner
});

// alice_ardrive.ar.io is now owned by the new owner address
// The new owner can update the record but not other records in the ANT
```

**CLI Usage:**

```bash
ar.io transfer-record \
  --process-id "ANT_PROCESS_ID" \
  --undername "alice" \
  --recipient "new-owner-address-123..." \
  --wallet-file "path/to/wallet.json"
```

#### Record Owner Workflow Examples

**Checking Record Ownership:**

```typescript
const record = await ant.getRecord({ undername: 'alice' });
console.log(`Record owner: ${record.owner}`);
console.log(`Transaction ID: ${record.transactionId}`);
```

**Record Owner Updating Their Own Record:**

```typescript
// Alice (record owner) updating her own record
const aliceAnt = ANT.init({
  processId: 'ANT_PROCESS_ID',
  signer: new ArweaveSigner(aliceJwk), // Alice's wallet
});

// ✅ CORRECT: Alice includes her own address as owner
const { id: txId } = await aliceAnt.setUndernameRecord({
  undername: 'alice',
  transactionId: 'new-content-tx-id-456...',
  ttlSeconds: 1800,
  owner: 'alice-wallet-address-123...', // MUST be Alice's own address
  displayName: 'Alice Updated Portfolio',
  description: 'Updated personal portfolio and blog',
});

// ❌ WRONG: Omitting owner field will renounce ownership
const badUpdate = await aliceAnt.setUndernameRecord({
  undername: 'alice',
  transactionId: 'new-content-tx-id-456...',
  ttlSeconds: 1800,
  // Missing owner field - this will renounce ownership!
});

// ❌ WRONG: Setting different owner will transfer ownership
const badTransfer = await aliceAnt.setUndernameRecord({
  undername: 'alice',
  transactionId: 'new-content-tx-id-456...',
  ttlSeconds: 1800,
  owner: 'someone-else-address-789...', // This transfers ownership to someone else!
});
```

**What Happens When Record Ownership is Renounced:**

If a record owner updates their record without including the `owner` field, the record becomes owned by the ANT owner/controllers again:

```typescript
// Before: alice record is owned by alice-wallet-address-123...
const recordBefore = await ant.getRecord({ undername: 'alice' });
console.log(recordBefore.owner); // "alice-wallet-address-123..."

// Alice updates without owner field
await aliceAnt.setUndernameRecord({
  undername: 'alice',
  transactionId: 'new-tx-id...',
  ttlSeconds: 900,
  // No owner field = renounces ownership
});

// After: record ownership reverts to ANT owner
const recordAfter = await ant.getRecord({ undername: 'alice' });
console.log(recordAfter.owner); // undefined (controlled by ANT owner again)
```

# Upgrade (/sdks/ar-io-sdk/(ant-contracts)/upgrade)

#### upgrade()

Upgrades an ANT by forking it to the latest version from the ANT registry and optionally reassigning ArNS names to the new process. This function first checks the version of the existing ANT, creates a new ANT using `.fork()` to the latest version, and then reassigns the ArNS names affiliated with this process to the new process.

_Note: Requires `signer` to be provided on `ANT.init` to sign the transaction._
```typescript
// Upgrade ANT and reassign all affiliated ArNS names to the new process
const result = await ant.upgrade();

// Upgrade ANT and reassign specific ArNS names to the new process
const result = await ant.upgrade({
  names: ['ardrive', 'example'],
});

// with callbacks
const result = await ant.upgrade({
  names: ['ardrive', 'example'],
  onSigningProgress: (event, payload) => {
    console.log(`${event}:`, payload);
    if (event === 'checking-version') {
      console.log(`Checking version: ${payload.antProcessId}`);
    }
    if (event === 'fetching-affiliated-names') {
      console.log(`Fetching affiliated names: ${payload.arioProcessId}`);
    }
    if (event === 'reassigning-name') {
      console.log(`Reassigning name: ${payload.name}`);
    }
    if (event === 'validating-names') {
      console.log(`Validating names: ${payload.names}`);
    }
    // other callback events...
  },
});

console.log(`Upgraded to process: ${result.forkedProcessId}`);
console.log(`Successfully reassigned names: ${result.reassignedNames}`);
console.log(`Failed to reassign names: ${result.failedReassignedNames}`);
```

**Parameters:**

- `reassignAffiliatedNames?: boolean` - If true, reassigns all names associated with this process to the new forked process (defaults to true when `names` is empty)
- `names?: string[]` - Optional array of specific names to reassign (cannot be used with `reassignAffiliatedNames: true`). These names must be affiliated with this ANT on the provided ARIO process.
- `arioProcessId?: string` - Optional ARIO process ID (defaults to mainnet)
- `antRegistryId?: string` - Optional ANT registry process ID used to resolve the latest version (defaults to mainnet registry)
- `skipVersionCheck?: boolean` - Skip checking if ANT is already latest version (defaults to false)
- `onSigningProgress?: Function` - Optional progress callback for tracking upgrade steps

**Returns:** `Promise<{ forkedProcessId: string, reassignedNames: Record<string, unknown>, failedReassignedNames: Record<string, unknown> }>`

# Versions (/sdks/ar-io-sdk/(ant-contracts)/versions)

#### getModuleId()

Gets the module ID of the current ANT process by querying its spawn transaction tags. Results are cached after the first successful fetch.

```typescript
const moduleId = await ant.getModuleId();
console.log(`ANT was spawned with module: ${moduleId}`);

// With custom GraphQL URL and retries
const moduleId = await ant.getModuleId({
  graphqlUrl: 'https://arweave.net/graphql',
  retries: 5,
});
```

**Output:**

```json
"FKtQtOOtlcWCW2pXrwWFiCSlnuewMZOHCzhulVkyqBE"
```

#### getVersion()

Gets the version string of the current ANT by matching its module ID with versions from the ANT registry.

```typescript
const version = await ant.getVersion();
console.log(`ANT is running version: ${version}`);

// With custom ANT registry
const version = await ant.getVersion({
  antRegistryId: 'custom-ant-registry-id',
});
```

**Output:**

```json
"23"
```

#### isLatestVersion()

Checks if the current ANT version is the latest according to the ANT registry.

```typescript
const isLatest = await ant.isLatestVersion();
if (!isLatest) {
  console.log('ANT can be upgraded to the latest version');
}
```

**Output:**

```json
true
```

#### getANTVersions()

Static method that returns the full array of available ANT versions and the latest version from the ANT registry.

```typescript
// Get all available ANT versions
const antVersions = ANT.versions;
const versions = await antVersions.getANTVersions();
```

Result:

```json
[
  {
    "moduleId": "FKtQtOOtlcWCW2pXrwWFiCSlnuewMZOHCzhulVkyqBE",
    "version": "23",
    "releaseNotes": "Initial release of the ANT module.",
    "releaseDate": 1700000000000
  }
  // ...other versions
]
```

#### getLatestANTVersion()

Static method that returns the latest ANT version from the ANT registry.

```typescript
// Get the latest ANT version
const antVersions = ANT.versions;
const latestVersion = await antVersions.getLatestANTVersion();
```

Result:

```json
{
  "moduleId": "FKtQtOOtlcWCW2pXrwWFiCSlnuewMZOHCzhulVkyqBE",
  "version": "23",
  "releaseNotes": "Initial release of the ANT module.",
  "releaseDate": 1700000000000
}
```

# Arweave Name System (ArNS) (/sdks/ar-io-sdk/(ario-contract)/arweave-name-system-arns)

#### resolveArNSName()

Resolves an ArNS name to the underlying data ID stored on the name's corresponding ANT process.
##### Resolving a base name

```typescript
const ario = ARIO.mainnet();
const record = await ario.resolveArNSName({ name: 'ardrive' });
```

**Output:**

```json
{
  "processId": "bh9l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM",
  "txId": "kvhEUsIY5bXe0Wu2-YUFz20O078uYFzmQIO-7brv8qw",
  "type": "lease",
  "recordIndex": 0,
  "undernameLimit": 100,
  "owner": "t4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3",
  "name": "ardrive"
}
```

##### Resolving an undername

```typescript
const ario = ARIO.mainnet();
const record = await ario.resolveArNSName({ name: 'logo_ardrive' });
```

**Output:**

```json
{
  "processId": "bh9l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM",
  "txId": "kvhEUsIY5bXe0Wu2-YUFz20O078uYFzmQIO-7brv8qw",
  "type": "lease",
  "recordIndex": 1,
  "undernameLimit": 100,
  "owner": "t4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3",
  "name": "ardrive"
}
```

#### buyRecord()

Purchases a new ArNS record with the specified name, type, processId, and duration.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

**Arguments:**

- `name` - _required_: the name of the ArNS record to purchase
- `type` - _required_: the type of ArNS record to purchase
- `processId` - _optional_: the process ID of an existing ANT process. If not provided, a new ANT process using the provided `signer` will be spawned, and the ArNS record will be assigned to that process.
- `years` - _optional_: the duration of the ArNS record in years. If not provided and `type` is `lease`, the record will be leased for 1 year. If not provided and `type` is `permabuy`, the record will be permanently registered.
- `referrer` - _optional_: track purchase referrals for analytics (e.g. `my-app.com`)

```typescript
const ario = ARIO.mainnet({ signer });
const record = await ario.buyRecord(
  {
    name: 'ardrive',
    type: 'lease',
    years: 1,
    processId: 'bh9l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM', // optional: assign to existing ANT process
    referrer: 'my-app.com', // optional: track purchase referrals for analytics
  },
  {
    // optional tags
    tags: [{ name: 'App-Name', value: 'ArNS-App' }],
    onSigningProgress: (step, event) => {
      console.log(`Signing progress: ${step}`);
      if (step === 'spawning-ant') {
        console.log('Spawning ant:', event);
      }
      if (step === 'registering-ant') {
        console.log('Registering ant:', event);
      }
      if (step === 'verifying-state') {
        console.log('Verifying state:', event);
      }
      if (step === 'buying-name') {
        console.log('Buying name:', event);
      }
    },
  },
);
```

#### upgradeRecord()

Upgrades an existing leased ArNS record to permanent ownership. The record must be currently owned by the caller and be of type "lease".

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer });
const record = await ario.upgradeRecord(
  {
    name: 'ardrive',
    referrer: 'my-app.com', // optional: track purchase referrals for analytics
  },
  {
    // optional tags
    tags: [{ name: 'App-Name', value: 'ArNS-App' }],
  },
);
```

#### getArNSRecord()

Retrieves the record info of the specified ArNS name.

```typescript
const ario = ARIO.mainnet();
const record = await ario.getArNSRecord({ name: 'ardrive' });
```

**Output:**

```json
{
  "processId": "bh9l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM",
  "endTimestamp": 1752256702026,
  "startTimestamp": 1720720819969,
  "type": "lease",
  "undernameLimit": 100
}
```

#### getArNSRecords()

Retrieves all registered ArNS records of the ARIO process, paginated and sorted by the specified criteria. The `cursor` used for pagination is the last ArNS name from the previous request.
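To walk the full registry rather than a single page, feed each response's `nextCursor` back in as `cursor` until it comes back `undefined` (see also the Pagination section); a minimal sketch:

```typescript
const ario = ARIO.mainnet();

// Collect every registered name by following nextCursor until exhausted
const allRecords = [];
let cursor: string | undefined;
do {
  const page = await ario.getArNSRecords({ limit: 100, cursor });
  allRecords.push(...page.items);
  cursor = page.nextCursor;
} while (cursor !== undefined);
```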
```typescript
const ario = ARIO.mainnet();

// get the newest 100 names
const records = await ario.getArNSRecords({
  limit: 100,
  sortBy: 'startTimestamp',
  sortOrder: 'desc',
});
```

Available `sortBy` options are any of the keys on the record object, e.g. `name`, `processId`, `endTimestamp`, `startTimestamp`, `type`, `undernameLimit`.

**Output:**

```json
{
  "items": [
    {
      "name": "ao",
      "processId": "eNey-H9RB9uCdoJUvPULb35qhZVXZcEXv8xds4aHhkQ",
      "purchasePrice": 75541282285,
      "startTimestamp": 1720720621424,
      "endTimestamp": 1752256702026,
      "type": "permabuy",
      "undernameLimit": 10
    },
    {
      "name": "ardrive",
      "processId": "bh9l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM",
      "endTimestamp": 1720720819969,
      "startTimestamp": 1720720620813,
      "purchasePrice": 75541282285,
      "type": "lease",
      "undernameLimit": 100
    },
    {
      "name": "arweave",
      "processId": "bh9l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM",
      "endTimestamp": 1720720819969,
      "startTimestamp": 1720720620800,
      "purchasePrice": 75541282285,
      "type": "lease",
      "undernameLimit": 100
    },
    {
      "name": "ar-io",
      "processId": "bh9l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM",
      "endTimestamp": 1720720819969,
      "startTimestamp": 1720720619000,
      "purchasePrice": 75541282285,
      "type": "lease",
      "undernameLimit": 100
    },
    {
      "name": "fwd",
      "processId": "bh9l1cy0aksiL_x9M359faGzM_yjralacHIUo8_nQXM",
      "endTimestamp": 1720720819969,
      "startTimestamp": 1720720220811,
      "purchasePrice": 75541282285,
      "type": "lease",
      "undernameLimit": 100
    }
    // ...95 other records
  ],
  "hasMore": true,
  "nextCursor": "fwdresearch",
  "totalItems": 21740,
  "sortBy": "startTimestamp",
  "sortOrder": "desc"
}
```

#### getArNSRecordsForAddress()

Retrieves all registered ArNS records of the specified address according to the `ANTRegistry` access control list, paginated and sorted by the specified criteria. The `cursor` used for pagination is the last ArNS name from the previous request.

```typescript
const ario = ARIO.mainnet();
const records = await ario.getArNSRecordsForAddress({
  address: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3',
  limit: 100,
  sortBy: 'startTimestamp',
  sortOrder: 'desc',
});
```

Available `sortBy` options are any of the keys on the record object, e.g. `name`, `processId`, `endTimestamp`, `startTimestamp`, `type`, `undernameLimit`.

**Output:**

```json
{
  "limit": 1,
  "totalItems": 31,
  "hasMore": true,
  "nextCursor": "ardrive",
  "items": [
    {
      "startTimestamp": 1740009600000,
      "name": "ardrive",
      "endTimestamp": 1777328018367,
      "type": "permabuy",
      "purchasePrice": 0,
      "undernameLimit": 100,
      "processId": "hpF0HdijWlBLFePjWX6u_-Lg3Z2E_PrP_AoaXDVs0bA"
    }
  ],
  "sortOrder": "desc",
  "sortBy": "startTimestamp"
}
```

#### increaseUndernameLimit()

Increases the undername support of a domain up to a maximum of 10k. Domains, by default, support up to 10 undernames.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.increaseUndernameLimit(
  {
    name: 'ar-io',
    qty: 420,
    referrer: 'my-app.com', // optional: track purchase referrals for analytics
  },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

#### extendLease()

Extends the lease of a registered ArNS domain, with an extension of 1-5 years depending on grace period status. Permanently registered domains cannot be extended.
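Before extending, you can estimate the charge with `getTokenCost()` (documented below); a hypothetical sketch, assuming `'Extend-Lease'` is the intent name used for lease extensions:

```typescript
// Estimate the extension cost first (intent name is an assumption; see getTokenCost() below)
const costInMARIO = await ario.getTokenCost({
  intent: 'Extend-Lease',
  name: 'ar-io',
  years: 1,
});
console.log(`Extending costs ${new mARIOToken(costInMARIO).toARIO()} ARIO`);
```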
```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.extendLease(
  {
    name: 'ar-io',
    years: 1,
    referrer: 'my-app.com', // optional: track purchase referrals for analytics
  },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

#### getTokenCost()

Calculates the price in mARIO to perform the interaction in question, e.g. a 'Buy-Name' interaction, where args are the specific params for that interaction.

```typescript
const price = await ario
  .getTokenCost({
    intent: 'Buy-Name',
    name: 'ar-io',
    type: 'permabuy',
  })
  .then((p) => new mARIOToken(p).toARIO()); // convert to ARIO for readability
```

**Output:**

```json
1642.34
```

#### getCostDetails()

Calculates the expanded cost details for the interaction in question, e.g. a 'Buy-Name' interaction, where args are the specific params for that interaction. The `fromAddress` is the address that would be charged for the interaction, and `fundFrom` is where the funds would be taken from, either `balance`, `stakes`, or `any`.

```typescript
const costDetails = await ario.getCostDetails({
  intent: 'Buy-Name',
  fromAddress: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3',
  fundFrom: 'stakes',
  name: 'ar-io',
  type: 'permabuy',
});
```

**Output:**

```json
{
  "tokenCost": 2384252273,
  "fundingPlan": {
    "address": "t4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3",
    "balance": 0,
    "stakes": {
      "Rc80LG6h27Y3p9TN6J5hwDeG5M51cu671YwZpU9uAVE": {
        "vaults": [],
        "delegatedStake": 2384252273
      }
    },
    "shortfall": 0
  },
  "discounts": []
}
```

#### getDemandFactor()

Retrieves the current demand factor of the network. The demand factor is a multiplier applied to the cost of ArNS interactions based on the current network demand.

```typescript
const ario = ARIO.mainnet();
const demandFactor = await ario.getDemandFactor();
```

**Output:**

```json
1.05256
```

#### getArNSReturnedNames()

Retrieves all active returned names of the ARIO process, paginated and sorted by the specified criteria. The `cursor` used for pagination is the last returned name from the previous request.

```typescript
const ario = ARIO.mainnet();
const returnedNames = await ario.getArNSReturnedNames({
  limit: 100,
  sortBy: 'endTimestamp',
  sortOrder: 'asc', // return the returned names ending soonest first
});
```

**Output:**

```json
{
  "items": [
    {
      "name": "permalink",
      "endTimestamp": 1730985241349,
      "startTimestamp": 1729775641349,
      "baseFee": 250000000,
      "demandFactor": 1.05256,
      "initiator": "GaQrvEMKBpkjofgnBi_B3IgIDmY_XYelVLB6GcRGrHc",
      "settings": {
        "durationMs": 1209600000,
        "decayRate": 0.000000000016847809193121693,
        "scalingExponent": 190,
        "startPriceMultiplier": 50
      }
    }
  ],
  "hasMore": false,
  "totalItems": 1,
  "sortBy": "endTimestamp",
  "sortOrder": "asc"
}
```

#### getArNSReturnedName()

Retrieves the returned name data for the specified returned name.

```typescript
const ario = ARIO.mainnet();
const returnedName = await ario.getArNSReturnedName({ name: 'permalink' });
```

**Output:**

```json
{
  "name": "permalink",
  "endTimestamp": 1730985241349,
  "startTimestamp": 1729775641349,
  "baseFee": 250000000,
  "demandFactor": 1.05256,
  "initiator": "GaQrvEMKBpkjofgnBi_B3IgIDmY_XYelVLB6GcRGrHc",
  "settings": {
    "durationMs": 1209600000,
    "decayRate": 0.000000000016847809193121693,
    "scalingExponent": 190,
    "startPriceMultiplier": 50
  }
}
```

# Configuration (/sdks/ar-io-sdk/(ario-contract)/configuration)

The ARIO client class exposes APIs relevant to the ar.io process. It can be configured to use any AO Process ID that adheres to the [ARIO Network Spec].
By default, it will use the current [ARIO Mainnet Process]. Refer to [AO Connect] for more information on how to configure an ARIO process to use specific AO infrastructure.

```typescript
// provide a custom ao infrastructure and process id
const ario = ARIO.mainnet({
  process: new AOProcess({
    processId: 'ARIO_PROCESS_ID',
    ao: connect({
      MODE: 'legacy',
      MU_URL: 'https://mu-testnet.xyz',
      CU_URL: 'https://cu-testnet.xyz',
      GRAPHQL_URL: 'https://arweave.net/graphql',
      GATEWAY_URL: 'https://arweave.net',
    }),
  }),
});
```

# Epochs (/sdks/ar-io-sdk/(ario-contract)/epochs)

#### getCurrentEpoch()

Returns the current epoch data.

```typescript
const ario = ARIO.mainnet();
const epoch = await ario.getCurrentEpoch();
```

**Output:**

```json
{
  "epochIndex": 0,
  "startTimestamp": 1720720621424,
  "endTimestamp": 1752256702026,
  "startHeight": 1350700,
  "distributionTimestamp": 1711122739,
  "observations": {
    "failureSummaries": {
      "-Tk2DDk8k4zkwtppp_XFKKI5oUgh6IEHygAoN7mD-w8": [
        "Ie2wEEUDKoU26c7IuckHNn3vMFdNQnMvfPBrFzAb3NA"
      ]
    },
    "reports": {
      "IPdwa3Mb_9pDD8c2IaJx6aad51Ss-_TfStVwBuhtXMs": "B6UUjKWjjEWDBvDSMXWNmymfwvgR9EN27z5FTkEVlX4"
    }
  },
  "prescribedNames": ["ardrive", "ar-io", "arweave", "fwd", "ao"],
  "prescribedObservers": [
    {
      "gatewayAddress": "2Fk8lCmDegPg6jjprl57-UCpKmNgYiKwyhkU4vMNDnE",
      "observerAddress": "2Fk8lCmDegPg6jjprl57-UCpKmNgYiKwyhkU4vMNDnE",
      "stake": 10000000000,
      "start": 1292450,
      "stakeWeight": 1,
      "tenureWeight": 0.4494598765432099,
      "gatewayPerformanceRatio": 1,
      "observerRewardRatioWeight": 1,
      "compositeWeight": 0.4494598765432099,
      "normalizedCompositeWeight": 0.002057032496835938
    }
  ],
  "distributions": {
    "distributedTimestamp": 1711122739,
    "totalEligibleRewards": 100000000,
    "rewards": {
      "IPdwa3Mb_9pDD8c2IaJx6aad51Ss-_TfStVwBuhtXMs": 100000000
    }
  }
}
```

#### getEpoch()

Returns the epoch data for the specified epoch index. If no epoch index is provided, the current epoch is used.

```typescript
const ario = ARIO.mainnet();
const epoch = await ario.getEpoch({ epochIndex: 0 });
```

**Output:**

```json
{
  "epochIndex": 0,
  "startTimestamp": 1720720620813,
  "endTimestamp": 1752256702026,
  "startHeight": 1350700,
  "distributionTimestamp": 1752256702026,
  "observations": {
    "failureSummaries": {
      "-Tk2DDk8k4zkwtppp_XFKKI5oUgh6IEHygAoN7mD-w8": [
        "Ie2wEEUDKoU26c7IuckHNn3vMFdNQnMvfPBrFzAb3NA"
      ]
    },
    "reports": {
      "IPdwa3Mb_9pDD8c2IaJx6aad51Ss-_TfStVwBuhtXMs": "B6UUjKWjjEWDBvDSMXWNmymfwvgR9EN27z5FTkEVlX4"
    }
  },
  "prescribedNames": ["ardrive", "ar-io", "arweave", "fwd", "ao"],
  "prescribedObservers": [
    {
      "gatewayAddress": "2Fk8lCmDegPg6jjprl57-UCpKmNgYiKwyhkU4vMNDnE",
      "observerAddress": "2Fk8lCmDegPg6jjprl57-UCpKmNgYiKwyhkU4vMNDnE",
      "stake": 10000000000, // value in mARIO
      "startTimestamp": 1720720620813,
      "stakeWeight": 1,
      "tenureWeight": 0.4494598765432099,
      "gatewayPerformanceRatio": 1,
      "observerRewardRatioWeight": 1,
      "compositeWeight": 0.4494598765432099,
      "normalizedCompositeWeight": 0.002057032496835938
    }
  ],
  "distributions": {
    "totalEligibleGateways": 1,
    "totalEligibleRewards": 100000000,
    "totalEligibleObserverReward": 100000000,
    "totalEligibleGatewayReward": 100000000,
    "totalDistributedRewards": 100000000,
    "distributedTimestamp": 1720720621424,
    "rewards": {
      "distributed": {
        "IPdwa3Mb_9pDD8c2IaJx6aad51Ss-_TfStVwBuhtXMs": 100000000
      }
    }
  }
}
```

#### getEligibleEpochRewards()

Returns the eligible epoch rewards for the specified epoch index. If no epoch index is provided, the current epoch is used.
```typescript
const ario = ARIO.mainnet();
const rewards = await ario.getEligibleEpochRewards({ epochIndex: 0 });
```

**Output:**

```json
{
  "sortOrder": "desc",
  "hasMore": true,
  "totalItems": 37,
  "limit": 1,
  "sortBy": "cursorId",
  "items": [
    {
      "cursorId": "xN_aVln30LmoCffwmk5_kRkcyQZyZWy1o_TNtM_CTm0_xN_aVln30LmoCffwmk5_kRkcyQZyZWy1o_TNtM_CTm0",
      "recipient": "xN_aVln30LmoCffwmk5_kRkcyQZyZWy1o_TNtM_CTm0",
      "gatewayAddress": "xN_aVln30LmoCffwmk5_kRkcyQZyZWy1o_TNtM_CTm0",
      "eligibleReward": 2627618704,
      "type": "operatorReward"
    }
  ],
  "nextCursor": "xN_aVln30LmoCffwmk5_kRkcyQZyZWy1o_TNtM_CTm0_xN_aVln30LmoCffwmk5_kRkcyQZyZWy1o_TNtM_CTm0"
}
```

#### getObservations()

Returns the epoch-indexed observation list. If no epoch index is provided, the current epoch is used.

```typescript
const ario = ARIO.mainnet();
const observations = await ario.getObservations();
```

**Output:**

```json
{
  "0": {
    "failureSummaries": {
      "-Tk2DDk8k4zkwtppp_XFKKI5oUgh6IEHygAoN7mD-w8": [
        "Ie2wEEUDKoU26c7IuckHNn3vMFdNQnMvfPBrFzAb3NA",
        "Ie2wEEUDKoU26c7IuckHNn3vMFdNQnMvfPBrFzAb3NA"
      ]
    },
    "reports": {
      "IPdwa3Mb_9pDD8c2IaJx6aad51Ss-_TfStVwBuhtXMs": "B6UUjKWjjEWDBvDSMXWNmymfwvgR9EN27z5FTkEVlX4",
      "Ie2wEEUDKoU26c7IuckHNn3vMFdNQnMvfPBrFzAb3NA": "7tKsiQ2fxv0D8ZVN_QEv29fZ8hwFIgHoEDrpeEG0DIs",
      "osZP4D9cqeDvbVFBaEfjIxwc1QLIvRxUBRAxDIX9je8": "aatgznEvC_UPcxp1v0uw_RqydhIfKm4wtt1KCpONBB0",
      "qZ90I67XG68BYIAFVNfm9PUdM7v1XtFTn7u-EOZFAtk": "Bd8SmFK9-ktJRmwIungS8ur6JM-JtpxrvMtjt5JkB1M"
    }
  }
}
```

#### getDistributions()

Returns the current rewards distribution information. If no epoch index is provided, the current epoch is used.

```typescript
const ario = ARIO.mainnet();
const distributions = await ario.getDistributions({ epochIndex: 0 });
```

**Output:**

```json
{
  "totalEligibleGateways": 1,
  "totalEligibleRewards": 100000000,
  "totalEligibleObserverReward": 100000000,
  "totalEligibleGatewayReward": 100000000,
  "totalDistributedRewards": 100000000,
  "distributedTimestamp": 1720720621424,
  "rewards": {
    "eligible": {
      "IPdwa3Mb_9pDD8c2IaJx6aad51Ss-_TfStVwBuhtXMs": {
        "operatorReward": 100000000,
        "delegateRewards": {}
      }
    },
    "distributed": {
      "IPdwa3Mb_9pDD8c2IaJx6aad51Ss-_TfStVwBuhtXMs": 100000000
    }
  }
}
```

#### saveObservations()

Saves the observations of the current epoch.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.saveObservations(
  {
    reportTxId: 'fDrr0_J4Iurt7caNST02cMotaz2FIbWQ4Kcj616RHl3',
    failedGateways: ['t4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3'],
  },
  {
    tags: [{ name: 'App-Name', value: 'My-Awesome-App' }],
  },
);
```

#### getPrescribedObservers()

Retrieves the prescribed observers of the ARIO process. To fetch prescribed observers for a previous epoch, set the `epochIndex` to the desired epoch index.
```typescript
const ario = ARIO.mainnet();
const observers = await ario.getPrescribedObservers({ epochIndex: 0 });
```

**Output:**

```json
[
  {
    "gatewayAddress": "BpQlyhREz4lNGS-y3rSS1WxADfxPpAuing9Lgfdrj2U",
    "observerAddress": "2Fk8lCmDegPg6jjprl57-UCpKmNgYiKwyhkU4vMNDnE",
    "stake": 10000000000, // value in mARIO
    "start": 1296976,
    "stakeWeight": 1,
    "tenureWeight": 0.41453703703703704,
    "gatewayPerformanceRatio": 1,
    "observerRewardRatioWeight": 1,
    "compositeWeight": 0.41453703703703704,
    "normalizedCompositeWeight": 0.0018972019546783507
  }
]
```

# Gateways (/sdks/ar-io-sdk/(ario-contract)/gateways)

#### getGateway()

Retrieves a gateway's info by its staking wallet address.

```typescript
const ario = ARIO.mainnet();
const gateway = await ario.getGateway({
  address: '-7vXsQZQDk8TMDlpiSLy3CnLi5PDPlAaN2DaynORpck',
});
```

**Output:**

```json
{
  "observerAddress": "IPdwa3Mb_9pDD8c2IaJx6aad51Ss-_TfStVwBuhtXMs",
  "operatorStake": 250000000000,
  "settings": {
    "fqdn": "ar-io.dev",
    "label": "AR.IO Test",
    "note": "Test Gateway operated by PDS for the AR.IO ecosystem.",
    "port": 443,
    "properties": "raJgvbFU-YAnku-WsupIdbTsqqGLQiYpGzoqk9SCVgY",
    "protocol": "https"
  },
  "startTimestamp": 1720720620813,
  "stats": {
    "failedConsecutiveEpochs": 0,
    "passedEpochCount": 30,
    "submittedEpochCount": 30,
    "totalEpochCount": 31,
    "totalEpochsPrescribedCount": 31
  },
  "status": "joined",
  "vaults": {},
  "weights": {
    "compositeWeight": 0.97688888893556,
    "gatewayPerformanceRatio": 1,
    "tenureWeight": 0.19444444444444,
    "observerRewardRatioWeight": 1,
    "normalizedCompositeWeight": 0.19247316211083,
    "stakeWeight": 5.02400000024
  }
}
```

#### getGateways()

Retrieves registered gateways of the ARIO process, using pagination and sorting by the specified criteria. The `cursor` used for pagination is the last gateway address from the previous request.

```typescript
const ario = ARIO.mainnet();
const gateways = await ario.getGateways({
  limit: 100,
  sortOrder: 'desc',
  sortBy: 'operatorStake',
});
```

Available `sortBy` options are any of the keys on the gateway object, e.g. `operatorStake`, `start`, `status`, `settings.fqdn`, `settings.label`, `settings.note`, `settings.port`, `settings.protocol`, `stats.failedConsecutiveEpochs`, `stats.passedConsecutiveEpochs`, etc.

**Output:**

```json
{
  "items": [
    {
      "gatewayAddress": "QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ",
      "observerAddress": "IPdwa3Mb_9pDD8c2IaJx6aad51Ss-_TfStVwBuhtXMs",
      "operatorStake": 250000000000,
      "settings": {
        "fqdn": "ar-io.dev",
        "label": "AR.IO Test",
        "note": "Test Gateway operated by PDS for the AR.IO ecosystem.",
        "port": 443,
        "properties": "raJgvbFU-YAnku-WsupIdbTsqqGLQiYpGzoqk9SCVgY",
        "protocol": "https"
      },
      "startTimestamp": 1720720620813,
      "stats": {
        "failedConsecutiveEpochs": 0,
        "passedEpochCount": 30,
        "submittedEpochCount": 30,
        "totalEpochCount": 31,
        "totalEpochsPrescribedCount": 31
      },
      "status": "joined",
      "vaults": {},
      "weights": {
        "compositeWeight": 0.97688888893556,
        "gatewayPerformanceRatio": 1,
        "tenureWeight": 0.19444444444444,
        "observerRewardRatioWeight": 1,
        "normalizedCompositeWeight": 0.19247316211083,
        "stakeWeight": 5.02400000024
      }
    }
  ],
  "hasMore": true,
  "nextCursor": "-4xgjroXENKYhTWqrBo57HQwvDL51mMdfsdsxJy6Y2Z_sA",
  "totalItems": 316,
  "sortBy": "operatorStake",
  "sortOrder": "desc"
}
```

#### getGatewayDelegates()

Retrieves all delegates for a specific gateway, paginated and sorted by the specified criteria. The `cursor` used for pagination is the last delegate address from the previous request.
```typescript
const ario = ARIO.mainnet();
const delegates = await ario.getGatewayDelegates({
  address: 'QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ',
  limit: 3,
  sortBy: 'startTimestamp',
  sortOrder: 'desc',
});
```

**Output:**

```json
{
  "nextCursor": "ScEtph9-vfY7lgqlUWwUwOmm99ySeZGQhOX0MFAyFEs",
  "limit": 3,
  "sortBy": "startTimestamp",
  "totalItems": 32,
  "sortOrder": "desc",
  "hasMore": true,
  "items": [
    {
      "delegatedStake": 600000000,
      "address": "qD5VLaMYyIHlT6vH59TgYIs6g3EFlVjlPqljo6kqVxk",
      "startTimestamp": 1732716956301
    },
    {
      "delegatedStake": 508999038,
      "address": "KG8TlcWk-8pvroCjiLD2J5zkG9rqC6yYaBuZNqHEyY4",
      "startTimestamp": 1731828123742
    },
    {
      "delegatedStake": 510926479,
      "address": "ScEtph9-vfY7lgqlUWwUwOmm99ySeZGQhOX0MFAyFEs",
      "startTimestamp": 1731689356040
    }
  ]
}
```

#### joinNetwork()

Joins a gateway to the ar.io network via its associated wallet.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.joinNetwork(
  {
    qty: new ARIOToken(10_000).toMARIO(), // minimum operator stake allowed
    autoStake: true, // auto-stake operator rewards to the gateway
    allowDelegatedStaking: true, // allows delegated staking
    minDelegatedStake: new ARIOToken(100).toMARIO(), // minimum delegated stake allowed
    delegateRewardShareRatio: 10, // percentage of rewards to share with delegates (e.g. 10%)
    label: 'john smith', // min 1, max 64 characters
    note: 'The example gateway', // max 256 characters
    properties: 'FH1aVetOoulPGqgYukj0VE0wIhDy90WiQoV3U2PeY44', // Arweave transaction ID containing additional properties of the Gateway
    observerWallet: '0VE0wIhDy90WiQoV3U2PeY44FH1aVetOoulPGqgYukj', // wallet address of the observer, must match OBSERVER_WALLET on the observer
    fqdn: 'example.com', // fully qualified domain name - note: you must own the domain and set the OBSERVER_WALLET on your gateway to match `observerWallet`
    port: 443, // port number
    protocol: 'https', // only 'https' is supported
  },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

#### leaveNetwork()

Sets the gateway as `leaving` on the ar.io network. The gateway's operator and delegate stakes are vaulted and will be returned after the leave period, after which the gateway is removed from the network.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.leaveNetwork(
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

#### updateGatewaySettings()

Writes new gateway settings to the caller's gateway configuration.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.updateGatewaySettings(
  {
    // any other settings you want to update
    minDelegatedStake: new ARIOToken(100).toMARIO(),
  },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

#### increaseDelegateStake()

Increases the caller's stake on the target gateway.
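Before delegating, it can help to verify the target gateway accepts delegations and that the amount clears its minimum; a hypothetical pre-check, assuming the gateway's `settings` expose the `allowDelegatedStaking` and `minDelegatedStake` values configured via `joinNetwork()`:

```typescript
const gateway = await ario.getGateway({
  address: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3',
});

// Field names are assumptions based on the joinNetwork() parameters above
const qty = new ARIOToken(100).toMARIO().valueOf();
if (gateway.settings.allowDelegatedStaking && qty >= gateway.settings.minDelegatedStake) {
  // safe to call increaseDelegateStake() as shown below
}
```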
_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.increaseDelegateStake(
  {
    target: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3',
    qty: new ARIOToken(100).toMARIO(),
  },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

#### decreaseDelegateStake()

Decreases the caller's stake on the target gateway. The stake can be decreased instantly by setting `instant` to `true`.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.decreaseDelegateStake(
  {
    target: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3',
    qty: new ARIOToken(100).toMARIO(),
  },
  {
    tags: [{ name: 'App-Name', value: 'My-Awesome-App' }],
  },
);
```

Pay the early withdrawal fee and withdraw instantly:

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.decreaseDelegateStake({
  target: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3',
  qty: new ARIOToken(100).toMARIO(),
  instant: true, // Immediately withdraw this stake and pay the instant withdrawal fee
});
```

#### getDelegations()

Retrieves all active and vaulted stakes across all gateways for a specific address, paginated and sorted by the specified criteria. The `cursor` used for pagination is the last delegationId (concatenated gateway and startTimestamp of the delegation) from the previous request.

```typescript
const ario = ARIO.mainnet();
const vaults = await ario.getDelegations({
  address: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3',
  cursor: 'QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ_123456789',
  limit: 2,
  sortBy: 'startTimestamp',
  sortOrder: 'asc',
});
```

**Output:**

```json
{
  "sortOrder": "asc",
  "hasMore": true,
  "totalItems": 95,
  "limit": 2,
  "sortBy": "startTimestamp",
  "items": [
    {
      "type": "stake",
      "startTimestamp": 1727815440632,
      "gatewayAddress": "QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ",
      "delegationId": "QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ_1727815440632",
      "balance": 1383212512
    },
    {
      "type": "vault",
      "startTimestamp": 1730996691117,
      "gatewayAddress": "QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ",
      "delegationId": "QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ_1730996691117",
      "vaultId": "_sGDS7X1hyLCVpfe40GWioH9BSOb7f0XWbhHBa1q4-g",
      "balance": 50000000,
      "endTimestamp": 1733588691117
    }
  ],
  "nextCursor": "QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ_1730996691117"
}
```

#### instantWithdrawal()

Instantly withdraws an existing vault on a gateway. If no `gatewayAddress` is provided, the signer's address will be used.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });

// removes a delegated vault from a gateway
const { id: txId } = await ario.instantWithdrawal(
  {
    // gateway address where delegate vault exists
    gatewayAddress: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3',
    // delegated vault id to cancel
    vaultId: 'fDrr0_J4Iurt7caNST02cMotaz2FIbWQ4Kcj616RHl3',
  },
  // optional additional tags
  {
    tags: [{ name: 'App-Name', value: 'My-Awesome-App' }],
  },
);

// removes an operator vault from a gateway
const { id: txId } = await ario.instantWithdrawal({
  vaultId: 'fDrr0_J4Iurt7caNST02cMotaz2FIbWQ4Kcj616RHl3',
});
```

#### cancelWithdrawal()

Cancels an existing vault on a gateway.
The vaulted stake will be returned to the caller's stake. If no `gatewayAddress` is provided, the signer's address will be used.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });

// cancels a delegated vault from a gateway
const { id: txId } = await ario.cancelWithdrawal(
  {
    // gateway address where vault exists
    gatewayAddress: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3',
    // vault id to cancel
    vaultId: 'fDrr0_J4Iurt7caNST02cMotaz2FIbWQ4Kcj616RHl3',
  },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);

// cancels an operator vault from a gateway
const { id: txId } = await ario.cancelWithdrawal({
  // operator vault id to cancel
  vaultId: 'fDrr0_J4Iurt7caNST02cMotaz2FIbWQ4Kcj616RHl3',
});
```

#### getAllowedDelegates()

Retrieves all allowed delegates for a specific address. The `cursor` used for pagination is the last address from the previous request.

```typescript
const ario = ARIO.mainnet();
const allowedDelegates = await ario.getAllowedDelegates({
  address: 'QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ',
});
```

**Output:**

```json
{
  "sortOrder": "desc",
  "hasMore": false,
  "totalItems": 4,
  "limit": 100,
  "items": [
    "PZ5vIhHf8VY969TxBPQN-rYY9CNFP9ggNsMBqlWUzWM",
    "N4h8M9A9hasa3tF47qQyNvcKjm4APBKuFs7vqUVm-SI",
    "JcC4ZLUY76vmWha5y6RwKsFqYTrMZhbockl8iM9p5lQ",
    "31LPFYoow2G7j-eSSsrIh8OlNaARZ84-80J-8ba68d8"
  ]
}
```

#### getGatewayVaults()

Retrieves all vaults across all gateways for a specific address, paginated and sorted by the specified criteria. The `cursor` used for pagination is the last vaultId from the previous request.

```typescript
const ario = ARIO.mainnet();
const vaults = await ario.getGatewayVaults({
  address: 'PZ5vIhHf8VY969TxBPQN-rYY9CNFP9ggNsMBqlWUzWM',
});
```

**Output:**

```json
{
  "sortOrder": "desc",
  "hasMore": false,
  "totalItems": 1,
  "limit": 100,
  "sortBy": "endTimestamp",
  "items": [
    {
      "cursorId": "PZ5vIhHf8VY969TxBPQN-rYY9CNFP9ggNsMBqlWUzWM_1728067635857",
      "startTimestamp": 1728067635857,
      "balance": 50000000000,
      "vaultId": "PZ5vIhHf8VY969TxBPQN-rYY9CNFP9ggNsMBqlWUzWM",
      "endTimestamp": 1735843635857
    }
  ]
}
```

#### getAllGatewayVaults()

Retrieves all vaults across all gateways, paginated and sorted by the specified criteria. The `cursor` used for pagination is the last vaultId from the previous request.

```typescript
const ario = ARIO.mainnet();
const vaults = await ario.getAllGatewayVaults({
  limit: 1,
  sortBy: 'endTimestamp',
  sortOrder: 'desc',
});
```

**Output:**

```json
{
  "sortOrder": "desc",
  "hasMore": true,
  "totalItems": 95,
  "limit": 1,
  "sortBy": "endTimestamp",
  "items": [
    {
      "cursorId": "PZ5vIhHf8VY969TxBPQN-rYY9CNFP9ggNsMBqlWUzWM_E-QVU3dta36Wia2uQw6tQLjQk7Qw5uN0Z6fUzsoqzUc",
      "gatewayAddress": "PZ5vIhHf8VY969TxBPQN-rYY9CNFP9ggNsMBqlWUzWM",
      "startTimestamp": 1728067635857,
      "balance": 50000000000,
      "vaultId": "E-QVU3dta36Wia2uQw6tQLjQk7Qw5uN0Z6fUzsoqzUc",
      "endTimestamp": 1735843635857
    }
  ],
  "nextCursor": "PZ5vIhHf8VY969TxBPQN-rYY9CNFP9ggNsMBqlWUzWM_E-QVU3dta36Wia2uQw6tQLjQk7Qw5uN0Z6fUzsoqzUc"
}
```

#### increaseOperatorStake()

Increases the caller's operator stake. Must be executed with a wallet registered as a gateway operator.
_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.increaseOperatorStake(
  {
    qty: new ARIOToken(100).toMARIO(),
  },
  {
    tags: [{ name: 'App-Name', value: 'My-Awesome-App' }],
  },
);
```

#### decreaseOperatorStake()

Decreases the caller's operator stake. Must be executed with a wallet registered as a gateway operator.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.decreaseOperatorStake(
  {
    qty: new ARIOToken(100).toMARIO(),
  },
  {
    tags: [{ name: 'App-Name', value: 'My-Awesome-App' }],
  },
);
```

#### redelegateStake()

Redelegates the stake of a specific address to a new gateway. A vault ID may optionally be included in order to redelegate from an existing withdrawal vault. The redelegation fee is calculated from the fee rate and the stake amount. Users are allowed one free redelegation every seven epochs; each additional redelegation beyond the free one increases the fee by 10%, capping at a 60% redelegation fee. For example, if 1,000 mARIO is redelegated at a 10% fee rate, the fee is 100 mARIO: 900 mARIO is redelegated to the new gateway and 100 mARIO is returned to the protocol balance.

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.redelegateStake({
  target: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3',
  source: 'HwFceQaMQnOBgKDpnFqCqgwKwEU5LBme1oXRuQOWSRA',
  stakeQty: new ARIOToken(1000).toMARIO(),
  vaultId: 'fDrr0_J4Iurt7caNST02cMotaz2FIbWQ4Kcj616RHl3',
});
```

#### getRedelegationFee()

Retrieves the fee rate, as a percentage, required to redelegate the stake of a specific address. The fee rate ranges from 0% to 60% based on the number of redelegations since the last fee reset.

```typescript
const ario = ARIO.mainnet();
const fee = await ario.getRedelegationFee({
  address: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3',
});
```

**Output:**

```json
{
  "redelegationFeeRate": 10,
  "feeResetTimestamp": 1730996691117
}
```

#### getAllDelegates()

Retrieves all delegates across all gateways, paginated and sorted by the specified criteria. The `cursor` used for pagination is a `cursorId` derived from the delegate address and the gateway address from the previous request, e.g. `address_gatewayAddress`.
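Because the cursor is a composite ID, pagination can also resume directly from a known delegate/gateway pair; a sketch (IDs illustrative, using the `address_gatewayAddress` format described above):

```typescript
// Resume from a specific delegate/gateway pair rather than the beginning
const nextPage = await ario.getAllDelegates({
  cursor: 'LtV0aSqgK3YI7c5FmfvZd-wG95TJ9sezj_a4syaLMS8_M0WP8KSzCvKpzC-HPF1WcddLgGaL9J4DGi76iMnhrN4',
  limit: 2,
  sortBy: 'startTimestamp',
  sortOrder: 'desc',
});
```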
```typescript
const ario = ARIO.mainnet();
const delegates = await ario.getAllDelegates({
  limit: 2,
  sortBy: 'startTimestamp',
  sortOrder: 'desc',
});
```

**Output:**

```json
{
  "sortOrder": "desc",
  "hasMore": true,
  "totalItems": 95,
  "limit": 2,
  "sortBy": "startTimestamp",
  "items": [
    {
      "startTimestamp": 1734709397622,
      "cursorId": "9jfM0uzGNc9Mkhjo1ixGoqM7ygSem9wx_EokiVgi0Bs_E-QVU3dta36Wia2uQw6tQLjQk7Qw5uN0Z6fUzsoqzUc",
      "gatewayAddress": "E-QVU3dta36Wia2uQw6tQLjQk7Qw5uN0Z6fUzsoqzUc",
      "address": "9jfM0uzGNc9Mkhjo1ixGoqM7ygSem9wx_EokiVgi0Bs",
      "delegatedStake": 2521349108,
      "vaultedStake": 0
    },
    {
      "startTimestamp": 1734593229454,
      "cursorId": "LtV0aSqgK3YI7c5FmfvZd-wG95TJ9sezj_a4syaLMS8_M0WP8KSzCvKpzC-HPF1WcddLgGaL9J4DGi76iMnhrN4",
      "gatewayAddress": "M0WP8KSzCvKpzC-HPF1WcddLgGaL9J4DGi76iMnhrN4",
      "address": "LtV0aSqgK3YI7c5FmfvZd-wG95TJ9sezj_a4syaLMS8",
      "delegatedStake": 1685148110,
      "vaultedStake": 10000000
    }
  ],
  "nextCursor": "PZ5vIhHf8VY969TxBPQN-rYY9CNFP9ggNsMBqlWUzWM_QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ"
}
```

# General (/sdks/ar-io-sdk/(ario-contract)/general)

#### init()

Factory function that creates a read-only or writeable client. By providing a `signer`, additional write APIs that require signing, like `joinNetwork` and `delegateStake`, become available. By default, a read-only client is returned and no write APIs are available.

```typescript
// read-only client
const ario = ARIO.init();

// read-write client for browser environments
const ario = ARIO.init({
  signer: new ArConnectSigner(window.arweaveWallet, Arweave.init({})),
});

// read-write client for node environments
const ario = ARIO.init({ signer: new ArweaveSigner(JWK) });
```

#### getInfo()

Retrieves the information of the ARIO process.

```typescript
const ario = ARIO.mainnet();
const info = await ario.getInfo();
```

**Output:**

```json
{
  "Name": "ARIO",
  "Ticker": "ARIO",
  "Owner": "QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ",
  "Denomination": 6,
  "Handlers": ["_eval", "_default_"], // full list of handlers, useful for debugging
  "LastCreatedEpochIndex": 31, // epoch index of the last tick
  "LastDistributedEpochIndex": 31 // epoch index of the last distribution
}
```

#### getTokenSupply()

Retrieves the total supply of tokens, returned in mARIO. The total supply includes the following:

- `total` - the total supply of all tokens
- `circulating` - the total supply minus locked, withdrawn, delegated, and staked
- `locked` - tokens that are locked in the protocol (a.k.a. vaulted)
- `withdrawn` - tokens that have been withdrawn from the protocol by operators and delegators
- `delegated` - tokens that have been delegated to gateways
- `staked` - tokens that are staked in the protocol by gateway operators
- `protocolBalance` - tokens that are held in the protocol's treasury. This is included in the circulating supply.

```typescript
const ario = ARIO.mainnet();
const supply = await ario.getTokenSupply();
```

**Output:**

```json
{
  "total": 1000000000000000,
  "circulating": 998094653842520,
  "locked": 0,
  "withdrawn": 560563387278,
  "delegated": 1750000000,
  "staked": 1343032770199,
  "protocolBalance": 46317263683761
}
```

#### getBalance()

Retrieves the balance of the specified wallet address.
```typescript
const ario = ARIO.mainnet();

// the balance will be returned in mARIO as a value
const balance = await ario
  .getBalance({
    address: 'QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ',
  })
  .then((balance: number) => new mARIOToken(balance).toARIO()); // convert it to ARIO for readability
```

**Output:**

```json
100000
```

#### getBalances()

Retrieves the balances of the ARIO process in `mARIO`, paginated and sorted by the specified criteria. The `cursor` used for pagination is the last wallet address from the previous request.

```typescript
const ario = ARIO.mainnet();
const balances = await ario.getBalances({
  cursor: '-4xgjroXENKYhTWqrBo57HQwvDL51mMdfsdsxJy6Y2Z_sA',
  limit: 100,
  sortBy: 'balance',
  sortOrder: 'desc',
});
```

**Output:**

```json
{
  "items": [
    {
      "address": "-4xgjroXENKYhTWqrBo57HQwvDL51mMvSxJy6Y2Z_sA",
      "balance": 1000000
    },
    {
      "address": "-7vXsQZQDk8TMDlpiSLy3CnLi5PDPlAaN2DaynORpck",
      "balance": 1000000
    }
    // ...98 other balances
  ],
  "hasMore": true,
  "nextCursor": "-7vXsQZQDk8TMDlpiSLy3CnLi5PDPlAaN2DaynORpck",
  "totalItems": 1789,
  "sortBy": "balance",
  "sortOrder": "desc"
}
```

#### transfer()

Transfers `mARIO` to the designated `target` recipient address.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({
  signer: new ArweaveSigner(jwk),
});
const { id: txId } = await ario.transfer(
  {
    target: '-5dV7nk7waR8v4STuwPnTck1zFVkQqJh5K9q9Zik4Y5',
    qty: new ARIOToken(1000).toMARIO(),
  },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

# Networks (/sdks/ar-io-sdk/networks)

The SDK provides the following process IDs for the mainnet and testnet environments:

- `ARIO_MAINNET_PROCESS_ID` - Mainnet ARIO process ID (production)
- `ARIO_TESTNET_PROCESS_ID` - Testnet ARIO process ID (testing and development)
- `ARIO_DEVNET_PROCESS_ID` - Devnet ARIO process ID (development)

As of `v3.8.1` the SDK defaults all API interactions to **mainnet**. To use the **testnet** or **devnet**, provide the appropriate `ARIO_TESTNET_PROCESS_ID` or `ARIO_DEVNET_PROCESS_ID` when initializing the client.

#### Mainnet

```typescript
const ario = ARIO.mainnet(); // or ARIO.init()
```

#### Testnet

```typescript
const testnet = ARIO.testnet(); // or ARIO.init({ processId: ARIO_TESTNET_PROCESS_ID })
```

##### Faucet

The SDK provides APIs for claiming tokens via a faucet on the AR.IO Testnet process (`tARIO`) via the [ar-io-testnet-faucet] service. All token requests require a captcha to be solved, and the faucet is rate limited to prevent abuse.

To claim testnet tokens from the testnet token faucet, you can use one of the following methods:

1. Visit [faucet.ar.io](https://faucet.ar.io) - the easiest way to quickly get tokens for testing for a single address.
2. Programmatically via the SDK - useful if you need to claim tokens for multiple addresses or dynamically within your application.

- `ARIO.testnet().faucet.captchaUrl()` - returns the captcha URL for the testnet faucet. Open this URL in a new browser window and listen for the `ario-jwt-success` event to be emitted.
- `ARIO.testnet().faucet.claimWithAuthToken({ authToken, recipient, quantity })` - claims tokens for the specified recipient address using the provided auth token.
- `ARIO.testnet().faucet.verifyAuthToken({ authToken })` - verifies if the provided auth token is still valid.

Example client-side code for claiming tokens:

```typescript
const testnet = ARIO.testnet();
const captchaUrl = await testnet.faucet.captchaUrl();

// open the captcha URL in the browser, and listen for the auth token event
const captchaWindow = window.open(
  captchaUrl.captchaUrl,
  '_blank',
  'width=600,height=600',
);

/**
 * The captcha URL includes a window.parent.postMessage event that is used to send the auth token to the parent window.
 * You can store the auth token in localStorage and use it to claim tokens for the duration of the auth token's expiration (default 1 hour).
 */
window.parent.addEventListener('message', async (event) => {
  if (event.data.type === 'ario-jwt-success') {
    localStorage.setItem('ario-jwt', event.data.token);
    localStorage.setItem('ario-jwt-expires-at', event.data.expiresAt);
    // close our captcha window
    captchaWindow?.close();
    // claim the tokens using the JWT token
    const res = await testnet.faucet
      .claimWithAuthToken({
        authToken: event.data.token,
        recipient: await window.arweaveWallet.getActiveAddress(),
        quantity: new ARIOToken(100).toMARIO().valueOf(), // 100 ARIO
      })
      .then((res) => {
        alert(
          'Successfully claimed 100 ARIO tokens! Transaction ID: ' + res.id,
        );
      })
      .catch((err) => {
        alert(`Failed to claim tokens: ${err}`);
      });
  }
});

/**
 * Once you have a valid JWT, you can check if it is still valid and use it for subsequent requests without having to open the captcha again.
 */
if (
  localStorage.getItem('ario-jwt-expires-at') &&
  Date.now() < Number(localStorage.getItem('ario-jwt-expires-at'))
) {
  // the stored auth token has not expired and can be reused for further claims
}
```

# Primary Names (/sdks/ar-io-sdk/(ario-contract)/primary-names)

#### getPrimaryNames()

Retrieves all primary names, paginated and sorted by the specified criteria. The `cursor` used for pagination is the last name from the previous request.

```typescript
const ario = ARIO.mainnet();
const names = await ario.getPrimaryNames({
  cursor: 'ao', // this is the last name from the previous request
  limit: 1,
  sortBy: 'startTimestamp',
  sortOrder: 'desc',
});
```

**Output:**

```json
{
  "sortOrder": "desc",
  "hasMore": true,
  "totalItems": 100,
  "limit": 1,
  "sortBy": "startTimestamp",
  "cursor": "arns",
  "items": [
    {
      "owner": "HwFceQaMQnOBgKDpnFqCqgwKwEU5LBme1oXRuQOWSRA",
      "startTimestamp": 1719356032297,
      "name": "arns"
    }
  ]
}
```

#### getPrimaryName()

Retrieves the primary name for a given name or address.

```typescript
const ario = ARIO.mainnet();
const name = await ario.getPrimaryName({
  name: 'arns',
});
// or
const name = await ario.getPrimaryName({
  address: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3',
});
```

**Output:**

```json
{
  "owner": "HwFceQaMQnOBgKDpnFqCqgwKwEU5LBme1oXRuQOWSRA",
  "startTimestamp": 1719356032297,
  "name": "arns"
}
```

#### setPrimaryName()

Sets an ArNS name already owned by the `signer` as their primary name. Note: `signer` must be the owner of the `processId` that is assigned to the name. If not, the transaction will fail.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const signer = new ArweaveSigner(jwk);
const ario = ARIO.mainnet({ signer });

await ario.setPrimaryName({ name: 'my-arns-name' });
// the caller must already have purchased the name my-arns-name and be assigned as the owner of the processId that is assigned to the name
```

#### requestPrimaryName()

Requests a primary name for the `signer`'s address.
The request must be approved by the new owner of the requested name via the [`approvePrimaryNameRequest`](#approveprimarynamerequest-name-address-) API.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.requestPrimaryName({
  name: 'arns',
});
```

#### getPrimaryNameRequest()

Retrieves the primary name request for a wallet address.

```typescript
const ario = ARIO.mainnet();
const request = await ario.getPrimaryNameRequest({
  initiator: 't4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3',
});
```

**Output:**

```json
{
  "initiator": "t4Xr0_J4Iurt7caNST02cMotaz2FIbWQ4Kbj616RHl3",
  "name": "arns",
  "startTimestamp": 1728067635857,
  "endTimestamp": 1735843635857
}
```

# Vaults (/sdks/ar-io-sdk/(ario-contract)/vaults)

#### getVault()

Retrieves the locked-balance user vault of the ARIO process by the specified wallet address and vault ID.

```typescript
const ario = ARIO.mainnet();
const vault = await ario.getVault({
  address: 'QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ',
  vaultId: 'vaultIdOne',
});
```

**Output:**

```json
{
  "balance": 1000000,
  "startTimestamp": 123,
  "endTimestamp": 4567
}
```

#### getVaults()

Retrieves all locked-balance user vaults of the ARIO process, paginated and sorted by the specified criteria. The `cursor` used for pagination is the last wallet address from the previous request.

```typescript
const ario = ARIO.mainnet();
const vaults = await ario.getVaults({
  cursor: '0',
  limit: 100,
  sortBy: 'balance',
  sortOrder: 'desc',
});
```

**Output:**

```json
{
  "items": [
    {
      "address": "QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ",
      "vaultId": "vaultIdOne",
      "balance": 1000000,
      "startTimestamp": 123,
      "endTimestamp": 4567
    },
    {
      "address": "QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ",
      "vaultId": "vaultIdTwo",
      "balance": 1000000,
      "startTimestamp": 123,
      "endTimestamp": 4567
    }
    // ...98 other addresses with vaults
  ],
  "hasMore": true,
  "nextCursor": "QGWqtJdLLgm2ehFWiiPzMaoFLD50CnGuzZIPEdoDRGQ",
  "totalItems": 1789,
  "sortBy": "balance",
  "sortOrder": "desc"
}
```

#### vaultedTransfer()

Transfers `mARIO` to the designated `recipient` address and locks the balance for the specified `lockLengthMs` milliseconds. The `revokable` flag determines if the vaulted transfer can be revoked by the sender.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.vaultedTransfer(
  {
    recipient: '-5dV7nk7waR8v4STuwPnTck1zFVkQqJh5K9q9Zik4Y5',
    quantity: new ARIOToken(1000).toMARIO(),
    lockLengthMs: 1000 * 60 * 60 * 24 * 365, // 1 year
    revokable: true,
  },
  // optional additional tags
  { tags: [{ name: 'App-Name', value: 'My-Awesome-App' }] },
);
```

#### revokeVault()

Revokes a vaulted transfer by the recipient address and vault ID. Only the sender of the vaulted transfer can revoke it.

_Note: Requires `signer` to be provided on `ARIO.init` to sign the transaction._

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.revokeVault({
  recipient: '-5dV7nk7waR8v4STuwPnTck1zFVkQqJh5K9q9Zik4Y5',
  vaultId: 'IPdwa3Mb_9pDD8c2IaJx6aad51Ss-_TfStVwBuhtXMs',
});
```

#### createVault()

Creates a vault for the specified `quantity` of mARIO from the signer's balance and locks it for the specified `lockLengthMs` milliseconds.
```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.createVault({
  lockLengthMs: 1000 * 60 * 60 * 24 * 365, // 1 year
  quantity: new ARIOToken(1000).toMARIO(),
});
```

#### extendVault()

Extends the lock length of a signer's vault by the specified `extendLengthMs` milliseconds.

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.extendVault({
  vaultId: 'vaultIdOne',
  extendLengthMs: 1000 * 60 * 60 * 24 * 365, // 1 year
});
```

#### increaseVault()

Increases the balance of a signer's vault by the specified `quantity` of mARIO.

```typescript
const ario = ARIO.mainnet({ signer: new ArweaveSigner(jwk) });
const { id: txId } = await ario.increaseVault({
  vaultId: 'vaultIdOne',
  quantity: new ARIOToken(1000).toMARIO(),
});
```

# AR.IO SDK (/sdks/ar-io-sdk)

**For AI and LLM users**: Access the complete AR.IO SDK documentation in plain text format at [llm.txt](/sdks/ar-io-sdk/llm.txt) for easy consumption by AI agents and language models.

# AR.IO SDK

Please refer to the [source code](https://github.com/ar-io/ar-io-sdk) for SDK details.

# Logging (/sdks/ar-io-sdk/logging)

The library uses a lightweight console logger by default for both Node.js and web environments. The logger outputs structured JSON logs with timestamps. You can configure the log level via the `setLogLevel()` API or provide a custom logger that satisfies the `ILogger` interface.

#### Default Logger

```typescript
// set the log level
Logger.default.setLogLevel('debug');

// Create a new logger instance with a specific level
const logger = new Logger({ level: 'debug' });
```

#### Custom Logger Implementation

You can provide any custom logger that implements the `ILogger` interface:

```typescript
// Custom logger example
const customLogger: ILogger = {
  info: (message, ...args) => console.log(`[INFO] ${message}`, ...args),
  warn: (message, ...args) => console.warn(`[WARN] ${message}`, ...args),
  error: (message, ...args) => console.error(`[ERROR] ${message}`, ...args),
  debug: (message, ...args) => console.debug(`[DEBUG] ${message}`, ...args),
  setLogLevel: (level) => {
    /* implement level filtering */
  },
};

// Use custom logger with any class
const ario = ARIO.mainnet({ logger: customLogger });

// or set it as the default logger in the entire SDK
Logger.default = customLogger;
```

#### Winston Logger (Optional)

For advanced logging features, you can optionally install Winston and use the provided Winston logger adapter:

```bash
yarn add winston
```

```typescript
// Create Winston logger with custom configuration
const winstonLogger = new WinstonLogger({
  level: 'debug',
});

// Use with any class that accepts a logger
const ario = ARIO.mainnet({ logger: winstonLogger });

// or set it as the default logger in the entire SDK
Logger.default = winstonLogger;
```

#### Other Popular Loggers

You can easily integrate other popular logging libraries:

```typescript
// Bunyan example
const bunyanLogger = bunyan.createLogger({ name: 'ar-io-sdk' });
const adapter: ILogger = {
  info: (message, ...args) => bunyanLogger.info({ args }, message),
  warn: (message, ...args) => bunyanLogger.warn({ args }, message),
  error: (message, ...args) => bunyanLogger.error({ args }, message),
  debug: (message, ...args) => bunyanLogger.debug({ args }, message),
  setLogLevel: (level) => bunyanLogger.level(level),
};
const ario = ARIO.mainnet({ logger: adapter });

// or set it as the default logger in the entire SDK
Logger.default = adapter;
```

# Pagination
(/sdks/ar-io-sdk/pagination) #### Overview Certain APIs that could return a large amount of data are paginated using cursors. The SDK uses the `cursor` pattern (as opposed to pages) to better protect against changing data while paginating through a list of items. For more information on pagination strategies, refer to [this article](https://www.getknit.dev/blog/api-pagination-best-practices#api-pagination-techniques-). Paginated results include the following properties: - `items`: the list of items returned by the current request; the page size defaults to 100 items. - `nextCursor`: the cursor to use for the next batch of items. This is `undefined` if there are no more items to fetch. - `hasMore`: a boolean indicating if there are more items to fetch. This is `false` if there are no more items to fetch. - `totalItems`: the total number of items available. This may change as new items are added to the list; use it for informational purposes only. - `sortBy`: the field used to sort the items, by default this is `startTimestamp`. - `sortOrder`: the order used to sort the items, by default this is `desc`. To request all the items in a list, you can iterate through the list using the `nextCursor` until `hasMore` is `false`. ```typescript let hasMore = true; let cursor: string | undefined; const gateways = []; while (hasMore) { const page = await ario.getGateways({ limit: 100, cursor }); gateways.push(...page.items); cursor = page.nextCursor; hasMore = page.hasMore; } ``` #### Filtering Paginated APIs also support filtering by providing a `filters` parameter. Filters can be applied to any field in the response. When multiple keys are provided, they are treated as AND conditions (all conditions must match). When multiple values are provided for a single key (as an array), they are treated as OR conditions (any value can match). Example: ```typescript const records = await ario.getArNSRecords({ filters: { type: 'lease', processId: [ 'ZkgLfyHALs5koxzojpcsEFAKA8fbpzP7l-tbM7wmQNM', 'r61rbOjyXx3u644nGl9bkwLWlWmArMEzQgxBo2R-Vu0', ], }, }); ``` In the example above, the query will return ArNS records where: - The type is "lease" AND - The processId is EITHER "ZkgLfyHALs5koxzojpcsEFAKA8fbpzP7l-tbM7wmQNM" OR "r61rbOjyXx3u644nGl9bkwLWlWmArMEzQgxBo2R-Vu0" # Token Conversion (/sdks/ar-io-sdk/token-conversion) The ARIO process stores all values as mARIO (the smallest unit of ARIO, where 1 ARIO = 1,000,000 mARIO) to avoid floating-point arithmetic issues. The SDK provides `ARIOToken` and `mARIOToken` classes to handle the conversion between ARIO and mARIO, along with rounding logic for precision. **All process interactions expect values in mARIO.
If numbers are provided as inputs, they are assumed to be in raw mARIO values.** #### Converting between ARIO and mARIO ```typescript const arioValue = 1; const mARIOValue = new ARIOToken(arioValue).toMARIO(); const mARIOAmount = 1_000_000; const arioAmount = new mARIOToken(mARIOAmount).toARIO(); ``` # Anonymous Operations (/sdks/ardrive-core-js/(advanced-features)/anonymous-operations) Use ArDrive without a wallet for read-only operations: ```typescript const anonymousArDrive = arDriveAnonymousFactory({}); // Read public data const publicFile = await anonymousArDrive.getPublicFile({ fileId }); const folderContents = await anonymousArDrive.listPublicFolder({ folderId }); ``` # Bundle Support (/sdks/ardrive-core-js/(advanced-features)/bundle-support) Large uploads are automatically bundled for efficiency: ```typescript // Bundling happens automatically for multiple files const bulkResult = await arDrive.uploadAllEntities({ entitiesToUpload: manyFiles, // Bundling is handled internally }); ``` # Caching (/sdks/ardrive-core-js/(advanced-features)/caching) ArDrive Core maintains a metadata cache for improved performance: ```shell Windows: <os.homedir()>/ardrive-caches/metadata Non-Windows: <os.homedir()>/.ardrive/caches/metadata ``` Enable cache logging by setting the `ARDRIVE_CACHE_LOG` environment variable: ```bash ARDRIVE_CACHE_LOG=1 ``` # Community Features (/sdks/ardrive-core-js/(advanced-features)/community-features) Send tips to the ArDrive community: ```typescript // Send community tip await arDrive.sendCommunityTip({ tokenAmount: new Winston(1000000000000), // 1 AR walletAddress, communityWalletAddress }); ``` # Manifest Creation (/sdks/ardrive-core-js/(advanced-features)/manifest-creation) Create Arweave manifests for web hosting: ```typescript // Create a manifest for a folder const manifest = await arDrive.uploadPublicManifest({ folderId, destManifestName: 'index.html', conflictResolution: 'upsert' }); // Access: https://arweave.net/{manifestId} ``` # Progress Tracking (/sdks/ardrive-core-js/(advanced-features)/progress-tracking) Enable upload progress logging by setting the `ARDRIVE_PROGRESS_LOG` environment variable: ```bash ARDRIVE_PROGRESS_LOG=1 ``` Progress will be logged to stderr: ``` Uploading file transaction 1 of total 2 transactions...
Transaction _GKQasQX194a364Hph8Oe-oku1AdfHwxWOw9_JC1yjc Upload Progress: 0% Transaction _GKQasQX194a364Hph8Oe-oku1AdfHwxWOw9_JC1yjc Upload Progress: 35% Transaction _GKQasQX194a364Hph8Oe-oku1AdfHwxWOw9_JC1yjc Upload Progress: 66% Transaction _GKQasQX194a364Hph8Oe-oku1AdfHwxWOw9_JC1yjc Upload Progress: 100% ``` # Turbo Integration (/sdks/ardrive-core-js/(advanced-features)/turbo-integration) Enable Turbo for optimized uploads: ```typescript // Enable Turbo const arDriveWithTurbo = arDriveFactory({ wallet: myWallet, turboSettings: {} }); // Uploads will automatically use Turbo const result = await arDriveWithTurbo.uploadAllEntities({ entitiesToUpload: [{ wrappedEntity, destFolderId }] }); ``` # Bulk Operations (/sdks/ardrive-core-js/(api-reference)/bulk-operations) #### Upload Multiple Files and Folders ```typescript // Prepare entities for upload const folder1 = wrapFileOrFolder('/path/to/folder1'); const folder2 = wrapFileOrFolder('/path/to/folder2'); const file1 = wrapFileOrFolder('/path/to/file1.txt'); // Upload everything in one operation const bulkUpload = await arDrive.uploadAllEntities({ entitiesToUpload: [ // Public folder { wrappedEntity: folder1, destFolderId: rootFolderId }, // Private folder { wrappedEntity: folder2, destFolderId: rootFolderId, driveKey: privateDriveKey }, // Public file { wrappedEntity: file1, destFolderId: someFolderId } ], conflictResolution: 'upsert' }); // Results include all created entities console.log('Created folders:', bulkUpload.created.length); console.log('Total cost:', bulkUpload.totalCost.toString()); ``` #### Create Folder and Upload Contents ```typescript // Create folder and upload all children const folderWithContents = await arDrive.createPublicFolderAndUploadChildren({ parentFolderId, wrappedFolder: wrapFileOrFolder('/path/to/folder'), conflictResolution: 'skip' }); ``` # Conflict Resolution (/sdks/ardrive-core-js/(api-reference)/conflict-resolution) Available strategies when uploading files/folders that already exist: ```typescript // Skip existing files await arDrive.uploadAllEntities({ entitiesToUpload: [...], conflictResolution: 'skip' }); // Replace all existing files await arDrive.uploadAllEntities({ entitiesToUpload: [...], conflictResolution: 'replace' }); // Update only if content differs (default) await arDrive.uploadAllEntities({ entitiesToUpload: [...], conflictResolution: 'upsert' }); // Rename conflicting files await arDrive.uploadAllEntities({ entitiesToUpload: [...], conflictResolution: 'rename' }); // Throw error on conflicts await arDrive.uploadAllEntities({ entitiesToUpload: [...], conflictResolution: 'error' }); // Interactive prompt (CLI only) await arDrive.uploadAllEntities({ entitiesToUpload: [...], conflictResolution: 'ask' }); ``` # Custom Metadata (/sdks/ardrive-core-js/(api-reference)/custom-metadata) Attach custom metadata to files: ```typescript const fileWithMetadata = wrapFileOrFolder( '/path/to/file.txt', 'text/plain', { metaDataJson: { 'Custom-Field': 'Custom Value', 'Version': '1.0' }, metaDataGqlTags: { 'App-Name': ['MyApp'], 'App-Version': ['1.0.0'] }, dataGqlTags: { 'Content-Type': ['text/plain'] } } ); // Upload with custom metadata await arDrive.uploadPublicFile({ parentFolderId, wrappedFile: fileWithMetadata }); ``` # Download Operations (/sdks/ardrive-core-js/(api-reference)/download-operations) #### Download Files ```typescript // Download public file const publicData = await arDrive.downloadPublicFile({ fileId }); // publicData is a Buffer/Uint8Array // Download private file (automatically 
decrypted) const privateData = await arDrive.downloadPrivateFile({ fileId, driveKey }); ``` #### Download Folders ```typescript // Download entire folder const folderData = await arDrive.downloadPublicFolder({ folderId, destFolderPath: '/local/download/path' }); // Download private folder const privateFolderData = await arDrive.downloadPrivateFolder({ folderId, driveKey, destFolderPath: '/local/download/path' }); ``` # Drive Operations (/sdks/ardrive-core-js/(api-reference)/drive-operations) #### Creating Drives ```typescript // Public drive const publicDrive = await arDrive.createPublicDrive({ driveName: 'My Public Drive' }); // Private drive with password const privateDrive = await arDrive.createPrivateDrive({ driveName: 'My Private Drive', drivePassword: 'mySecretPassword' }); ``` #### Reading Drive Information ```typescript // Get public drive const publicDriveInfo = await arDrive.getPublicDrive({ driveId }); // Get private drive (requires drive key) const privateDriveInfo = await arDrive.getPrivateDrive({ driveId, driveKey }); // Get all drives for an address const allDrives = await arDrive.getAllDrivesForAddress({ address: walletAddress, privateKeyData: wallet.getPrivateKey() }); ``` #### Renaming Drives ```typescript // Rename public drive await arDrive.renamePublicDrive({ driveId, newName: 'Updated Drive Name' }); // Rename private drive await arDrive.renamePrivateDrive({ driveId, driveKey, newName: 'Updated Private Name' }); ``` # Encryption & Security (/sdks/ardrive-core-js/(api-reference)/encryption-security) #### Key Derivation ```typescript // Derive drive key from password const driveKey = await deriveDriveKey( 'myPassword', driveId.toString(), JSON.stringify(wallet.getPrivateKey()) ); // File keys are automatically derived from drive keys const fileKey = await deriveFileKey(driveKey, fileId); ``` #### Manual Encryption/Decryption ```typescript // Encrypt data const { cipher, cipherIV } = await driveEncrypt(driveKey, data); // Decrypt data const decrypted = await driveDecrypt(cipherIV, driveKey, cipher); ``` # File Operations (/sdks/ardrive-core-js/(api-reference)/file-operations) #### Uploading Files ```typescript // Wrap file for upload const wrappedFile = wrapFileOrFolder('/path/to/file.pdf'); // Upload public file const publicUpload = await arDrive.uploadPublicFile({ parentFolderId, wrappedFile, conflictResolution: 'upsert' // skip, replace, upsert, or error }); // Upload private file const privateUpload = await arDrive.uploadPrivateFile({ parentFolderId, driveKey, wrappedFile }); ``` #### Reading File Information ```typescript // Get public file metadata const publicFile = await arDrive.getPublicFile({ fileId }); // Get private file metadata const privateFile = await arDrive.getPrivateFile({ fileId, driveKey }); ``` #### Moving and Renaming Files ```typescript // Move file await arDrive.movePublicFile({ fileId, newParentFolderId }); // Rename file await arDrive.renamePublicFile({ fileId, newName: 'renamed-file.pdf' }); ``` # Folder Operations (/sdks/ardrive-core-js/(api-reference)/folder-operations) #### Creating Folders ```typescript // Public folder const publicFolder = await arDrive.createPublicFolder({ folderName: 'Documents', driveId, parentFolderId }); // Private folder const privateFolder = await arDrive.createPrivateFolder({ folderName: 'Secret Documents', driveId, driveKey, parentFolderId }); ``` #### Listing Folder Contents ```typescript // List public folder const publicContents = await arDrive.listPublicFolder({ folderId, maxDepth: 2, // Optional: limit 
recursion depth includeRoot: true // Optional: include root folder in results }); // List private folder const privateContents = await arDrive.listPrivateFolder({ folderId, driveKey, maxDepth: 1 }); ``` #### Moving and Renaming Folders ```typescript // Move folder await arDrive.movePublicFolder({ folderId, newParentFolderId }); // Rename folder await arDrive.renamePublicFolder({ folderId, newName: 'New Folder Name' }); ``` # Pricing & Cost Estimation (/sdks/ardrive-core-js/(api-reference)/pricing-cost-estimation) ```typescript // Get price estimator const priceEstimator = arDrive.getArDataPriceEstimator(); // Estimate cost for data size const cost = await priceEstimator.getARPriceForByteCount( new ByteCount(1024 * 1024) // 1MB ); // Get base Winston price (without tips) const basePrice = await priceEstimator.getBaseWinstonPriceForByteCount( new ByteCount(5 * 1024 * 1024) // 5MB ); ``` # Entity IDs (/sdks/ardrive-core-js/(core-concepts)/entity-ids) Use the type-safe entity ID constructors: ```typescript // Generic entity ID const entityId = EID('0108b54a-eb5e-4134-8ae2-a3946a428ec7'); // Specific entity IDs const driveId = new DriveID('2345674a-eb5e-4134-8ae2-a3946a428ec7'); const folderId = new FolderID('7162534a-eb5e-4134-8ae2-a3946a428ec7'); const fileId = new FileID('8765432a-eb5e-4134-8ae2-a3946a428ec7'); ``` # Entity Types (/sdks/ardrive-core-js/(core-concepts)/entity-types) ArDrive uses a hierarchical structure: - **Drives**: Top-level containers (public or private) - **Folders**: Organize files within drives - **Files**: Individual files stored on Arweave Each entity has a unique ID (`DriveID`, `FolderID`, `FileID`) and can be either public (unencrypted) or private (encrypted). # Wallet Management (/sdks/ardrive-core-js/(core-concepts)/wallet-management) ```typescript // Create wallet from JWK const wallet = new JWKWallet(jwkKey); // Check wallet balance const balance = await wallet.getBalance(); ``` # Building (/sdks/ardrive-core-js/(development)/building) ```shell yarn build yarn dev ``` # Environment Setup (/sdks/ardrive-core-js/(development)/environment-setup) We use nvm and Yarn for development: 1. Install nvm [using their instructions][nvm-install] 2. Install correct Node version: `nvm install && nvm use` 3. Install Yarn 3.x: Follow [Yarn installation][yarn-install] 4. Enable git hooks: `yarn husky install` 5. Install dependencies: `yarn install --check-cache` # Linting and Formatting (/sdks/ardrive-core-js/(development)/linting-and-formatting) ```shell yarn lint yarn lintfix yarn format yarn typecheck ``` # Recommended VS Code Extensions (/sdks/ardrive-core-js/(development)/recommended-vs-code-extensions) - [ESLint][eslint-vscode] - [EditorConfig][editor-config-vscode] - [Prettier][prettier-vscode] - [ZipFS][zipfs-vscode] # ArLocal Testing (/sdks/ardrive-core-js/(testing)/arlocal-testing) For integration testing with a local Arweave instance: ```shell yarn arlocal-docker-test ``` # Running Tests (/sdks/ardrive-core-js/(testing)/running-tests) ```shell yarn test yarn test -g 'My specific test' yarn coverage yarn power-assert -g 'My test case' ``` # Test Organization (/sdks/ardrive-core-js/(testing)/test-organization) - Unit tests: Located next to source files (`*.test.ts`) - Integration tests: Located in `/tests` directory # Contributing (/sdks/ardrive-core-js/contributing) 1. Fork the repository 2. Create your feature branch (`git checkout -b feature/amazing-feature`) 3. Commit your changes (`git commit -m 'Add some amazing feature'`) 4.
Push to the branch (`git push origin feature/amazing-feature`) 5. Open a Pull Request # ArDrive Core JS (/sdks/ardrive-core-js) **For AI and LLM users**: Access the complete ArDrive Core JS documentation in plain text format at [llm.txt](/sdks/ardrive-core-js/llm.txt) for easy consumption by AI agents and language models. # ArDrive Core JS Please refer to the [source code](https://github.com/ardriveapp/ardrive-core-js) for SDK details. # License (/sdks/ardrive-core-js/license) AGPL-3.0-or-later # Support (/sdks/ardrive-core-js/support) - [Discord Community](https://discord.gg/7RuTBckX) - [GitHub Issues](https://github.com/ardriveapp/ardrive-core-js/issues) - [ArDrive Website](https://ardrive.io) [yarn-install]: https://yarnpkg.com/getting-started/install [nvm-install]: https://github.com/nvm-sh/nvm#installing-and-updating [editor-config-vscode]: https://marketplace.visualstudio.com/items?itemName=EditorConfig.EditorConfig [prettier-vscode]: https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode [zipfs-vscode]: https://marketplace.visualstudio.com/items?itemName=arcanis.vscode-zipfs [eslint-vscode]: https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint [mocha]: https://github.com/mochajs/mocha [chai]: https://github.com/chaijs/chai [sinon]: https://github.com/sinonjs/sinon # Introduction (/sdks) Build powerful applications with our comprehensive suite of SDKs designed for the AR.IO ecosystem. } title="Upload data with the Turbo SDK" description="High-performance data upload service for Arweave with instant confirmation and transparent pricing" href="/sdks/turbo-sdk" /> } title="Interact with the AR.IO Network using the AR.IO SDK" description="Access AR.IO Network protocols, manage ArNS names, interact with ANTs, and integrate gateway services" href="/sdks/ar-io-sdk" /> } title="Decentralized access with Wayfinder SDK" description="Robust, censorship-resistant access to Arweave data through the distributed AR.IO gateway network" href="/sdks/wayfinder" /> ## Choose Your SDK Each SDK serves a specific purpose in the AR.IO ecosystem: - **Turbo SDK** - For applications that need fast, reliable data uploads to Arweave - **AR.IO SDK** - For interacting with AR.IO Network smart contracts and services - **Wayfinder SDK** - For decentralized data access with built-in verification and gateway routing All SDKs are available for both Node.js and browser environments, with TypeScript support included. ## Next Steps # TurboAuthenticatedClient (/sdks/turbo-sdk/(apis)/turboauthenticatedclient) #### getBalance() Issues a signed request to get the credit balance of the connected wallet, measured in Winston Credits (winc). ```typescript const { winc: balance } = await turbo.getBalance(); ``` #### signer.getNativeAddress() Returns the [native address][docs/native-address] of the connected signer. ```typescript const address = await turbo.signer.getNativeAddress(); ``` #### getWincForFiat() Returns the current amount of Winston Credits including all adjustments for the provided fiat currency, amount, and optional promo codes. ```typescript const { winc, paymentAmount, quotedPaymentAmount, adjustments } = await turbo.getWincForFiat({ amount: USD(100), promoCodes: ['MY_PROMO_CODE'], // promo codes require an authenticated client }); ``` #### createCheckoutSession() Creates a Stripe checkout session for a Turbo Top Up with the provided amount, currency, owner, and optional promo codes. The returned URL can be opened in the browser; all payments are processed by Stripe.
Promo codes require an authenticated client. ```typescript const { url, winc, paymentAmount, quotedPaymentAmount, adjustments } = await turbo.createCheckoutSession({ amount: USD(10.0), // $10.00 USD owner: publicArweaveAddress, promoCodes: ['MY_PROMO_CODE'], // promo codes require an authenticated client }); // open checkout session in a browser window.open(url, '_blank'); ``` #### upload() The easiest way to upload data to Turbo. The `signal` is an optional [AbortSignal] that can be used to cancel the upload or timeout the request. `dataItemOpts` is an optional object that can be used to configure tags, target, and anchor for the data item upload. ```typescript const uploadResult = await turbo.upload({ data: 'The contents of my file!', signal: AbortSignal.timeout(10_000), // cancel the upload after 10 seconds dataItemOpts: { // optional }, events: { // optional }, }); ``` #### uploadFile() Signs and uploads a raw file. There are two ways to provide the file to the SDK: 1. Using a `file` parameter 2. Using a `fileStreamFactory` and `fileSizeFactory` ##### Using `file` In the browser with a file input: ```typescript const selectedFile = e.target.files[0]; const uploadResult = await turbo.uploadFile({ file: selectedFile, dataItemOpts: { tags: [{ name: 'Content-Type', value: 'text/plain' }], }, events: { onUploadProgress: ({ totalBytes, processedBytes }) => { console.log('Upload progress:', { totalBytes, processedBytes }); }, onUploadError: (error) => { console.log('Upload error:', { error }); }, onUploadSuccess: () => { console.log('Upload success!'); }, }, }); ``` In NodeJS with a file path: ```typescript const filePath = path.join(__dirname, './my-unsigned-file.txt'); const fileSize = fs.statSync(filePath).size; const uploadResult = await turbo.uploadFile({ file: filePath, dataItemOpts: { tags: [{ name: 'Content-Type', value: 'text/plain' }], }, }); ``` ##### Using `fileStreamFactory` and `fileSizeFactory` Note: The provided `fileStreamFactory` should produce a NEW file data stream each time it is invoked. The `fileSizeFactory` is a function that returns the size of the file. The `signal` is an optional [AbortSignal] that can be used to cancel the upload or timeout the request. `dataItemOpts` is an optional object that can be used to configure tags, target, and anchor for the data item upload. ```typescript const filePath = path.join(__dirname, './my-unsigned-file.txt'); const fileSize = fs.statSync(filePath).size; const uploadResult = await turbo.uploadFile({ fileStreamFactory: () => fs.createReadStream(filePath), fileSizeFactory: () => fileSize, }); ``` ##### Customize Multi-Part Upload Behavior By default, the Turbo upload methods will split files that are larger than 10 MiB into chunks and send them to the upload service multi-part endpoints. This behavior can be customized with the following inputs: - `chunkByteCount`: The maximum size in bytes for each chunk. Must be between 5 MiB and 500 MiB. Defaults to 5 MiB. - `maxChunkConcurrency`: The maximum number of chunks to upload concurrently. Defaults to 5. Reducing concurrency will slow down uploads, but reduce memory utilization and serialize network calls. Increasing it will upload faster, but can strain available resources. - `chunkingMode`: The chunking mode to use. Can be 'auto', 'force', or 'disabled'. Defaults to 'auto'. Auto behavior means chunking is enabled if the file would be split into at least three chunks.
- `maxFinalizeMs`: The maximum time in milliseconds to wait for the finalization of all chunks after the last chunk is uploaded. Defaults to 1 minute per GiB of the total file size. ```typescript // Customize chunking behavior await turbo.upload({ ...params, chunkByteCount: 1024 * 1024 * 500, // Max chunk size maxChunkConcurrency: 1, // Minimize concurrency }); ``` ```typescript // Disable chunking behavior await turbo.upload({ ...params, chunkingMode: 'disabled', }); ``` ```typescript // Force chunking behavior await turbo.upload({ ...params, chunkingMode: 'force', }); ``` #### On Demand Uploads With the upload methods, you can choose to top up with a selected crypto token on demand if the connected wallet does not have enough Credits to complete the upload. This is done by providing an `OnDemandFunding` instance to the `fundingMode` parameter on upload methods. The `maxTokenAmount` (optional) is the maximum amount of tokens, in the token type's smallest unit value (e.g., Winston for the arweave token type), to fund the wallet with. The `topUpBufferMultiplier` (optional) is the multiplier to apply to the estimated top-up amount to avoid underpayment during on-demand top-ups due to price fluctuations on longer uploads. Defaults to 1.1, meaning a 10% buffer. Note: The on demand API is currently only available for the $ARIO (`ario`), $SOL (`solana`), and $ETH on Base Network (`base-eth`) token types. ```typescript const turbo = TurboFactory.authenticated({ signer: arweaveSignerWithARIO, token: 'ario', }); await turbo.upload({ ...params, fundingMode: new OnDemandFunding({ maxTokenAmount: ARIOToTokenAmount(500), // Max 500 $ARIO topUpBufferMultiplier: 1.1, // 10% buffer to avoid underpayment }), }); ``` #### uploadFolder() Signs and uploads a folder of files. For NodeJS, the `folderPath` of the folder to upload is required. For the browser, an array of `files` is required. The `dataItemOpts` is an optional object that can be used to configure tags, target, and anchor for the data item upload. The `signal` is an optional [AbortSignal] that can be used to cancel the upload or timeout the request. The `maxConcurrentUploads` is an optional number that can be used to limit the number of concurrent uploads. The `throwOnFailure` is an optional boolean that can be used to throw an error if any upload fails. The `manifestOptions` is an optional object that can be used to configure the manifest file, including a custom index file, fallback file, or whether to disable manifests altogether. Manifests are enabled by default.
##### NodeJS Upload Folder ```typescript const folderPath = path.join(__dirname, './my-folder'); const { manifest, fileResponses, manifestResponse } = await turbo.uploadFolder({ folderPath, dataItemOpts: { // optional tags: [ { // User defined content type will overwrite file content type name: 'Content-Type', value: 'text/plain', }, { name: 'My-Custom-Tag', value: 'my-custom-value', }, ], // no timeout or AbortSignal provided }, manifestOptions: { // optional indexFile: 'custom-index.html', fallbackFile: 'custom-fallback.html', disableManifests: false, }, }); ``` ##### Browser Upload Folder ```typescript const folderInput = document.getElementById('folder'); folderInput.addEventListener('change', async (event) => { const selectedFiles = folderInput.files; console.log('Folder selected:', selectedFiles); const { manifest, fileResponses, manifestResponse } = await turbo.uploadFolder({ files: Array.from(selectedFiles).map((file) => file), }); console.log(manifest, fileResponses, manifestResponse); }); ``` #### topUpWithTokens() Tops up the connected wallet with Credits by submitting a payment transaction for the token amount to the Turbo wallet and then submitting that transaction ID to the Turbo Payment Service for top up processing. - The `tokenAmount` is the amount of tokens in the token type's smallest unit value (e.g., Winston for the arweave token type) to fund the wallet with. - The `feeMultiplier` (optional) is the multiplier to apply to the reward for the transaction to modify its chances of being mined. Defaults to 1.0, meaning no multiplier. Credits will be added to the wallet balance after the transaction is confirmed on the given blockchain. ##### Arweave (AR) Crypto Top Up ```typescript const turbo = TurboFactory.authenticated({ signer, token: 'arweave' }); const { winc, status, id, ...fundResult } = await turbo.topUpWithTokens({ tokenAmount: WinstonToTokenAmount(100_000_000), // 0.0001 AR feeMultiplier: 1.1, // 10% increase in reward for improved mining chances }); ``` ##### AR.IO Network (ARIO) Crypto Top Up ```typescript const turbo = TurboFactory.authenticated({ signer, token: 'ario' }); const { winc, status, id, ...fundResult } = await turbo.topUpWithTokens({ tokenAmount: ARIOToTokenAmount(100), // 100 $ARIO }); ``` ##### Ethereum (ETH) Crypto Top Up ```typescript const turbo = TurboFactory.authenticated({ signer, token: 'ethereum' }); const { winc, status, id, ...fundResult } = await turbo.topUpWithTokens({ tokenAmount: ETHToTokenAmount(0.00001), // 0.00001 ETH }); ``` ##### Polygon (POL / MATIC) Crypto Top Up ```typescript const turbo = TurboFactory.authenticated({ signer, token: 'pol' }); const { winc, status, id, ...fundResult } = await turbo.topUpWithTokens({ tokenAmount: POLToTokenAmount(0.00001), // 0.00001 POL }); ``` ##### ETH on Base Network Crypto Top Up ```typescript const turbo = TurboFactory.authenticated({ signer, token: 'base-eth' }); const { winc, status, id, ...fundResult } = await turbo.topUpWithTokens({ tokenAmount: ETHToTokenAmount(0.00001), // 0.00001 ETH bridged on Base Network }); ``` ##### Solana (SOL) Crypto Top Up ```typescript const turbo = TurboFactory.authenticated({ signer, token: 'solana' }); const { winc, status, id, ...fundResult } = await turbo.topUpWithTokens({ tokenAmount: SOLToTokenAmount(0.00001), // 0.00001 SOL }); ``` ##### KYVE Crypto Top Up ```typescript const turbo = TurboFactory.authenticated({ signer, token: 'kyve' }); const { winc, status, id, ...fundResult } = await turbo.topUpWithTokens({ tokenAmount: KYVEToTokenAmount(0.00001), // 0.00001
KYVE }); ``` #### shareCredits() Shares credits from the connected wallet to the provided native address for the approved winc amount. This action will create a signed data item for the approval. ```typescript const { approvalDataItemId, approvedWincAmount } = await turbo.shareCredits({ approvedAddress: '2cor...VUa', approvedWincAmount: 800_000_000_000, // 0.8 Credits expiresBySeconds: 3600, // Credits will expire back to original wallet in 1 hour }); ``` #### revokeCredits() Revokes all credits shared from the connected wallet to the provided native address. ```typescript const revokedApprovals = await turbo.revokeCredits({ revokedAddress: '2cor...VUa', }); ``` #### getCreditShareApprovals() Returns all given or received credit share approvals for the connected wallet or the provided native address. ```typescript const { givenApprovals, receivedApprovals } = await turbo.getCreditShareApprovals({ userAddress: '2cor...VUa', }); ``` # TurboFactory (/sdks/turbo-sdk/(apis)/turbofactory) #### unauthenticated() Creates an instance of a client that accesses Turbo's unauthenticated services. ```typescript const turbo = TurboFactory.unauthenticated(); ``` #### authenticated() Creates an instance of a client that accesses Turbo's authenticated and unauthenticated services. Requires either a signer or a private key to be provided. ##### Arweave JWK ```typescript const jwk = await arweave.crypto.generateJWK(); const turbo = TurboFactory.authenticated({ privateKey: jwk }); ``` ##### ArweaveSigner ```typescript const signer = new ArweaveSigner(jwk); const turbo = TurboFactory.authenticated({ signer }); ``` ##### ArconnectSigner ```typescript const signer = new ArconnectSigner(window.arweaveWallet); const turbo = TurboFactory.authenticated({ signer }); ``` ##### EthereumSigner ```typescript const signer = new EthereumSigner(privateKey); const turbo = TurboFactory.authenticated({ signer }); ``` ##### Ethereum Private Key ```typescript const turbo = TurboFactory.authenticated({ privateKey: ethHexadecimalPrivateKey, token: 'ethereum', }); ``` ##### POL (MATIC) Private Key ```typescript const turbo = TurboFactory.authenticated({ privateKey: ethHexadecimalPrivateKey, token: 'pol', }); ``` ##### HexSolanaSigner ```typescript const signer = new HexSolanaSigner(bs58.encode(secretKey)); const turbo = TurboFactory.authenticated({ signer }); ``` ##### Solana Web Wallet Adapter ```typescript const turbo = TurboFactory.authenticated({ walletAdapter: window.solana, token: 'solana', }); ``` ##### Solana Secret Key ```typescript const turbo = TurboFactory.authenticated({ privateKey: bs58.encode(secretKey), token: 'solana', }); ``` ##### KYVE Private Key ```typescript const turbo = TurboFactory.authenticated({ privateKey: kyveHexadecimalPrivateKey, token: 'kyve', }); ``` ##### KYVE Mnemonic ```typescript const turbo = TurboFactory.authenticated({ privateKey: privateKeyFromKyveMnemonic(mnemonic), token: 'kyve', }); ``` # TurboUnauthenticatedClient (/sdks/turbo-sdk/(apis)/turbounauthenticatedclient) #### getSupportedCurrencies() Returns the list of currencies supported by the Turbo Payment Service for topping up a user balance of AR Credits (measured in Winston Credits, or winc). ```typescript const currencies = await turbo.getSupportedCurrencies(); ``` #### getSupportedCountries() Returns the list of countries supported by the Turbo Payment Service's top up workflow.
```typescript const countries = await turbo.getSupportedCountries(); ``` #### getFiatToAR() Returns the current raw fiat to AR conversion rate for a specific currency as reported by third-party pricing oracles. ```typescript const fiatToAR = await turbo.getFiatToAR({ currency: 'USD' }); ``` #### getFiatRates() Returns the current fiat rates for 1 GiB of data for supported currencies, including all top-up adjustments and fees. ```typescript const rates = await turbo.getFiatRates(); ``` #### getWincForFiat() Returns the current amount of Winston Credits including all adjustments for the provided fiat currency. ```typescript const { winc, actualPaymentAmount, quotedPaymentAmount, adjustments } = await turbo.getWincForFiat({ amount: USD(100), }); ``` #### getWincForToken() Returns the current amount of Winston Credits including all adjustments for the provided token amount. ```typescript const { winc, actualTokenAmount, equivalentWincTokenAmount } = await turbo.getWincForToken({ tokenAmount: WinstonToTokenAmount(100_000_000), }); ``` #### getFiatEstimateForBytes() Get the current price from the Turbo Payment Service, denominated in the specified fiat currency, for uploading a specified number of bytes to Turbo. ```typescript const turbo = TurboFactory.unauthenticated(); const { amount } = await turbo.getFiatEstimateForBytes({ byteCount: 1024 * 1024 * 1024, currency: 'usd', // specify the currency for the price }); console.log(amount); // Estimated USD price for 1 GiB ``` **Output:** ```json { "byteCount": 1073741824, "amount": 20.58, "currency": "usd", "winc": "2402378997310" } ``` #### getTokenPriceForBytes() Get the current price from the Turbo Payment Service, denominated in the specified token, for uploading a specified number of bytes to Turbo. ```typescript const turbo = TurboFactory.unauthenticated({ token: 'solana' }); const { tokenPrice } = await turbo.getTokenPriceForBytes({ byteCount: 1024 * 1024 * 100, }); console.log(tokenPrice); // Estimated SOL price for 100 MiB ``` #### getUploadCosts() Returns the estimated cost in Winston Credits for the provided file sizes, including all upload adjustments and fees. ```typescript const [uploadCostForFile] = await turbo.getUploadCosts({ bytes: [1024] }); const { winc, adjustments } = uploadCostForFile; ``` #### uploadSignedDataItem() Uploads a signed data item. The provided `dataItemStreamFactory` should produce a NEW signed data item stream each time it is invoked. The `dataItemSizeFactory` is a function that returns the size of the file. The `signal` is an optional [AbortSignal] that can be used to cancel the upload or timeout the request. The `events` parameter is an optional object that can be used to listen to upload progress, errors, and success (refer to the [Events] section for more details).
```typescript const filePath = path.join(__dirname, './my-signed-data-item'); const dataItemSize = fs.statSync(filePath).size; const uploadResponse = await turbo.uploadSignedDataItem({ dataItemStreamFactory: () => fs.createReadStream(filePath), dataItemSizeFactory: () => dataItemSize, signal: AbortSignal.timeout(10_000), // cancel the upload after 10 seconds events: { // track upload events only onUploadProgress: ({ totalBytes, processedBytes }) => { console.log('Upload progress:', { totalBytes, processedBytes }); }, onUploadError: (error) => { console.log('Upload error:', { error }); }, onUploadSuccess: () => { console.log('Upload success!'); }, }, }); ``` #### createCheckoutSession() Creates a Stripe checkout session for a Turbo Top Up with the provided amount, currency, and owner. The returned URL can be opened in the browser; all payments are processed by Stripe. To leverage promo codes, see [TurboAuthenticatedClient]. ##### Arweave (AR) Fiat Top Up ```typescript const { url, winc, paymentAmount, quotedPaymentAmount, adjustments } = await turbo.createCheckoutSession({ amount: USD(10.0), // $10.00 USD owner: publicArweaveAddress, // promo codes require an authenticated client }); // Open checkout session in a browser window.open(url, '_blank'); ``` ##### Ethereum (ETH) Fiat Top Up ```typescript const turbo = TurboFactory.unauthenticated({ token: 'ethereum' }); const { url, winc, paymentAmount } = await turbo.createCheckoutSession({ amount: USD(10.0), // $10.00 USD owner: publicEthereumAddress, }); ``` ##### Solana (SOL) Fiat Top Up ```typescript const turbo = TurboFactory.unauthenticated({ token: 'solana' }); const { url, winc, paymentAmount } = await turbo.createCheckoutSession({ amount: USD(10.0), // $10.00 USD owner: publicSolanaAddress, }); ``` ##### Polygon (POL / MATIC) Fiat Top Up ```typescript const turbo = TurboFactory.unauthenticated({ token: 'pol' }); const { url, winc, paymentAmount } = await turbo.createCheckoutSession({ amount: USD(10.0), // $10.00 USD owner: publicPolygonAddress, }); ``` ##### KYVE Fiat Top Up ```typescript const turbo = TurboFactory.unauthenticated({ token: 'kyve' }); const { url, winc, paymentAmount } = await turbo.createCheckoutSession({ amount: USD(10.0), // $10.00 USD owner: publicKyveAddress, }); ``` #### submitFundTransaction() Submits the transaction ID of a funding transaction to the Turbo Payment Service for top up processing. The `txId` is the transaction ID of the transaction to be submitted. Use this API if you've already executed your token transfer to the Turbo wallet. Otherwise, consider using `topUpWithTokens` to execute a new token transfer to the Turbo wallet and submit its resulting transaction ID for top up processing all in one go. ```typescript const turbo = TurboFactory.unauthenticated(); // defaults to arweave token type const { status, id, ...fundResult } = await turbo.submitFundTransaction({ txId: 'my-valid-arweave-fund-transaction-id', }); ``` # Events (/sdks/turbo-sdk/events) The SDK provides events for tracking the state of signing and uploading data to Turbo. You can listen to these events by providing a callback function to the `events` parameter of the `upload`, `uploadFile`, and `uploadSignedDataItem` methods. - `onProgress` - emitted when the overall progress changes (includes both upload and signing).
Each event consists of the total bytes, processed bytes, and the step (upload or signing). - `onError` - emitted when the overall upload or signing fails (includes both upload and signing). - `onSuccess` - emitted when the overall upload or signing succeeds (includes both upload and signing); this is the last event emitted for the upload or signing process. - `onSigningProgress` - emitted when the signing progress changes. - `onSigningError` - emitted when the signing fails. - `onSigningSuccess` - emitted when the signing succeeds. - `onUploadProgress` - emitted when the upload progress changes. - `onUploadError` - emitted when the upload fails. - `onUploadSuccess` - emitted when the upload succeeds. ```typescript const uploadResult = await turbo.upload({ data: 'The contents of my file!', signal: AbortSignal.timeout(10_000), // cancel the upload after 10 seconds dataItemOpts: { // optional }, events: { // overall events (includes signing and upload events) onProgress: ({ totalBytes, processedBytes, step }) => { const percentComplete = (processedBytes / totalBytes) * 100; console.log('Overall progress:', { totalBytes, processedBytes, step, percentComplete: percentComplete.toFixed(2) + '%', // eg 50.68% }); }, onError: (error) => { console.log('Overall error:', { error }); }, onSuccess: () => { console.log('Signed and uploaded data item!'); }, // upload events onUploadProgress: ({ totalBytes, processedBytes }) => { console.log('Upload progress:', { totalBytes, processedBytes }); }, onUploadError: (error) => { console.log('Upload error:', { error }); }, onUploadSuccess: () => { console.log('Upload success!'); }, // signing events onSigningProgress: ({ totalBytes, processedBytes }) => { console.log('Signing progress:', { totalBytes, processedBytes }); }, onSigningError: (error) => { console.log('Signing error:', { error }); }, onSigningSuccess: () => { console.log('Signing success!'); }, }, }); ``` # Turbo SDK (/sdks/turbo-sdk) **For AI and LLM users**: Access the complete Turbo SDK documentation in plain text format at [llm.txt](/sdks/turbo-sdk/llm.txt) for easy consumption by AI agents and language models. # Turbo SDK Please refer to the [source code](https://github.com/ardriveapp/turbo-sdk) for SDK details. # Logging (/sdks/turbo-sdk/logging) The SDK uses winston for logging. You can set the log level using the `setLogLevel` method. ```typescript TurboFactory.setLogLevel('debug'); ``` # Turbo Credit Sharing (/sdks/turbo-sdk/turbo-credit-sharing) Users can share their purchased Credits with other users' wallets by creating Credit Share Approvals. These approvals are created by uploading a signed data item with tags indicating the recipient's wallet address, the amount of Credits to share, and an optional number of seconds after which the approval expires. The recipient can then use the shared Credits to pay for their own uploads to Turbo. Shared Credits cannot be re-shared by the recipient to other recipients. Only the original owner of the Credits can share or revoke Credit Share Approvals. Credits that are shared to other wallets may not be used by the original owner of the Credits for sharing or uploading unless the Credit Share Approval is revoked or expired. Approvals can be revoked at any time by similarly uploading a signed data item with tags indicating the recipient's wallet address. This will remove all approvals and prevent the recipient from using the shared Credits. All unused Credits from expired or revoked approvals are returned to the original owner of the Credits.
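As an end-to-end sketch of this flow (the wallet keys and addresses below are illustrative placeholders), using the `shareCredits` and `revokeCredits` methods documented above and the `paidBy` upload option described below:

```typescript
import { ArweaveSigner, TurboFactory } from '@ardrive/turbo-sdk';

// the Credit owner shares 0.8 Credits with a recipient for one hour
const owner = TurboFactory.authenticated({ signer: new ArweaveSigner(ownerJwk) });
await owner.shareCredits({
  approvedAddress: recipientAddress, // placeholder recipient wallet address
  approvedWincAmount: 800_000_000_000, // 0.8 Credits
  expiresBySeconds: 3600,
});

// the recipient pays for an upload with the shared Credits via `paidBy`
const recipient = TurboFactory.authenticated({
  signer: new ArweaveSigner(recipientJwk), // placeholder recipient wallet
});
await recipient.upload({
  data: 'Paid for with shared Credits!',
  dataItemOpts: { paidBy: [ownerAddress] }, // the sharing wallet's address
});

// the owner can revoke any remaining shared Credits at any time
await owner.revokeCredits({ revokedAddress: recipientAddress });
```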
To use the shared Credits, recipient users must provide the wallet address of the user who shared the Credits with them in the `x-paid-by` HTTP header when uploading data. This tells Turbo services to look for and use Credit Share Approvals to pay for the upload before using the signer's balance. For user convenience, during upload the Turbo CLI will use any available Credit Share Approvals found for the connected wallet before using the signing wallet's balance. To instead ignore all Credit shares and only use the signer's balance, use the `--ignore-approvals` flag. To use the signer's balance first before using Credit shares, use the `--use-signer-balance-first` flag. In contrast, the Turbo SDK layer does not provide this functionality and will only use approvals when `paidBy` is provided. The Turbo SDK provides the following methods to manage Credit Share Approvals: - `shareCredits`: Creates a Credit Share Approval for the specified wallet address and amount of Credits. - `revokeCredits`: Revokes all Credit Share Approvals for the specified wallet address. - `listShares`: Lists all Credit Share Approvals for the specified wallet address or connected wallet. - `dataItemOpts: { ...opts, paidBy: string[] }`: Upload methods accept `paidBy`, an array of wallet addresses that have provided Credit Share Approvals to the user; these approvals are drawn from, in the order provided and as necessary, to pay for the upload. The Turbo CLI provides the following commands to manage Credit Share Approvals: - `share-credits`: Creates a Credit Share Approval for the specified wallet address and amount of Credits. - `revoke-credits`: Revokes all Credit Share Approvals for the specified wallet address. - `list-shares`: Lists all Credit Share Approvals for the specified wallet address or connected wallet. - `--paid-by`: Upload commands accept `--paid-by`, a list of wallet addresses that have provided Credit Share Approvals to the user; these approvals are drawn from, in the order provided and as necessary, to pay for the upload. - `--ignore-approvals`: Ignore all Credit Share Approvals and only use the signer's balance. - `--use-signer-balance-first`: Use the signer's balance first before using Credit Share Approvals. # Automated Releases (/sdks/wayfinder/(releases)/automated-releases) This repository is configured with GitHub Actions workflows that automate the release process: - **Main Branch**: When changes are merged to `main`, a standard release is created - **Alpha Branch**: When changes are merged to `alpha`, a prerelease (alpha tagged) is created The workflow automatically: 1. Determines whether to create a prerelease or standard release based on the branch 2. Versions packages using changesets 3. Publishes to npm 4. Creates GitHub releases 5. Pushes tags back to the repository To use the automated process: 1. Create changesets for your changes 2. Push your changes to a feature branch 3. Create a pull request to `alpha` (for prereleases) or `main` (for standard releases) 4. When the PR is merged, the release will be automatically created # Creating a Changeset (/sdks/wayfinder/(releases)/creating-a-changeset) To create a changeset when making changes: ```bash npx changeset ``` This will guide you through the process of documenting your changes and selecting which packages are affected. Changesets will be used during the release process to update package versions and generate changelogs.
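For reference, the generated changeset is a small markdown file in the `.changeset/` directory; its frontmatter maps each affected package to a semver bump type, followed by a summary used in the changelog (the package and summary below are illustrative):

```md
---
'@ar.io/wayfinder-core': patch
---

Fix gateway selection timeout handling
```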
# Manual Release Process (/sdks/wayfinder/(releases)/manual-release-process) If you need to release manually, follow these steps: #### Alpha Releases To release a new alpha version: 1. Version the packages: ```bash npx changeset version ``` 2. Review the version changes and changelogs 3. Commit the changes: ```bash git add . git commit -m "chore(release): version packages" ``` 4. Publish the packages to npm: ```bash npm run build npx changeset publish ``` 5. Push the changes and tags: ```bash git push origin main --follow-tags ``` #### Prerelease Mode For prerelease versions (e.g., beta, alpha): 1. Enter prerelease mode specifying the tag: ```bash npx changeset pre enter beta ``` 2. Create changesets as normal: ```bash npx changeset ``` 3. Version and publish as normal: ```bash npx changeset version git add . git commit -m "chore(release): prerelease version packages" npm run build npx changeset publish git push origin main --follow-tags ``` 4. Exit prerelease mode when ready for a stable release: ```bash npx changeset pre exit ``` 5. Follow the normal release process for the stable version. # Architecture (/sdks/wayfinder/architecture) - Code to interfaces. - Prefer type safety over runtime safety. - Prefer composition over inheritance. - Prefer integration tests over unit tests. # Contributing (/sdks/wayfinder/contributing) 1. Branch from `alpha` 2. Create a new branch for your changes (e.g. `feat/my-feature`) 3. Make your changes on your branch, push them to your branch 4. As you make commits/changes, or once you're ready to release, create a changeset describing your changes via `npx changeset`. 5. Follow the prompts to select the packages that are affected by your changes. 6. Add and commit the changeset to your branch 7. Request review from a maintainer, and once approved, merge your changes into the `alpha` branch 8. A release PR will be automatically created with all pending changesets to the `alpha` branch 9. The maintainer will review the PR and merge it into `alpha`, which will trigger the automated release process using all pending changesets # Wayfinder SDKs (/sdks/wayfinder) **For AI and LLM users**: Access the complete Wayfinder SDKs documentation in plain text format at [llm.txt](/sdks/wayfinder/llm.txt) for easy consumption by AI agents and language models. # Wayfinder SDKs Please refer to the [source code](https://github.com/ar-io/wayfinder) for SDK details.
# Linting & Formatting (/sdks/wayfinder/linting-formatting) - `yarn lint:check` - checks for linting errors - `yarn lint:fix` - fixes linting errors - `yarn format:check` - checks for formatting errors - `yarn format:fix` - fixes formatting errors # Packages (/sdks/wayfinder/packages) This monorepo contains the following packages: - **[@ar.io/wayfinder-core](./packages/wayfinder-core)** ![npm](https://img.shields.io/npm/v/@ar.io/wayfinder-core.svg) : Core JavaScript library for the Wayfinder routing and verification protocol - **[@ar.io/wayfinder-react](./packages/wayfinder-react)** ![npm](https://img.shields.io/npm/v/@ar.io/wayfinder-react.svg) : React components for Wayfinder, including Hooks and Context provider - **[@ar.io/wayfinder-extension](./packages/wayfinder-extension)** ![chrome](https://img.shields.io/chrome-web-store/v/hnhmeknhajanolcoihhkkaaimapnmgil?label=chrome) : Chrome extension for Wayfinder - **[@ar.io/wayfinder-cli](./packages/cli)** (coming soon) : CLI for interacting with Wayfinder in the terminal # Testing (/sdks/wayfinder/testing) - `yarn test` - runs all tests in all packages (monorepo) # Custom Providers and Strategies (/sdks/wayfinder/wayfinder-core/(advanced-usage)/custom-providers-and-strategies) For advanced use cases, you can provide custom providers and strategies to `createWayfinderClient`: ```javascript const wayfinder = createWayfinderClient({ ario: ARIO.mainnet(), // Gateway selection gatewaySelection: 'top-ranked', // Enable caching with custom TTL cache: { ttlSeconds: 3600 }, // 1 hour // Override 'routing' with custom routing strategy routingStrategy: new FastestPingRoutingStrategy({ timeoutMs: 1000, }), // Override 'verification' with custom verification strategy verificationStrategy: new HashVerificationStrategy({ trustedGateways: ['https://permagate.io'], }), }); ``` # Direct Constructor Usage (/sdks/wayfinder/wayfinder-core/(advanced-usage)/direct-constructor-usage) For complete control, you can use the Wayfinder constructor directly.
This is useful when you need fine-grained control over the configuration: > _Wayfinder client that caches the top 10 gateways by operator stake from the ARIO Network for 1 hour and uses the fastest pinging routing strategy to select the fastest gateway for requests._ ```javascript const wayfinder = new Wayfinder({ // cache the top 10 gateways by operator stake from the ARIO Network for 1 hour gatewaysProvider: new SimpleCacheGatewaysProvider({ ttlSeconds: 60 * 60, // cache the gateways for 1 hour gatewaysProvider: new NetworkGatewaysProvider({ ario: ARIO.mainnet(), sortBy: 'operatorStake', sortOrder: 'desc', limit: 10, }), }), // routing settings routingSettings: { // use the fastest pinging strategy to select the fastest gateway for requests strategy: new FastestPingRoutingStrategy({ timeoutMs: 1000, }), // events events: { onRoutingStarted: (event) => { console.log('Routing started!', event); }, onRoutingSkipped: (event) => { console.log('Routing skipped!', event); }, onRoutingSucceeded: (event) => { console.log('Routing succeeded!', event); }, }, }, // verification settings verificationSettings: { // enable verification - if false, verification will be skipped for all requests enabled: true, // verify the data using the hash of the data against a list of trusted gateways strategy: new HashVerificationStrategy({ trustedGateways: ['https://permagate.io'], }), // strict verification - if true, verification failures will cause requests to fail strict: true, // events events: { onVerificationProgress: (event) => { console.log('Verification progress!', event); }, onVerificationSucceeded: (event) => { console.log('Verification succeeded!', event); }, onVerificationFailed: (event) => { console.log('Verification failed!', event); }, }, }, }); ``` # NetworkGatewaysProvider (/sdks/wayfinder/wayfinder-core/(gateway-providers)/networkgatewaysprovider) Returns a list of gateways from the ARIO Network based on on-chain metrics. You can specify on-chain metrics for gateways to prioritize the highest quality gateways. This requires installing the `@ar.io/sdk` package and importing the `ARIO` object. *It is recommended to use this provider for most use cases to leverage the AR.IO Network.* ```javascript // requests will be routed to one of the top 10 gateways by operator stake const gatewayProvider = new NetworkGatewaysProvider({ ario: ARIO.mainnet(), sortBy: 'operatorStake', // sort by 'operatorStake' | 'totalDelegatedStake' sortOrder: 'desc', // 'asc' limit: 10, // number of gateways to use filter: (gateway) => { // use only active gateways that did not fail in the last epoch return gateway.status === 'joined' && gateway.stats.failedConsecutiveEpochs === 0; }, }); ``` # StaticGatewaysProvider (/sdks/wayfinder/wayfinder-core/(gateway-providers)/staticgatewaysprovider) The static gateway provider returns a list of gateways that you provide. This is useful for testing or for users who want to use a specific gateway for all requests. ```javascript const gatewayProvider = new StaticGatewaysProvider({ gateways: ['https://arweave.net'], }); ``` # TrustedPeersGatewaysProvider (/sdks/wayfinder/wayfinder-core/(gateway-providers)/trustedpeersgatewaysprovider) Fetches a dynamic list of trusted peer gateways from an AR.IO gateway's `/ar-io/peers` endpoint. This provider is useful for discovering available gateways from a trusted source. 
```javascript const gatewayProvider = new TrustedPeersGatewaysProvider({ trustedGateway: 'https://arweave.net', // Gateway to fetch peers from }); // The provider will fetch the peer list from https://arweave.net/ar-io/peers // and return an array of gateway URLs from the response ``` # Caching (/sdks/wayfinder/wayfinder-core/(installation-notes)/caching) Wayfinder supports intelligent caching: - **In browsers**: Uses localStorage for persistent caching across page reloads - **In Node.js**: Uses in-memory caching - **What's cached**: Gateway lists, routing decisions, and more - **Cache configuration**: - `cache: true` - Enable with default 5-minute TTL - `cache: { ttlSeconds: 3600 }` - Enable with custom TTL (in seconds) - `cache: false` - Disable caching (default) # Optional Dependencies (/sdks/wayfinder/wayfinder-core/(installation-notes)/optional-dependencies) The `@ar.io/sdk` package is an optional peer dependency. To use AR.IO network gateways, you must explicitly provide an `ario` instance: **With AR.IO SDK (Recommended):** ```bash npm install @ar.io/wayfinder-core @ar.io/sdk yarn add @ar.io/wayfinder-core @ar.io/sdk ``` - `createWayfinderClient({ ario: ARIO.mainnet() })` uses AR.IO network gateways - Supports intelligent gateway selection criteria - Dynamic gateway discovery and updates # Global request events (/sdks/wayfinder/wayfinder-core/(monitoring-and-events)/global-request-events) Wayfinder emits events during the routing and verification process for all requests, allowing you to monitor its operation. All events are emitted on the `wayfinder.emitter` event emitter, and are updated for each request. ```javascript // Provide events to the Wayfinder constructor for tracking all requests const wayfinder = new Wayfinder({ routingSettings: { events: { onRoutingStarted: (event) => { console.log('Routing started!', event); }, onRoutingSkipped: (event) => { console.log('Routing skipped!', event); }, onRoutingSucceeded: (event) => { console.log('Routing succeeded!', event); }, }, }, verificationSettings: { events: { onVerificationSucceeded: (event) => { console.log(`Verification passed for transaction: ${event.txId}`); }, onVerificationFailed: (event) => { console.error( `Verification failed for transaction: ${event.txId}`, event.error, ); }, onVerificationProgress: (event) => { const percentage = (event.processedBytes / event.totalBytes) * 100; console.log( `Verification progress for ${event.txId}: ${percentage.toFixed(2)}%`, ); }, }, }, }); // listen to the global wayfinder event emitter for all requests wayfinder.emitter.on('routing-succeeded', (event) => { console.log(`Request routed to: ${event.targetGateway}`); }); wayfinder.emitter.on('routing-failed', (event) => { console.error(`Routing failed: ${event.error.message}`); }); wayfinder.emitter.on('verification-progress', (event) => { console.log(`Verification progress: ${event.progress}%`); }); wayfinder.emitter.on('verification-succeeded', (event) => { console.log(`Verification succeeded: ${event.txId}`); }); wayfinder.emitter.on('verification-failed', (event) => { console.error(`Verification failed: ${event.error.message}`); }); ``` # Request-specific events (/sdks/wayfinder/wayfinder-core/(monitoring-and-events)/request-specific-events) You can also provide events to the `request` function to track a single request. These events are called for each request and are not updated for subsequent requests. Events are still emitted to the global event emitter for all requests. 
It is recommended to use the global event emitter for tracking all requests, and the request-specific events for tracking a single request. ```javascript // create a wayfinder instance with verification enabled const wayfinder = new Wayfinder({ verificationSettings: { enabled: true, strategy: new HashVerificationStrategy({ trustedGateways: ['https://permagate.io'], }), events: { onVerificationProgress: (event) => { console.log(`Global callback handler called for: ${event.txId}`); }, onVerificationSucceeded: (event) => { console.log(`Global callback handler called for: ${event.txId}`); }, }, }, }); const response = await wayfinder.request('ar://example-name', { verificationSettings: { // these callbacks will be triggered for this request only; the global callback handlers are still called events: { onVerificationProgress: (event) => { console.log(`Request-specific callback handler called for: ${event.txId}`); }, onVerificationSucceeded: (event) => { console.log(`Request-specific callback handler called for: ${event.txId}`); }, }, }, }); ``` # CompositeRoutingStrategy (/sdks/wayfinder/wayfinder-core/(routing-strategies)/compositeroutingstrategy) Chains multiple routing strategies together, trying each sequentially until one succeeds. This strategy provides maximum resilience by allowing complex fallback scenarios where you can combine different routing approaches. ```javascript import { CompositeRoutingStrategy, FastestPingRoutingStrategy, RandomRoutingStrategy, StaticRoutingStrategy, NetworkGatewaysProvider } from '@ar.io/wayfinder-core'; // Example 1: Try fastest ping first, fall back to random selection const strategy = new CompositeRoutingStrategy({ strategies: [ new FastestPingRoutingStrategy({ timeoutMs: 500, gatewaysProvider: new NetworkGatewaysProvider({ ario: ARIO.mainnet(), sortBy: 'operatorStake', limit: 10, }), }), new RandomRoutingStrategy(), // fallback if ping strategy fails ], }); // Example 2: Try preferred gateway, then fastest ping, then any random gateway const complexStrategy = new CompositeRoutingStrategy({ strategies: [ new StaticRoutingStrategy({ gateway: 'https://my-preferred-gateway.com' }), new FastestPingRoutingStrategy({ timeoutMs: 1000 }), new RandomRoutingStrategy(), // final fallback ], }); const gateway = await strategy.selectGateway({ gateways: [new URL('https://gateway1.com'), new URL('https://gateway2.com')], }); ``` **How it works:** 1. The composite strategy tries each routing strategy in order 2. If a strategy successfully returns a gateway, that gateway is used 3. If a strategy throws an error, the next strategy is tried 4. If all strategies fail, an error is thrown 5. The first successful strategy short-circuits the process (remaining strategies are not tried) **Common Use Cases:** - **Performance + Resilience**: Try fastest ping first, fall back to random if ping fails - **Preferred + Network**: Use your own gateway first, fall back to AR.IO network selection - **Multi-tier Fallback**: Try premium gateways, then standard gateways, then any available gateway - **Development + Production**: Use a local gateway in development, fall back to production gateways # FastestPingRoutingStrategy (/sdks/wayfinder/wayfinder-core/(routing-strategies)/fastestpingroutingstrategy) Selects the fastest gateway based on a simple HEAD request to the specified route.
# Global request events (/sdks/wayfinder/wayfinder-core/(monitoring-and-events)/global-request-events)

Wayfinder emits events during the routing and verification process for all requests, allowing you to monitor its operation. All events are emitted on the `wayfinder.emitter` event emitter and fire for every request.

```javascript
import { Wayfinder } from '@ar.io/wayfinder-core';

// Provide events to the Wayfinder constructor for tracking all requests
const wayfinder = new Wayfinder({
  routingSettings: {
    events: {
      onRoutingStarted: (event) => {
        console.log('Routing started!', event);
      },
      onRoutingSkipped: (event) => {
        console.log('Routing skipped!', event);
      },
      onRoutingSucceeded: (event) => {
        console.log('Routing succeeded!', event);
      },
    },
  },
  verificationSettings: {
    events: {
      onVerificationSucceeded: (event) => {
        console.log(`Verification passed for transaction: ${event.txId}`);
      },
      onVerificationFailed: (event) => {
        console.error(
          `Verification failed for transaction: ${event.txId}`,
          event.error,
        );
      },
      onVerificationProgress: (event) => {
        const percentage = (event.processedBytes / event.totalBytes) * 100;
        console.log(
          `Verification progress for ${event.txId}: ${percentage.toFixed(2)}%`,
        );
      },
    },
  },
});

// listen to the global wayfinder event emitter for all requests
wayfinder.emitter.on('routing-succeeded', (event) => {
  console.log(`Request routed to: ${event.targetGateway}`);
});

wayfinder.emitter.on('routing-failed', (event) => {
  console.error(`Routing failed: ${event.error.message}`);
});

wayfinder.emitter.on('verification-progress', (event) => {
  console.log(`Verification progress: ${event.progress}%`);
});

wayfinder.emitter.on('verification-succeeded', (event) => {
  console.log(`Verification succeeded: ${event.txId}`);
});

wayfinder.emitter.on('verification-failed', (event) => {
  console.error(`Verification failed: ${event.error.message}`);
});
```

# Request-specific events (/sdks/wayfinder/wayfinder-core/(monitoring-and-events)/request-specific-events)

You can also provide events to the `request` function to track a single request. These callbacks apply only to the request they are passed to and are not reused for subsequent requests. Events are still emitted to the global event emitter for all requests.

It is recommended to use the global event emitter for tracking all requests, and request-specific events for tracking a single request.

```javascript
import { Wayfinder, HashVerificationStrategy } from '@ar.io/wayfinder-core';

// create a wayfinder instance with verification enabled
const wayfinder = new Wayfinder({
  verificationSettings: {
    enabled: true,
    strategy: new HashVerificationStrategy({
      trustedGateways: ['https://permagate.io'],
    }),
    events: {
      onVerificationProgress: (event) => {
        console.log(`Global callback handler called for: ${event.txId}`);
      },
      onVerificationSucceeded: (event) => {
        console.log(`Global callback handler called for: ${event.txId}`);
      },
    },
  },
});

const response = await wayfinder.request('ar://example-name', {
  verificationSettings: {
    // these callbacks are triggered for this request only;
    // the global callback handlers are still called
    events: {
      onVerificationProgress: (event) => {
        console.log(`Request-specific callback handler called for: ${event.txId}`);
      },
      onVerificationSucceeded: (event) => {
        console.log(`Request-specific callback handler called for: ${event.txId}`);
      },
    },
  },
});
```

# CompositeRoutingStrategy (/sdks/wayfinder/wayfinder-core/(routing-strategies)/compositeroutingstrategy)

Chains multiple routing strategies together, trying each sequentially until one succeeds. This strategy provides maximum resilience by allowing complex fallback scenarios where you can combine different routing approaches.

```javascript
import {
  CompositeRoutingStrategy,
  FastestPingRoutingStrategy,
  RandomRoutingStrategy,
  StaticRoutingStrategy,
  NetworkGatewaysProvider,
} from '@ar.io/wayfinder-core';
import { ARIO } from '@ar.io/sdk';

// Example 1: Try fastest ping first, fall back to random selection
const strategy = new CompositeRoutingStrategy({
  strategies: [
    new FastestPingRoutingStrategy({
      timeoutMs: 500,
      gatewaysProvider: new NetworkGatewaysProvider({
        ario: ARIO.mainnet(),
        sortBy: 'operatorStake',
        limit: 10,
      }),
    }),
    new RandomRoutingStrategy(), // fallback if ping strategy fails
  ],
});

// Example 2: Try preferred gateway, then fastest ping, then any random gateway
const complexStrategy = new CompositeRoutingStrategy({
  strategies: [
    new StaticRoutingStrategy({ gateway: 'https://my-preferred-gateway.com' }),
    new FastestPingRoutingStrategy({ timeoutMs: 1000 }),
    new RandomRoutingStrategy(), // final fallback
  ],
});

const gateway = await strategy.selectGateway({
  gateways: [new URL('https://gateway1.com'), new URL('https://gateway2.com')],
});
```

**How it works:**

1. The composite strategy tries each routing strategy in order
2. If a strategy successfully returns a gateway, that gateway is used
3. If a strategy throws an error, the next strategy is tried
4. If all strategies fail, an error is thrown
5. The first successful strategy short-circuits the process (remaining strategies are not tried)

**Common Use Cases:**

- **Performance + Resilience**: Try fastest ping first, fall back to random if ping fails
- **Preferred + Network**: Use your own gateway first, fall back to AR.IO network selection
- **Multi-tier Fallback**: Try premium gateways, then standard gateways, then any available gateway
- **Development + Production**: Use a local gateway in development, fall back to production gateways

# FastestPingRoutingStrategy (/sdks/wayfinder/wayfinder-core/(routing-strategies)/fastestpingroutingstrategy)

Selects the fastest gateway based on a simple HEAD request to the specified route.

```javascript
import { FastestPingRoutingStrategy, NetworkGatewaysProvider } from '@ar.io/wayfinder-core';
import { ARIO } from '@ar.io/sdk';

// use with static gateways (overrides gatewaysProvider if provided)
const routingStrategy = new FastestPingRoutingStrategy({
  timeoutMs: 1000,
});

const gateway = await routingStrategy.selectGateway({
  gateways: [
    new URL('https://slow.net'),
    new URL('https://medium.net'),
    new URL('https://fast.net'),
  ],
});

// use with gatewaysProvider (fetches dynamically)
const routingStrategy2 = new FastestPingRoutingStrategy({
  timeoutMs: 1000,
  gatewaysProvider: new NetworkGatewaysProvider({
    ario: ARIO.mainnet(),
    sortBy: 'operatorStake',
    limit: 20,
  }),
});

const gateway2 = await routingStrategy2.selectGateway({ path: '/ar-io/info' }); // uses gatewaysProvider

// override the gatewaysProvider with a static list of gateways
const gateway3 = await routingStrategy2.selectGateway({
  gateways: [new URL('https://priority-gateway.net')], // overrides gatewaysProvider
  path: '/ar-io/info',
});
```

# PreferredWithFallbackRoutingStrategy (/sdks/wayfinder/wayfinder-core/(routing-strategies)/preferredwithfallbackroutingstrategy)

Uses a preferred gateway, with a fallback strategy if the preferred gateway is not available. This is useful for builders who run their own gateways and want to prefer them, while keeping a fallback strategy in case their gateway is unavailable.

This strategy is built using `CompositeRoutingStrategy` internally. It first attempts to ping the preferred gateway (using `PingRoutingStrategy` with `StaticRoutingStrategy`), and if that fails, it falls back to the specified fallback strategy.

```javascript
import {
  PreferredWithFallbackRoutingStrategy,
  FastestPingRoutingStrategy,
} from '@ar.io/wayfinder-core';

const routingStrategy = new PreferredWithFallbackRoutingStrategy({
  preferredGateway: 'https://permagate.io',
  fallbackStrategy: new FastestPingRoutingStrategy({
    timeoutMs: 500,
  }),
});
```
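To route real requests through it, hand the strategy to a `Wayfinder` instance — a sketch using the top-level `routingStrategy` option that appears in the composition examples below:

```javascript
// a sketch: all requests go to the preferred gateway,
// falling back automatically when it is unavailable
const wayfinder = new Wayfinder({
  routingStrategy, // the PreferredWithFallbackRoutingStrategy from above
});

const response = await wayfinder.request('ar://example-name');
```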
# RandomRoutingStrategy (/sdks/wayfinder/wayfinder-core/(routing-strategies)/randomroutingstrategy)

Selects a random gateway from a list of gateways.

```javascript
import { RandomRoutingStrategy, NetworkGatewaysProvider } from '@ar.io/wayfinder-core';
import { ARIO } from '@ar.io/sdk';

// Option 1: Use with static gateways (overrides gatewaysProvider if provided)
const routingStrategy = new RandomRoutingStrategy();

const gateway = await routingStrategy.selectGateway({
  gateways: [new URL('https://arweave.net'), new URL('https://permagate.io')],
});

// Option 2: Use with gatewaysProvider (fetches dynamically)
const routingStrategy2 = new RandomRoutingStrategy({
  gatewaysProvider: new NetworkGatewaysProvider({
    ario: ARIO.mainnet(),
    sortBy: 'operatorStake',
    limit: 10,
  }),
});

const gateway2 = await routingStrategy2.selectGateway(); // uses gatewaysProvider

// Option 3: Override gatewaysProvider with static gateways
const gateway3 = await routingStrategy2.selectGateway({
  gateways: [new URL('https://custom-gateway.net')], // overrides gatewaysProvider
});
```

# RoundRobinRoutingStrategy (/sdks/wayfinder/wayfinder-core/(routing-strategies)/roundrobinroutingstrategy)

Selects gateways in round-robin order. The gateway list is stored in memory and is not persisted across instances. You must provide either `gateways` or `gatewaysProvider` (not both).

```javascript
import { RoundRobinRoutingStrategy, NetworkGatewaysProvider } from '@ar.io/wayfinder-core';
import { ARIO } from '@ar.io/sdk';

// use with a static list of gateways
const routingStrategy = new RoundRobinRoutingStrategy({
  gateways: [new URL('https://arweave.net'), new URL('https://permagate.io')],
});

// use with gatewaysProvider (loaded once and memoized)
const routingStrategy2 = new RoundRobinRoutingStrategy({
  gatewaysProvider: new NetworkGatewaysProvider({
    ario: ARIO.mainnet(),
    sortBy: 'operatorStake',
    sortOrder: 'desc',
    limit: 10,
  }),
});

const gateway = await routingStrategy.selectGateway(); // returns the next gateway in round-robin order
```
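Since the rotation state is kept in memory on the instance, successive calls cycle through the configured list:

```javascript
// successive calls on the same instance advance through the list in order
const first = await routingStrategy.selectGateway();  // e.g. https://arweave.net
const second = await routingStrategy.selectGateway(); // e.g. https://permagate.io
const third = await routingStrategy.selectGateway();  // wraps back around to https://arweave.net
```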
# StaticRoutingStrategy (/sdks/wayfinder/wayfinder-core/(routing-strategies)/staticroutingstrategy)

Always routes to a single, fixed gateway.

```javascript
import { StaticRoutingStrategy } from '@ar.io/wayfinder-core';

const routingStrategy = new StaticRoutingStrategy({
  gateway: 'https://arweave.net',
});

const gateway = await routingStrategy.selectGateway(); // always returns the same gateway
```

# Strategy Composition Examples (/sdks/wayfinder/wayfinder-core/(routing-strategies)/strategy-composition-examples)

Here are a few “lego-style” examples showing how existing routing strategies can be composed to suit different use cases. Each strategy implements `RoutingStrategy`, so they can be wrapped and combined freely.

#### Random + Ping health checks

Pick a random gateway, then verify it responds to a `HEAD` request before returning it.

```ts
import {
  RandomRoutingStrategy,
  PingRoutingStrategy,
} from "@ar.io/wayfinder-core";

const strategy = new PingRoutingStrategy({
  routingStrategy: new RandomRoutingStrategy(),
  retries: 2,
  timeoutMs: 500,
});
```

#### Fastest ping wrapped with a simple cache

Find the lowest-latency gateway and cache the result for five minutes to avoid constant pings.

```ts
import {
  FastestPingRoutingStrategy,
  SimpleCacheRoutingStrategy,
} from "@ar.io/wayfinder-core";

const strategy = new SimpleCacheRoutingStrategy({
  routingStrategy: new FastestPingRoutingStrategy({ timeoutMs: 500 }),
  ttlSeconds: 300,
});
```

#### Preferred gateway + network fallback strategy

Attempt to use a favorite gateway, but fall back to a fastest-ping strategy over the AR.IO Network if it fails.

```ts
import {
  PreferredWithFallbackRoutingStrategy,
  FastestPingRoutingStrategy,
  NetworkGatewaysProvider,
} from "@ar.io/wayfinder-core";
import { ARIO } from "@ar.io/sdk";

// these will be our fallback gateways
const gatewayProvider = new NetworkGatewaysProvider({
  ario: ARIO.mainnet(),
  sortBy: 'operatorStake',
  limit: 5,
});

// this is our fallback strategy if our preferred gateway fails
const fastestPingStrategy = new FastestPingRoutingStrategy({
  timeoutMs: 500,
  gatewaysProvider: gatewayProvider,
});

// compose the strategies: the preferred gateway is tried first,
// and if it fails, the fallback strategy is used
const strategy = new PreferredWithFallbackRoutingStrategy({
  preferredGateway: "https://my-gateway.example",
  fallbackStrategy: fastestPingStrategy,
});
```

#### Round-robin + ping verification

Cycle through gateways sequentially, checking each one’s health before use.

```ts
import {
  RoundRobinRoutingStrategy,
  PingRoutingStrategy,
  NetworkGatewaysProvider,
} from "@ar.io/wayfinder-core";
import { ARIO } from "@ar.io/sdk";

// use static gateways
const strategy = new PingRoutingStrategy({
  routingStrategy: new RoundRobinRoutingStrategy({
    gateways: [new URL("https://gw1"), new URL("https://gw2")],
  }),
});

// use a dynamic list of gateways from the AR.IO Network
const strategy2 = new PingRoutingStrategy({
  routingStrategy: new RoundRobinRoutingStrategy({
    gatewaysProvider: new NetworkGatewaysProvider({
      ario: ARIO.mainnet(),
      sortBy: 'operatorStake',
      limit: 5,
    }),
  }),
});
```

#### Cache around any composed strategy

Because `SimpleCacheRoutingStrategy` accepts any `RoutingStrategy`, you can cache more complex compositions too.

```ts
import {
  RandomRoutingStrategy,
  PingRoutingStrategy,
  SimpleCacheRoutingStrategy,
  NetworkGatewaysProvider,
} from "@ar.io/wayfinder-core";
import { ARIO } from "@ar.io/sdk";

// use a dynamic list of gateways from the AR.IO Network
const randomStrategy = new RandomRoutingStrategy({
  gatewaysProvider: new NetworkGatewaysProvider({
    ario: ARIO.mainnet(),
    sortBy: 'operatorStake',
    limit: 20,
  }),
});

// wrap the random strategy with a ping strategy
const pingRandom = new PingRoutingStrategy({
  routingStrategy: randomStrategy,
});

// wrap the ping-random strategy with a cache strategy,
// caching the selected gateway for 10 minutes
const cachedStrategy = new SimpleCacheRoutingStrategy({
  routingStrategy: pingRandom,
  ttlSeconds: 600,
});
```

#### Complex multi-strategy fallback with CompositeRoutingStrategy

Chain multiple strategies together for maximum resilience: try fastest ping first, then fall back to random selection if ping fails.

```ts
import {
  CompositeRoutingStrategy,
  FastestPingRoutingStrategy,
  RandomRoutingStrategy,
  NetworkGatewaysProvider,
} from "@ar.io/wayfinder-core";
import { ARIO } from "@ar.io/sdk";

// Define a gateway provider for both strategies
const gatewayProvider = new NetworkGatewaysProvider({
  ario: ARIO.mainnet(),
  sortBy: 'operatorStake',
  limit: 15,
});

// Create a composite strategy that tries fastest ping first, then random
const strategy = new CompositeRoutingStrategy({
  strategies: [
    // Try fastest ping first (high performance, but may fail if all gateways are slow)
    new FastestPingRoutingStrategy({
      timeoutMs: 500,
      gatewaysProvider: gatewayProvider,
    }),
    // Fall back to random selection (guaranteed to work if gateways exist)
    new RandomRoutingStrategy({
      gatewaysProvider: gatewayProvider,
    }),
  ],
});
```

In all cases, you can supply the composed strategy to `Wayfinder` (or whatever router factory you use) and pass in a gateways provider:

```ts
import { Wayfinder, StaticGatewaysProvider } from "@ar.io/wayfinder-core";

const router = new Wayfinder({
  gatewaysProvider: new StaticGatewaysProvider({
    gateways: [new URL("https://gw1"), new URL("https://gw2")],
  }),
  routingStrategy: strategy, // any of the compositions above
});
```

# DataRootVerificationStrategy (/sdks/wayfinder/wayfinder-core/(verification-strategies)/datarootverificationstrategy)

Verifies data integrity by computing the Arweave data root for the transaction. This is useful for L1 transactions and is recommended for users who want to ensure the integrity of their data.

```javascript
import { Wayfinder, DataRootVerificationStrategy } from '@ar.io/wayfinder-core';

const wayfinder = new Wayfinder({
  verificationSettings: {
    enabled: true,
    strategy: new DataRootVerificationStrategy({
      trustedGateways: ['https://permagate.io'],
    }),
  },
});
```

# HashVerificationStrategy (/sdks/wayfinder/wayfinder-core/(verification-strategies)/hashverificationstrategy)

Verifies data integrity using SHA-256 hash comparison. This is the default verification strategy and is recommended for most users looking for a balance between security and performance.

```javascript
import { Wayfinder, HashVerificationStrategy } from '@ar.io/wayfinder-core';

const wayfinder = new Wayfinder({
  verificationSettings: {
    enabled: true,
    strategy: new HashVerificationStrategy({
      trustedGateways: ['https://permagate.io'],
    }),
  },
});
```
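When strict verification is enabled on a request, unverified data is not returned and the request rejects (see the request-flow diagram later in these docs). A sketch of handling that, assuming the same per-request `verificationSettings` shape and `strict` flag used in the React hook example:

```javascript
try {
  const response = await wayfinder.request('ar://example-name', {
    verificationSettings: {
      enabled: true,
      strict: true, // assumption: same `strict` flag as in the React hook example
    },
  });
  console.log('Verified response received:', response.status);
} catch (error) {
  // verification (or routing) failed; retry or fall back as appropriate
  console.error('Request failed verification:', error);
}
```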
# RemoteVerificationStrategy (/sdks/wayfinder/wayfinder-core/(verification-strategies)/remoteverificationstrategy)

This strategy verifies data by checking the `x-ar-io-verified` header returned by the gateway that served the data. If the header is set to `true`, the data is considered verified and trusted.

This strategy is only recommended for users who fetch data from their own gateways and want to avoid the overhead of the other verification strategies.

```javascript
import { Wayfinder, RemoteVerificationStrategy } from '@ar.io/wayfinder-core';

const wayfinder = new Wayfinder({
  verificationSettings: {
    // no trusted gateways are required for this strategy
    enabled: true,
    strategy: new RemoteVerificationStrategy(),
  },
});
```

# SignatureVerificationStrategy (/sdks/wayfinder/wayfinder-core/(verification-strategies)/signatureverificationstrategy)

Verifies the signatures of Arweave transactions and data items. Headers are retrieved from trusted gateways for use during verification. For a transaction, its data root is computed while streaming its data and then used alongside its headers for verification. For data items, the ANS-104 deep-hash method of signature verification is used.

```javascript
import { Wayfinder, SignatureVerificationStrategy } from '@ar.io/wayfinder-core';

const wayfinder = new Wayfinder({
  verificationSettings: {
    enabled: true,
    strategy: new SignatureVerificationStrategy({
      trustedGateways: ['https://permagate.io'],
    }),
  },
});
```

# Dynamic Routing (/sdks/wayfinder/wayfinder-core/dynamic-routing)

Wayfinder supports a `resolveUrl` method that generates dynamic redirect URLs to a target gateway based on the provided routing strategy. This function can directly replace any hard-coded gateway URLs, using Wayfinder's routing logic to select a gateway for the request instead.

#### ArNS names

Given an ArNS name, the redirect URL is the same as the original URL, but with the gateway selected by Wayfinder's routing strategy.

```javascript
const redirectUrl = await wayfinder.resolveUrl({
  arnsName: 'ardrive',
});
// e.g. https://ardrive.<selected-gateway>
```

#### Transaction IDs

Given a txId, the redirect URL is the same as the original URL, but with the gateway selected by Wayfinder's routing strategy.

```javascript
const redirectUrl = await wayfinder.resolveUrl({
  txId: 'example-tx-id',
});
// e.g. https://<selected-gateway>/example-tx-id
```

#### Legacy arweave.net or arweave.dev URLs

Given a legacy arweave.net or arweave.dev URL, the redirect URL preserves the original path, but uses the gateway selected by Wayfinder's routing strategy.

```javascript
const redirectUrl = await wayfinder.resolveUrl({
  originalUrl: 'https://arweave.net/example-tx-id',
});
// e.g. https://<selected-gateway>/example-tx-id
```

#### ar:// URLs

Given an ar:// URL, the redirect URL preserves the original path and query, but uses the gateway selected by Wayfinder's routing strategy.

```javascript
const redirectUrl = await wayfinder.resolveUrl({
  originalUrl: 'ar://example-name/subpath?query=value',
});
// e.g. https://<selected-gateway>/example-name/subpath?query=value
```
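In practice, `resolveUrl` slots in wherever a gateway URL used to be hard-coded — a small sketch (the `imageElement` DOM node is illustrative):

```javascript
// a sketch: swap a hard-coded gateway URL for a wayfinder-selected one
// (`imageElement` is a hypothetical DOM element used for illustration)
const resolvedUrl = await wayfinder.resolveUrl({
  originalUrl: 'https://arweave.net/example-tx-id',
});
imageElement.src = resolvedUrl.toString();
```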
# Wayfinder Core (/sdks/wayfinder/wayfinder-core)

**For AI and LLM users**: Access the complete Wayfinder Core documentation in plain text format at [llm.txt](/sdks/wayfinder-core/llm.txt) for easy consumption by AI agents and language models.

# Wayfinder Core

Please refer to the [source code](https://github.com/ar-io/wayfinder/tree/main/packages/wayfinder-core) for SDK details.

# Request Flow (/sdks/wayfinder/wayfinder-core/request-flow)

The following sequence diagram illustrates how Wayfinder processes requests:

```mermaid
sequenceDiagram
    participant Client
    participant Wayfinder
    participant GP as Gateways Provider
    participant RS as Routing Strategy
    participant SG as Selected Gateway
    participant VS as Verification Strategy
    participant TG as Trusted Gateways

    Client->>Wayfinder: request('ar://example')
    activate Wayfinder
    Wayfinder->>+GP: getGateways()
    GP-->>-Wayfinder: List of gateway URLs
    Wayfinder->>+RS: selectGateway() from list of gateways
    RS-->>-Wayfinder: Select gateway for request
    Wayfinder->>+SG: Send HTTP request to target gateway
    SG-->>-Wayfinder: Response with data & txId
    Wayfinder->>+VS: verifyData(responseData, txId)
    VS->>Wayfinder: Emit 'verification-progress' events
    VS->>TG: Request verification headers
    TG-->>VS: Return verification headers
    VS->>VS: Compare computed vs trusted data
    VS-->>-Wayfinder: Return request data with verification result
    alt Verification passed
        Wayfinder->>Wayfinder: Emit 'verification-succeeded' event
        Wayfinder-->>Client: Return verified response
    else Verification failed
        Wayfinder->>Wayfinder: Emit 'verification-failed' event
        Wayfinder-->>Client: Throw verification error
    end
    deactivate Wayfinder
```

# Telemetry (/sdks/wayfinder/wayfinder-core/telemetry)

Wayfinder can optionally emit OpenTelemetry spans for every request. **By default, telemetry is disabled.** You can control this behavior with the `telemetrySettings` option.

```javascript
import { createWayfinderClient } from '@ar.io/wayfinder-core';
import { ARIO } from '@ar.io/sdk';

const wayfinder = createWayfinderClient({
  ario: ARIO.mainnet(),
  // other settings...
  telemetrySettings: {
    enabled: true, // disabled by default (must be explicitly enabled)
    sampleRate: 0.1, // 10% sample rate by default
    exporterUrl: 'https://your-custom-otel-exporter', // optional, defaults to https://api.honeycomb.io/v1/traces
    clientName: 'my-custom-client-name', // optional, defaults to wayfinder-core
    clientVersion: '1.0.0', // optional, defaults to empty
  },
});
```

# useWayfinderRequest (/sdks/wayfinder/wayfinder-react/(hooks)/usewayfinderrequest)

Fetches data via Wayfinder and optionally verifies it.
```tsx
function WayfinderData({ txId }: { txId: string }) {
  const request = useWayfinderRequest();
  const [data, setData] = useState<ArrayBuffer | null>(null);
  const [dataLoading, setDataLoading] = useState(false);
  const [dataError, setDataError] = useState<Error | null>(null);

  useEffect(() => {
    (async () => {
      try {
        setDataLoading(true);
        setDataError(null);
        // fetch the data for the txId using wayfinder
        const response = await request(`ar://${txId}`, {
          verificationSettings: {
            enabled: true, // enable verification on the request
            strict: true, // don't use the data if it's not verified
          },
        });
        const data = await response.arrayBuffer(); // or response.json() if you want to parse the data as JSON
        setData(data);
      } catch (error) {
        setDataError(error as Error);
      } finally {
        setDataLoading(false);
      }
    })();
  }, [request, txId]);

  if (dataError) {
    return <div>Error loading data: {dataError.message}</div>;
  }

  if (dataLoading) {
    return <div>Loading data...</div>;
  }

  if (!data) {
    return <div>No data</div>;
  }

  // render the verified bytes as text for display
  return <div>{new TextDecoder().decode(data)}</div>;
}
```

# useWayfinderUrl (/sdks/wayfinder/wayfinder-react/(hooks)/usewayfinderurl)

Get a dynamic URL for an existing `ar://` URL or legacy `arweave.net`/`arweave.dev` URL.

Example:

```tsx
function WayfinderImage({ txId }: { txId: string }) {
  const { resolvedUrl, isLoading, error } = useWayfinderUrl({ txId });

  if (error) {
    return <div>Error resolving URL: {error.message}</div>;
  }

  if (isLoading) {
    return <div>Resolving URL...</div>;
  }

  return <img src={resolvedUrl?.toString()} alt={txId} />;
}
```

# Wayfinder React (/sdks/wayfinder/wayfinder-react)

**For AI and LLM users**: Access the complete Wayfinder React documentation in plain text format at [llm.txt](/sdks/wayfinder-react/llm.txt) for easy consumption by AI agents and language models.

# Wayfinder React

Please refer to the [source code](https://github.com/ar-io/wayfinder/tree/main/packages/wayfinder-react) for SDK details.

# What is it? (/sdks/wayfinder/what-is-it)

Wayfinder is a simple, open-source client-side routing and verification protocol for the permaweb. It leverages the [AR.IO Network](https://ar.io) to route users to the optimal gateway for a given request.

# Who is it for? (/sdks/wayfinder/who-is-it-for)

- **Builders** who need reliable, decentralized access to Arweave data through the powerful [AR.IO Network](https://ar.io)
- **Browsers** who demand complete control over their permaweb journey, with customizable gateways and robust verification settings for enhanced security and reliability
- **Operators** who power the [AR.IO Network](https://ar.io) and want to earn rewards* for serving wayfinder traffic to the growing permaweb ecosystem