Gallery-dl Instagram: What Actually Works

Started with basic downloads, kept hitting walls. Figured out Stories, authentication, rate limits the hard way. Here's what I learned through lots of trial and error.

Why I wrote this

Used gallery-dl for Instagram for a while. Basic downloads worked fine - public profiles, individual posts, nothing fancy. Thought that was enough.

Then wanted to download Stories. Hit a wall. Needed to access a private account. Hit another wall. Got rate limited constantly. Realized there's a lot more to this than the basic commands.

Spent way too much time reading GitHub issues, testing different approaches, failing repeatedly. Eventually found stuff that actually works reliably.

Not trying to be comprehensive or cover everything. Just documenting what I wish I knew when I started. If you're stuck with Instagram downloading, maybe this helps.

Authentication setup

Instagram requires login for private profiles, Stories, and higher download limits.

Method 1: Browser cookies (recommended)

# Keep gallery-dl current
pip install --upgrade gallery-dl

# Option A: Let gallery-dl read cookies from your browser
# 1. Log into Instagram in Firefox or Chrome
# 2. Point gallery-dl at that browser:
gallery-dl --cookies-from-browser firefox "https://www.instagram.com/USERNAME/"
# (Alternatively, export cookies with a cookies.txt browser
#  extension and save the file to ~/.config/gallery-dl/cookies.txt)

# Option B: Extract the session cookie manually
# 1. Log into Instagram in browser
# 2. Open DevTools (F12) > Application > Cookies
# 3. Find the sessionid cookie
# 4. Copy its value into your cookies file or config

# Configure gallery-dl to use cookies
mkdir -p ~/.config/gallery-dl
cat > ~/.config/gallery-dl/config.json << EOF
{
  "extractor": {
    "instagram": {
      "cookies": "~/.config/gallery-dl/cookies.txt"
    }
  }
}
EOF

Method 2: Python API

# gallery-dl can also be driven from Python. Note this is an
# internal API and may change between versions:
from gallery_dl import config, job

# Point the instagram extractor at exported cookies
config.set(("extractor", "instagram"), "cookies",
           "~/.config/gallery-dl/cookies.txt")

# Download a single post
job.DownloadJob("https://www.instagram.com/p/CODE/").run()

Method 3: Two-factor authentication

# gallery-dl does not support username/password login for Instagram,
# so it can't prompt for a 2FA code. Complete the 2FA login in your
# browser instead, then export cookies as in Method 1 - the sessionid
# cookie carries the already-authenticated session.

# Keep a log on the first authenticated run to catch problems early:
gallery-dl --write-log log.txt "https://www.instagram.com/username/"

# Future downloads reuse the same cookies automatically
gallery-dl "https://www.instagram.com/username/"

Test authentication

# Dump the metadata gallery-dl sees, without downloading anything
gallery-dl -K "https://www.instagram.com/username/"

# -K: list metadata keywords for the first file, then exit
# If authentication works, listing a private profile you follow
# succeeds instead of erroring with "login required":
gallery-dl --simulate "https://www.instagram.com/some_private_user/"

Downloading Stories and Highlights

Stories require authentication and disappear after 24 hours.

Download current Stories

# Download all current Stories from a user
gallery-dl "https://www.instagram.com/stories/username/"

# Download Stories from multiple users
gallery-dl "https://www.instagram.com/stories/user1/" \
           "https://www.instagram.com/stories/user2/"

# Download and organize by date (check available fields with -K)
gallery-dl -f "{date:%Y-%m-%d}_{num}_{id}.{extension}" \
  "https://www.instagram.com/stories/username/"

Download Highlights

# Download all Highlights
gallery-dl "https://www.instagram.com/username/highlights/"

# Download a specific Highlight by its ID
gallery-dl "https://www.instagram.com/stories/highlights/123456789/"

# Each Highlight is downloaded as its own group; use a directory
# format option to keep them in separate folders, e.g.:
# highlights/<highlight id>/
#   file_1.jpg
#   file_2.jpg

Automated Story downloading

#!/usr/bin/env python3
"""
Download Stories from followed users daily
"""
import subprocess
from datetime import datetime

# List of users to monitor
USERS = [
    'user1',
    'user2',
    'user3',
]

def download_stories():
    """Download Stories from all users"""
    timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
    log_file = f'story_download_{timestamp}.log'

    for user in USERS:
        print(f"Downloading Stories from {user}...")

        cmd = [
            'gallery-dl',
            '--write-log', log_file,
            '--verbose',
            f'https://www.instagram.com/stories/{user}/'
        ]

        try:
            subprocess.run(cmd, check=True)
            print(f"✓ Downloaded Stories from {user}")
        except subprocess.CalledProcessError:
            print(f"✗ Failed to download from {user}")

if __name__ == '__main__':
    download_stories()

Bulk downloading strategies

Download entire profiles, multiple users, with smart organization.

Download entire profile

# Download all posts (gallery-dl picks the highest resolution available)
gallery-dl "https://www.instagram.com/username/"

# Include tagged posts
gallery-dl "https://www.instagram.com/username/tagged/"

# Download with metadata
gallery-dl --write-metadata --write-info-json "https://www.instagram.com/username/"

# Creates filename.jpg plus filename.jpg.json (likes, comments, etc.)

Smart filename organization

# Set the target directory and filename format explicitly
gallery-dl \
  -D "instagram/username" \
  -f "{date:%Y%m%d}_{id}_{num}.{extension}" \
  "https://www.instagram.com/username/"

# Result (roughly):
# instagram/username/20260310_123456789_1.jpg
# instagram/username/20260310_123456790_1.jpg
# instagram/username/20260310_123456790_2.jpg

# Alternative: include the first 50 characters of the caption
gallery-dl \
  -D "instagram/username" \
  -f "{date:%Y-%m-%d}_{id}_{description[:50]}.{extension}" \
  "https://www.instagram.com/username/"
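If the substitution rules feel opaque, here's a rough Python sketch of what a template like {date:%Y%m%d}_{id}_{num}.{extension} expands to. This is a toy re-implementation for intuition only, not gallery-dl's actual formatter (which also supports slicing, defaults, and conversions):

```python
from datetime import datetime

def build_filename(template: str, metadata: dict) -> str:
    """Toy template expansion: {date:FMT} goes through strftime,
    every other {field} is plain substitution."""
    out = template
    date = metadata.get("date")
    # Expand {date:...} specifiers first
    while "{date:" in out:
        start = out.index("{date:")
        end = out.index("}", start)
        fmt = out[start + len("{date:"):end]
        out = out[:start] + date.strftime(fmt) + out[end + 1:]
    # Plain {field} substitution for everything else
    for key, value in metadata.items():
        out = out.replace("{" + key + "}", str(value))
    return out

meta = {"user": "someuser", "id": 123456790, "num": 2,
        "extension": "jpg", "date": datetime(2026, 3, 10)}
print(build_filename("{date:%Y%m%d}_{id}_{num}.{extension}", meta))
# 20260310_123456790_2.jpg
```

Run gallery-dl -K on a post to see which metadata fields the real formatter has available.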

Batch download multiple users

# Create users.txt with one username per line
cat > users.txt << EOF
user1
user2
user3
user4
EOF

# Download from all users
while read -r user; do
  gallery-dl "https://www.instagram.com/$user/"
done < users.txt

# Parallel download (3 at a time) - parallelism multiplies your
# request rate, so keep it low
xargs -P 3 -I {} gallery-dl "https://www.instagram.com/{}/" < users.txt
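If you script the batching in Python instead of the shell, the two building blocks are trivial. profile_url is a hypothetical helper (usernames to gallery-dl-ready URLs), not part of gallery-dl:

```python
def profile_url(user: str) -> str:
    """Hypothetical helper: username -> full Instagram profile URL."""
    return f"https://www.instagram.com/{user}/"

def chunked(items, size):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

users = ["user1", "user2", "user3", "user4", "user5"]
urls = [profile_url(u) for u in users]
print(chunked(urls, 2)[0])
# ['https://www.instagram.com/user1/', 'https://www.instagram.com/user2/']
```

Each batch can then be handed to subprocess.run(['gallery-dl', ...]) with a pause in between, as in the batching script later in this article.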

Resume interrupted downloads

# gallery-dl automatically skips files that already exist, and
# partially downloaded files are kept as .part files and resumed
# on the next run

# For big jobs, also record finished files in a download archive
gallery-dl --download-archive archive.sqlite3 "https://www.instagram.com/username/"

# The archive is a small SQLite database of per-file IDs, so an
# interrupted run picks up where it left off
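Under the hood, a download archive is just "have I seen this ID before". A minimal sketch of the idea (gallery-dl's real table schema and ID format differ):

```python
import sqlite3

class DownloadArchive:
    """Toy version of --download-archive: remember one ID per
    downloaded file in SQLite, skip IDs seen before."""

    def __init__(self, path: str):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS archive (entry TEXT PRIMARY KEY)")

    def seen(self, entry_id: str) -> bool:
        cur = self.db.execute(
            "SELECT 1 FROM archive WHERE entry = ?", (entry_id,))
        return cur.fetchone() is not None

    def record(self, entry_id: str) -> None:
        self.db.execute(
            "INSERT OR IGNORE INTO archive VALUES (?)", (entry_id,))
        self.db.commit()

archive = DownloadArchive(":memory:")
archive.record("instagram_123456789_1")
print(archive.seen("instagram_123456789_1"))  # True
print(archive.seen("instagram_999_1"))        # False
```

The point: skipping via the archive needs one indexed lookup instead of a filesystem stat per file, which is why it stays fast even for huge collections.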

Videos and Reels

Download videos in high quality.

Download videos from posts

# Download a single video post
gallery-dl "https://www.instagram.com/p/CODE/"

# Download only videos from a profile (skip images)
gallery-dl --filter "extension == 'mp4'" "https://www.instagram.com/username/"

Download Reels

# Download Reels from a profile
gallery-dl "https://www.instagram.com/username/reels/"

# Download a specific Reel
gallery-dl "https://www.instagram.com/reel/REEL_CODE/"

# Organize Reels separately
gallery-dl \
  -D "reels/username" \
  -f "{date:%Y-%m-%d}_{id}.{extension}" \
  "https://www.instagram.com/username/reels/"

Video quality options

# gallery-dl grabs the highest-quality version Instagram serves;
# there is no format picker like yt-dlp's -f flag
gallery-dl "https://www.instagram.com/p/CODE/"

# Inspect available metadata (including video URLs) first
gallery-dl -K "https://www.instagram.com/p/CODE/"

# Skip videos entirely if you only want images
gallery-dl -o "videos=false" "https://www.instagram.com/p/CODE/"

Handling rate limits

Instagram has strict rate limits. Here's how to work around them.

Configurable delays

# Add a fixed delay (seconds) between HTTP requests
gallery-dl -o "sleep-request=3" "https://www.instagram.com/username/"

# Random delay range to look less mechanical
gallery-dl -o "sleep-request=2.0-5.0" "https://www.instagram.com/username/"

# Each -o sets a single option; repeat the flag to combine them
gallery-dl -o "sleep-request=3" -o "sleep=1" "https://www.instagram.com/username/"
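A range value like "2.0-5.0" reads as "draw a random delay between min and max each time". Here's a sketch of that behaviour in Python - an illustration of how such values are typically interpreted, not gallery-dl's actual parser:

```python
import random

def parse_sleep(value):
    """Interpret a sleep setting: a plain number means a fixed
    delay, 'min-max' means a random delay in that range."""
    if isinstance(value, (int, float)):
        return float(value)
    lo, _, hi = value.partition("-")
    if hi:  # a range like "2.0-5.0"
        return random.uniform(float(lo), float(hi))
    return float(lo)

delay = parse_sleep("2.0-5.0")
print(2.0 <= delay <= 5.0)  # True
```

The jitter matters more than the average: a perfectly regular 3-second cadence is itself a bot signature.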

Retry logic

# Configure retry attempts and timeouts (one option per -o)
gallery-dl -o "retries=5" -o "timeout=60" "https://www.instagram.com/username/"

# Failed requests - including HTTP 429 - are retried with
# increasing waits up to the retry count; retries=-1 never gives up
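gallery-dl handles retry waits internally, but if you wrap it in your own supervisor script, a capped exponential backoff is the usual shape. A sketch with assumed base and cap values:

```python
def backoff_schedule(retries: int, base: float = 2.0, cap: float = 300.0):
    """Wait times for successive retries: doubles each attempt,
    capped so a long outage doesn't produce absurd sleeps."""
    return [min(base * (2 ** attempt), cap) for attempt in range(retries)]

print(backoff_schedule(5))  # [2.0, 4.0, 8.0, 16.0, 32.0]
```

Feed each value to time.sleep() between subprocess.run(['gallery-dl', ...]) attempts; the cap keeps attempt 9 at five minutes instead of eight and a half.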

Proxy rotation

# Route all traffic through an HTTP proxy
gallery-dl -o "proxy=http://proxy.example.com:8080" "https://www.instagram.com/username/"

# SOCKS proxies use the same option
gallery-dl -o "proxy=socks5://127.0.0.1:1080" "https://www.instagram.com/username/"

# gallery-dl has no built-in proxy rotation; loop over a list yourself
cat > proxies.txt << EOF
http://proxy1.example.com:8080
http://proxy2.example.com:8080
socks5://127.0.0.1:1080
EOF

while read -r proxy; do
  gallery-dl -o "proxy=$proxy" "https://www.instagram.com/username/"
done < proxies.txt

Distribute downloads over time

#!/usr/bin/env python3
"""
Download in batches with delays
"""
import time
import subprocess

USERS = ['user1', 'user2', 'user3', 'user4', 'user5']
BATCH_SIZE = 2
DELAY_BETWEEN_BATCHES = 300  # 5 minutes

def download_batch(users):
    """Download a batch of users"""
    for user in users:
        print(f"Downloading {user}...")
        subprocess.run(['gallery-dl', f'https://www.instagram.com/{user}/'])
        time.sleep(60)  # 1 minute between users

# Split into batches
for i in range(0, len(USERS), BATCH_SIZE):
    batch = USERS[i:i + BATCH_SIZE]
    download_batch(batch)

    if i + BATCH_SIZE < len(USERS):
        print(f"Waiting {DELAY_BETWEEN_BATCHES}s before next batch...")
        time.sleep(DELAY_BETWEEN_BATCHES)

Metadata and organization

Extract and organize additional information.

Download with metadata

# Save per-file metadata as JSON (filename.ext.json)
gallery-dl --write-metadata "https://www.instagram.com/username/"

# Save one info.json per directory with extractor-level data
gallery-dl --write-info-json "https://www.instagram.com/username/"

# Combine both
gallery-dl \
  --write-metadata \
  --write-info-json \
  "https://www.instagram.com/username/"

Custom metadata templates

# Custom metadata files are configured with the "metadata"
# post processor in config.json, not on the command line:
{
  "extractor": {
    "instagram": {
      "postprocessors": [{
        "name": "metadata",
        "mode": "custom",
        "extension": "txt",
        "content-format": "{date} {description}\n"
      }]
    }
  }
}

# content-format uses the same field syntax as filename templates;
# run gallery-dl -K on a post to see which fields exist

Organize by tags and mentions

# Filter posts whose caption contains a hashtag
gallery-dl --filter "'#tag' in description" "https://www.instagram.com/username/"

# Hashtags and mentions live in the post description; download
# with --write-metadata and extract them from the JSON afterwards
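Once captions are on disk (for example in the JSON files that --write-metadata produces), pulling hashtags and mentions out is a small regex job. A sketch - the patterns are deliberately simple and won't catch every Unicode edge case:

```python
import re

HASHTAG = re.compile(r"#(\w+)")
MENTION = re.compile(r"@([\w.]+)")

def extract_tags(description: str):
    """Return (hashtags, mentions) found in a post description."""
    return HASHTAG.findall(description), MENTION.findall(description)

tags, mentions = extract_tags(
    "Sunset run with @some.friend #sunset #nofilter")
print(tags)      # ['sunset', 'nofilter']
print(mentions)  # ['some.friend']
```

From there you can build an index (tag -> list of post IDs) and organize or symlink files by topic.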

Automation workflows

Set up automated downloading schedules.

Cron job for daily downloads

#!/bin/bash
# daily-instagram-download.sh

USERS=("user1" "user2" "user3")
BASE_DIR="/path/to/downloads"
DATE=$(date +%Y%m%d)

for user in "${USERS[@]}"; do
  echo "Downloading $user..."
  gallery-dl \
    --write-log "$BASE_DIR/logs/${user}_${DATE}.log" \
    --config "$BASE_DIR/config.json" \
    "https://www.instagram.com/$user/"
done

# Save as daily-instagram-download.sh
# Make executable: chmod +x daily-instagram-download.sh

# Add to crontab (runs daily at 3 AM)
# crontab -e
# 0 3 * * * /path/to/daily-instagram-download.sh

Python automation script

#!/usr/bin/env python3
"""
Advanced Instagram downloader with filtering and organization
"""
import os
import json
import subprocess
from datetime import datetime
from pathlib import Path

class InstagramDownloader:
    def __init__(self, config_file='config.json'):
        self.config = self.load_config(config_file)
        self.download_dir = Path(self.config.get('download_dir', './downloads'))

    def load_config(self, config_file):
        """Load configuration from JSON"""
        with open(config_file) as f:
            return json.load(f)

    def download_user(self, username, content_type='all'):
        """Download content from a user"""
        # Build the profile URL for the requested content type
        if content_type == 'stories':
            url = f'https://www.instagram.com/stories/{username}/'
        elif content_type == 'highlights':
            url = f'https://www.instagram.com/{username}/highlights/'
        elif content_type == 'reels':
            url = f'https://www.instagram.com/{username}/reels/'
        else:
            url = f'https://www.instagram.com/{username}/'

        cmd = ['gallery-dl', '--write-metadata', '--write-info-json']
        if self.config.get('gallery_dl_config'):
            cmd += ['--config', self.config['gallery_dl_config']]
        cmd.append(url)

        try:
            subprocess.run(cmd, check=True)
            return {'status': 'success', 'user': username}
        except subprocess.CalledProcessError as e:
            return {'status': 'error', 'user': username, 'error': str(e)}

    def download_all_users(self):
        """Download from all configured users"""
        results = []

        for user_config in self.config['users']:
            username = user_config['username']
            content_types = user_config.get('content_types', ['all'])

            for content_type in content_types:
                print(f"Downloading {content_type} from {username}...")
                result = self.download_user(username, content_type)
                results.append(result)

        # Save results (isoformat() contains colons, which break
        # filenames on some systems, so use strftime instead)
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        report = {
            'timestamp': timestamp,
            'results': results
        }

        self.download_dir.mkdir(parents=True, exist_ok=True)
        report_file = self.download_dir / f'report_{timestamp}.json'
        with open(report_file, 'w') as f:
            json.dump(report, f, indent=2)

        return results

# config.json
"""
{
  "download_dir": "./instagram_downloads",
  "gallery_dl_config": "./gallery-dl.conf",
  "users": [
    {
      "username": "user1",
      "content_types": ["all", "stories"]
    },
    {
      "username": "user2",
      "content_types": ["all", "highlights"]
    }
  ]
}
"""

Common issues and solutions

Problems you'll hit at intermediate level.

Issue: Login required error

Fix: Cookies expired. Re-export cookies from browser. Check cookies.txt file format. Ensure sessionid is present. Try logging out and back in on browser, then re-export.
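Before blaming gallery-dl, you can sanity-check the export yourself: Netscape-format cookies.txt is tab-separated with the cookie name in the sixth field. A quick check for an unexpired sessionid:

```python
import time

def has_valid_sessionid(cookies_txt: str) -> bool:
    """Return True if a Netscape-format cookies.txt export contains
    an unexpired sessionid cookie (expiry 0 = session cookie)."""
    for line in cookies_txt.splitlines():
        if not line or line.startswith("#"):
            continue
        fields = line.split("\t")
        # domain, flag, path, secure, expiry, name, value
        if len(fields) == 7 and fields[5] == "sessionid":
            expiry = int(fields[4])
            return expiry == 0 or expiry > time.time()
    return False

sample = ".instagram.com\tTRUE\t/\tTRUE\t4102444800\tsessionid\tabc123"
print(has_valid_sessionid(sample))  # True
```

If this returns False on your export, the fix above (re-export from a fresh browser login) applies.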

Issue: HTTP 429 Too Many Requests

Fix: Hit rate limit. Increase sleep-request delay. Wait 1-2 hours before retrying. Use different proxies. Authenticate (higher limits with login).

Issue: Stories download as empty

Fix: Stories expired (24h limit) or user has no active Stories. User's account might be private and you're not following them. Check authentication is working.

Issue: Video downloads fail at random

Fix: Instagram's CDN throttling. Partial files are kept as .part files and resumed on the next run, so just retry. Use a proxy. Reduce concurrent downloads to 1.

Issue: Metadata not downloading

Fix: Check --write-info-json flag. Verify permissions on download directory. Instagram might not provide metadata for private accounts.

Issue: Challenge required error

Fix: Instagram flagged suspicious activity. Wait 24-48 hours. Use browser-based authentication. Consider using official API for bulk operations.

Best practices from experience

Hard-earned lessons.

1. Always respect rate limits

Start with conservative delays (3-5 seconds). Increase gradually if no issues. Better to be slow than get IP banned.

2. Organize downloads from day one

Use consistent filename templates. Separate raw downloads from processed content. Back up important collections. Hard to reorganize terabytes of media later.

3. Monitor logs

# Always enable logging
gallery-dl --write-log instagram.log --verbose "https://www.instagram.com/username/"

# Check for errors
grep "ERROR" instagram.log

# Monitor downloads in real-time
tail -f instagram.log
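Beyond grep, a tiny summarizer makes long runs skimmable. This assumes the default [module][level] line format of gallery-dl's logs; adjust the markers if yours differs:

```python
from collections import Counter

def summarize_log(log_text: str) -> Counter:
    """Count error and warning lines in a gallery-dl log,
    assuming lines contain '[error]' / '[warning]' markers."""
    counts = Counter()
    for line in log_text.splitlines():
        for level in ("error", "warning"):
            if f"[{level}]" in line.lower():
                counts[level] += 1
    return counts

sample = "\n".join([
    "[instagram][info] Downloading post 123",
    "[instagram][warning] 429 Too Many Requests, retrying",
    "[instagram][error] Login required",
])
print(dict(summarize_log(sample)))  # {'warning': 1, 'error': 1}
```

A nonzero error count at the end of a nightly cron run is a good trigger for a notification.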

4. Use download archive

Always use --download-archive for large downloads. Prevents re-downloads if interrupted. Saves time and bandwidth.

5. Test on small scale first

Test configuration on single post before bulk download. Verify filename template works. Check metadata format. Scale up once confirmed.

6. Keep gallery-dl updated

# Update regularly
pip install --upgrade gallery-dl

# Check version
gallery-dl --version

7. Respect content creators

Don't redistribute downloaded content. Use for personal archiving only. Support creators through official channels when possible.

Moving forward

With these intermediate techniques, you can handle most Instagram downloading scenarios. Authentication unlocks private content, smart rate limiting prevents bans, automation makes it hands-off.

Advanced topics would include: Instagram Graph API integration, building web UI around gallery-dl, distributed downloading across multiple machines, ML-based content classification. But that's another article.

Gallery-dl's documentation is excellent - the GitHub repo has detailed configuration examples, and its issue tracker covers most Instagram-specific quirks.

Remember: Instagram's terms of service prohibit bulk downloading. Use responsibly, respect rate limits, and don't redistribute content.