Beyond the Save Button: A Deep Dive into Firefox’s Siterip Capabilities (And Why It’s Not What You Think)

Firefox is the scalpel. Siterip tools are the chainsaw. Use the right one.

| If you need… | Use… | Firefox? |
|--------------|------|----------|
| Recursive crawl (follow every link) | `wget --mirror`, `httrack` | ❌ |
| Respecting robots.txt and crawl delays | `wget` with `--wait` | ❌ (unless scripted) |
| Save 10,000+ pages efficiently | `zimit`, `archivebox`, `heritrix` | ❌ |
| Save one complex, JS-heavy page exactly as seen | Firefox + SingleFile | ✅ |
| Download all images from a gallery page | Firefox + DownThemAll! | ✅ |
| Archive pages behind a login (your own account) | Firefox + SingleFile (logged in) | ✅ |
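Respecting robots.txt and crawl delays is exactly the part you would have to script yourself in the Firefox world, and Python's standard library already handles the parsing. A minimal sketch, assuming a hypothetical site whose robots.txt disallows `/private/` and asks for a 5-second crawl delay (the `example.com` URLs are illustrative):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt body; in practice you would fetch the real
# file from https://example.com/robots.txt via RobotFileParser.read().
robots_txt = """\
User-agent: *
Crawl-delay: 5
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Check a URL before fetching it, and honor the requested delay.
print(rp.can_fetch("*", "https://example.com/public/page.html"))   # True
print(rp.can_fetch("*", "https://example.com/private/page.html"))  # False
print(rp.crawl_delay("*"))                                         # 5
```

Any polite crawler, whatever the tool, is doing some version of this check before every request.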
Install SingleFile from addons.mozilla.org. Configure it to save as “Complete HTML file” and enable “Save deferred images.”
This is where Firefox shines. Unlike Chrome, whose Manifest V3 transition is stripping the webRequest API of its blocking power, Firefox still supports extensions that can intercept, modify, and batch-download content.
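The batch-download half of what an extension like DownThemAll! automates is easy to sketch outside the browser. A hedged Python example: `matching_urls` and `batch_download` are hypothetical helper names, and the `fetch` callable is injected so nothing here actually touches the network:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlparse

def matching_urls(urls, extensions=(".jpg", ".png", ".gif")):
    """Keep only URLs whose path ends in one of the given extensions,
    mirroring a "grab every image link on this page" filter rule."""
    return [u for u in urls if urlparse(u).path.lower().endswith(tuple(extensions))]

def batch_download(urls, fetch, workers=4):
    """Fetch many URLs concurrently; fetch(url) -> bytes is supplied
    by the caller (e.g. a requests/urllib wrapper, or a stub in tests)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(urls, pool.map(fetch, urls)))

# Example: filter a gallery's links, then download the matches.
gallery = [
    "https://example.com/a.JPG",
    "https://example.com/page.html",
    "https://example.com/b.png?x=1",
]
files = batch_download(matching_urls(gallery), lambda u: b"stub-bytes")
print(sorted(files))  # the .JPG and .png URLs, not page.html
```

The filter looks only at the URL path, so query strings like `?x=1` don't break the extension match.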
The idea is tantalizing. Imagine opening a menu, clicking a single button, and watching Mozilla Firefox—your humble daily driver browser—crawl every accessible page of a domain, download all the HTML, CSS, JS, and assets, and package it neatly into a local folder. No command line. No wget flags. No httrack configuration.
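What that imagined crawl actually involves, and what `wget --mirror` does under the hood, can be sketched in a few lines of Python. This is a naive breadth-first crawler, not a production tool; the `fetch_page` callable is injected so the sketch runs without network access, and the `example.com` URLs are illustrative:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch_page, max_pages=100):
    """Breadth-first crawl that stays on start_url's domain.

    fetch_page(url) -> HTML string is supplied by the caller,
    so this sketch needs no real network client.
    """
    domain = urlparse(start_url).netloc
    seen, queue, pages = {start_url}, deque([start_url]), {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        html = fetch_page(url)
        pages[url] = html  # "save" the page locally
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)  # resolve relative links
            if urlparse(absolute).netloc == domain and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return pages

# Example: a two-page "site" served from a dict instead of the network.
site = {
    "https://example.com/": '<a href="/about">About</a>',
    "https://example.com/about": '<a href="https://other.org/">External</a>',
}
pages = crawl("https://example.com/", lambda u: site[u])
print(sorted(pages))  # both example.com pages; other.org is skipped
```

Everything a real siterip tool adds on top of this loop, such as asset downloading, link rewriting, politeness delays, and resumability, is precisely the machinery a browser's save button does not have.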