In my usage, a big speed bottleneck is the sequential downloading of an article's images when finding the top image.
While the current implementation attempts to download only partial images where possible, a new session is used for each image and images are downloaded one at a time, even though each download can take up to a second, depending on the website being scraped. What's more, when scraping multiple articles from the same website, the same images are downloaded multiple times, and the chosen top image is requested twice: once when searching for the top image, and once when checking requirements.
This PR attempts to fix this by starting multiple image downloads in parallel and caching downloaded images for up to 5 hours. This way, the time taken to scrape one article can be reduced from over 30 seconds to 2-3 seconds, and scraping multiple articles from the same site will download fewer images.
The downside is that streamed (partial) downloads do not work with the parallel implementation. Streaming didn't improve download times by much, however, so the main cost here is the amount of data transferred.
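For illustration, the parallel-download-plus-TTL-cache approach described above could be sketched roughly as below. This is not the PR's actual code (that is in the linked commits); the names `ImageCache`, `fetch_images_parallel`, and the `fetch` callable are hypothetical, and a real implementation would pass a shared `requests.Session` wrapper as `fetch`.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor


class ImageCache:
    """Thread-safe in-memory cache with a TTL (5 hours, matching the PR)."""

    def __init__(self, ttl_seconds=5 * 3600):
        self.ttl = ttl_seconds
        self._lock = threading.Lock()
        self._entries = {}  # url -> (timestamp, image bytes)

    def get(self, url):
        with self._lock:
            entry = self._entries.get(url)
            if entry is None:
                return None
            ts, data = entry
            if time.monotonic() - ts > self.ttl:
                # Entry expired: drop it and report a miss.
                del self._entries[url]
                return None
            return data

    def put(self, url, data):
        with self._lock:
            self._entries[url] = (time.monotonic(), data)


def fetch_images_parallel(urls, fetch, cache, max_workers=8):
    """Download candidate images concurrently, reusing cached results.

    `fetch` is a hypothetical callable url -> bytes (e.g. a wrapper around
    a shared requests.Session); cached URLs are never fetched again.
    """
    results = {}
    to_fetch = []
    for url in urls:
        data = cache.get(url)
        if data is not None:
            results[url] = data
        else:
            to_fetch.append(url)
    if to_fetch:
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            # pool.map preserves input order, so urls and results line up.
            for url, data in zip(to_fetch, pool.map(fetch, to_fetch)):
                cache.put(url, data)
                results[url] = data
    return results
```

Because the cache is keyed by URL and shared across articles, scraping several articles from the same site only downloads each image once, and re-requesting the chosen top image when checking requirements becomes a cache hit instead of a second HTTP round trip.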
Issue by dhgelling
Fri Apr 30 12:25:12 2021
Originally opened as codelucas/newspaper#882
dhgelling included the following code: https://github.com/codelucas/newspaper/pull/882/commits