
Keep browser window open after scraping? #92

Open
wondering639 opened this issue Apr 7, 2021 · 1 comment
@wondering639
How can one keep the browser window open after scraping has finished (or aborted)? Thanks!

Flushot commented Apr 23, 2021

@wondering639 There's only a single browser instance created for the lifecycle of the downloader middleware. I suppose all you'd need to do is avoid running this statement when your crawl is finished: https://github.com/clemfromspace/scrapy-selenium/blob/develop/scrapy_selenium/middlewares.py#L139

You can probably accomplish that by subclassing SeleniumMiddleware and overriding spider_closed(), without having to modify any code in the package itself.
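A minimal sketch of the override pattern Flushot describes. The SeleniumMiddleware class below is a simplified stand-in for scrapy_selenium.middlewares.SeleniumMiddleware (the real constructor takes driver options, not a driver instance); in a real project you would subclass the class imported from the package instead.

```python
class SeleniumMiddleware:
    """Stand-in: the real middleware manages one browser for its lifecycle."""

    def __init__(self, driver):
        self.driver = driver

    def spider_closed(self):
        # The real middleware quits the browser here (middlewares.py#L139).
        self.driver.quit()


class KeepBrowserOpenMiddleware(SeleniumMiddleware):
    def spider_closed(self):
        # Do nothing: skip the parent's driver.quit() call, so the browser
        # window stays open after the crawl finishes or aborts.
        pass
```

You would then register KeepBrowserOpenMiddleware in your project's DOWNLOADER_MIDDLEWARES setting in place of the original scrapy_selenium.SeleniumMiddleware entry.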
