Good point. This crate is a straight port of Google's original library to Rust, so we kept the same logic: parse -> emit for each request. Indeed, this could be optimized to a single parse shared across multiple requests. (P.S. I don't know why Google never did this.) Of course, contributions are always welcome.
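For anyone following along, here is a purely illustrative sketch of the "parse once, check many times" shape being discussed. None of these types exist in the crate today, and the matching below is a toy longest-prefix check rather than Google's full matcher (no wildcards, no `$` anchors, no percent-decoding, simplified user-agent grouping):

```rust
// Hypothetical sketch only: parse the robots.txt body once into a rule list,
// then answer any number of is_allowed() queries against that parsed state.

#[derive(Debug)]
struct Rule {
    allow: bool,
    prefix: String,
}

/// Rules extracted once for a single user-agent; checks reuse this state.
#[derive(Debug)]
struct ParsedRobots {
    rules: Vec<Rule>,
}

impl ParsedRobots {
    /// Parse the robots.txt body once, keeping only the groups that apply
    /// to `user_agent` (or to `*`).
    fn parse(body: &str, user_agent: &str) -> Self {
        let ua = user_agent.to_ascii_lowercase();
        let mut rules = Vec::new();
        let mut group_applies = false;

        for line in body.lines() {
            // Strip comments and whitespace, then split into "key: value".
            let line = line.split('#').next().unwrap_or("").trim();
            let Some((key, value)) = line.split_once(':') else { continue };
            let (key, value) = (key.trim().to_ascii_lowercase(), value.trim());

            match key.as_str() {
                "user-agent" => {
                    let v = value.to_ascii_lowercase();
                    group_applies = v == "*" || ua.contains(v.as_str());
                }
                "allow" | "disallow" if group_applies && !value.is_empty() => {
                    rules.push(Rule {
                        allow: key == "allow",
                        prefix: value.to_string(),
                    });
                }
                _ => {}
            }
        }
        ParsedRobots { rules }
    }

    /// Longest matching prefix wins; allow wins ties; no match means allowed.
    fn is_allowed(&self, path: &str) -> bool {
        self.rules
            .iter()
            .filter(|r| path.starts_with(&r.prefix))
            .max_by_key(|r| (r.prefix.len(), r.allow))
            .map_or(true, |r| r.allow)
    }
}

fn main() {
    let body = "User-agent: *\nDisallow: /private/\nAllow: /private/public/\n";
    let robots = ParsedRobots::parse(body, "FooBot");

    // One parse, many checks:
    assert!(robots.is_allowed("/index.html"));
    assert!(!robots.is_allowed("/private/data"));
    assert!(robots.is_allowed("/private/public/page"));
}
```

The point is only that `parse` runs once per robots.txt body and `is_allowed` just walks the pre-built rules, so a crawler could keep one such object per host instead of re-parsing on every URL check.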
I was looking for a decent robots.txt library written in Rust to integrate into my broad web crawler (an open-source toy project), and so far this one seems like the best bet because of the Google lineage and the test suite...
But I don't like the parse-for-each-request approach; it seems unnecessary and harmful for performance.
From a quick look at the source code, I think the change would go somewhere here.
This API and example are really confusing...
Why can't we simply parse once and then call methods to check whether a URL is allowed?...
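For context, the usage pattern from the crate's README looks roughly like the following (quoting from memory, so the exact names may differ slightly); the full robots.txt body is passed into every check and re-parsed inside each call, which is what prompted this issue:

```rust
use robotstxt::DefaultMatcher;

fn main() {
    let robots_body = "user-agent: FooBot\n\
                       disallow: /\n";

    // A matcher plus the entire robots.txt body for every single check;
    // the body is parsed again on each call.
    let mut matcher = DefaultMatcher::default();
    assert_eq!(
        false,
        matcher.one_agent_allowed_by_robots(robots_body, "FooBot", "https://foo.com/")
    );
}
```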