• macniel@feddit.org · 4 months ago

    And the crawlers are bound to that? I doubt that thieving crawlers care much about it.

  • BrikoX@lemmy.zipM · 4 months ago

    It’s not about blocking; it’s about getting paid for scraped content if you’re too small to negotiate your own deal. But it’s not enforceable, since it relies on the same honor system as robots.txt.

    Per RSL’s own page:

    Use Cases

    RSL is an open, XML-based document format for defining machine-readable licensing terms for digital assets, including websites, web pages, books, videos, images, and proprietary datasets. It enables publishers, authors, and application developers to:

    • Define licensing and compensation terms, including free, pay-per-crawl, and pay-per-inference, to use digital assets for AI training, web search, and other applications
    • Create public, standardized catalogs and licensing terms for digital assets
    • Enable clients to automate licensing and paying for legal access to digital assets
    • Define and implement standardized licensing and royalty agreements
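    To make the honor-system point concrete, here is a rough sketch of how a cooperating crawler might discover and read terms like these. The `License:` robots.txt line and the XML element names below are illustrative assumptions on my part, not copied from the RSL spec (see rslstandard.org for the actual schema):

    ```python
    # Hedged sketch: how a cooperating crawler *might* discover and read RSL-style terms.
    # The "License:" robots.txt directive and the XML element names are illustrative
    # assumptions, not taken verbatim from the RSL spec.
    import re
    import xml.etree.ElementTree as ET

    ROBOTS_TXT = """\
    User-agent: *
    Allow: /
    License: https://example.com/license.xml
    """

    # Hypothetical RSL-style licensing document (element names are assumptions).
    RSL_XML = """\
    <rsl xmlns="https://rslstandard.org/rsl">
      <content url="https://example.com/">
        <license>
          <permits type="usage">train-ai</permits>
          <payment type="per-crawl">
            <amount currency="USD">0.01</amount>
          </payment>
        </license>
      </content>
    </rsl>
    """

    def find_license_url(robots_txt: str) -> str | None:
        """Return the licensing document URL advertised in robots.txt, if any."""
        match = re.search(r"^License:\s*(\S+)", robots_txt, re.MULTILINE | re.IGNORECASE)
        return match.group(1) if match else None

    def summarize_terms(xml_text: str) -> list[str]:
        """Pull out declared permissions and payment terms (honor system only)."""
        ns = {"rsl": "https://rslstandard.org/rsl"}
        root = ET.fromstring(xml_text)
        terms = []
        for lic in root.iterfind(".//rsl:license", ns):
            for permits in lic.iterfind("rsl:permits", ns):
                terms.append(f"permits[{permits.get('type')}] = {permits.text}")
            for payment in lic.iterfind("rsl:payment", ns):
                amount = payment.find("rsl:amount", ns)
                terms.append(f"payment[{payment.get('type')}] = {amount.text} {amount.get('currency')}")
        return terms

    if __name__ == "__main__":
        print("license document:", find_license_url(ROBOTS_TXT))
        for term in summarize_terms(RSL_XML):
            print(term)
    ```

    Nothing in that flow stops a crawler that simply ignores robots.txt and the license file, which is the whole problem: compliance is voluntary, exactly like robots.txt today.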