With the following crawler configuration:

```python
from bs4 import BeautifulSoup as Soup
from langchain_community.document_loaders import RecursiveUrlLoader

url = "https://example.com"
loader = RecursiveUrlLoader(
    url=url,
    max_depth=2,
    extractor=lambda x: Soup(x, "html.parser").text,
)
docs = loader.load()
```

an attacker in control of the contents of `https://example.com` could place a malicious HTML file there containing links such as "https://example.completely.different/my_file.html", and the crawler would download that file as well, even though `prevent_outside=True` (the default):

https://github.com/langchain-ai/langchain/blob/bf0b3cc0b5ade1fb95a5b1b6fa260e99064c2e22/libs/community/langchain_community/document_loaders/recursive_url_loader.py#L51-L51

Resolved in https://github.com/langchain-ai/langchain/pull/15559
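Until a patched version is available, one possible defensive measure is to post-filter the loaded documents and discard anything fetched from a host other than the starting one. The sketch below is illustrative only and is not the fix applied in the linked PR; it assumes the loader records each page's URL under the `source` metadata key.

```python
# Minimal workaround sketch: keep only documents whose source URL is on
# the same host as the starting URL. Assumes each Document carries its
# page URL in metadata["source"].
from urllib.parse import urlparse

from bs4 import BeautifulSoup as Soup
from langchain_community.document_loaders import RecursiveUrlLoader

url = "https://example.com"
loader = RecursiveUrlLoader(
    url=url,
    max_depth=2,
    extractor=lambda x: Soup(x, "html.parser").text,
)

allowed_host = urlparse(url).netloc
docs = [
    doc
    for doc in loader.load()
    # Drop anything the crawler pulled in from a different host.
    if urlparse(doc.metadata.get("source", "")).netloc == allowed_host
]
```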
History

No history.

MITRE

Status: PUBLISHED

Assigner: @huntr_ai

Published: 2024-02-24T17:59:26.498Z

Updated: 2024-08-01T17:41:16.443Z

Reserved: 2024-01-04T21:47:13.281Z

Link: CVE-2024-0243

Vulnrichment

Updated: 2024-08-01T17:41:16.443Z

NVD

Status: Awaiting Analysis

Published: 2024-02-26T16:27:49.670

Modified: 2024-03-13T21:15:55.173

Link: CVE-2024-0243

Redhat

No data.