Total: 9 CVEs
CVE | Vendors | Products | Updated | CVSS v3.1 | Description
---|---|---|---|---|---
CVE-2024-1883 | | | 2024-09-26 | 6.3 Medium | This is a reflected cross-site scripting vulnerability in the PaperCut NG/MF application server. An attacker can exploit this weakness by crafting a malicious URL that contains a script. When an unsuspecting user clicks the malicious link, it could lead to limited loss of confidentiality, integrity, or availability.
CVE-2024-1882 | 1 Papercut | 2 Papercut Mf, Papercut Ng | 2024-09-26 | 7.2 High | This vulnerability allows an already authenticated admin user to create a malicious payload that could be leveraged for remote code execution on the server hosting the PaperCut NG/MF application server.
CVE-2024-1221 | | | 2024-09-26 | 3.1 Low | This vulnerability potentially allows files on a PaperCut NG/MF server to be exposed using a specially crafted payload against the impacted API endpoint. The attacker must first carry out reconnaissance to gain knowledge of a system token. This CVE only affects Linux and macOS PaperCut NG/MF servers.
CVE-2023-1149 | 1 Btcpayserver | 1 Btcpay Server | 2024-08-02 | 5.4 Medium | Improper Neutralization of Equivalent Special Elements in GitHub repository btcpayserver/btcpayserver prior to 1.8.0.
CVE-2023-0493 | 1 Btcpayserver | 1 Btcpay Server | 2024-08-02 | 5.3 Medium | Improper Neutralization of Equivalent Special Elements in GitHub repository btcpayserver/btcpayserver prior to 1.7.5.
CVE-2024-34359 | | | 2024-08-02 | 9.7 Critical | llama-cpp-python provides the Python bindings for llama.cpp. It depends on the `Llama` class in `llama.py` to load `.gguf` llama.cpp models. The `__init__` constructor of `Llama` takes several parameters that configure how the model is loaded and run. Besides NUMA, LoRA settings, tokenizer loading, and hardware settings, `__init__` also loads the chat template from the targeted `.gguf` file's metadata and passes it to `llama_chat_format.Jinja2ChatFormatter.to_chat_handler()` to construct `self.chat_handler` for the model. However, `Jinja2ChatFormatter` parses the chat template from the metadata with a sandbox-less `jinja2.Environment`, which is then rendered in `__call__` to build the prompt for the interaction. This allows Jinja2 server-side template injection, leading to remote code execution via a carefully constructed payload (see the first sketch below the table).
CVE-2024-21600 | 1 Juniper | 1 Junos | 2024-08-01 | 6.5 Medium | An Improper Neutralization of Equivalent Special Elements vulnerability in the Packet Forwarding Engine (PFE) of Juniper Networks Junos OS on PTX Series allows an unauthenticated, adjacent attacker to cause a Denial of Service (DoS). When MPLS packets are meant to be sent to a flexible tunnel interface (FTI) and the FTI tunnel is down, they hit the reject next-hop, causing the packets to be sent to the CPU and triggering a host path wedge condition. This causes the FPC to hang and requires a manual restart to recover. This issue specifically affects PTX1000, PTX3000, PTX5000 with FPC3, PTX10002-60C, and PTX10008/16 with LC110x; other PTX Series devices and Line Cards (LC) are not affected. The following log message can be seen when the issue occurs: Cmerror Op Set: Host Loopback: HOST LOOPBACK WEDGE DETECTED IN PATH ID <id> (URI: /fpc/<fpc>/pfe/<pfe>/cm/<cm>/Host_Loopback/<cm>/HOST_LOOPBACK_MAKE_CMERROR_ID[<id>]). This issue affects Juniper Networks Junos OS: all versions earlier than 20.4R3-S8; 21.1 versions earlier than 21.1R3-S4; 21.2 versions earlier than 21.2R3-S6; 21.3 versions earlier than 21.3R3-S3; 21.4 versions earlier than 21.4R3-S5; 22.1 versions earlier than 22.1R2-S2, 22.1R3; 22.2 versions earlier than 22.2R2-S1, 22.2R3.
CVE-2024-4897 | 1 Parisneo | 1 Lollms-webui | 2024-08-01 | N/A | parisneo/lollms-webui, in its latest version, is vulnerable to remote code execution due to an insecure dependency on llama-cpp-python version llama_cpp_python-0.2.61+cpuavx2-cp311-cp311-manylinux_2_31_x86_64. The vulnerability arises from the application's 'binding_zoo' feature, which allows attackers to upload and interact with a malicious model file hosted on Hugging Face, leading to remote code execution. The issue is linked to a known vulnerability in llama-cpp-python, CVE-2024-34359, which has not been patched in lollms-webui as of commit b454f40a. The vulnerability is exploitable through the application's handling of model files in the 'bindings_zoo' feature, specifically when processing gguf-format model files.
CVE-2024-2952 | | | 2024-08-01 | N/A | BerriAI/litellm is vulnerable to Server-Side Template Injection (SSTI) via the `/completions` endpoint. The vulnerability arises from the `hf_chat_template` method processing the `chat_template` parameter from the `tokenizer_config.json` file through the Jinja template engine without proper sanitization. Attackers can exploit this by crafting malicious `tokenizer_config.json` files that execute arbitrary code on the server (see the second sketch below the table).
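
Both CVE-2024-34359 and CVE-2024-2952 belong to the same class: an attacker-controlled Jinja2 template is rendered outside a sandbox. The sketch below is illustrative only (the template string is a stand-in for a malicious `chat_template` shipped in model metadata, not code taken from either project); it shows how a plain `jinja2.Environment` resolves dunder attributes, which is the foothold SSTI payloads use to reach code execution, while `SandboxedEnvironment` rejects the same access.

```python
# Illustration of the SSTI class behind CVE-2024-34359 / CVE-2024-2952:
# rendering an attacker-controlled template with a sandbox-less
# Environment lets it walk Python internals; the sandboxed variant
# refuses the same attribute access.
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

# Stand-in for a malicious chat template embedded in model metadata
# (GGUF metadata or tokenizer_config.json). Deliberately benign: it
# only counts subclasses, but the same traversal path underpins RCE.
untrusted_template = "{{ ''.__class__.__mro__[1].__subclasses__() | length }}"

# Vulnerable pattern: a plain Environment resolves dunder attributes.
print(Environment().from_string(untrusted_template).render())

# Hardened pattern: SandboxedEnvironment raises SecurityError instead.
try:
    SandboxedEnvironment().from_string(untrusted_template).render()
except SecurityError as exc:
    print(f"blocked: {exc}")
```

Run as-is, the first render prints a subclass count while the sandboxed render is stopped with a `SecurityError` complaining that access to `__class__` of a str object is unsafe.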
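As a counterpart, here is a hedged sketch of the mitigation class for the `tokenizer_config.json` path described in CVE-2024-2952: parse the file, then render its `chat_template` inside Jinja2's `ImmutableSandboxedEnvironment`, exposing only explicitly chosen variables. The `render_chat_template` helper and the file layout are assumptions for illustration, not the actual litellm or llama-cpp-python fix.

```python
# Hedged sketch: render an untrusted chat_template inside a sandbox and
# pass only whitelisted variables. Helper name and file layout are
# illustrative assumptions, not the upstream patch.
import json

from jinja2.sandbox import ImmutableSandboxedEnvironment


def render_chat_template(config_path: str, messages: list[dict]) -> str:
    with open(config_path, encoding="utf-8") as fh:
        chat_template = json.load(fh)["chat_template"]

    env = ImmutableSandboxedEnvironment(trim_blocks=True, lstrip_blocks=True)
    template = env.from_string(chat_template)
    # Only explicit, serialisable inputs are exposed to the template.
    return template.render(messages=messages, add_generation_prompt=True)


if __name__ == "__main__":
    demo = [{"role": "user", "content": "hello"}]
    print(render_chat_template("tokenizer_config.json", demo))
```

The sandbox does not make a hostile template trustworthy, but it blocks the underscore-attribute traversal that both CVEs rely on, which is why sandboxed rendering is the common remediation direction for this vulnerability class.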