Vulnerability in the JD Edwards EnterpriseOne Tools product of Oracle JD Edwards (component: Web Runtime SEC). Supported versions that are affected are prior to 9.2.9.0. Easily exploitable vulnerability allows a low-privileged attacker with network access via HTTP to compromise JD Edwards EnterpriseOne Tools. Successful attacks of this vulnerability can result in an unauthorized ability to cause a hang or frequently repeatable crash (complete DoS) of JD Edwards EnterpriseOne Tools. CVSS 3.1 Base Score 6.5 (Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H).
Hyperium hyper before 0.14.19 does not allow customization of the max_header_list_size method of the third-party h2 library, allowing attackers to perform HTTP/2 attacks.
Expr is an expression language and expression evaluation for Go. Prior to version 1.17.0, if the Expr expression parser is given an unbounded input string, it will attempt to compile the entire string and generate an Abstract Syntax Tree (AST) node for each part of the expression. In scenarios where input size isn’t limited, a malicious or inadvertent extremely large expression can consume excessive memory as the parser builds a huge AST. This can ultimately lead to excessive memory usage and an Out-Of-Memory (OOM) crash of the process. This issue is relatively uncommon and will only manifest when there are no restrictions on the input size, i.e. the expression length is allowed to grow arbitrarily large. In typical use cases where inputs are bounded or validated, this problem would not occur. The problem has been patched in the latest versions of the Expr library. The fix introduces compile-time limits on the number of AST nodes and memory usage during parsing, preventing any single expression from exhausting resources. Users should upgrade to Expr version 1.17.0 or later, as this release includes the new node budget and memory limit safeguards. Upgrading to v1.17.0 ensures that extremely deep or large expressions are detected and safely aborted during compilation, avoiding the OOM condition. For users who cannot immediately upgrade, the recommended workaround is to impose an input size restriction before parsing. In practice, this means validating or limiting the length of expression strings that your application will accept. For example, set a maximum allowable number of characters (or nodes) for any expression and reject or truncate inputs that exceed this limit. By ensuring no unbounded-length expression is ever fed into the parser, one can prevent the parser from constructing a pathologically large AST and avoid potential memory exhaustion. In short, pre-validate and cap input size as a safeguard in the absence of the patch (see the sketch below).
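As a concrete illustration of that workaround, here is a minimal Go sketch of a pre-parse length check. It assumes the github.com/expr-lang/expr module path; the compileBounded helper and the 4096-character cap are illustrative choices, not part of the library.

```go
package main

import (
	"errors"
	"fmt"

	"github.com/expr-lang/expr"
	"github.com/expr-lang/expr/vm"
)

// maxExprLen is an illustrative cap; size it to the longest expression
// your application legitimately accepts.
const maxExprLen = 4096

// compileBounded rejects oversized inputs before they reach the parser,
// so an unbounded string can never produce a pathologically large AST.
func compileBounded(code string) (*vm.Program, error) {
	if len(code) > maxExprLen {
		return nil, errors.New("expression exceeds maximum allowed length")
	}
	return expr.Compile(code)
}

func main() {
	program, err := compileBounded("1 + 2 * 3")
	if err != nil {
		fmt.Println("rejected:", err)
		return
	}
	out, err := expr.Run(program, nil)
	if err != nil {
		fmt.Println("eval error:", err)
		return
	}
	fmt.Println(out) // 7
}
```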
Knot Resolver before 5.6.0 enables attackers to consume its resources, launching amplification attacks and potentially causing a denial of service. Specifically, a single client query may lead to a hundred TCP connection attempts if a DNS server closes connections without providing a response.
An issue in how the XINJE XD5E-24R and XL5E-16T v3.5.3b handle TCP protocol messages allows attackers to cause a Denial of Service (DoS) via a crafted TCP message.
An issue was discovered in Atos Eviden BullSequana XH2140 BMC before C4EM-125: OMF_C4E 101.05.0014. Some BullSequana XH products were shipped without proper hardware programming, leading to a potential denial-of-service with privileged access.
An allocation of resources without limits or throttling vulnerability exists in curl <v7.88.0 based on the "chained" HTTP compression algorithms, meaning that a server response can be compressed multiple times and potentially with different algorithms. The number of acceptable "links" in this "decompression chain" was capped, but the cap was implemented on a per-header basis, allowing a malicious server to insert a virtually unlimited number of compression steps simply by using many headers. The use of such a decompression chain could result in a "malloc bomb", making curl end up spending enormous amounts of allocated heap memory, or trying to and returning out of memory errors.
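To make the accounting mistake concrete, the sketch below (plain Go rather than curl's C code) counts compression steps across all Content-Encoding headers combined, the response-wide cap that the per-header check lacked. The decodeChain helper and the maxDecodeChain limit of 5 are invented for illustration.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// maxDecodeChain is an arbitrary illustrative cap on the total number
// of compression steps accepted for one response.
const maxDecodeChain = 5

// decodeChain flattens every Content-Encoding header into one list so
// the cap applies to the whole response; the curl bug applied its cap
// per header, which many repeated headers could bypass.
func decodeChain(h http.Header) ([]string, error) {
	var chain []string
	for _, v := range h.Values("Content-Encoding") {
		for _, enc := range strings.Split(v, ",") {
			if enc = strings.TrimSpace(enc); enc != "" && enc != "identity" {
				chain = append(chain, enc)
			}
		}
	}
	if len(chain) > maxDecodeChain {
		return nil, fmt.Errorf("decompression chain has %d links, limit is %d", len(chain), maxDecodeChain)
	}
	return chain, nil
}

func main() {
	h := http.Header{}
	// A malicious server can repeat the header to stack encodings.
	for i := 0; i < 10; i++ {
		h.Add("Content-Encoding", "gzip")
	}
	if _, err := decodeChain(h); err != nil {
		fmt.Println("rejected:", err)
	}
}
```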
Allocation of Resources Without Limits or Throttling vulnerability in Badge leading to a denial-of-service attack. Team Hacker Hotel Badge 2024 on RISC-V (billboard modules) allows Flooding. This issue affects Hacker Hotel Badge 2024: from 0.1.0 through 0.1.3.
Discourse is an open source platform for community discussion. Versions prior to 3.0.1 (stable), 3.1.0.beta2 (beta), and 3.1.0.beta2 (tests-passed) are subject to Allocation of Resources Without Limits or Throttling. As there is no limit on the data contained in a draft, a malicious user can create an arbitrarily large draft, slowing the instance to a crawl. This issue is patched in versions 3.0.1 (stable), 3.1.0.beta2 (beta), and 3.1.0.beta2 (tests-passed). There are no workarounds.
Discourse is an open source platform for community discussion. Versions prior to 3.1.0.beta1 (beta and tests-passed) are vulnerable to Allocation of Resources Without Limits. Users can create chat drafts of unlimited length, which can cause a denial of service by generating an excessive load on the server. Additionally, an unlimited number of drafts was loaded when loading the user. This issue has been patched in version 3.1.0.beta1 (beta and tests-passed). Users should upgrade to the latest version, where a limit has been introduced. There are no workarounds available.
@fastify/multipart is a Fastify plugin to parse the multipart content-type. Prior to versions 7.4.1 and 6.0.1, @fastify/multipart may experience denial of service due to a number of situations in which an unlimited number of parts are accepted. This includes the multipart body parser accepting an unlimited number of file parts, the multipart body parser accepting an unlimited number of field parts, and the multipart body parser accepting an unlimited number of empty parts as field parts. This is fixed in v7.4.1 (for Fastify v4.x) and v6.0.1 (for Fastify v3.x). There are no known workarounds.
Werkzeug is a comprehensive WSGI web application library. Prior to version 2.2.3, Werkzeug's multipart form data parser will parse an unlimited number of parts, including file parts. Parts can be a small number of bytes, but each requires CPU time to parse and may use more memory as Python data. If a request can be made to an endpoint that accesses `request.data`, `request.form`, `request.files`, or `request.get_data(parse_form_data=False)`, it can cause unexpectedly high resource usage. This allows an attacker to cause a denial of service by sending crafted multipart data to an endpoint that will parse it. The amount of CPU time required can block worker processes from handling legitimate requests. The amount of RAM required can trigger an out-of-memory kill of the process. Unlimited file parts can use up memory and file handles. If many concurrent requests are sent continuously, this can exhaust or kill all available workers. Version 2.2.3 contains a patch for this issue.
Kiwi TCMS, an open source test management system, does not impose rate limits in versions prior to 12.0. This makes it easier to attempt brute-force attacks against the login page. Users should upgrade to v12.0 or later to receive a patch. As a workaround, users may install and configure a rate-limiting proxy in front of Kiwi TCMS.
Kiwi TCMS, an open source test management system, does not impose rate limits in versions prior to 12.0. This makes it easier to attempt denial-of-service attacks against the password reset page. An attacker could potentially send a large number of emails if they know the email addresses of users in Kiwi TCMS. Additionally, that may strain SMTP resources. Users should upgrade to v12.0 or later to receive a patch. As potential workarounds, users may install and configure a rate-limiting proxy in front of Kiwi TCMS (sketched below) and/or configure rate limits on their email server when possible.
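For the rate-limiting-proxy workaround named in both Kiwi TCMS entries, here is a minimal sketch of such a proxy in Go. The backend address, the throttled paths, and the rate values are all assumptions for illustration; the limiter comes from the golang.org/x/time/rate package.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"

	"golang.org/x/time/rate"
)

func main() {
	// Assumed Kiwi TCMS backend address; adjust for your deployment.
	backend, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// One global limiter keeps the sketch short; per-client-IP limiters
	// would be the natural refinement. 1 request/second, burst of 5.
	limiter := rate.NewLimiter(rate.Limit(1), 5)

	throttled := func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		proxy.ServeHTTP(w, r)
	}

	mux := http.NewServeMux()
	// The throttled paths are placeholders for the login and password
	// reset pages; use the routes of your actual installation.
	mux.HandleFunc("/accounts/login/", throttled)
	mux.HandleFunc("/accounts/passwordreset/", throttled)
	mux.Handle("/", proxy)

	log.Fatal(http.ListenAndServe(":8000", mux))
}
```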
Starlite is an Asynchronous Server Gateway Interface (ASGI) framework. Prior to version 1.51.2, the request body parsing in `starlite` allows a potentially unauthenticated attacker to consume a large amount of CPU time and RAM. The multipart body parser processes an unlimited number of file parts and an unlimited number of field parts. This is a remote, potentially unauthenticated Denial of Service vulnerability. This vulnerability affects applications with a request handler that accepts a `Body(media_type=RequestEncodingType.MULTI_PART)`. The large amount of CPU time required for processing requests can block all available worker processes and significantly delay or slow down the processing of legitimate user requests. The large amount of RAM accumulated while processing requests can lead to Out-Of-Memory kills. Complete DoS is achievable by sending many concurrent multipart requests in a loop. Version 1.51.2 contains a patch for this issue.
containerd is an open source container runtime. Before versions 1.6.18 and 1.5.18, when importing an OCI image, there was no limit on the number of bytes read for certain files. A maliciously crafted image with a large file where a limit was not applied could cause a denial of service. This bug has been fixed in containerd 1.6.18 and 1.5.18. Users should update to these versions to resolve the issue. As a workaround, ensure that only trusted images are used and that only trusted users have permissions to import images.
Octobox is software for managing GitHub notifications. Prior to pull request (PR) 2807, a user of the system can provide a specifically crafted search query string that will trigger a ReDoS vulnerability. This issue is fixed in PR 2807.
ReadJXLImage in JXL in GraphicsMagick before 1.3.46 lacks image dimension resource limits.
The pairing API request handler in Microsoft HoloLens 1 (Windows Holographic) through 10.0.17763.3046 and HoloLens 2 (Windows Holographic) through 10.0.22621.1244 allows remote attackers to cause a Denial of Service (resource consumption and device unusability) by sending many requests through the Device Portal framework.
A denial of service is possible from excessive resource consumption in net/http and mime/multipart. Multipart form parsing with mime/multipart.Reader.ReadForm can consume largely unlimited amounts of memory and disk files. This also affects form parsing in the net/http package with the Request methods FormFile, FormValue, ParseMultipartForm, and PostFormValue. ReadForm takes a maxMemory parameter, and is documented as storing "up to maxMemory bytes +10MB (reserved for non-file parts) in memory". File parts which cannot be stored in memory are stored on disk in temporary files. The unconfigurable 10MB reserved for non-file parts is excessively large and can potentially open a denial of service vector on its own. However, ReadForm did not properly account for all memory consumed by a parsed form, such as map entry overhead, part names, and MIME headers, permitting a maliciously crafted form to consume well over 10MB. In addition, ReadForm contained no limit on the number of disk files created, permitting a relatively small request body to create a large number of disk temporary files. With fix, ReadForm now properly accounts for various forms of memory overhead, and should now stay within its documented limit of 10MB + maxMemory bytes of memory consumption. Users should still be aware that this limit is high and may still be hazardous. In addition, ReadForm now creates at most one on-disk temporary file, combining multiple form parts into a single temporary file. The mime/multipart.File interface type's documentation states, "If stored on disk, the File's underlying concrete type will be an *os.File.". This is no longer the case when a form contains more than one file part, due to this coalescing of parts into a single file. The previous behavior of using distinct files for each form part may be reenabled with the environment variable GODEBUG=multipartfiles=distinct. Users should be aware that multipart.ReadForm and the http.Request methods that call it do not limit the amount of disk consumed by temporary files. Callers can limit the size of form data with http.MaxBytesReader. |
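Since the advisory itself points callers to http.MaxBytesReader, here is a minimal handler sketch combining it with a modest maxMemory value; the 10 MiB body cap and 1 MiB memory budget are illustrative figures, not values taken from the advisory.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func uploadHandler(w http.ResponseWriter, r *http.Request) {
	// Cap the whole request body before any multipart parsing; this is
	// the caller-side limit the advisory points to. 10 MiB is illustrative.
	r.Body = http.MaxBytesReader(w, r.Body, 10<<20)

	// maxMemory bounds the in-memory portion of the form; the rest
	// spills to a temporary file (a single file with the fixed releases).
	if err := r.ParseMultipartForm(1 << 20); err != nil {
		http.Error(w, "form too large or malformed", http.StatusRequestEntityTooLarge)
		return
	}
	fmt.Fprintf(w, "parsed %d value field(s)\n", len(r.MultipartForm.Value))
}

func main() {
	http.HandleFunc("/upload", uploadHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```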