Improper Input Validation vulnerability in Apache Software Foundation Apache Airflow Hive Provider.
This issue affects Apache Airflow Hive Provider: before 6.1.1.
Before version 6.1.1 it was possible to bypass the security check and achieve remote code execution (RCE) via
the principal parameter. Exploitation requires the ability to modify the connection details.
It is recommended to update the provider to version 6.1.1 in order to avoid this vulnerability. |
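A minimal sketch of the class of fix involved, assuming a hypothetical validator that rejects a connection principal containing shell metacharacters before it can reach a command line; this is illustrative, not the provider's actual code:

import java.util.regex.Pattern;

public final class PrincipalValidator {
    // Hypothetical check: Kerberos principals follow primary[/instance][@REALM],
    // so anything outside this character set (quotes, semicolons, spaces) is
    // rejected before the value is interpolated into a CLI invocation.
    private static final Pattern SAFE_PRINCIPAL =
            Pattern.compile("^[A-Za-z0-9_.-]+(/[A-Za-z0-9_.-]+)?(@[A-Za-z0-9.-]+)?$");

    static String requireSafePrincipal(String principal) {
        if (principal == null || !SAFE_PRINCIPAL.matcher(principal).matches()) {
            throw new IllegalArgumentException("Unsafe principal: " + principal);
        }
        return principal;
    }
}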
Improper Neutralization of Special Elements Used in an SQL Command ('SQL Injection') vulnerability in Apache Software Foundation Apache InLong. This issue affects Apache InLong: from 1.4.0 through 1.7.0.
In the toAuditCkSql method, the groupId, streamId, auditId, and dt values are concatenated directly into the SQL query statement, which may lead to SQL injection attacks.
Users are advised to upgrade to Apache InLong 1.8.0 or cherry-pick [1] to address it.
[1] https://github.com/apache/inlong/pull/8198 |
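A minimal sketch of the vulnerable pattern and the parameterized fix; the method and table names are illustrative, not InLong's actual code:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class AuditSqlDemo {
    // Vulnerable: an attacker-controlled groupId such as "' OR '1'='1" is parsed as SQL.
    static String toAuditSqlUnsafe(String groupId, String streamId) {
        return "SELECT * FROM audit WHERE group_id = '" + groupId
                + "' AND stream_id = '" + streamId + "'";
    }

    // Safer: bind the values as parameters so they are never parsed as SQL.
    static PreparedStatement toAuditSqlSafe(Connection conn, String groupId,
                                            String streamId) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM audit WHERE group_id = ? AND stream_id = ?");
        ps.setString(1, groupId);
        ps.setString(2, streamId);
        return ps;
    }
}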
Apache Shiro, before 1.12.0 or 2.0.0-alpha-3, may be susceptible to a path traversal attack that results in an authentication bypass when used together with APIs or other web frameworks that route requests based on non-normalized requests.
Mitigation: Update to Apache Shiro 1.12.0+ or 2.0.0-alpha-3+ |
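A minimal sketch of the matching mismatch behind such bypasses (illustrative, not Shiro's code): a security filter that pattern-matches the raw request path can disagree with a framework that normalizes the path before routing, so a request like /./admin/secret slips past the filter yet reaches the protected handler.

import java.net.URI;

public class PathMatchDemo {
    public static void main(String[] args) {
        String rawPath = "/./admin/secret";                   // path as sent by the client
        boolean filterBlocks = rawPath.startsWith("/admin/"); // false: the filter lets it pass
        String routedPath = URI.create(rawPath).normalize().getPath(); // "/admin/secret"
        boolean frameworkServes = routedPath.startsWith("/admin/");    // true: the handler runs
        System.out.println("filter blocks: " + filterBlocks
                + ", framework serves: " + frameworkServes);
    }
}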
The DBCPConnectionPool and HikariCPConnectionPool Controller Services in Apache NiFi 0.0.2 through 1.21.0 allow an authenticated and authorized user to configure a Database URL with the H2 driver that enables custom code execution.
The resolution validates the Database URL and rejects H2 JDBC locations.
Users are recommended to upgrade to version 1.22.0 or later, which fixes this issue. |
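A minimal sketch of the kind of validation the resolution describes, using a hypothetical helper rather than NiFi's actual implementation. H2 URLs are dangerous in this position because H2 features such as INIT=RUNSCRIPT can run attacker-supplied scripts when the pool opens a connection.

public final class DatabaseUrlValidator {
    // Reject any URL that targets the embedded H2 driver.
    static void requireNonH2(String databaseUrl) {
        String url = databaseUrl == null ? "" : databaseUrl.trim().toLowerCase();
        if (url.startsWith("jdbc:h2:")) {
            throw new IllegalArgumentException("H2 JDBC locations are rejected");
        }
    }
}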
Deserialization of Untrusted Data vulnerability in Apache Software Foundation Apache InLong. This issue affects Apache InLong: from 1.4.0 through 1.7.0.
An attacker could bypass the current logic and achieve arbitrary file reading. To address it, users are advised to upgrade to Apache InLong 1.8.0 or cherry-pick https://github.com/apache/inlong/pull/8130 . |
Improper Input Validation vulnerability in Apache Software Foundation Apache Traffic Server. This issue affects Apache Traffic Server: through 9.2.1. |
Exposure of Sensitive Information to an Unauthorized Actor vulnerability in Apache Software Foundation Apache Traffic Server. This issue affects Apache Traffic Server: from 8.0.0 through 9.2.0.
8.x users should upgrade to 8.1.7 or later versions
9.x users should upgrade to 9.2.1 or later versions |
** UNSUPPORTED WHEN ASSIGNED ** The Apache Spark UI offers the possibility to enable ACLs via the configuration option spark.acls.enable. With an authentication filter, this checks whether a user has access permissions to view or modify the application. If ACLs are enabled, a code path in HttpSecurityFilter can allow someone to perform impersonation by providing an arbitrary user name. A malicious user might then be able to reach a permission check function that will ultimately build a Unix shell command based on their input, and execute it. This will result in arbitrary shell command execution as the user Spark is currently running as. This issue was disclosed earlier as CVE-2022-33891, but incorrectly claimed version 3.1.3 (which has since gone EOL) would not be affected.
NOTE: This vulnerability only affects products that are no longer supported by the maintainer.
Users are recommended to upgrade to a supported version of Apache Spark, such as version 3.4.0. |
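A minimal sketch of the unsafe pattern described above and a safer alternative; the method names are hypothetical and this is not Spark's actual code:

import java.io.IOException;

public class ShellCommandDemo {
    // Unsafe: the user name is spliced into a string handed to bash -c, so a
    // value like "`touch /tmp/pwned`" is interpreted by the shell.
    static Process unsafe(String userName) throws IOException {
        return new ProcessBuilder("bash", "-c", "id -u " + userName).start();
    }

    // Safer: pass the value as a discrete argument so the shell never parses it.
    static Process safer(String userName) throws IOException {
        return new ProcessBuilder("id", "-u", userName).start();
    }
}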
Incorrect Authorization vulnerability in Apache Software Foundation Apache IoTDB. This issue affects the iotdb-web-workbench component in version 0.13.3. iotdb-web-workbench is an optional component of IoTDB that provides a web console for the database.
This problem is fixed from version 0.13.4 of iotdb-web-workbench onwards. |
Improper Input Validation vulnerability in Apache Software Foundation Apache Traffic Server. The configuration option proxy.config.http.push_method_enabled didn't function. However, by default the PUSH method is blocked in the ip_allow configuration file. This issue affects Apache Traffic Server: from 8.0.0 through 9.2.0.
8.x users should upgrade to 8.1.7 or later versions
9.x users should upgrade to 9.2.1 or later versions |
A deserialization vulnerability existed when decoding a malicious package. This issue affects Apache Dubbo: from 3.1.0 through 3.1.10 and from 3.2.0 through 3.2.4.
Users are recommended to upgrade to the latest version, which fixes the issue. |
In Apache Linkis <= 1.3.1, because parameters are not effectively filtered, an attacker can use the MySQL data source and malicious parameters to configure a new data source, triggering a deserialization vulnerability that eventually leads to remote code execution.
Versions of Apache Linkis <= 1.3.1 are affected.
We recommend users upgrade Linkis to version 1.3.2. |
In Apache Linkis <= 1.3.1, due to the lack of effective filtering of parameters, an attacker who configures malicious MySQL JDBC parameters in the JDBC EngineConn module will trigger a deserialization vulnerability that eventually leads to remote code execution. Dangerous parameters in the MySQL JDBC URL should therefore be blacklisted; a sketch of this approach follows. Versions of Apache Linkis <= 1.3.1 are affected.
We recommend users upgrade Linkis to version 1.3.2. |
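A minimal sketch of that blacklist approach. The parameter names are real MySQL Connector/J options commonly abused in deserialization attacks; the class and method names are hypothetical, not Linkis's actual code:

import java.util.Locale;
import java.util.Set;

public final class MysqlJdbcParamFilter {
    private static final Set<String> BLOCKED = Set.of(
            "autodeserialize",        // deserializes BLOB columns as Java objects
            "queryinterceptors",      // loads attacker-chosen interceptor classes
            "statementinterceptors",
            "allowloadlocalinfile",   // lets a rogue server read client-side files
            "allowurlinlocalinfile");

    static void requireSafeUrl(String jdbcUrl) {
        String url = jdbcUrl.toLowerCase(Locale.ROOT);
        for (String param : BLOCKED) {
            if (url.contains(param + "=")) {
                throw new IllegalArgumentException("Blocked JDBC parameter: " + param);
            }
        }
    }
}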
The fix for CVE-2023-24998 was incomplete for Apache Tomcat 11.0.0-M2 to 11.0.0-M4, 10.1.5 to 10.1.7, 9.0.71 to 9.0.73 and 8.5.85 to 8.5.87. If non-default HTTP connector settings were used such that the maxParameterCount could be reached using query string parameters and a request was submitted that supplied exactly maxParameterCount parameters in the query string, the limit for uploaded request parts could be bypassed with the potential for a denial of service to occur. |
Improper Input Validation vulnerability in Apache Software Foundation Apache Airflow Drill Provider. This issue affects Apache Airflow Drill Provider: before 2.3.2. |
In Apache Linkis <= 1.3.1, the PublicService module uploads files without restricting the upload path or the file types.
We recommend users upgrade Linkis to version 1.3.2.
For versions <= 1.3.1, we suggest turning on the file path check switches in linkis.properties:
`wds.linkis.workspace.filesystem.owner.check=true`
`wds.linkis.workspace.filesystem.path.check=true` |
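A minimal sketch of the kind of server-side check those switches enable, assuming a hypothetical helper rather than Linkis's actual code: resolve the requested name against the workspace root and reject anything that escapes it.

import java.nio.file.Path;

public final class UploadPathCheck {
    static Path requireInsideWorkspace(Path workspaceRoot, String requestedName) {
        Path root = workspaceRoot.normalize();
        // normalize() collapses ".." segments, so traversal attempts are visible.
        Path resolved = root.resolve(requestedName).normalize();
        if (!resolved.startsWith(root)) {
            throw new IllegalArgumentException("Path escapes workspace: " + requestedName);
        }
        return resolved;
    }
}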
** UNSUPPORTED WHEN ASSIGNED **
When using the Chainsaw or SocketAppender components with Log4j 1.x on a JRE older than 1.7, an attacker who manages to cause a logging entry involving a specially crafted (i.e., deeply nested)
hashmap or hashtable (depending on which logging component is in use) to be processed could exhaust the available memory in the virtual machine and achieve denial of service when the object is deserialized.
This issue affects Apache Log4j before 2. Affected users are recommended to update to Log4j 2.x.
NOTE: This vulnerability only affects products that are no longer supported by the maintainer. |
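An illustration of the payload shape only, under the assumption that each level simply wraps the previous map; deserializing such an object must walk and allocate every level, which can exhaust heap or stack in the receiving JVM:

import java.util.HashMap;

public class DeeplyNestedMapDemo {
    public static void main(String[] args) {
        HashMap<String, Object> current = new HashMap<>();
        // Each iteration adds one level of nesting around the existing map.
        for (int depth = 0; depth < 100_000; depth++) {
            HashMap<String, Object> wrapper = new HashMap<>();
            wrapper.put("k", current);
            current = wrapper;
        }
        System.out.println("built a map nested 100,000 levels deep");
    }
}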
Apache James server version 3.7.3 and earlier provides a JMX management service without authentication by default. This allows privilege escalation by a
malicious local user.
Administrators are advised to disable JMX, or set up a JMX password.
Note that version 3.7.4 onward will set up a JMX password automatically for Guice users. |
Relative library resolution in the Linux container-executor binary in Apache Hadoop 3.3.1-3.3.4 on Linux allows a local user to gain root privileges. If the YARN cluster is accepting work from remote (authenticated) users, this MAY permit remote users to gain root privileges.
Hadoop 3.3.0 updated "YARN Secure Containers" (https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/SecureContainer.html) to add a feature for executing user-submitted applications in isolated Linux containers.
The native binary HADOOP_HOME/bin/container-executor is used to launch these containers; it must be owned by root and have the suid bit set in order for the YARN processes to run the containers as the specific users submitting the jobs.
The patch YARN-10495 (https://issues.apache.org/jira/browse/YARN-10495, "make the rpath of container-executor configurable") modified the library loading path for .so files from "$ORIGIN/" to "$ORIGIN/:../lib/native/". This is the path through which libcrypto.so is located. Thus it is possible for a user with reduced privileges to install a malicious libcrypto library into a path to which they have write access, invoke the container-executor command, and have their modified library executed as root.
If the YARN cluster is accepting work from remote (authenticated) users, and these users' submitted jobs are executed on the physical host rather than in a container, then the CVE permits remote users to gain root privileges.
The fix for the vulnerability is to revert the change, which is done in YARN-11441 (https://issues.apache.org/jira/browse/YARN-11441, "Revert YARN-10495"). This patch is in hadoop-3.3.5.
To determine whether a version of container-executor is vulnerable, use the readelf command. If the RUNPATH or RPATH value contains the relative path "../lib/native/" then it is at risk:
$ readelf -d container-executor|grep 'RUNPATH\|RPATH'
0x000000000000001d (RUNPATH) Library runpath: [$ORIGIN/:../lib/native/]
If it does not, then it is safe:
$ readelf -d container-executor|grep 'RUNPATH\|RPATH'
0x000000000000001d (RUNPATH) Library runpath: [$ORIGIN/]
For an at-risk version of container-executor to enable privilege escalation, the owner must be root and the suid bit must be set:
$ ls -laF /opt/hadoop/bin/container-executor
---Sr-s---. 1 root hadoop 802968 May 9 20:21 /opt/hadoop/bin/container-executor
A safe installation lacks the suid bit; ideally it is also not owned by root:
$ ls -laF /opt/hadoop/bin/container-executor
-rwxr-xr-x. 1 yarn hadoop 802968 May 9 20:21 /opt/hadoop/bin/container-executor
This configuration does not support YARN Secure Containers, but all other Hadoop services, including YARN job execution outside secure containers, continue to work. |
Privilege Context Switching Error vulnerability in Apache Software Foundation Apache Airflow. This issue affects Apache Airflow: before 2.6.0. |