
What should I do if uploading a file larger than 2MB to a cloud server fails?

Time: 2025-11-29 10:54:44
Edit: DNS.COM

  After deploying a website or application on a cloud server, a seemingly simple yet frustrating problem often arises: uploads of files larger than 2MB fail with an upload error, a request timeout, or a 413 Request Entity Too Large response. Many users initially assume a network or server failure, but in most cases the cause is an upload size limit somewhere in the stack, and the problem can only be fully resolved by identifying the specific configuration responsible. Large-file upload failures typically involve three layers: the application layer (e.g., PHP or Node.js), the web server layer (e.g., Nginx or Apache), and the client environment (browser or upload-tool limits). Once the file size exceeds a default threshold at any of these layers, the backend rejects the request outright, regardless of network speed or bandwidth.

  This most commonly occurs in PHP environments, because PHP's default upload limits are strict: the `upload_max_filesize` and `post_max_size` parameters in `php.ini` default to just 2MB and 8MB respectively. Unless the application raises these limits, any larger file is rejected at the PHP layer. Resolving this requires changing several settings together and then reloading the service. On the server, the PHP upload limits can be adjusted as follows:

# Edit the main configuration file
# (the path varies by distro and PHP version, e.g. /etc/php/8.2/fpm/php.ini on Debian/Ubuntu)
vim /etc/php.ini

# Raise the following values
upload_max_filesize = 50M
post_max_size = 50M
max_execution_time = 300
max_input_time = 300

  Restart the services after saving the changes (the PHP-FPM unit name may differ, e.g. `php8.2-fpm`):

systemctl restart php-fpm
systemctl restart nginx

  Many people modify only `upload_max_filesize` and overlook `post_max_size`, yet the POST body limit blocks requests just as readily, so `post_max_size` must be set equal to or larger than `upload_max_filesize`. On systems running PHP-FPM or multiple PHP versions, make sure you are editing the `php.ini` actually loaded by your application (`php --ini` shows the CLI's file; a `phpinfo()` page shows the web server's); otherwise the changes will not take effect.

  Besides PHP's own limits, Nginx caps the request body size as well: its default `client_max_body_size` is only 1MB, so even if PHP allows larger files, Nginx may reject the request first with a 413 error. Add the following parameter to the Nginx configuration:

client_max_body_size 50M;

  The directive can be placed in the http, server, or location block. After modifying, test the configuration and reload Nginx:

nginx -t
systemctl reload nginx

  If Nginx is not configured, uploads will still fail even with PHP set to 100M. A common pitfall is adding `client_max_body_size` to the http block while the site actually loads its settings from a separate included configuration file, so the directive never applies to the target server block. Confirm where the effective setting lives, for example with `nginx -T | grep client_max_body_size`, which prints the fully resolved configuration.
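For example, the directive can be placed directly in the site's own server block (the file path and domain below are illustrative):

```nginx
# /etc/nginx/conf.d/example.conf  (illustrative path)
server {
    listen 80;
    server_name example.com;

    # allow request bodies up to 50MB for this site only
    client_max_body_size 50M;
}
```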

  In Apache environments, upload limits may also be imposed at the server level: directives such as `RequestReadTimeout` and `LimitRequestBody` can both affect large uploads. If `.htaccess` overrides are allowed, you can add the following (note that `php_value` only works when PHP runs as mod_php; under PHP-FPM, edit `php.ini` or the pool configuration instead):

php_value upload_max_filesize 50M
php_value post_max_size 50M
# 52428800 bytes = 50MB
LimitRequestBody 52428800

  If using a web control panel (such as BT Panel, cPanel, or Plesk), these settings can usually be adjusted in the backend graphical interface. However, the logic for modification is the same: PHP, the web server, and execution timeout limits must be configured simultaneously; otherwise, large file uploads will still be blocked.

  In some environments, large-file upload failures stem from a reverse proxy, CDN, or load balancer in front of the server; common examples include Cloudflare, Nginx reverse proxies, and SLB/ELB load balancers. Cloudflare's free plan limits a single upload to 100MB; anything larger is rejected by Cloudflare before it ever reaches your origin. Solutions include upgrading to a higher plan, switching the record to DNS-only (disabling the orange-cloud proxy), or routing uploads through an endpoint that bypasses Cloudflare.

  The reverse proxy's own configuration must also be adjusted; otherwise the upstream server may accept the upload while the proxy terminates the connection prematurely. For example, at the proxy layer:

client_max_body_size 50M;     # the proxy enforces its own body-size limit
proxy_request_buffering off;  # stream the upload to the upstream instead of buffering it first
proxy_max_temp_file_size 0;
proxy_buffering off;

  These settings prevent failures caused by the proxy buffering the whole request or response to disk, which matters especially when uploading videos or installation packages of tens of MB or more.
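Put together, a sketch of an upload-friendly proxy location might look like this (the upstream address and path are assumptions for illustration):

```nginx
location /upload {
    client_max_body_size    50M;
    proxy_request_buffering off;
    proxy_read_timeout      300;
    proxy_pass              http://127.0.0.1:8080;  # illustrative upstream
}
```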

  If your application runs on Node.js, also check the size limits of body-parser or the framework itself. Typical Express settings look like this (note these cover JSON and form bodies; multipart file uploads handled by middleware such as multer are governed by its own `limits.fileSize` option):

app.use(express.json({ limit: '50mb' }));
app.use(express.urlencoded({ limit: '50mb', extended: true }));

  For Django, adjust `DATA_UPLOAD_MAX_MEMORY_SIZE` and `FILE_UPLOAD_MAX_MEMORY_SIZE` in settings.py. Note that `DATA_UPLOAD_MAX_MEMORY_SIZE` caps the non-file portion of the request body, while `FILE_UPLOAD_MAX_MEMORY_SIZE` is the threshold above which Django streams an upload to a temporary file rather than a hard limit. For example:

# 52428800 bytes = 50MB
DATA_UPLOAD_MAX_MEMORY_SIZE = 52428800
FILE_UPLOAD_MAX_MEMORY_SIZE = 52428800

  In some cases the configuration is fine, but the file is simply large enough that the upload takes too long and trips the server's timeout. This is especially common with cross-border servers: accessing an overseas cloud server from within China involves high latency, so large uploads time out easily. In such cases, raise the relevant timeouts:

  Nginx:

proxy_read_timeout 300;
client_body_timeout 300;
send_timeout 300;

  PHP:

max_execution_time = 300
max_input_time = 300

  Node.js:

server.timeout = 300000;

  If uploading over SFTP/FTP, slow transfer speeds may cause disconnections; increase the tool's retry count or use its resume feature.

  Another easily overlooked issue is a limit imposed by the browser or the front-end upload code itself. Some front-end frameworks validate file size before uploading; checks in Vue, React, or WeChat Mini Program code may hard-cap uploads at 2MB, blocking the file before the request is ever sent. To troubleshoot, open the Network tab in the browser's developer tools and confirm whether an upload request actually went out; if nothing was sent, the problem is in the front-end code, not the server.

  Compressing large files and enabling chunked uploads are also effective remedies. For files of several hundred MB, such as videos or large archives, the following approach is recommended:

  Front-end chunking, for example, using JavaScript:

const chunkSize = 5 * 1024 * 1024; // 5MB per chunk
const totalChunks = Math.ceil(file.size / chunkSize);

  On the backend (PHP, for example), the received chunks are read in order and appended to the final file. Because each request carries only one small chunk (here 5MB), every individual request stays within the web server's limits, which makes the approach suitable for very large files.
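The split-and-merge idea is language-agnostic; here is a minimal Python sketch (function names, the chunk directory layout, and the `.part` naming are illustrative, not a specific framework's API):

```python
import os

CHUNK_SIZE = 5 * 1024 * 1024  # 5MB, matching the front-end chunk size

def split_file(path, chunk_dir, chunk_size=CHUNK_SIZE):
    """Simulate the front end: slice a file into numbered chunk files."""
    os.makedirs(chunk_dir, exist_ok=True)
    count = 0
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            # zero-padded index so lexicographic order equals numeric order
            with open(os.path.join(chunk_dir, f"{count:06d}.part"), "wb") as out:
                out.write(data)
            count += 1
    return count

def merge_chunks(chunk_dir, dest_path):
    """Backend merge: append every received chunk to the final file in index order."""
    parts = sorted(p for p in os.listdir(chunk_dir) if p.endswith(".part"))
    with open(dest_path, "wb") as out:
        for name in parts:
            with open(os.path.join(chunk_dir, name), "rb") as part:
                out.write(part.read())
```

A real implementation would also verify that all chunks arrived (and usually checksum them) before merging; this sketch only shows the core read-in-order, append-to-destination step.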

  If files run to tens of MB or more, consider uploading to object storage instead of the cloud server itself, such as AWS S3, Alibaba Cloud OSS, or Tencent Cloud COS. These services support very large uploads and come with resumable transfers and acceleration endpoints built in. The common pattern is for the frontend to obtain a temporary signature and upload directly to object storage, while the backend stores only the file path, dramatically reducing server load.
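To illustrate the "temporary signature" idea, here is a minimal stdlib sketch of a server issuing a short-lived signed token for one upload; the key, field names, and expiry are illustrative, and in practice the cloud SDK (e.g. boto3 for S3) generates these signatures for you:

```python
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"  # illustrative; never ship this to the client

def make_upload_token(object_key, expires_in=300, now=None):
    """Server side: sign the object key plus an expiry timestamp."""
    now = int(time.time()) if now is None else now
    expires = now + expires_in
    msg = f"{object_key}:{expires}".encode()
    sig = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return {"key": object_key, "expires": expires, "signature": sig}

def verify_upload_token(token, now=None):
    """Storage side: reject expired or tampered tokens."""
    now = int(time.time()) if now is None else now
    if now > token["expires"]:
        return False
    msg = f"{token['key']}:{token['expires']}".encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])
```

The frontend attaches the token to its PUT/POST directly to storage, so the large body never transits the application server.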

  When troubleshooting upload failures, locate the problem in this order: check the Nginx or Apache error logs; check the PHP or backend application logs; check the proxy layer (Cloudflare, Nginx proxy, SLB); check front-end code limits; and finally test whether cross-region network conditions cause timeouts. The logs usually state the cause plainly; for example, Nginx's "client intended to send too large body" means `client_max_body_size` is too small.

  By tuning the web server layer, the PHP/Node application layer, the proxy layer, and the surrounding environment together, upload failures beyond 2MB can be eliminated. The relevant limits differ between setups, so check each one and make sure every layer is raised to the target size. If the server is overseas, also account for timeouts caused by limited bandwidth and network jitter; increasing timeout parameters or using accelerated routes improves stability further. With the server configured correctly, the upload logic optimized, and object storage used where appropriate, a stable and efficient large-file upload pipeline can be built.
