Big File Upload Configuration (> 512MB)
The default maximum file size for uploads in ownCloud is 512MB. You can increase this limit up to the maximum file size allowed by your filesystem, operating system, or other software, for example:
< 2GB on a 32-bit OS architecture
< 2GB with IE6 - IE8
< 4GB with IE9 - IE11
64-bit filesystems have much higher limits. Please consult the documentation for your filesystem.
Make sure that the latest version of PHP supported by ownCloud (at least 5.6) is installed.
Disable user quotas, which makes them unlimited.
Your temp directory or partition has to be big enough to hold multiple parallel uploads from multiple users. For example, if the average upload file size is 4GB and the average number of users uploading at the same time is 25, then you’ll need 200GB of temp space, as the formula below shows.
2 x 4 GB x 25 users = 200 GB required temp space
Twice as much space is required because the file chunks will be put together into a new file before it is finally moved into the user’s folder.
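The temp-space estimate above can be sketched as a small calculation (the 4 GB average size and 25 concurrent users are the example figures from the text):

```python
# Required temp space = 2 x average upload size x concurrent uploaders
# (doubled because chunks are reassembled into a new file before the move)

avg_upload_gb = 4        # example average upload size, in GB
concurrent_users = 25    # example number of simultaneous uploaders

required_temp_gb = 2 * avg_upload_gb * concurrent_users
print(required_temp_gb)  # 200 GB
```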
On CentOS and RHEL, Apache has a few additional default configurations set via systemd. You will have to set the temp directory in two places:
In php.ini, e.g.,
sys_temp_dir = "/scratch/tmp"
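The second place is the systemd service environment. Whether this applies depends on your distribution's unit file; as an assumption for illustration, if the stock httpd unit enables systemd's PrivateTmp (which gives Apache a private view of /tmp), it can be disabled with a standard drop-in override:

```ini
; /etc/systemd/system/httpd.service.d/override.conf
; Created with: systemctl edit httpd
[Service]
PrivateTmp=false
```

After saving the override, run systemctl daemon-reload and restart Apache for it to take effect.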
ownCloud comes with its own .htaccess file. Set the following two parameters there, using the php_value directive (these Apache directives only take effect with mod_php):

php_value upload_max_filesize 16G
php_value post_max_size 16G
Adjust these values for your needs. If you see PHP timeouts in your logfiles, increase the timeout values, which are in seconds:
php_value max_input_time 3600
php_value max_execution_time 3600
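Taken together, the relevant portion of the .htaccess file might look like this (the 16G and 3600 values are the examples from above; adjust them to your needs):

```apache
# Upload size limits (mod_php only)
php_value upload_max_filesize 16G
php_value post_max_size 16G
# Timeouts in seconds, for long-running uploads
php_value max_input_time 3600
php_value max_execution_time 3600
```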
The mod_reqtimeout Apache module could also stop large uploads from completing.
If you’re using this module and large file uploads fail, either disable the module in your Apache config or increase the configured RequestReadTimeout limits.
On Ubuntu, you can disable the module by running the following command:

a2dismod reqtimeout

On CentOS, comment out the following line in /etc/httpd/conf/httpd.conf:

LoadModule reqtimeout_module modules/mod_reqtimeout.so

After running a2dismod or updating /etc/httpd/conf/httpd.conf, restart Apache.
There are also several other configuration options in your web server configuration which could prevent the upload of larger files. Please see your web server’s manual for how to configure those values correctly.
If you are using Apache 2.4 with mod_fcgid, as of February/March 2016, setting FcgidMaxRequestInMem significantly higher than normal may no longer be necessary, once bug #51747 is fixed.
If you don’t want to use the ownCloud .htaccess file, you may configure PHP instead. Make sure to comment out any lines in .htaccess pertaining to upload size, if you entered any.
If you are running ownCloud on a 32-bit system, any open_basedir directive in your php.ini file needs to be commented out.
Set the following two parameters inside php.ini, using your own desired file size values:

upload_max_filesize = 16G
post_max_size = 16G
Tell PHP which temp directory you want it to use:
upload_tmp_dir = /var/big_temp_file/
Output buffering must be turned off in php.ini, or PHP will return memory-related errors:

output_buffering = 0
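Putting these php.ini settings together (the sizes and the temp path are the illustrative values used above; adjust them for your environment):

```ini
; Maximum upload and POST sizes
upload_max_filesize = 16G
post_max_size = 16G
; Directory for temporary upload files (example path)
upload_tmp_dir = /var/big_temp_file/
; Output buffering must be off, or PHP returns memory-related errors
output_buffering = 0
```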
As an alternative to PHP’s upload_tmp_dir (e.g., if you don’t have access to your php.ini), you can also configure a temporary location for uploaded files by using the tempdirectory setting in your config.php.
If you have configured the session_lifetime setting in your config.php (see Sample Config PHP Parameters), make sure it is not too low. This setting needs to be at least the time, in seconds, that the longest upload will take. If unsure, remove this setting entirely from your configuration to reset it to the default shown in config.sample.php.
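For illustration, the relevant config.php entries might look like this (the values are hypothetical examples, not recommendations):

```php
<?php
$CONFIG = array (
  // Temporary directory for uploads (alternative to PHP's upload_tmp_dir)
  'tempdirectory' => '/var/big_temp_file/',
  // Must be at least as long (in seconds) as the longest expected upload
  'session_lifetime' => 86400,
);
```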
For very long-running uploads (those lasting longer than one hour) to public folders, when chunking is not in effect, 'filelocking.ttl' should be set to a sufficiently large value. If it is not, large file uploads will fail with a file locking error, because Redis garbage collection deletes the initially acquired file lock after one hour by default.
To estimate a good value, use the following formula:
time in seconds = (maximum upload file size / slowest assumed upload connection).
For the value of "slowest assumed upload connection", take the upload speed of the user with the slowest connection and divide it by two. For example, assume that the user with the slowest connection has an 8 Mbit/s DSL line; that advertised figure usually refers to the download speed. Such a connection typically has a 1 Mbit/s upload speed (but confirm with the ISP). Halve this value, to leave a buffer for network congestion, to arrive at 512 Kbit/s as the final value.
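As a worked example of the formula above, a minimal sketch (the 16 GB maximum upload size is an assumed figure, and decimal units are used for simplicity):

```python
# Estimate a filelocking.ttl value from the formula:
# time in seconds = maximum upload file size / slowest assumed upload connection

max_upload_bytes = 16 * 10**9            # assumed 16 GB maximum upload size
slowest_upload_bits_per_s = 512 * 10**3  # 512 Kbit/s, as derived above

ttl_seconds = max_upload_bytes * 8 / slowest_upload_bits_per_s
print(ttl_seconds)  # 250000.0 seconds, roughly 69.5 hours
```

Round the result up generously when setting 'filelocking.ttl', since it only needs to outlive the longest upload.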