<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
      <title>rclone - rsync for cloud storage</title>
    <link>https://rclone.org/</link>
    <language>en-US</language>
    <author>Nick Craig-Wood</author>
    <rights>Copyright (c) 2017, Nick Craig-Wood; all rights reserved.</rights>
    <updated>Mon, 01 Jan 0001 00:00:00 UTC</updated>
    
    <item>
      <title>Sia</title>
      <link>https://rclone.org/sia/</link>
      <pubDate>Wed, 02 Oct 2019 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/sia/</guid>
      <description>&lt;h1 id=&#34;hahahugoshortcode66s0hbhb-sia&#34;&gt;&lt;i class=&#34;fa fa-globe&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt;
 Sia&lt;/h1&gt;
&lt;p&gt;Sia (&lt;a href=&#34;https://sia.tech/&#34;&gt;sia.tech&lt;/a&gt;) is a decentralized cloud storage platform
based on &lt;a href=&#34;https://wikipedia.org/wiki/Blockchain&#34;&gt;blockchain&lt;/a&gt; technology.
With rclone you can use it like any other remote filesystem or mount Sia folders
locally. The technology behind it involves a number of new concepts such as
Siacoins and Wallets, Blockchain and Consensus, Renting and Hosting.
If you are new to it, first familiarize yourself with the concepts using their
excellent &lt;a href=&#34;https://support.sia.tech/&#34;&gt;support documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;introduction&#34;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Before you can use rclone with Sia, you will need a running copy of
&lt;code&gt;Sia-UI&lt;/code&gt; or &lt;code&gt;siad&lt;/code&gt; (the Sia daemon), either locally on your computer or on your local
network (e.g. a NAS). Please follow the &lt;a href=&#34;https://sia.tech/get-started&#34;&gt;Get started&lt;/a&gt;
guide and install one.&lt;/p&gt;
&lt;p&gt;rclone interacts with the Sia network by talking to the Sia daemon via its &lt;a href=&#34;https://sia.tech/docs/&#34;&gt;HTTP API&lt;/a&gt;,
which is usually available on port &lt;em&gt;9980&lt;/em&gt;. By default you will run the daemon
locally on the same computer, so it&#39;s safe to leave the API password blank
(the API URL will be &lt;code&gt;http://127.0.0.1:9980&lt;/code&gt;, making external access impossible).&lt;/p&gt;
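&lt;p&gt;The API mechanics are simple enough to sketch: siad expects HTTP basic authentication with an empty username and the API password, plus the &lt;code&gt;Sia-Agent&lt;/code&gt; user agent (see the &lt;code&gt;--sia-user-agent&lt;/code&gt; option below). A minimal Python sketch of building those headers; the endpoint and password in the commented usage are illustrative:&lt;/p&gt;

```python
import base64

def sia_headers(api_password):
    """Build the headers siad expects: HTTP basic auth with an empty
    username and the API password, plus the mandatory Sia-Agent UA."""
    token = base64.b64encode(b":" + api_password.encode()).decode()
    return {
        "User-Agent": "Sia-Agent",          # required by siad by default
        "Authorization": "Basic " + token,  # empty username, API password
    }

# Hypothetical usage against a local daemon (needs a running siad):
# import urllib.request
# req = urllib.request.Request("http://127.0.0.1:9980/daemon/version",
#                              headers=sia_headers("your-api-password"))
# print(urllib.request.urlopen(req).read())
```

&lt;p&gt;rclone sends equivalent headers for you; this only illustrates why a blank password is safe on localhost but not on an exposed port.&lt;/p&gt;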
&lt;p&gt;However, if you want to access a Sia daemon running on another node, for example
due to memory constraints or because you want to share a single daemon between
several rclone and Sia-UI instances, you&#39;ll need to make a few more provisions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ensure you have the &lt;em&gt;Sia daemon&lt;/em&gt; installed directly or in
a &lt;a href=&#34;https://github.com/SiaFoundation/siad/pkgs/container/siad&#34;&gt;docker container&lt;/a&gt;,
because Sia-UI does not support this mode natively.&lt;/li&gt;
&lt;li&gt;Run it on an externally accessible port, for example by providing the &lt;code&gt;--api-addr :9980&lt;/code&gt;
and &lt;code&gt;--disable-api-security&lt;/code&gt; arguments on the daemon command line.&lt;/li&gt;
&lt;li&gt;Enforce an API password for the &lt;code&gt;siad&lt;/code&gt; daemon via the environment variable
&lt;code&gt;SIA_API_PASSWORD&lt;/code&gt; or a text file named &lt;code&gt;apipassword&lt;/code&gt; in the daemon directory.&lt;/li&gt;
&lt;li&gt;Set the rclone backend option &lt;code&gt;api_password&lt;/code&gt;, taking its value from one of the locations above.&lt;/li&gt;
&lt;/ul&gt;
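&lt;p&gt;Taken together, the provisions above might look like the following sketch. The hostname, passwords and remote name are illustrative placeholders, not values from this guide:&lt;/p&gt;

```shell
# On the remote node (e.g. a NAS): run siad with an externally
# reachable API, an enforced API password, and automatic wallet unlock.
export SIA_API_PASSWORD='example-api-password'
export SIA_WALLET_PASSWORD='example-wallet-password'
siad --api-addr :9980 --disable-api-security

# On the rclone machine: create a remote pointing at that node
# (rclone stores the password obscured in rclone.conf).
rclone config create mySia sia \
    api_url http://nas.example.local:9980 \
    api_password example-api-password
```

&lt;p&gt;This is a configuration sketch, not a hardened deployment; only expose the daemon on networks you trust.&lt;/p&gt;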
&lt;p&gt;Notes:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;If your wallet is locked, rclone cannot unlock it automatically.
You should either unlock it in advance using Sia-UI or with the command line tool:
&lt;code&gt;siac wallet unlock&lt;/code&gt;.
Alternatively, you can make &lt;code&gt;siad&lt;/code&gt; unlock your wallet automatically upon
startup by running it with the environment variable &lt;code&gt;SIA_WALLET_PASSWORD&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If &lt;code&gt;siad&lt;/code&gt; cannot find the &lt;code&gt;SIA_API_PASSWORD&lt;/code&gt; variable or the &lt;code&gt;apipassword&lt;/code&gt; file
in the &lt;code&gt;SIA_DIR&lt;/code&gt; directory, it will generate a random password and store it in a
text file named &lt;code&gt;apipassword&lt;/code&gt; under the &lt;code&gt;YOUR_HOME/.sia/&lt;/code&gt; directory on Unix
or in &lt;code&gt;C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword&lt;/code&gt; on Windows.
Remember this when you configure the password in rclone.&lt;/li&gt;
&lt;li&gt;The only way to use &lt;code&gt;siad&lt;/code&gt; without an API password is to run it &lt;strong&gt;on localhost&lt;/strong&gt;
with the command line argument &lt;code&gt;--authorize-api=false&lt;/code&gt;, but this is insecure and
&lt;strong&gt;strongly discouraged&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;Here is an example of how to make a &lt;code&gt;sia&lt;/code&gt; remote called &lt;code&gt;mySia&lt;/code&gt;.
First, run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; mySia
Type of storage to configure.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
...
29 / Sia Decentralized Cloud
   \ &amp;#34;sia&amp;#34;
...
Storage&amp;gt; sia
Sia daemon API URL, like http://sia.daemon.host:9980.
Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
Keep default if Sia daemon runs on localhost.
Enter a string value. Press Enter for the default (&amp;#34;http://127.0.0.1:9980&amp;#34;).
api_url&amp;gt; http://127.0.0.1:9980
Sia Daemon API Password.
Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n&amp;gt; y
Enter the password:
password:
Confirm the password:
password:
Edit advanced config?
y) Yes
n) No (default)
y/n&amp;gt; n
--------------------
[mySia]
type = sia
api_url = http://127.0.0.1:9980
api_password = *** ENCRYPTED ***
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once configured, you can then use &lt;code&gt;rclone&lt;/code&gt; like this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;List directories in top level of your Sia storage&lt;/li&gt;
&lt;/ul&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone lsd mySia:
&lt;/code&gt;&lt;/pre&gt;&lt;ul&gt;
&lt;li&gt;List all the files in your Sia storage&lt;/li&gt;
&lt;/ul&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone ls mySia:
&lt;/code&gt;&lt;/pre&gt;&lt;ul&gt;
&lt;li&gt;Upload a local directory to the Sia directory called &lt;em&gt;backup&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone copy /home/source mySia:backup
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;standard-options&#34;&gt;Standard options&lt;/h3&gt;
&lt;p&gt;Here are the Standard options specific to sia (Sia Decentralized Cloud).&lt;/p&gt;
&lt;h4 id=&#34;sia-api-url&#34;&gt;--sia-api-url&lt;/h4&gt;
&lt;p&gt;Sia daemon API URL, like &lt;a href=&#34;http://sia.daemon.host:9980&#34;&gt;http://sia.daemon.host:9980&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Note that siad must run with --disable-api-security to open the API port for other hosts (not recommended).
Keep the default if the Sia daemon runs on localhost.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      api_url&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SIA_API_URL&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Default:     &amp;quot;http://127.0.0.1:9980&amp;quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;sia-api-password&#34;&gt;--sia-api-password&lt;/h4&gt;
&lt;p&gt;Sia Daemon API Password.&lt;/p&gt;
&lt;p&gt;Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NB&lt;/strong&gt; Input to this must be obscured - see &lt;a href=&#34;https://rclone.org/commands/rclone_obscure/&#34;&gt;rclone obscure&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      api_password&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SIA_API_PASSWORD&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;advanced-options&#34;&gt;Advanced options&lt;/h3&gt;
&lt;p&gt;Here are the Advanced options specific to sia (Sia Decentralized Cloud).&lt;/p&gt;
&lt;h4 id=&#34;sia-user-agent&#34;&gt;--sia-user-agent&lt;/h4&gt;
&lt;p&gt;Siad User Agent.&lt;/p&gt;
&lt;p&gt;The Sia daemon requires the &#39;Sia-Agent&#39; user agent by default for security.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      user_agent&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SIA_USER_AGENT&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Default:     &amp;quot;Sia-Agent&amp;quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;sia-encoding&#34;&gt;--sia-encoding&lt;/h4&gt;
&lt;p&gt;The encoding for the backend.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/overview/#encoding&#34;&gt;encoding section in the overview&lt;/a&gt; for more info.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      encoding&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SIA_ENCODING&lt;/li&gt;
&lt;li&gt;Type:        Encoding&lt;/li&gt;
&lt;li&gt;Default:     Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;sia-description&#34;&gt;--sia-description&lt;/h4&gt;
&lt;p&gt;Description of the remote.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      description&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SIA_DESCRIPTION&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;limitations&#34;&gt;Limitations&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Modification times not supported&lt;/li&gt;
&lt;li&gt;Checksums not supported&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rclone about&lt;/code&gt; not supported&lt;/li&gt;
&lt;li&gt;rclone can work only with &lt;em&gt;Siad&lt;/em&gt; or &lt;em&gt;Sia-UI&lt;/em&gt; at the moment;
the &lt;strong&gt;Skynet daemon is not supported yet.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Sia does not allow control characters or symbols like question and pound
signs in file names. rclone will transparently &lt;a href=&#34;https://rclone.org/overview/#encoding&#34;&gt;encode&lt;/a&gt;
them for you, but you should be aware of it.&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>1Fichier</title>
      <link>https://rclone.org/fichier/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/fichier/</guid>
      <description>&lt;h1 id=&#34;hahahugoshortcode24s0hbhb-1fichier&#34;&gt;&lt;i class=&#34;fa fa-archive&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt;
 1Fichier&lt;/h1&gt;
&lt;p&gt;This is a backend for the &lt;a href=&#34;https://1fichier.com&#34;&gt;1fichier&lt;/a&gt; cloud
storage service. Note that a Premium subscription is required to use
the API.&lt;/p&gt;
&lt;p&gt;Paths are specified as &lt;code&gt;remote:path&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Paths may be as deep as required, e.g. &lt;code&gt;remote:directory/subdirectory&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;The initial setup for 1Fichier involves getting the API key from the website,
which you need to do in your browser.&lt;/p&gt;
&lt;p&gt;Here is an example of how to make a remote called &lt;code&gt;remote&lt;/code&gt;.  First run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; remote
Type of storage to configure.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
[snip]
XX / 1Fichier
   \ &amp;#34;fichier&amp;#34;
[snip]
Storage&amp;gt; fichier
** See help for fichier backend at: https://rclone.org/fichier/ **

Your API Key, get it from https://1fichier.com/console/params.pl
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
api_key&amp;gt; example_key

Edit advanced config? (y/n)
y) Yes
n) No
y/n&amp;gt; 
Remote config
Configuration complete.
Options:
- type: fichier
- api_key: example_key
Keep this &amp;#34;remote&amp;#34; remote?
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once configured, you can then use &lt;code&gt;rclone&lt;/code&gt; like this:&lt;/p&gt;
&lt;p&gt;List directories in top level of your 1Fichier account&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone lsd remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;List all the files in your 1Fichier account&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone ls remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To copy a local directory to a 1Fichier directory called backup&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone copy /home/source remote:backup
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;modification-times-and-hashes&#34;&gt;Modification times and hashes&lt;/h3&gt;
&lt;p&gt;1Fichier does not support modification times. It supports the Whirlpool hash algorithm.&lt;/p&gt;
&lt;h3 id=&#34;duplicated-files&#34;&gt;Duplicated files&lt;/h3&gt;
&lt;p&gt;1Fichier can have two files with exactly the same name and path (unlike a
normal file system).&lt;/p&gt;
&lt;p&gt;Duplicated files cause problems with syncing, and you will see
messages in the log about duplicates.&lt;/p&gt;
&lt;h3 id=&#34;restricted-filename-characters&#34;&gt;Restricted filename characters&lt;/h3&gt;
&lt;p&gt;In addition to the &lt;a href=&#34;https://rclone.org/overview/#restricted-characters&#34;&gt;default restricted characters set&lt;/a&gt;
the following characters are also replaced:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Character&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Value&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Replacement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;\&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x5C&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;＼&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;lt;&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x3C&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;＜&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;gt;&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x3E&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;＞&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;quot;&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x22&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;＂&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;$&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x24&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;＄&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;`&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x60&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;｀&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&#39;&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x27&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;＇&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;File names also cannot start or end with the following characters.
These are only replaced if they are the first or last character in the
name:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Character&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Value&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Replacement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SP&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x20&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;␠&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Invalid UTF-8 bytes will also be &lt;a href=&#34;https://rclone.org/overview/#invalid-utf8&#34;&gt;replaced&lt;/a&gt;,
as they can&#39;t be used in JSON strings.&lt;/p&gt;
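&lt;p&gt;The replacements in the first table are the Unicode fullwidth forms of the original ASCII characters, which sit at a fixed offset of 0xFEE0 (e.g. backslash at 0x5C maps to ＼ at 0xFF3C). A small Python sketch of that rule, for illustration only; it is not rclone&#39;s actual implementation:&lt;/p&gt;

```python
# Code points of the restricted characters from the first table above
# (backslash, angle brackets, double quote, dollar, backtick, single quote).
RESTRICTED = {0x5C, 0x3C, 0x3E, 0x22, 0x24, 0x60, 0x27}

def replace_restricted(name):
    """Swap each restricted character for its fullwidth form
    (original code point + 0xFEE0)."""
    return "".join(
        chr(ord(c) + 0xFEE0) if ord(c) in RESTRICTED else c
        for c in name
    )

print(replace_restricted('my"file.txt'))  # my＂file.txt
```

&lt;p&gt;The mapping is reversible, which is how rclone can show you the original names again on download.&lt;/p&gt;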

&lt;h3 id=&#34;standard-options&#34;&gt;Standard options&lt;/h3&gt;
&lt;p&gt;Here are the Standard options specific to fichier (1Fichier).&lt;/p&gt;
&lt;h4 id=&#34;fichier-api-key&#34;&gt;--fichier-api-key&lt;/h4&gt;
&lt;p&gt;Your API Key, get it from &lt;a href=&#34;https://1fichier.com/console/params.pl&#34;&gt;https://1fichier.com/console/params.pl&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      api_key&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_FICHIER_API_KEY&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;advanced-options&#34;&gt;Advanced options&lt;/h3&gt;
&lt;p&gt;Here are the Advanced options specific to fichier (1Fichier).&lt;/p&gt;
&lt;h4 id=&#34;fichier-shared-folder&#34;&gt;--fichier-shared-folder&lt;/h4&gt;
&lt;p&gt;If you want to download a shared folder, add this parameter.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      shared_folder&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_FICHIER_SHARED_FOLDER&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;fichier-file-password&#34;&gt;--fichier-file-password&lt;/h4&gt;
&lt;p&gt;If you want to download a shared file that is password protected, add this parameter.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NB&lt;/strong&gt; Input to this must be obscured - see &lt;a href=&#34;https://rclone.org/commands/rclone_obscure/&#34;&gt;rclone obscure&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      file_password&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_FICHIER_FILE_PASSWORD&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;fichier-folder-password&#34;&gt;--fichier-folder-password&lt;/h4&gt;
&lt;p&gt;If you want to list the files in a shared folder that is password protected, add this parameter.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NB&lt;/strong&gt; Input to this must be obscured - see &lt;a href=&#34;https://rclone.org/commands/rclone_obscure/&#34;&gt;rclone obscure&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      folder_password&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_FICHIER_FOLDER_PASSWORD&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;fichier-cdn&#34;&gt;--fichier-cdn&lt;/h4&gt;
&lt;p&gt;Set if you wish to use CDN download links.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      cdn&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_FICHIER_CDN&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;fichier-encoding&#34;&gt;--fichier-encoding&lt;/h4&gt;
&lt;p&gt;The encoding for the backend.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/overview/#encoding&#34;&gt;encoding section in the overview&lt;/a&gt; for more info.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      encoding&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_FICHIER_ENCODING&lt;/li&gt;
&lt;li&gt;Type:        Encoding&lt;/li&gt;
&lt;li&gt;Default:     Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;fichier-description&#34;&gt;--fichier-description&lt;/h4&gt;
&lt;p&gt;Description of the remote.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      description&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_FICHIER_DESCRIPTION&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;limitations&#34;&gt;Limitations&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;rclone about&lt;/code&gt; is not supported by the 1Fichier backend. Backends without
this capability cannot determine free space for an rclone mount or
use policy &lt;code&gt;mfs&lt;/code&gt; (most free space) as a member of an rclone union
remote.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;https://rclone.org/overview/#optional-features&#34;&gt;List of backends that do not support rclone about&lt;/a&gt; and &lt;a href=&#34;https://rclone.org/commands/rclone_about/&#34;&gt;rclone about&lt;/a&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Akamai Netstorage</title>
      <link>https://rclone.org/netstorage/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/netstorage/</guid>
      <description>&lt;h1 id=&#34;hahahugoshortcode49s0hbhb-akamai-netstorage&#34;&gt;&lt;i class=&#34;fas fa-database&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt;
 Akamai NetStorage&lt;/h1&gt;
&lt;p&gt;Paths are specified as &lt;code&gt;remote:&lt;/code&gt;.
You may put subdirectories in too, e.g. &lt;code&gt;remote:/path/to/dir&lt;/code&gt;.
If you have a CP code you can use that as the folder after the domain, such as &amp;lt;domain&amp;gt;/&amp;lt;cpcode&amp;gt;/&amp;lt;internal directories within cpcode&amp;gt;.&lt;/p&gt;
&lt;p&gt;For example, this is commonly configured with or without a CP code:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;With a CP code&lt;/strong&gt;. &lt;code&gt;[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Without a CP code&lt;/strong&gt;. &lt;code&gt;[your-domain-prefix]-nsu.akamaihd.net&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To see all buckets:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone lsd remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The initial setup for NetStorage involves getting an account and secret. Use &lt;code&gt;rclone config&lt;/code&gt; to walk you through the setup process.&lt;/p&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;Here&#39;s an example of how to make a remote called &lt;code&gt;ns1&lt;/code&gt;.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;To begin the interactive configuration process, enter this command:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone config
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;Type &lt;code&gt;n&lt;/code&gt; to create a new remote.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;n) New remote
d) Delete remote
q) Quit config
e/n/d/q&amp;gt; n
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;For this example, enter &lt;code&gt;ns1&lt;/code&gt; when you reach the &lt;code&gt;name&amp;gt;&lt;/code&gt; prompt.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;name&amp;gt; ns1
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;4&#34;&gt;
&lt;li&gt;Enter &lt;code&gt;netstorage&lt;/code&gt; as the type of storage to configure.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Type of storage to configure.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
XX / NetStorage
   \ &amp;#34;netstorage&amp;#34;
Storage&amp;gt; netstorage
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;5&#34;&gt;
&lt;li&gt;Select between the HTTP or HTTPS protocol. Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
 1 / HTTP protocol
   \ &amp;#34;http&amp;#34;
 2 / HTTPS protocol
   \ &amp;#34;https&amp;#34;
protocol&amp;gt; 1
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;6&#34;&gt;
&lt;li&gt;Specify your NetStorage host, CP code, and any necessary content paths using this format: &lt;code&gt;&amp;lt;domain&amp;gt;/&amp;lt;cpcode&amp;gt;/&amp;lt;content&amp;gt;/&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
host&amp;gt; baseball-nsu.akamaihd.net/123456/content/
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;7&#34;&gt;
&lt;li&gt;Set the netstorage account name&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
account&amp;gt; username
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;8&#34;&gt;
&lt;li&gt;Set the Netstorage account secret/G2O key which will be used for authentication purposes. Select the &lt;code&gt;y&lt;/code&gt; option to set your own password then enter your secret.
Note: The secret is stored in the &lt;code&gt;rclone.conf&lt;/code&gt; file with hex-encoded encryption.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;y) Yes type in my own password
g) Generate random password
y/g&amp;gt; y
Enter the password:
password:
Confirm the password:
password:
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;9&#34;&gt;
&lt;li&gt;View the summary and confirm your remote configuration.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[ns1]
type = netstorage
protocol = http
host = baseball-nsu.akamaihd.net/123456/content/
account = username
secret = *** ENCRYPTED ***
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This remote is called &lt;code&gt;ns1&lt;/code&gt; and can now be used.&lt;/p&gt;
&lt;h2 id=&#34;example-operations&#34;&gt;Example operations&lt;/h2&gt;
&lt;p&gt;Get started with rclone and NetStorage with these examples. For additional rclone commands, visit &lt;a href=&#34;https://rclone.org/commands/&#34;&gt;https://rclone.org/commands/&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;see-contents-of-a-directory-in-your-project&#34;&gt;See contents of a directory in your project&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;rclone lsd ns1:/974012/testing/
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;sync-the-contents-local-with-remote&#34;&gt;Sync the contents local with remote&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;rclone sync . ns1:/974012/testing/
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;upload-local-content-to-remote&#34;&gt;Upload local content to remote&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;rclone copy notes.txt ns1:/974012/testing/
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;delete-content-on-remote&#34;&gt;Delete content on remote&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;rclone delete ns1:/974012/testing/notes.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;move-or-copy-content-between-cp-codes&#34;&gt;Move or copy content between CP codes.&lt;/h3&gt;
&lt;p&gt;Your credentials must have access to two CP codes on the same remote. You can&#39;t perform operations between different remotes.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&#34;features&#34;&gt;Features&lt;/h2&gt;
&lt;h3 id=&#34;symlink-support&#34;&gt;Symlink Support&lt;/h3&gt;
&lt;p&gt;The NetStorage backend changes the rclone &lt;code&gt;--links, -l&lt;/code&gt; behavior. When uploading, instead of creating the .rclonelink file, rclone uses the &amp;quot;symlink&amp;quot; API to create the corresponding symlink on the remote. The .rclonelink file will not be created; the upload is intercepted and only a symlink that matches the source file name, with no suffix, is created on the remote.&lt;/p&gt;
&lt;p&gt;This effectively allows commands like copy/copyto, move/moveto and sync to upload from local to remote and download from remote to local directories with symlinks. Due to internal rclone limitations, it is not possible to upload an individual symlink file to any remote backend. You can always use the &amp;quot;backend symlink&amp;quot; command to create a symlink on the NetStorage server; refer to the &amp;quot;symlink&amp;quot; section below.&lt;/p&gt;
&lt;p&gt;Individual symlink files on the remote can be used with commands like &amp;quot;cat&amp;quot; to print the destination name, &amp;quot;delete&amp;quot; to delete the symlink, or copy/copyto and move/moveto to download from the remote to local. Note: individual symlink files on the remote should be specified including the .rclonelink suffix.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: No file with the .rclonelink suffix should ever exist on the server, since it is not possible to upload or create a file with the .rclonelink suffix using rclone; it can only exist if it was manually created on the remote through a non-rclone method.&lt;/p&gt;
&lt;h3 id=&#34;implicit-vs-explicit-directories&#34;&gt;Implicit vs. Explicit Directories&lt;/h3&gt;
&lt;p&gt;With NetStorage, directories can exist in one of two forms:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Explicit Directory&lt;/strong&gt;. This is an actual, physical directory that you have created in a storage group.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Implicit Directory&lt;/strong&gt;. This refers to a directory within a path that has not been physically created. For example, during upload of a file, nonexistent subdirectories can be specified in the target path. NetStorage creates these as &amp;quot;implicit.&amp;quot; While the directories aren&#39;t physically created, they exist implicitly and the noted path is connected with the uploaded file.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the uploading path. This will help with the interoperability with the other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly.&lt;/p&gt;
&lt;h3 id=&#34;fast-list-listr-support&#34;&gt;&lt;code&gt;--fast-list&lt;/code&gt; / ListR support&lt;/h3&gt;
&lt;p&gt;The NetStorage remote supports the ListR feature by using the &amp;quot;list&amp;quot; NetStorage API action to return a lexicographical list of all objects within the specified CP code, recursing into subdirectories as they&#39;re encountered.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Rclone will use the ListR method for some commands by default&lt;/strong&gt;. Commands such as &lt;code&gt;lsf -R&lt;/code&gt; will use ListR by default. To disable this, include the &lt;code&gt;--disable listR&lt;/code&gt; option to use the non-recursive method of listing objects.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Rclone will not use the ListR method for some commands&lt;/strong&gt;. Commands such as &lt;code&gt;sync&lt;/code&gt; don&#39;t use ListR by default. To force using the ListR method, include the &lt;code&gt;--fast-list&lt;/code&gt; option.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;There are pros and cons to using the ListR method; refer to the &lt;a href=&#34;https://rclone.org/docs/#fast-list&#34;&gt;rclone documentation&lt;/a&gt;. In general, the sync command over an existing deep tree on the remote will run faster with the &amp;quot;--fast-list&amp;quot; flag, at the cost of extra memory usage. It might also result in higher CPU utilization, but the whole task can be completed faster.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: There is a known limitation that &amp;quot;lsf -R&amp;quot; will display the number of files in a directory and the directory size as -1 when the ListR method is used. The workaround is to pass the &amp;quot;--disable listR&amp;quot; flag if these numbers are important in the output.&lt;/p&gt;
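&lt;p&gt;On the command line, the listing behaviors described above look like this; the remote name and CP code path are the illustrative ones used earlier:&lt;/p&gt;

```shell
# Recursive listing; lsf -R uses the ListR method by default:
rclone lsf -R ns1:/974012/testing/

# The same listing with ListR disabled (non-recursive method):
rclone lsf -R --disable listR ns1:/974012/testing/

# sync does not use ListR unless forced with --fast-list:
rclone sync --fast-list . ns1:/974012/testing/
```

&lt;p&gt;These are sketches of flag usage and assume a configured &lt;code&gt;ns1&lt;/code&gt; remote.&lt;/p&gt;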
&lt;h3 id=&#34;purge&#34;&gt;Purge&lt;/h3&gt;
&lt;p&gt;NetStorage remote supports the purge feature by using the &amp;quot;quick-delete&amp;quot; NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use quick-delete action for the purge command and if this functionality is disabled then will fall back to a standard delete method.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Read the &lt;a href=&#34;https://learn.akamai.com/en-us/webhelp/netstorage/netstorage-http-api-developer-guide/GUID-15836617-9F50-405A-833C-EA2556756A30.html&#34;&gt;NetStorage Usage API&lt;/a&gt; for considerations when using &amp;quot;quick-delete&amp;quot;. In general, the quick-delete method will not delete the tree immediately, and objects targeted for quick-delete may still be accessible.&lt;/p&gt;
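&lt;p&gt;For example (assuming a remote named &lt;code&gt;ns:&lt;/code&gt; and a hypothetical path), purging a directory tree looks like any other rclone purge; whether quick-delete is actually used depends on the account settings described above:&lt;/p&gt;

```shell
# Removes the directory and all of its contents; falls back to a
# standard delete if quick-delete is disabled for the account
rclone purge ns:cpcode/old-data
```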

&lt;h3 id=&#34;standard-options&#34;&gt;Standard options&lt;/h3&gt;
&lt;p&gt;Here are the Standard options specific to netstorage (Akamai NetStorage).&lt;/p&gt;
&lt;h4 id=&#34;netstorage-host&#34;&gt;--netstorage-host&lt;/h4&gt;
&lt;p&gt;Domain+path of NetStorage host to connect to.&lt;/p&gt;
&lt;p&gt;Format should be &lt;code&gt;&amp;lt;domain&amp;gt;/&amp;lt;internal folders&amp;gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      host&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_NETSTORAGE_HOST&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    true&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;netstorage-account&#34;&gt;--netstorage-account&lt;/h4&gt;
&lt;p&gt;Set the NetStorage account name.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      account&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_NETSTORAGE_ACCOUNT&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    true&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;netstorage-secret&#34;&gt;--netstorage-secret&lt;/h4&gt;
&lt;p&gt;Set the NetStorage account secret/G2O key for authentication.&lt;/p&gt;
&lt;p&gt;Please choose the &#39;y&#39; option to set your own password then enter your secret.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NB&lt;/strong&gt; Input to this must be obscured - see &lt;a href=&#34;https://rclone.org/commands/rclone_obscure/&#34;&gt;rclone obscure&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      secret&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_NETSTORAGE_SECRET&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    true&lt;/li&gt;
&lt;/ul&gt;
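&lt;p&gt;The secret must be stored in obscured form. One way to do this (a sketch; &lt;code&gt;ns&lt;/code&gt; is a hypothetical remote name and the key is a placeholder) is to obscure it and pass it via the environment variable listed above:&lt;/p&gt;

```shell
# Obscure the G2O key, then export it for rclone to pick up
export RCLONE_NETSTORAGE_SECRET="$(rclone obscure 'my-g2o-key')"
rclone lsd ns:
```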
&lt;h3 id=&#34;advanced-options&#34;&gt;Advanced options&lt;/h3&gt;
&lt;p&gt;Here are the Advanced options specific to netstorage (Akamai NetStorage).&lt;/p&gt;
&lt;h4 id=&#34;netstorage-protocol&#34;&gt;--netstorage-protocol&lt;/h4&gt;
&lt;p&gt;Select between HTTP or HTTPS protocol.&lt;/p&gt;
&lt;p&gt;Most users should choose HTTPS, which is the default.
HTTP is provided primarily for debugging purposes.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      protocol&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_NETSTORAGE_PROTOCOL&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Default:     &amp;quot;https&amp;quot;&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;http&amp;quot;
&lt;ul&gt;
&lt;li&gt;HTTP protocol&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;https&amp;quot;
&lt;ul&gt;
&lt;li&gt;HTTPS protocol&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;netstorage-description&#34;&gt;--netstorage-description&lt;/h4&gt;
&lt;p&gt;Description of the remote.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      description&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_NETSTORAGE_DESCRIPTION&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;backend-commands&#34;&gt;Backend commands&lt;/h2&gt;
&lt;p&gt;Here are the commands specific to the netstorage backend.&lt;/p&gt;
&lt;p&gt;Run them with&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend COMMAND remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The help below will explain what arguments each command takes.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/commands/rclone_backend/&#34;&gt;backend&lt;/a&gt; command for more
info on how to pass options and arguments.&lt;/p&gt;
&lt;p&gt;These can be run on a running backend using the rc command
&lt;a href=&#34;https://rclone.org/rc/#backend-command&#34;&gt;backend/command&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;du&#34;&gt;du&lt;/h3&gt;
&lt;p&gt;Return disk usage information for a specified directory&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend du remote: [options] [&amp;lt;arguments&amp;gt;+]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The usage information returned includes the targeted directory as well as all
files stored in any sub-directories that may exist.&lt;/p&gt;
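&lt;p&gt;For example, to get usage for a hypothetical directory on a remote named &lt;code&gt;ns:&lt;/code&gt;:&lt;/p&gt;

```shell
# Reports size of the directory plus everything beneath it
rclone backend du ns:cpcode/uploads
```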
&lt;h3 id=&#34;symlink&#34;&gt;symlink&lt;/h3&gt;
&lt;p&gt;You can create a symbolic link in ObjectStore with the symlink action.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend symlink remote: [options] [&amp;lt;arguments&amp;gt;+]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Pass the desired path location (including applicable sub-directories) ending in
the object that will be the target of the symlink (for example, /links/mylink).
Include the file extension for the object, if applicable.
&lt;code&gt;rclone backend symlink &amp;lt;src&amp;gt; &amp;lt;path&amp;gt;&lt;/code&gt;&lt;/p&gt;
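&lt;p&gt;A sketch of invoking the command with hypothetical paths, following the &lt;code&gt;&amp;lt;src&amp;gt; &amp;lt;path&amp;gt;&lt;/code&gt; argument order shown above (&lt;code&gt;ns:&lt;/code&gt; is an assumed remote name):&lt;/p&gt;

```shell
# Create a symlink at cpcode/links/mylink pointing at an existing object
rclone backend symlink ns:cpcode/data/file.txt cpcode/links/mylink
```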

</description>
    </item>
    
    <item>
      <title>Alias</title>
      <link>https://rclone.org/alias/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/alias/</guid>
      <description>&lt;h1 id=&#34;hahahugoshortcode6s0hbhb-alias&#34;&gt;&lt;i class=&#34;fa fa-link&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt;
 Alias&lt;/h1&gt;
&lt;p&gt;The &lt;code&gt;alias&lt;/code&gt; remote provides a new name for another remote.&lt;/p&gt;
&lt;p&gt;Paths may be as deep as required or a local path,
e.g. &lt;code&gt;remote:directory/subdirectory&lt;/code&gt; or &lt;code&gt;/directory/subdirectory&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;During the initial setup with &lt;code&gt;rclone config&lt;/code&gt; you will specify the target
remote. The target remote can either be a local path or another remote.&lt;/p&gt;
&lt;p&gt;Subfolders can be used in target remote. Assume an alias remote named &lt;code&gt;backup&lt;/code&gt;
with the target &lt;code&gt;mydrive:private/backup&lt;/code&gt;. Invoking &lt;code&gt;rclone mkdir backup:desktop&lt;/code&gt;
is exactly the same as invoking &lt;code&gt;rclone mkdir mydrive:private/backup/desktop&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;There will be no special handling of paths containing &lt;code&gt;..&lt;/code&gt; segments.
Invoking &lt;code&gt;rclone mkdir backup:../desktop&lt;/code&gt; is exactly the same as invoking
&lt;code&gt;rclone mkdir mydrive:private/backup/../desktop&lt;/code&gt;.
The empty path is not allowed as a remote. To alias the current directory
use &lt;code&gt;.&lt;/code&gt; instead.&lt;/p&gt;
&lt;p&gt;The target remote can also be a &lt;a href=&#34;https://rclone.org/docs/#connection-strings&#34;&gt;connection string&lt;/a&gt;.
This can be used to modify the config of a remote for different uses, e.g.
the alias  &lt;code&gt;myDriveTrash&lt;/code&gt; with the target remote &lt;code&gt;myDrive,trashed_only:&lt;/code&gt;
can be used to only show the trashed files in &lt;code&gt;myDrive&lt;/code&gt;.&lt;/p&gt;
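&lt;p&gt;Equivalently, such an alias can be written directly into the rclone config file (a sketch; &lt;code&gt;myDrive&lt;/code&gt; is assumed to be an existing drive remote):&lt;/p&gt;

```ini
[myDriveTrash]
type = alias
remote = myDrive,trashed_only:
```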
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;Here is an example of how to make an alias called &lt;code&gt;remote&lt;/code&gt; for a local folder.
First run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Alias for an existing remote
   \ &amp;#34;alias&amp;#34;
[snip]
Storage&amp;gt; alias
Remote or path to alias.
Can be &amp;#34;myremote:path/to/dir&amp;#34;, &amp;#34;myremote:bucket&amp;#34;, &amp;#34;myremote:&amp;#34; or &amp;#34;/local/path&amp;#34;.
remote&amp;gt; /mnt/storage/backup
Remote config
Configuration complete.
Options:
- type: alias
- remote: /mnt/storage/backup
Keep this &amp;#34;remote&amp;#34; remote?
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
Current remotes:

Name                 Type
====                 ====
remote               alias

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q&amp;gt; q
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once configured you can then use &lt;code&gt;rclone&lt;/code&gt; like this:&lt;/p&gt;
&lt;p&gt;List directories in the top level of &lt;code&gt;/mnt/storage/backup&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone lsd remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;List all the files in &lt;code&gt;/mnt/storage/backup&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone ls remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Copy another local directory to the directory called &lt;code&gt;source&lt;/code&gt; in the alias remote&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone copy /home/source remote:source
&lt;/code&gt;&lt;/pre&gt;

&lt;h3 id=&#34;standard-options&#34;&gt;Standard options&lt;/h3&gt;
&lt;p&gt;Here are the Standard options specific to alias (Alias for an existing remote).&lt;/p&gt;
&lt;h4 id=&#34;alias-remote&#34;&gt;--alias-remote&lt;/h4&gt;
&lt;p&gt;Remote or path to alias.&lt;/p&gt;
&lt;p&gt;Can be &amp;quot;myremote:path/to/dir&amp;quot;, &amp;quot;myremote:bucket&amp;quot;, &amp;quot;myremote:&amp;quot; or &amp;quot;/local/path&amp;quot;.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      remote&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_ALIAS_REMOTE&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    true&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;advanced-options&#34;&gt;Advanced options&lt;/h3&gt;
&lt;p&gt;Here are the Advanced options specific to alias (Alias for an existing remote).&lt;/p&gt;
&lt;h4 id=&#34;alias-description&#34;&gt;--alias-description&lt;/h4&gt;
&lt;p&gt;Description of the remote.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      description&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_ALIAS_DESCRIPTION&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    
    <item>
      <title>Amazon Drive</title>
      <link>https://rclone.org/amazonclouddrive/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/amazonclouddrive/</guid>
      <description>&lt;h1 id=&#34;hahahugoshortcode2s0hbhb-amazon-drive&#34;&gt;&lt;i class=&#34;fab fa-amazon&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt;
 Amazon Drive&lt;/h1&gt;
&lt;p&gt;Amazon Drive, formerly known as Amazon Cloud Drive, was a cloud storage
service run by Amazon for consumers.&lt;/p&gt;
&lt;p&gt;From 2023-12-31, &lt;a href=&#34;https://www.amazon.com/b?ie=UTF8&amp;amp;node=23943055011&#34;&gt;Amazon Drive has been discontinued&lt;/a&gt;
by Amazon so the Amazon Drive backend has been removed.&lt;/p&gt;
&lt;p&gt;You can still use Amazon Photos to access your photos and videos.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Amazon S3</title>
      <link>https://rclone.org/s3/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/s3/</guid>
      <description>&lt;h1 id=&#34;hahahugoshortcode65s0hbhb-amazon-s3-storage-providers&#34;&gt;&lt;i class=&#34;fab fa-amazon&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt;
 Amazon S3 Storage Providers&lt;/h1&gt;
&lt;p&gt;The S3 backend can be used with a number of different providers:&lt;/p&gt;
&lt;ul class=&#34;list-group&#34;&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    AWS S3
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://aws.amazon.com/s3/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#configuration&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Alibaba Cloud (Aliyun) Object Storage System (OSS)
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.alibabacloud.com/product/oss/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#alibaba-oss&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Ceph
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;http://ceph.com/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#ceph&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    China Mobile Ecloud Elastic Object Storage (EOS)
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://ecloud.10086.cn/home/product-introduction/eos/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#china-mobile-ecloud-eos&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Cloudflare R2
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://blog.cloudflare.com/r2-open-beta/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#cloudflare-r2&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Arvan Cloud Object Storage (AOS)
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.arvancloud.com/en/products/cloud-storage&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#arvan-cloud&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    DigitalOcean Spaces
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.digitalocean.com/products/object-storage/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#digitalocean-spaces&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Dreamhost
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.dreamhost.com/cloud/storage/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#dreamhost&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    GCS
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://cloud.google.com/storage/docs&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#google-cloud-storage&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Huawei OBS
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.huaweicloud.com/intl/en-us/product/obs.html&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#huawei-obs&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    IBM COS S3
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;http://www.ibm.com/cloud/object-storage&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#ibm-cos-s3&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    IDrive e2
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.idrive.com/e2/?refer=rclone&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#idrive-e2&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    IONOS Cloud
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://cloud.ionos.com/storage/object-storage&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#ionos&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Leviia Object Storage
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.leviia.com/object-storage/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#leviia&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Liara Object Storage
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://liara.ir/landing/object-storage&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#liara-cloud&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Linode Object Storage
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.linode.com/products/object-storage/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#linode&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Magalu Object Storage
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://magalu.cloud/object-storage/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#magalu&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Minio
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.minio.io/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#minio&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Petabox
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://petabox.io/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#petabox&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Qiniu Cloud Object Storage (Kodo)
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.qiniu.com/en/products/kodo&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#qiniu&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    RackCorp Object Storage
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.rackcorp.com/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#RackCorp&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Rclone Serve S3
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/commands/rclone_serve_http/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#rclone&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Scaleway
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.scaleway.com/en/object-storage/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#scaleway&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Seagate Lyve Cloud
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.seagate.com/gb/en/services/cloud/storage/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#lyve&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    SeaweedFS
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://github.com/chrislusf/seaweedfs/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#seaweedfs&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    StackPath
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://www.stackpath.com/products/object-storage/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#stackpath&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Storj
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://storj.io/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#storj&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Synology C2 Object Storage
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://c2.synology.com/en-global/object-storage/overview&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#synology-c2&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Tencent Cloud Object Storage (COS)
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://intl.cloud.tencent.com/product/cos&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#tencent-cos&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;

&lt;li class=&#34;list-group-item d-flex justify-content-between py-1&#34;&gt;
  &lt;span&gt;
    Wasabi
  &lt;/span&gt;
  &lt;span&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://wasabi.com/&#34; target=&#34;_blank&#34;&gt;&lt;i class=&#34;fa fa-home&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Home&lt;/a&gt;
    &lt;a class=&#34;badge badge-primary badge-pill&#34; role=&#34;button btn-sm&#34; href=&#34;https://rclone.org/s3/#wasabi&#34;&gt;&lt;i class=&#34;fa fa-book&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; Config&lt;/a&gt;
  &lt;/span&gt;
&lt;/li&gt;


&lt;/ul&gt;

&lt;p&gt;Paths are specified as &lt;code&gt;remote:bucket&lt;/code&gt; (or &lt;code&gt;remote:&lt;/code&gt; for the &lt;code&gt;lsd&lt;/code&gt;
command). You may put subdirectories in too, e.g. &lt;code&gt;remote:bucket/path/to/dir&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Once you have made a remote (see the provider specific section above)
you can use it like this:&lt;/p&gt;
&lt;p&gt;See all buckets&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone lsd remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Make a new bucket&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone mkdir remote:bucket
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;List the contents of a bucket&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone ls remote:bucket
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Sync &lt;code&gt;/home/local/directory&lt;/code&gt; to the remote bucket, deleting any excess
files in the bucket.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone sync --interactive /home/local/directory remote:bucket
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;Here is an example of making an S3 configuration for the AWS S3 provider.
Most of this applies to the other providers as well; any differences are described &lt;a href=&#34;#providers&#34;&gt;below&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;First run&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ &amp;#34;s3&amp;#34;
[snip]
Storage&amp;gt; s3
Choose your S3 provider.
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ &amp;#34;AWS&amp;#34;
 2 / Ceph Object Storage
   \ &amp;#34;Ceph&amp;#34;
 3 / DigitalOcean Spaces
   \ &amp;#34;DigitalOcean&amp;#34;
 4 / Dreamhost DreamObjects
   \ &amp;#34;Dreamhost&amp;#34;
 5 / IBM COS S3
   \ &amp;#34;IBMCOS&amp;#34;
 6 / Minio Object Storage
   \ &amp;#34;Minio&amp;#34;
 7 / Wasabi Object Storage
   \ &amp;#34;Wasabi&amp;#34;
 8 / Any other S3 compatible provider
   \ &amp;#34;Other&amp;#34;
provider&amp;gt; 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ &amp;#34;false&amp;#34;
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ &amp;#34;true&amp;#34;
env_auth&amp;gt; 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id&amp;gt; XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key&amp;gt; YYY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia, or Pacific Northwest.
   | Leave location constraint empty.
   \ &amp;#34;us-east-1&amp;#34;
   / US East (Ohio) Region
 2 | Needs location constraint us-east-2.
   \ &amp;#34;us-east-2&amp;#34;
   / US West (Oregon) Region
 3 | Needs location constraint us-west-2.
   \ &amp;#34;us-west-2&amp;#34;
   / US West (Northern California) Region
 4 | Needs location constraint us-west-1.
   \ &amp;#34;us-west-1&amp;#34;
   / Canada (Central) Region
 5 | Needs location constraint ca-central-1.
   \ &amp;#34;ca-central-1&amp;#34;
   / EU (Ireland) Region
 6 | Needs location constraint EU or eu-west-1.
   \ &amp;#34;eu-west-1&amp;#34;
   / EU (London) Region
 7 | Needs location constraint eu-west-2.
   \ &amp;#34;eu-west-2&amp;#34;
   / EU (Frankfurt) Region
 8 | Needs location constraint eu-central-1.
   \ &amp;#34;eu-central-1&amp;#34;
   / Asia Pacific (Singapore) Region
 9 | Needs location constraint ap-southeast-1.
   \ &amp;#34;ap-southeast-1&amp;#34;
   / Asia Pacific (Sydney) Region
10 | Needs location constraint ap-southeast-2.
   \ &amp;#34;ap-southeast-2&amp;#34;
   / Asia Pacific (Tokyo) Region
11 | Needs location constraint ap-northeast-1.
   \ &amp;#34;ap-northeast-1&amp;#34;
   / Asia Pacific (Seoul)
12 | Needs location constraint ap-northeast-2.
   \ &amp;#34;ap-northeast-2&amp;#34;
   / Asia Pacific (Mumbai)
13 | Needs location constraint ap-south-1.
   \ &amp;#34;ap-south-1&amp;#34;
   / Asia Pacific (Hong Kong) Region
14 | Needs location constraint ap-east-1.
   \ &amp;#34;ap-east-1&amp;#34;
   / South America (Sao Paulo) Region
15 | Needs location constraint sa-east-1.
   \ &amp;#34;sa-east-1&amp;#34;
region&amp;gt; 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
endpoint&amp;gt;
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
   \ &amp;#34;&amp;#34;
 2 / US East (Ohio) Region.
   \ &amp;#34;us-east-2&amp;#34;
 3 / US West (Oregon) Region.
   \ &amp;#34;us-west-2&amp;#34;
 4 / US West (Northern California) Region.
   \ &amp;#34;us-west-1&amp;#34;
 5 / Canada (Central) Region.
   \ &amp;#34;ca-central-1&amp;#34;
 6 / EU (Ireland) Region.
   \ &amp;#34;eu-west-1&amp;#34;
 7 / EU (London) Region.
   \ &amp;#34;eu-west-2&amp;#34;
 8 / EU Region.
   \ &amp;#34;EU&amp;#34;
 9 / Asia Pacific (Singapore) Region.
   \ &amp;#34;ap-southeast-1&amp;#34;
10 / Asia Pacific (Sydney) Region.
   \ &amp;#34;ap-southeast-2&amp;#34;
11 / Asia Pacific (Tokyo) Region.
   \ &amp;#34;ap-northeast-1&amp;#34;
12 / Asia Pacific (Seoul)
   \ &amp;#34;ap-northeast-2&amp;#34;
13 / Asia Pacific (Mumbai)
   \ &amp;#34;ap-south-1&amp;#34;
14 / Asia Pacific (Hong Kong)
   \ &amp;#34;ap-east-1&amp;#34;
15 / South America (Sao Paulo) Region.
   \ &amp;#34;sa-east-1&amp;#34;
location_constraint&amp;gt; 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ &amp;#34;private&amp;#34;
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ &amp;#34;public-read&amp;#34;
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ &amp;#34;public-read-write&amp;#34;
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ &amp;#34;authenticated-read&amp;#34;
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ &amp;#34;bucket-owner-read&amp;#34;
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ &amp;#34;bucket-owner-full-control&amp;#34;
acl&amp;gt; 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ &amp;#34;&amp;#34;
 2 / AES256
   \ &amp;#34;AES256&amp;#34;
server_side_encryption&amp;gt; 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ &amp;#34;&amp;#34;
 2 / Standard storage class
   \ &amp;#34;STANDARD&amp;#34;
 3 / Reduced redundancy storage class
   \ &amp;#34;REDUCED_REDUNDANCY&amp;#34;
 4 / Standard Infrequent Access storage class
   \ &amp;#34;STANDARD_IA&amp;#34;
 5 / One Zone Infrequent Access storage class
   \ &amp;#34;ONEZONE_IA&amp;#34;
 6 / Glacier storage class
   \ &amp;#34;GLACIER&amp;#34;
 7 / Glacier Deep Archive storage class
   \ &amp;#34;DEEP_ARCHIVE&amp;#34;
 8 / Intelligent-Tiering storage class
   \ &amp;#34;INTELLIGENT_TIERING&amp;#34;
 9 / Glacier Instant Retrieval storage class
   \ &amp;#34;GLACIER_IR&amp;#34;
storage_class&amp;gt; 1
Remote config
Configuration complete.
Options:
- type: s3
- provider: AWS
- env_auth: false
- access_key_id: XXX
- secret_access_key: YYY
- region: us-east-1
- endpoint:
- location_constraint:
- acl: private
- server_side_encryption:
- storage_class:
Keep this &amp;#34;remote&amp;#34; remote?
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;modification-times-and-hashes&#34;&gt;Modification times and hashes&lt;/h3&gt;
&lt;h4 id=&#34;modification-times&#34;&gt;Modification times&lt;/h4&gt;
&lt;p&gt;The modified time is stored as metadata on the object as
&lt;code&gt;X-Amz-Meta-Mtime&lt;/code&gt;, a floating point number of seconds since the epoch, accurate to 1 ns.&lt;/p&gt;
&lt;p&gt;If the modification time needs to be updated rclone will attempt to perform a
server-side copy to update it, provided the object can be copied in a single part.
If the object is larger than 5 GiB or is in Glacier or Glacier Deep Archive
storage, the object will be uploaded rather than copied.&lt;/p&gt;
&lt;p&gt;Note that reading this from the object takes an additional &lt;code&gt;HEAD&lt;/code&gt;
request as the metadata isn&#39;t returned in object listings.&lt;/p&gt;
&lt;h4 id=&#34;hashes&#34;&gt;Hashes&lt;/h4&gt;
&lt;p&gt;For small objects which weren&#39;t uploaded as multipart uploads (objects
sized below &lt;code&gt;--s3-upload-cutoff&lt;/code&gt; if uploaded with rclone) rclone uses
the &lt;code&gt;ETag:&lt;/code&gt; header as an MD5 checksum.&lt;/p&gt;
&lt;p&gt;However for objects which were uploaded as multipart uploads or with
server side encryption (SSE-AWS or SSE-C) the &lt;code&gt;ETag&lt;/code&gt; header is no
longer the MD5 sum of the data, so rclone adds an additional piece of
metadata &lt;code&gt;X-Amz-Meta-Md5chksum&lt;/code&gt; which is a base64 encoded MD5 hash (in
the same format as is required for &lt;code&gt;Content-MD5&lt;/code&gt;). You can use &lt;code&gt;base64 -d&lt;/code&gt; and &lt;code&gt;hexdump&lt;/code&gt; to check this value manually:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;echo &#39;VWTGdNx3LyXQDfA0e2Edxw==&#39; | base64 -d | hexdump
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;or you can use &lt;code&gt;rclone check&lt;/code&gt; to verify the hashes are OK.&lt;/p&gt;
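&lt;p&gt;For example, assuming the common &lt;code&gt;md5sum&lt;/code&gt; and &lt;code&gt;xxd&lt;/code&gt; tools are installed, you can compare the decoded metadata value with the hash of a local copy of the object:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# hex MD5 digest of a local copy of the object
md5sum /path/to/local/file
# the metadata value decoded to the same hex form
echo &#39;VWTGdNx3LyXQDfA0e2Edxw==&#39; | base64 -d | xxd -p
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The two hex strings should be identical if the object is intact.&lt;/p&gt;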
&lt;p&gt;For large objects, calculating this hash can take some time so the
addition of this hash can be disabled with &lt;code&gt;--s3-disable-checksum&lt;/code&gt;.
This will mean that these objects do not have an MD5 checksum.&lt;/p&gt;
&lt;p&gt;Note that reading this from the object takes an additional &lt;code&gt;HEAD&lt;/code&gt;
request as the metadata isn&#39;t returned in object listings.&lt;/p&gt;
&lt;h3 id=&#34;reducing-costs&#34;&gt;Reducing costs&lt;/h3&gt;
&lt;h4 id=&#34;avoiding-head-requests-to-read-the-modification-time&#34;&gt;Avoiding HEAD requests to read the modification time&lt;/h4&gt;
&lt;p&gt;By default, rclone will use the modification time of objects stored in
S3 for syncing.  This is stored in object metadata which unfortunately
takes an extra HEAD request to read which can be expensive (in time
and money).&lt;/p&gt;
&lt;p&gt;The modification time is used by default for all operations that
require checking the time a file was last updated. It allows rclone to
treat the remote more like a true filesystem, but it is inefficient on
S3 because it requires an extra API call to retrieve the metadata.&lt;/p&gt;
&lt;p&gt;The extra API calls can be avoided when syncing (using &lt;code&gt;rclone sync&lt;/code&gt;
or &lt;code&gt;rclone copy&lt;/code&gt;) in a few different ways, each with its own
tradeoffs.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;--size-only&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Only checks the size of files.&lt;/li&gt;
&lt;li&gt;Uses no extra transactions.&lt;/li&gt;
&lt;li&gt;If the file doesn&#39;t change size then rclone won&#39;t detect it has
changed.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rclone sync --size-only /path/to/source s3:bucket&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;--checksum&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Checks the size and MD5 checksum of files.&lt;/li&gt;
&lt;li&gt;Uses no extra transactions.&lt;/li&gt;
&lt;li&gt;The most accurate detection of changes possible.&lt;/li&gt;
&lt;li&gt;Will cause the source to read an MD5 checksum which, if it is a
local disk, will cause lots of disk activity.&lt;/li&gt;
&lt;li&gt;If the source and destination are both S3 this is the
&lt;strong&gt;recommended&lt;/strong&gt; flag to use for maximum efficiency.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rclone sync --checksum /path/to/source s3:bucket&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;--update --use-server-modtime&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Uses no extra transactions.&lt;/li&gt;
&lt;li&gt;Modification time becomes the time the object was uploaded.&lt;/li&gt;
&lt;li&gt;For many operations this is sufficient to determine if it needs
uploading.&lt;/li&gt;
&lt;li&gt;Using &lt;code&gt;--update&lt;/code&gt; along with &lt;code&gt;--use-server-modtime&lt;/code&gt; avoids the
extra API call and uploads files whose local modification time
is newer than the time they were last uploaded.&lt;/li&gt;
&lt;li&gt;Files created with timestamps in the past will be missed by the sync.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rclone sync --update --use-server-modtime /path/to/source s3:bucket&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These flags can and should be used in combination with &lt;code&gt;--fast-list&lt;/code&gt; -
see below.&lt;/p&gt;
&lt;p&gt;If using &lt;code&gt;rclone mount&lt;/code&gt; or any command using the VFS (e.g. &lt;code&gt;rclone serve&lt;/code&gt;) then you might want to consider using the VFS flag
&lt;code&gt;--no-modtime&lt;/code&gt;, which will stop rclone reading the modification time
for every object. You could also use &lt;code&gt;--use-server-modtime&lt;/code&gt; if you are
happy with the modification times of the objects being the time of
upload.&lt;/p&gt;
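&lt;p&gt;For example, either of these mounts (the mount point is illustrative) will avoid the per-object &lt;code&gt;HEAD&lt;/code&gt; requests:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone mount --no-modtime remote:bucket /mnt/bucket
rclone mount --use-server-modtime remote:bucket /mnt/bucket
&lt;/code&gt;&lt;/pre&gt;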
&lt;h4 id=&#34;avoiding-get-requests-to-read-directory-listings&#34;&gt;Avoiding GET requests to read directory listings&lt;/h4&gt;
&lt;p&gt;Rclone&#39;s default directory traversal is to process each directory
individually.  This takes one API call per directory.  Using the
&lt;code&gt;--fast-list&lt;/code&gt; flag will read all info about the objects into
memory first using a smaller number of API calls (one per 1000
objects). See the &lt;a href=&#34;https://rclone.org/docs/#fast-list&#34;&gt;rclone docs&lt;/a&gt; for more details.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone sync --fast-list --checksum /path/to/source s3:bucket
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;--fast-list&lt;/code&gt; trades off API transactions for memory use. As a rough
guide rclone uses 1k of memory per object stored, so using
&lt;code&gt;--fast-list&lt;/code&gt; on a sync of a million objects will use roughly 1 GiB of
RAM.&lt;/p&gt;
&lt;p&gt;If you are only copying a small number of files into a big repository
then using &lt;code&gt;--no-traverse&lt;/code&gt; is a good idea. This finds objects directly
instead of through directory listings. You can do a &amp;quot;top-up&amp;quot; sync very
cheaply by using &lt;code&gt;--max-age&lt;/code&gt; and &lt;code&gt;--no-traverse&lt;/code&gt; to copy only recent
files, eg&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You&#39;d then do a full &lt;code&gt;rclone sync&lt;/code&gt; less often.&lt;/p&gt;
&lt;p&gt;Note that &lt;code&gt;--fast-list&lt;/code&gt; isn&#39;t required in the top-up sync.&lt;/p&gt;
&lt;h4 id=&#34;avoiding-head-requests-after-put&#34;&gt;Avoiding HEAD requests after PUT&lt;/h4&gt;
&lt;p&gt;By default, rclone will HEAD every object it uploads. It does this to
check the object got uploaded correctly.&lt;/p&gt;
&lt;p&gt;You can disable this with the &lt;a href=&#34;#s3-no-head&#34;&gt;--s3-no-head&lt;/a&gt; option - see
there for more details.&lt;/p&gt;
&lt;p&gt;Setting this flag increases the chance for undetected upload failures.&lt;/p&gt;
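&lt;p&gt;For example, a sketch of an upload with the post-upload check disabled (use with care):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone copy --s3-no-head /path/to/source s3:bucket
&lt;/code&gt;&lt;/pre&gt;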
&lt;h3 id=&#34;increasing-performance&#34;&gt;Increasing performance&lt;/h3&gt;
&lt;h4 id=&#34;using-server-side-copy&#34;&gt;Using server-side copy&lt;/h4&gt;
&lt;p&gt;If you are copying objects between S3 buckets in the same region, you should
use server-side copy.
This is much faster than downloading and re-uploading the objects, as no data is transferred.&lt;/p&gt;
&lt;p&gt;For rclone to use server-side copy, you must use the same remote for the source and destination.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone copy s3:source-bucket s3:destination-bucket
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When using server-side copy, the performance is limited by the rate at which rclone issues
API requests to S3.
See below for how to increase the number of API requests rclone makes.&lt;/p&gt;
&lt;h4 id=&#34;increasing-the-rate-of-api-requests&#34;&gt;Increasing the rate of API requests&lt;/h4&gt;
&lt;p&gt;You can increase the rate of API requests to S3 by increasing the parallelism using &lt;code&gt;--transfers&lt;/code&gt; and &lt;code&gt;--checkers&lt;/code&gt;
options.&lt;/p&gt;
&lt;p&gt;Rclone uses very conservative defaults for these settings, as not all providers support high request rates.
Depending on your provider, you can significantly increase the number of transfers and checkers.&lt;/p&gt;
&lt;p&gt;For example, with AWS S3 you can increase the number of checkers to values like 200.
If you are doing a server-side copy, you can also increase the number of transfers to 200.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You will need to experiment with these values to find the optimal settings for your setup.&lt;/p&gt;
&lt;h3 id=&#34;versions&#34;&gt;Versions&lt;/h3&gt;
&lt;p&gt;When bucket versioning is enabled (this can be done with rclone with
the &lt;a href=&#34;#versioning&#34;&gt;&lt;code&gt;rclone backend versioning&lt;/code&gt;&lt;/a&gt; command) and rclone
uploads a new version of a file, it creates a
&lt;a href=&#34;https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html&#34;&gt;new version of it&lt;/a&gt;.
Likewise when you delete a file, the old version will be marked hidden
and still be available.&lt;/p&gt;
&lt;p&gt;Old versions of files, where available, are visible using the
&lt;a href=&#34;#s3-versions&#34;&gt;&lt;code&gt;--s3-versions&lt;/code&gt;&lt;/a&gt; flag.&lt;/p&gt;
&lt;p&gt;It is also possible to view a bucket as it was at a certain point in
time, using the &lt;a href=&#34;#s3-version-at&#34;&gt;&lt;code&gt;--s3-version-at&lt;/code&gt;&lt;/a&gt; flag. This will
show the file versions as they were at that time, showing files that
have been deleted afterwards, and hiding files that were created
since.&lt;/p&gt;
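&lt;p&gt;For example, to list a bucket as it was on a particular day (the date is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone ls --s3-version-at 2023-07-10 s3:bucket
&lt;/code&gt;&lt;/pre&gt;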
&lt;p&gt;If you wish to remove all the old versions then you can use the
&lt;a href=&#34;#cleanup-hidden&#34;&gt;&lt;code&gt;rclone backend cleanup-hidden remote:bucket&lt;/code&gt;&lt;/a&gt;
command which will delete all the old hidden versions of files,
leaving the current ones intact. You can also supply a path and only
old versions under that path will be deleted, e.g.
&lt;code&gt;rclone backend cleanup-hidden remote:bucket/path/to/stuff&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;When you &lt;code&gt;purge&lt;/code&gt; a bucket, the current and the old versions will be
deleted, and then the bucket itself will be deleted.&lt;/p&gt;
&lt;p&gt;However &lt;code&gt;delete&lt;/code&gt; will cause the current versions of the files to
become hidden old versions.&lt;/p&gt;
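&lt;p&gt;In other words, with versioning enabled (paths illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# current versions become hidden old versions
rclone delete s3:bucket/path
# old and current versions are deleted, then the bucket itself
rclone purge s3:bucket
&lt;/code&gt;&lt;/pre&gt;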
&lt;p&gt;Here is a session showing the listing and retrieval of an old
version followed by a &lt;code&gt;cleanup&lt;/code&gt; of the old versions.&lt;/p&gt;
&lt;p&gt;Show current version and all the versions with &lt;code&gt;--s3-versions&lt;/code&gt; flag.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone -q ls s3:cleanup-test
        9 one.txt

$ rclone -q --s3-versions ls s3:cleanup-test
        9 one.txt
        8 one-v2016-07-04-141032-000.txt
       16 one-v2016-07-04-141003-000.txt
       15 one-v2016-07-02-155621-000.txt
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Retrieve an old version&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp

$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Clean up all the old versions and show that they&#39;ve gone.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone -q backend cleanup-hidden s3:cleanup-test

$ rclone -q ls s3:cleanup-test
        9 one.txt

$ rclone -q --s3-versions ls s3:cleanup-test
        9 one.txt
&lt;/code&gt;&lt;/pre&gt;&lt;h4 id=&#34;versions-naming-caveat&#34;&gt;Versions naming caveat&lt;/h4&gt;
&lt;p&gt;When using the &lt;code&gt;--s3-versions&lt;/code&gt; flag rclone relies on the file name
to work out whether objects are versions or not. Version names
are created by inserting a timestamp between the file name and its extension.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;        9 file.txt
        8 file-v2023-07-17-161032-000.txt
       16 file-v2023-06-15-141003-000.txt
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If there are real files present with the same names as versions, then
the behaviour of &lt;code&gt;--s3-versions&lt;/code&gt; can be unpredictable.&lt;/p&gt;
&lt;h3 id=&#34;cleanup&#34;&gt;Cleanup&lt;/h3&gt;
&lt;p&gt;If you run &lt;code&gt;rclone cleanup s3:bucket&lt;/code&gt; then it will remove all pending
multipart uploads older than 24 hours. You can use the &lt;code&gt;--interactive&lt;/code&gt;/&lt;code&gt;-i&lt;/code&gt;
or &lt;code&gt;--dry-run&lt;/code&gt; flag to see exactly what it will do. If you want more control over the
expiry date then run &lt;code&gt;rclone backend cleanup s3:bucket -o max-age=1h&lt;/code&gt;
to expire all uploads older than one hour. You can use &lt;code&gt;rclone backend list-multipart-uploads s3:bucket&lt;/code&gt; to see the pending multipart
uploads.&lt;/p&gt;
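&lt;p&gt;Putting those together (bucket name illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# preview what would be removed
rclone cleanup --dry-run s3:bucket
# expire pending multipart uploads older than one hour
rclone backend cleanup s3:bucket -o max-age=1h
# list the remaining pending multipart uploads
rclone backend list-multipart-uploads s3:bucket
&lt;/code&gt;&lt;/pre&gt;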
&lt;h3 id=&#34;restricted-filename-characters&#34;&gt;Restricted filename characters&lt;/h3&gt;
&lt;p&gt;S3 allows any valid UTF-8 string as a key.&lt;/p&gt;
&lt;p&gt;Invalid UTF-8 bytes will be &lt;a href=&#34;https://rclone.org/overview/#invalid-utf8&#34;&gt;replaced&lt;/a&gt;, as
they can&#39;t be used in XML.&lt;/p&gt;
&lt;p&gt;The following characters are replaced since these are problematic when
dealing with the REST API:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Character&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Value&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Replacement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;NUL&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x00&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;␀&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x2F&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;／&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The encoding will also encode these file names as they don&#39;t seem to
work with the SDK properly:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File name&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Replacement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;.&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;．&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;..&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;．．&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id=&#34;multipart-uploads&#34;&gt;Multipart uploads&lt;/h3&gt;
&lt;p&gt;rclone supports multipart uploads with S3 which means that it can
upload files bigger than 5 GiB.&lt;/p&gt;
&lt;p&gt;Note that files uploaded &lt;em&gt;both&lt;/em&gt; with multipart upload &lt;em&gt;and&lt;/em&gt; through
crypt remotes do not have MD5 sums.&lt;/p&gt;
&lt;p&gt;rclone switches from single part uploads to multipart uploads at the
point specified by &lt;code&gt;--s3-upload-cutoff&lt;/code&gt;.  This can be a maximum of 5 GiB
and a minimum of 0 (i.e. always use multipart uploads).&lt;/p&gt;
&lt;p&gt;The chunk sizes used in the multipart upload are specified by
&lt;code&gt;--s3-chunk-size&lt;/code&gt; and the number of chunks uploaded concurrently is
specified by &lt;code&gt;--s3-upload-concurrency&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Multipart uploads will use &lt;code&gt;--transfers&lt;/code&gt; * &lt;code&gt;--s3-upload-concurrency&lt;/code&gt; *
&lt;code&gt;--s3-chunk-size&lt;/code&gt; extra memory.  Single part uploads do not use extra
memory.&lt;/p&gt;
&lt;p&gt;Single part transfers can be faster than multipart transfers or slower
depending on your latency from S3 - the more latency, the more likely
single part transfers will be faster.&lt;/p&gt;
&lt;p&gt;Increasing &lt;code&gt;--s3-upload-concurrency&lt;/code&gt; will increase throughput (8 would
be a sensible value) and increasing &lt;code&gt;--s3-chunk-size&lt;/code&gt; also increases
throughput (16M would be sensible).  Increasing either of these will
use more memory.  The default values are high enough to gain most of
the possible performance without using too much memory.&lt;/p&gt;
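&lt;p&gt;For example, a hypothetical tuned upload using 4 * 8 * 16M = 512 MiB of extra buffer memory might look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone copy --transfers 4 --s3-upload-concurrency 8 --s3-chunk-size 16M /path/to/source s3:bucket
&lt;/code&gt;&lt;/pre&gt;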
&lt;h3 id=&#34;buckets-and-regions&#34;&gt;Buckets and Regions&lt;/h3&gt;
&lt;p&gt;With Amazon S3 you can list buckets (&lt;code&gt;rclone lsd&lt;/code&gt;) using any region,
but you can only access the content of a bucket from the region it was
created in.  If you attempt to access a bucket from the wrong region,
you will get an error, &lt;code&gt;incorrect region, the bucket is not in &#39;XXX&#39; region&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id=&#34;authentication&#34;&gt;Authentication&lt;/h3&gt;
&lt;p&gt;There are a number of ways to supply &lt;code&gt;rclone&lt;/code&gt; with a set of AWS
credentials, with and without using the environment.&lt;/p&gt;
&lt;p&gt;The different authentication methods are tried in this order:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Directly in the rclone configuration file (&lt;code&gt;env_auth = false&lt;/code&gt; in the config file):
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;access_key_id&lt;/code&gt; and &lt;code&gt;secret_access_key&lt;/code&gt; are required.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;session_token&lt;/code&gt; can be optionally set when using AWS STS.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Runtime configuration (&lt;code&gt;env_auth = true&lt;/code&gt; in the config file):
&lt;ul&gt;
&lt;li&gt;Export the following environment variables before running &lt;code&gt;rclone&lt;/code&gt;:
&lt;ul&gt;
&lt;li&gt;Access Key ID: &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; or &lt;code&gt;AWS_ACCESS_KEY&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Secret Access Key: &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt; or &lt;code&gt;AWS_SECRET_KEY&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Session Token: &lt;code&gt;AWS_SESSION_TOKEN&lt;/code&gt; (optional)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Or, use a &lt;a href=&#34;https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html&#34;&gt;named profile&lt;/a&gt;:
&lt;ul&gt;
&lt;li&gt;Profile files are standard files used by AWS CLI tools&lt;/li&gt;
&lt;li&gt;By default it will use the profile file in your home directory (e.g. &lt;code&gt;~/.aws/credentials&lt;/code&gt; on unix based systems) and the &amp;quot;default&amp;quot; profile. To change this, set these environment variables or config keys:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;AWS_SHARED_CREDENTIALS_FILE&lt;/code&gt; to control which file or the &lt;code&gt;shared_credentials_file&lt;/code&gt; config key.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;AWS_PROFILE&lt;/code&gt; to control which profile to use or the &lt;code&gt;profile&lt;/code&gt; config key.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Or, run &lt;code&gt;rclone&lt;/code&gt; in an ECS task with an IAM role (AWS only).&lt;/li&gt;
&lt;li&gt;Or, run &lt;code&gt;rclone&lt;/code&gt; on an EC2 instance with an IAM role (AWS only).&lt;/li&gt;
&lt;li&gt;Or, run &lt;code&gt;rclone&lt;/code&gt; in an EKS pod with an IAM role that is associated with a service account (AWS only).&lt;/li&gt;
&lt;li&gt;Or, use &lt;a href=&#34;https://docs.aws.amazon.com/sdkref/latest/guide/feature-process-credentials.html&#34;&gt;process credentials&lt;/a&gt; to read config from an external program.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With &lt;code&gt;env_auth = true&lt;/code&gt;, rclone (which uses the AWS SDK for Go v2) should support
&lt;a href=&#34;https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html&#34;&gt;all authentication methods&lt;/a&gt;
that the &lt;code&gt;aws&lt;/code&gt; CLI tool and the other AWS SDKs do.&lt;/p&gt;
&lt;p&gt;If none of these options ends up providing &lt;code&gt;rclone&lt;/code&gt; with AWS
credentials then S3 interaction will be non-authenticated (see the
&lt;a href=&#34;#anonymous-access&#34;&gt;anonymous access&lt;/a&gt; section for more info).&lt;/p&gt;
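&lt;p&gt;For reference, a remote using static credentials (&lt;code&gt;env_auth = false&lt;/code&gt;) ends up looking like this in your rclone config file, with placeholder values:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[remote]
type = s3
provider = AWS
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With &lt;code&gt;env_auth = true&lt;/code&gt; the &lt;code&gt;access_key_id&lt;/code&gt; and &lt;code&gt;secret_access_key&lt;/code&gt; lines are omitted and the credentials are picked up from the environment as described above.&lt;/p&gt;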
&lt;h3 id=&#34;s3-permissions&#34;&gt;S3 Permissions&lt;/h3&gt;
&lt;p&gt;When using the &lt;code&gt;sync&lt;/code&gt; subcommand of &lt;code&gt;rclone&lt;/code&gt; the following minimum
permissions are required to be available on the bucket being written to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;ListBucket&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;DeleteObject&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;GetObject&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PutObject&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PutObjectACL&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CreateBucket&lt;/code&gt; (unless using &lt;a href=&#34;#s3-no-check-bucket&#34;&gt;s3-no-check-bucket&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When using the &lt;code&gt;lsd&lt;/code&gt; subcommand, the &lt;code&gt;ListAllMyBuckets&lt;/code&gt; permission is required.&lt;/p&gt;
&lt;p&gt;Example policy:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;{
    &amp;#34;Version&amp;#34;: &amp;#34;2012-10-17&amp;#34;,
    &amp;#34;Statement&amp;#34;: [
        {
            &amp;#34;Effect&amp;#34;: &amp;#34;Allow&amp;#34;,
            &amp;#34;Principal&amp;#34;: {
                &amp;#34;AWS&amp;#34;: &amp;#34;arn:aws:iam::USER_SID:user/USER_NAME&amp;#34;
            },
            &amp;#34;Action&amp;#34;: [
                &amp;#34;s3:ListBucket&amp;#34;,
                &amp;#34;s3:DeleteObject&amp;#34;,
                &amp;#34;s3:GetObject&amp;#34;,
                &amp;#34;s3:PutObject&amp;#34;,
                &amp;#34;s3:PutObjectAcl&amp;#34;
            ],
            &amp;#34;Resource&amp;#34;: [
              &amp;#34;arn:aws:s3:::BUCKET_NAME/*&amp;#34;,
              &amp;#34;arn:aws:s3:::BUCKET_NAME&amp;#34;
            ]
        },
        {
            &amp;#34;Effect&amp;#34;: &amp;#34;Allow&amp;#34;,
            &amp;#34;Action&amp;#34;: &amp;#34;s3:ListAllMyBuckets&amp;#34;,
            &amp;#34;Resource&amp;#34;: &amp;#34;arn:aws:s3:::*&amp;#34;
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Notes on above:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;This is a policy that can be used when creating a bucket. It assumes
that &lt;code&gt;USER_NAME&lt;/code&gt; has already been created.&lt;/li&gt;
&lt;li&gt;The Resource entry must include both resource ARNs, as one implies
the bucket and the other implies the bucket&#39;s objects.&lt;/li&gt;
&lt;li&gt;When using &lt;a href=&#34;#s3-no-check-bucket&#34;&gt;s3-no-check-bucket&lt;/a&gt; and the bucket already exists, the &lt;code&gt;&amp;quot;arn:aws:s3:::BUCKET_NAME&amp;quot;&lt;/code&gt; doesn&#39;t have to be included.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For reference, &lt;a href=&#34;https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b&#34;&gt;here&#39;s an Ansible script&lt;/a&gt;
that will generate one or more buckets that will work with &lt;code&gt;rclone sync&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id=&#34;key-management-system-kms&#34;&gt;Key Management System (KMS)&lt;/h3&gt;
&lt;p&gt;If you are using server-side encryption with KMS then you must make
sure rclone is configured with &lt;code&gt;server_side_encryption = aws:kms&lt;/code&gt;,
otherwise you will find you can&#39;t transfer small objects, as these will
produce checksum errors.&lt;/p&gt;
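A remote definition reflecting this requirement might look like the following in rclone.conf. This is a hedged sketch: the remote name `s3-kms` and the region are placeholders, and credentials are assumed to come from the environment.

```ini
# Hypothetical rclone.conf remote using SSE-KMS; names are placeholders.
[s3-kms]
type = s3
provider = AWS
env_auth = true
region = us-east-1
server_side_encryption = aws:kms
```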
&lt;h3 id=&#34;glacier-and-glacier-deep-archive&#34;&gt;Glacier and Glacier Deep Archive&lt;/h3&gt;
&lt;p&gt;You can upload objects using the glacier storage class or transition them to glacier using a &lt;a href=&#34;http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html&#34;&gt;lifecycle policy&lt;/a&gt;.
The bucket can still be synced or copied into normally, but if rclone
tries to access data in the glacier storage class you will see an error like the one below.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this case you need to &lt;a href=&#34;http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html&#34;&gt;restore&lt;/a&gt;
the object(s) in question before using rclone.&lt;/p&gt;
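rclone can also request the restore itself via its backend command. A hedged sketch, assuming a configured remote named `s3` and placeholder bucket/path; adjust the priority and lifetime options to taste:

```sh
# Ask S3 to restore archived objects before syncing them.
# priority is one of Standard, Expedited or Bulk; lifetime is in days.
rclone backend restore s3:bucket/path/to/dir -o priority=Standard -o lifetime=1
```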
&lt;p&gt;Note that rclone only speaks the S3 API; it does not speak the Glacier
Vault API, so rclone cannot directly access Glacier Vaults.&lt;/p&gt;
&lt;h3 id=&#34;object-lock-enabled-s3-bucket&#34;&gt;Object-lock enabled S3 bucket&lt;/h3&gt;
&lt;p&gt;According to AWS&#39;s &lt;a href=&#34;https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html#object-lock-permission&#34;&gt;documentation on S3 Object Lock&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;As mentioned in the &lt;a href=&#34;#modification-times-and-hashes&#34;&gt;Modification times and hashes&lt;/a&gt; section,
small files that are not uploaded as multipart use a different tag, causing the upload to fail.
A simple solution is to set &lt;code&gt;--s3-upload-cutoff 0&lt;/code&gt; and force all files to be uploaded as multipart.&lt;/p&gt;
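For example, copying into such a bucket might look like this (a sketch; the remote name and paths are placeholders):

```sh
# Force every object to be uploaded as multipart, avoiding the
# single-part upload path that fails against Object Lock buckets.
rclone copy --s3-upload-cutoff 0 /local/dir s3:locked-bucket/dir
```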

&lt;h3 id=&#34;standard-options&#34;&gt;Standard options&lt;/h3&gt;
&lt;p&gt;Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).&lt;/p&gt;
&lt;h4 id=&#34;s3-provider&#34;&gt;--s3-provider&lt;/h4&gt;
&lt;p&gt;Choose your S3 provider.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      provider&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_PROVIDER&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;AWS&amp;quot;
&lt;ul&gt;
&lt;li&gt;Amazon Web Services (AWS) S3&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Alibaba&amp;quot;
&lt;ul&gt;
&lt;li&gt;Alibaba Cloud Object Storage System (OSS) formerly Aliyun&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ArvanCloud&amp;quot;
&lt;ul&gt;
&lt;li&gt;Arvan Cloud Object Storage (AOS)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Ceph&amp;quot;
&lt;ul&gt;
&lt;li&gt;Ceph Object Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ChinaMobile&amp;quot;
&lt;ul&gt;
&lt;li&gt;China Mobile Ecloud Elastic Object Storage (EOS)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Cloudflare&amp;quot;
&lt;ul&gt;
&lt;li&gt;Cloudflare R2 Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;DigitalOcean&amp;quot;
&lt;ul&gt;
&lt;li&gt;DigitalOcean Spaces&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Dreamhost&amp;quot;
&lt;ul&gt;
&lt;li&gt;Dreamhost DreamObjects&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;GCS&amp;quot;
&lt;ul&gt;
&lt;li&gt;Google Cloud Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;HuaweiOBS&amp;quot;
&lt;ul&gt;
&lt;li&gt;Huawei Object Storage Service&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;IBMCOS&amp;quot;
&lt;ul&gt;
&lt;li&gt;IBM COS S3&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;IDrive&amp;quot;
&lt;ul&gt;
&lt;li&gt;IDrive e2&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;IONOS&amp;quot;
&lt;ul&gt;
&lt;li&gt;IONOS Cloud&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;LyveCloud&amp;quot;
&lt;ul&gt;
&lt;li&gt;Seagate Lyve Cloud&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Leviia&amp;quot;
&lt;ul&gt;
&lt;li&gt;Leviia Object Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Liara&amp;quot;
&lt;ul&gt;
&lt;li&gt;Liara Object Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Linode&amp;quot;
&lt;ul&gt;
&lt;li&gt;Linode Object Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Magalu&amp;quot;
&lt;ul&gt;
&lt;li&gt;Magalu Object Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Minio&amp;quot;
&lt;ul&gt;
&lt;li&gt;Minio Object Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Netease&amp;quot;
&lt;ul&gt;
&lt;li&gt;Netease Object Storage (NOS)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Petabox&amp;quot;
&lt;ul&gt;
&lt;li&gt;Petabox Object Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;RackCorp&amp;quot;
&lt;ul&gt;
&lt;li&gt;RackCorp Object Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Rclone&amp;quot;
&lt;ul&gt;
&lt;li&gt;Rclone S3 Server&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Scaleway&amp;quot;
&lt;ul&gt;
&lt;li&gt;Scaleway Object Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;SeaweedFS&amp;quot;
&lt;ul&gt;
&lt;li&gt;SeaweedFS S3&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;StackPath&amp;quot;
&lt;ul&gt;
&lt;li&gt;StackPath Object Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Storj&amp;quot;
&lt;ul&gt;
&lt;li&gt;Storj (S3 Compatible Gateway)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Synology&amp;quot;
&lt;ul&gt;
&lt;li&gt;Synology C2 Object Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;TencentCOS&amp;quot;
&lt;ul&gt;
&lt;li&gt;Tencent Cloud Object Storage (COS)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Wasabi&amp;quot;
&lt;ul&gt;
&lt;li&gt;Wasabi Object Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Qiniu&amp;quot;
&lt;ul&gt;
&lt;li&gt;Qiniu Object Storage (Kodo)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;Other&amp;quot;
&lt;ul&gt;
&lt;li&gt;Any other S3 compatible provider&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-env-auth&#34;&gt;--s3-env-auth&lt;/h4&gt;
&lt;p&gt;Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).&lt;/p&gt;
&lt;p&gt;Only applies if access_key_id and secret_access_key are blank.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      env_auth&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_ENV_AUTH&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;false&amp;quot;
&lt;ul&gt;
&lt;li&gt;Enter AWS credentials in the next step.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;true&amp;quot;
&lt;ul&gt;
&lt;li&gt;Get AWS credentials from the environment (env vars or IAM).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-access-key-id&#34;&gt;--s3-access-key-id&lt;/h4&gt;
&lt;p&gt;AWS Access Key ID.&lt;/p&gt;
&lt;p&gt;Leave blank for anonymous access or runtime credentials.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      access_key_id&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_ACCESS_KEY_ID&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-secret-access-key&#34;&gt;--s3-secret-access-key&lt;/h4&gt;
&lt;p&gt;AWS Secret Access Key (password).&lt;/p&gt;
&lt;p&gt;Leave blank for anonymous access or runtime credentials.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      secret_access_key&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_SECRET_ACCESS_KEY&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
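As with all rclone options, these can also be supplied via their environment variables. A hedged one-off example, using an on-the-fly `:s3:` remote so no rclone.conf entry is needed (the key values and bucket name are placeholders):

```sh
# One-off credentials from the environment instead of rclone.conf.
export RCLONE_S3_ACCESS_KEY_ID=AKIAEXAMPLE
export RCLONE_S3_SECRET_ACCESS_KEY=examplesecret
rclone lsd :s3,provider=AWS:example-bucket
```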
&lt;h4 id=&#34;s3-region&#34;&gt;--s3-region&lt;/h4&gt;
&lt;p&gt;Region to connect to.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      region&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_REGION&lt;/li&gt;
&lt;li&gt;Provider:    AWS&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;us-east-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;The default endpoint - a good choice if you are unsure.&lt;/li&gt;
&lt;li&gt;US Region, Northern Virginia, or Pacific Northwest.&lt;/li&gt;
&lt;li&gt;Leave location constraint empty.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;us-east-2&amp;quot;
&lt;ul&gt;
&lt;li&gt;US East (Ohio) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint us-east-2.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;us-west-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;US West (Northern California) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint us-west-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;us-west-2&amp;quot;
&lt;ul&gt;
&lt;li&gt;US West (Oregon) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint us-west-2.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ca-central-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Canada (Central) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint ca-central-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;eu-west-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;EU (Ireland) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint EU or eu-west-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;eu-west-2&amp;quot;
&lt;ul&gt;
&lt;li&gt;EU (London) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint eu-west-2.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;eu-west-3&amp;quot;
&lt;ul&gt;
&lt;li&gt;EU (Paris) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint eu-west-3.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;eu-north-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;EU (Stockholm) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint eu-north-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;eu-south-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;EU (Milan) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint eu-south-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;eu-central-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;EU (Frankfurt) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint eu-central-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-southeast-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Singapore) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint ap-southeast-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-southeast-2&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Sydney) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint ap-southeast-2.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-northeast-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Tokyo) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint ap-northeast-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-northeast-2&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Seoul).&lt;/li&gt;
&lt;li&gt;Needs location constraint ap-northeast-2.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-northeast-3&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Osaka-Local).&lt;/li&gt;
&lt;li&gt;Needs location constraint ap-northeast-3.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-south-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Mumbai).&lt;/li&gt;
&lt;li&gt;Needs location constraint ap-south-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-east-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Hong Kong) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint ap-east-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;sa-east-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;South America (Sao Paulo) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint sa-east-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;il-central-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Israel (Tel Aviv) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint il-central-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;me-south-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Middle East (Bahrain) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint me-south-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;af-south-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Africa (Cape Town) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint af-south-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;cn-north-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;China (Beijing) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint cn-north-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;cn-northwest-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;China (Ningxia) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint cn-northwest-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;us-gov-east-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;AWS GovCloud (US-East) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint us-gov-east-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;us-gov-west-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;AWS GovCloud (US) Region.&lt;/li&gt;
&lt;li&gt;Needs location constraint us-gov-west-1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-endpoint&#34;&gt;--s3-endpoint&lt;/h4&gt;
&lt;p&gt;Endpoint for S3 API.&lt;/p&gt;
&lt;p&gt;Leave blank if using AWS to use the default endpoint for the region.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      endpoint&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_ENDPOINT&lt;/li&gt;
&lt;li&gt;Provider:    AWS&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-location-constraint&#34;&gt;--s3-location-constraint&lt;/h4&gt;
&lt;p&gt;Location constraint - must be set to match the Region.&lt;/p&gt;
&lt;p&gt;Used when creating buckets only.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      location_constraint&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_LOCATION_CONSTRAINT&lt;/li&gt;
&lt;li&gt;Provider:    AWS&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;&amp;quot;
&lt;ul&gt;
&lt;li&gt;Empty for US Region, Northern Virginia, or Pacific Northwest&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;us-east-2&amp;quot;
&lt;ul&gt;
&lt;li&gt;US East (Ohio) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;us-west-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;US West (Northern California) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;us-west-2&amp;quot;
&lt;ul&gt;
&lt;li&gt;US West (Oregon) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ca-central-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Canada (Central) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;eu-west-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;EU (Ireland) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;eu-west-2&amp;quot;
&lt;ul&gt;
&lt;li&gt;EU (London) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;eu-west-3&amp;quot;
&lt;ul&gt;
&lt;li&gt;EU (Paris) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;eu-north-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;EU (Stockholm) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;eu-south-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;EU (Milan) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;EU&amp;quot;
&lt;ul&gt;
&lt;li&gt;EU Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-southeast-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Singapore) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-southeast-2&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Sydney) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-northeast-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Tokyo) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-northeast-2&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Seoul) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-northeast-3&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Osaka-Local) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-south-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Mumbai) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ap-east-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Asia Pacific (Hong Kong) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;sa-east-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;South America (Sao Paulo) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;il-central-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Israel (Tel Aviv) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;me-south-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Middle East (Bahrain) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;af-south-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;Africa (Cape Town) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;cn-north-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;China (Beijing) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;cn-northwest-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;China (Ningxia) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;us-gov-east-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;AWS GovCloud (US-East) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;us-gov-west-1&amp;quot;
&lt;ul&gt;
&lt;li&gt;AWS GovCloud (US) Region&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-acl&#34;&gt;--s3-acl&lt;/h4&gt;
&lt;p&gt;Canned ACL used when creating buckets and storing or copying objects.&lt;/p&gt;
&lt;p&gt;This ACL is used for creating objects and if bucket_acl isn&#39;t set, for creating buckets too.&lt;/p&gt;
&lt;p&gt;For more info visit &lt;a href=&#34;https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl&#34;&gt;https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Note that this ACL is applied when server-side copying objects as S3
doesn&#39;t copy the ACL from the source but rather writes a fresh one.&lt;/p&gt;
&lt;p&gt;If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      acl&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_ACL&lt;/li&gt;
&lt;li&gt;Provider:    !Storj,Synology,Cloudflare&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;default&amp;quot;
&lt;ul&gt;
&lt;li&gt;Owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;No one else has access rights (default).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;private&amp;quot;
&lt;ul&gt;
&lt;li&gt;Owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;No one else has access rights (default).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;public-read&amp;quot;
&lt;ul&gt;
&lt;li&gt;Owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;The AllUsers group gets READ access.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;public-read-write&amp;quot;
&lt;ul&gt;
&lt;li&gt;Owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;The AllUsers group gets READ and WRITE access.&lt;/li&gt;
&lt;li&gt;Granting this on a bucket is generally not recommended.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;authenticated-read&amp;quot;
&lt;ul&gt;
&lt;li&gt;Owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;The AuthenticatedUsers group gets READ access.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;bucket-owner-read&amp;quot;
&lt;ul&gt;
&lt;li&gt;Object owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;Bucket owner gets READ access.&lt;/li&gt;
&lt;li&gt;If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;bucket-owner-full-control&amp;quot;
&lt;ul&gt;
&lt;li&gt;Both the object owner and the bucket owner get FULL_CONTROL over the object.&lt;/li&gt;
&lt;li&gt;If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;private&amp;quot;
&lt;ul&gt;
&lt;li&gt;Owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;No one else has access rights (default).&lt;/li&gt;
&lt;li&gt;This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;public-read&amp;quot;
&lt;ul&gt;
&lt;li&gt;Owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;The AllUsers group gets READ access.&lt;/li&gt;
&lt;li&gt;This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;public-read-write&amp;quot;
&lt;ul&gt;
&lt;li&gt;Owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;The AllUsers group gets READ and WRITE access.&lt;/li&gt;
&lt;li&gt;This acl is available on IBM Cloud (Infra), On-Premise IBM COS.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;authenticated-read&amp;quot;
&lt;ul&gt;
&lt;li&gt;Owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;The AuthenticatedUsers group gets READ access.&lt;/li&gt;
&lt;li&gt;Not supported on Buckets.&lt;/li&gt;
&lt;li&gt;This acl is available on IBM Cloud (Infra) and On-Premise IBM COS.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-server-side-encryption&#34;&gt;--s3-server-side-encryption&lt;/h4&gt;
&lt;p&gt;The server-side encryption algorithm used when storing this object in S3.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      server_side_encryption&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_SERVER_SIDE_ENCRYPTION&lt;/li&gt;
&lt;li&gt;Provider:    AWS,Ceph,ChinaMobile,Minio&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;&amp;quot;
&lt;ul&gt;
&lt;li&gt;None&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;AES256&amp;quot;
&lt;ul&gt;
&lt;li&gt;AES256&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;aws:kms&amp;quot;
&lt;ul&gt;
&lt;li&gt;aws:kms&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-sse-kms-key-id&#34;&gt;--s3-sse-kms-key-id&lt;/h4&gt;
&lt;p&gt;If using KMS ID you must provide the ARN of the key.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      sse_kms_key_id&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_SSE_KMS_KEY_ID&lt;/li&gt;
&lt;li&gt;Provider:    AWS,Ceph,Minio&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;&amp;quot;
&lt;ul&gt;
&lt;li&gt;None&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;arn:aws:kms:us-east-1:*&amp;quot;
&lt;ul&gt;
&lt;li&gt;arn:aws:kms:*&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-storage-class&#34;&gt;--s3-storage-class&lt;/h4&gt;
&lt;p&gt;The storage class to use when storing new objects in S3.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      storage_class&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_STORAGE_CLASS&lt;/li&gt;
&lt;li&gt;Provider:    AWS&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;&amp;quot;
&lt;ul&gt;
&lt;li&gt;Default&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;STANDARD&amp;quot;
&lt;ul&gt;
&lt;li&gt;Standard storage class&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;REDUCED_REDUNDANCY&amp;quot;
&lt;ul&gt;
&lt;li&gt;Reduced redundancy storage class&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;STANDARD_IA&amp;quot;
&lt;ul&gt;
&lt;li&gt;Standard Infrequent Access storage class&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;ONEZONE_IA&amp;quot;
&lt;ul&gt;
&lt;li&gt;One Zone Infrequent Access storage class&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;GLACIER&amp;quot;
&lt;ul&gt;
&lt;li&gt;Glacier storage class&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;DEEP_ARCHIVE&amp;quot;
&lt;ul&gt;
&lt;li&gt;Glacier Deep Archive storage class&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;INTELLIGENT_TIERING&amp;quot;
&lt;ul&gt;
&lt;li&gt;Intelligent-Tiering storage class&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;GLACIER_IR&amp;quot;
&lt;ul&gt;
&lt;li&gt;Glacier Instant Retrieval storage class&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;advanced-options&#34;&gt;Advanced options&lt;/h3&gt;
&lt;p&gt;Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).&lt;/p&gt;
&lt;h4 id=&#34;s3-bucket-acl&#34;&gt;--s3-bucket-acl&lt;/h4&gt;
&lt;p&gt;Canned ACL used when creating buckets.&lt;/p&gt;
&lt;p&gt;For more info visit &lt;a href=&#34;https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl&#34;&gt;https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Note that this ACL is applied only when creating buckets.  If it
isn&#39;t set then &amp;quot;acl&amp;quot; is used instead.&lt;/p&gt;
&lt;p&gt;If both &amp;quot;acl&amp;quot; and &amp;quot;bucket_acl&amp;quot; are empty strings then no X-Amz-Acl:
header is added and the default (private) will be used.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      bucket_acl&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_BUCKET_ACL&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;private&amp;quot;
&lt;ul&gt;
&lt;li&gt;Owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;No one else has access rights (default).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;public-read&amp;quot;
&lt;ul&gt;
&lt;li&gt;Owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;The AllUsers group gets READ access.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;public-read-write&amp;quot;
&lt;ul&gt;
&lt;li&gt;Owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;The AllUsers group gets READ and WRITE access.&lt;/li&gt;
&lt;li&gt;Granting this on a bucket is generally not recommended.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;authenticated-read&amp;quot;
&lt;ul&gt;
&lt;li&gt;Owner gets FULL_CONTROL.&lt;/li&gt;
&lt;li&gt;The AuthenticatedUsers group gets READ access.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-requester-pays&#34;&gt;--s3-requester-pays&lt;/h4&gt;
&lt;p&gt;Enables the requester pays option when interacting with an S3 bucket.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      requester_pays&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_REQUESTER_PAYS&lt;/li&gt;
&lt;li&gt;Provider:    AWS&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-sse-customer-algorithm&#34;&gt;--s3-sse-customer-algorithm&lt;/h4&gt;
&lt;p&gt;If using SSE-C, the server-side encryption algorithm used when storing this object in S3.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      sse_customer_algorithm&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_SSE_CUSTOMER_ALGORITHM&lt;/li&gt;
&lt;li&gt;Provider:    AWS,Ceph,ChinaMobile,Minio&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;&amp;quot;
&lt;ul&gt;
&lt;li&gt;None&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;AES256&amp;quot;
&lt;ul&gt;
&lt;li&gt;AES256&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-sse-customer-key&#34;&gt;--s3-sse-customer-key&lt;/h4&gt;
&lt;p&gt;To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data.&lt;/p&gt;
&lt;p&gt;Alternatively you can provide --sse-customer-key-base64.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      sse_customer_key&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_SSE_CUSTOMER_KEY&lt;/li&gt;
&lt;li&gt;Provider:    AWS,Ceph,ChinaMobile,Minio&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;&amp;quot;
&lt;ul&gt;
&lt;li&gt;None&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-sse-customer-key-base64&#34;&gt;--s3-sse-customer-key-base64&lt;/h4&gt;
&lt;p&gt;If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data.&lt;/p&gt;
&lt;p&gt;Alternatively you can provide --sse-customer-key.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      sse_customer_key_base64&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_SSE_CUSTOMER_KEY_BASE64&lt;/li&gt;
&lt;li&gt;Provider:    AWS,Ceph,ChinaMobile,Minio&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;&amp;quot;
&lt;ul&gt;
&lt;li&gt;None&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-sse-customer-key-md5&#34;&gt;--s3-sse-customer-key-md5&lt;/h4&gt;
&lt;p&gt;If using SSE-C you may provide the secret encryption key MD5 checksum (optional).&lt;/p&gt;
&lt;p&gt;If you leave it blank, this is calculated automatically from the sse_customer_key provided.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      sse_customer_key_md5&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_SSE_CUSTOMER_KEY_MD5&lt;/li&gt;
&lt;li&gt;Provider:    AWS,Ceph,ChinaMobile,Minio&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;&amp;quot;
&lt;ul&gt;
&lt;li&gt;None&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-upload-cutoff&#34;&gt;--s3-upload-cutoff&lt;/h4&gt;
&lt;p&gt;Cutoff for switching to chunked upload.&lt;/p&gt;
&lt;p&gt;Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5 GiB.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      upload_cutoff&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_UPLOAD_CUTOFF&lt;/li&gt;
&lt;li&gt;Type:        SizeSuffix&lt;/li&gt;
&lt;li&gt;Default:     200Mi&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-chunk-size&#34;&gt;--s3-chunk-size&lt;/h4&gt;
&lt;p&gt;Chunk size to use for uploading.&lt;/p&gt;
&lt;p&gt;When uploading files larger than upload_cutoff or files with unknown
size (e.g. from &amp;quot;rclone rcat&amp;quot; or uploaded with &amp;quot;rclone mount&amp;quot; or google
photos or google docs) they will be uploaded as multipart uploads
using this chunk size.&lt;/p&gt;
&lt;p&gt;Note that &amp;quot;--s3-upload-concurrency&amp;quot; chunks of this size are buffered
in memory per transfer.&lt;/p&gt;
&lt;p&gt;If you are transferring large files over high-speed links and you have
enough memory, then increasing this will speed up the transfers.&lt;/p&gt;
&lt;p&gt;Rclone will automatically increase the chunk size when uploading a
large file of known size to stay below the 10,000-chunk limit.&lt;/p&gt;
&lt;p&gt;Files of unknown size are uploaded with the configured
chunk_size. Since the default chunk size is 5 MiB and there can be at
most 10,000 chunks, this means that by default the maximum size of
a file you can stream upload is 48 GiB.  If you wish to stream upload
larger files then you will need to increase chunk_size.&lt;/p&gt;
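The 48 GiB figure follows from multiplying the default chunk size by the part limit. As a quick arithmetic sketch (illustrative Python, not rclone code):

```python
# Files of unknown length are uploaded with a fixed chunk size, so the
# largest streamable file is simply chunk_size * max_upload_parts.
def max_stream_upload_gib(chunk_size_mib=5, max_parts=10_000):
    """Largest streamable file size in GiB for a given chunk size."""
    return chunk_size_mib * max_parts / 1024  # MiB -> GiB

print(max_stream_upload_gib())    # 48.828125 -> the "48 GiB" default limit
print(max_stream_upload_gib(16))  # 156.25 GiB with 16 MiB chunks
```

Raising chunk_size to 16 MiB, for example, lifts the streaming limit to just over 156 GiB at the cost of more memory per transfer.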
&lt;p&gt;Increasing the chunk size decreases the accuracy of the progress
statistics displayed with the &amp;quot;-P&amp;quot; flag. Rclone treats a chunk as sent once
it has been buffered by the AWS SDK, when in fact it may still be uploading.
A bigger chunk size means a bigger AWS SDK buffer and progress
reporting that deviates further from the truth.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      chunk_size&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_CHUNK_SIZE&lt;/li&gt;
&lt;li&gt;Type:        SizeSuffix&lt;/li&gt;
&lt;li&gt;Default:     5Mi&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-max-upload-parts&#34;&gt;--s3-max-upload-parts&lt;/h4&gt;
&lt;p&gt;Maximum number of parts in a multipart upload.&lt;/p&gt;
&lt;p&gt;This option defines the maximum number of multipart chunks to use
when doing a multipart upload.&lt;/p&gt;
&lt;p&gt;This can be useful if a service does not support the AWS S3
specification of 10,000 chunks.&lt;/p&gt;
&lt;p&gt;Rclone will automatically increase the chunk size when uploading a
large file of a known size to stay below this limit on the number of chunks.&lt;/p&gt;
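The shape of that calculation can be sketched as follows; this is an illustrative model (the doubling growth is an assumption), not rclone's actual sizing code:

```python
import math

def choose_chunk_size(file_size, default_chunk=5 * 1024 * 1024, max_parts=10_000):
    """Grow the chunk size until a known-size file fits within max_parts.
    Illustrative only: rclone's real algorithm may grow the size differently."""
    chunk = default_chunk
    while math.ceil(file_size / chunk) > max_parts:
        chunk *= 2  # assumed doubling, for illustration
    return chunk

# A 1 TiB file needs ~210k parts at 5 MiB, so the chunk size must grow:
print(choose_chunk_size(1 << 40) // (1024 * 1024))  # 160 (MiB) under this sketch
```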
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      max_upload_parts&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_MAX_UPLOAD_PARTS&lt;/li&gt;
&lt;li&gt;Type:        int&lt;/li&gt;
&lt;li&gt;Default:     10000&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-copy-cutoff&#34;&gt;--s3-copy-cutoff&lt;/h4&gt;
&lt;p&gt;Cutoff for switching to multipart copy.&lt;/p&gt;
&lt;p&gt;Any files larger than this that need to be server-side copied will be
copied in chunks of this size.&lt;/p&gt;
&lt;p&gt;The minimum is 0 and the maximum is 5 GiB.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      copy_cutoff&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_COPY_CUTOFF&lt;/li&gt;
&lt;li&gt;Type:        SizeSuffix&lt;/li&gt;
&lt;li&gt;Default:     4.656Gi&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-disable-checksum&#34;&gt;--s3-disable-checksum&lt;/h4&gt;
&lt;p&gt;Don&#39;t store MD5 checksum with object metadata.&lt;/p&gt;
&lt;p&gt;Normally rclone will calculate the MD5 checksum of the input before
uploading it so it can add it to metadata on the object. This is great
for data integrity checking but can cause long delays for large files
to start uploading.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      disable_checksum&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_DISABLE_CHECKSUM&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-shared-credentials-file&#34;&gt;--s3-shared-credentials-file&lt;/h4&gt;
&lt;p&gt;Path to the shared credentials file.&lt;/p&gt;
&lt;p&gt;If env_auth = true then rclone can use a shared credentials file.&lt;/p&gt;
&lt;p&gt;If this variable is empty rclone will look for the
&amp;quot;AWS_SHARED_CREDENTIALS_FILE&amp;quot; env variable. If the env value is empty
it will default to the current user&#39;s home directory.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Linux/OSX: &amp;quot;$HOME/.aws/credentials&amp;quot;
Windows:   &amp;quot;%USERPROFILE%\.aws\credentials&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      shared_credentials_file&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_SHARED_CREDENTIALS_FILE&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-profile&#34;&gt;--s3-profile&lt;/h4&gt;
&lt;p&gt;Profile to use in the shared credentials file.&lt;/p&gt;
&lt;p&gt;If env_auth = true then rclone can use a shared credentials file. This
variable controls which profile is used in that file.&lt;/p&gt;
&lt;p&gt;If empty it will default to the environment variable &amp;quot;AWS_PROFILE&amp;quot; or
&amp;quot;default&amp;quot; if that environment variable is also not set.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      profile&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_PROFILE&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
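For reference, the shared credentials file is a plain INI file; the profile name and key values below are placeholders for illustration, not real credentials:

```ini
# ~/.aws/credentials (Linux/OSX) or %USERPROFILE%\.aws\credentials (Windows)
[default]
aws_access_key_id     = AKIAXXXXXXXXEXAMPLE
aws_secret_access_key = xXxXxXxXxXxXxXxXEXAMPLEKEY

# Selected with --s3-profile backup or AWS_PROFILE=backup
[backup]
aws_access_key_id     = AKIAXXXXXXXXEXAMPLE2
aws_secret_access_key = xXxXxXxXxXxXxXxXEXAMPLEKEY2
```

With env_auth = true, rclone reads this file and picks the profile named by --s3-profile (or AWS_PROFILE), falling back to "default".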
&lt;h4 id=&#34;s3-session-token&#34;&gt;--s3-session-token&lt;/h4&gt;
&lt;p&gt;An AWS session token.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      session_token&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_SESSION_TOKEN&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-upload-concurrency&#34;&gt;--s3-upload-concurrency&lt;/h4&gt;
&lt;p&gt;Concurrency for multipart uploads and copies.&lt;/p&gt;
&lt;p&gt;This is the number of chunks of the same file that are uploaded
concurrently for multipart uploads and copies.&lt;/p&gt;
&lt;p&gt;If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      upload_concurrency&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_UPLOAD_CONCURRENCY&lt;/li&gt;
&lt;li&gt;Type:        int&lt;/li&gt;
&lt;li&gt;Default:     4&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-force-path-style&#34;&gt;--s3-force-path-style&lt;/h4&gt;
&lt;p&gt;If true use path style access; if false use virtual hosted style.&lt;/p&gt;
&lt;p&gt;If this is true (the default) then rclone will use path style access,
if false then rclone will use virtual hosted style. See &lt;a href=&#34;https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro&#34;&gt;the AWS S3
docs&lt;/a&gt;
for more info.&lt;/p&gt;
&lt;p&gt;Some providers (e.g. AWS, Aliyun OSS, Netease COS, or Tencent COS) require this to be set to
false - rclone will do this automatically based on the provider
setting.&lt;/p&gt;
&lt;p&gt;Note that if your bucket isn&#39;t a valid DNS name, i.e. contains &#39;.&#39; or &#39;_&#39;,
you&#39;ll need to set this to true.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      force_path_style&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_FORCE_PATH_STYLE&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     true&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-v2-auth&#34;&gt;--s3-v2-auth&lt;/h4&gt;
&lt;p&gt;If true use v2 authentication.&lt;/p&gt;
&lt;p&gt;If this is false (the default) then rclone will use v4 authentication.
If it is set then rclone will use v2 authentication.&lt;/p&gt;
&lt;p&gt;Use this only if v4 signatures don&#39;t work, e.g. pre-Jewel/v10 Ceph.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      v2_auth&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_V2_AUTH&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-use-dual-stack&#34;&gt;--s3-use-dual-stack&lt;/h4&gt;
&lt;p&gt;If true use AWS S3 dual-stack endpoint (IPv6 support).&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html&#34;&gt;AWS Docs on Dualstack Endpoints&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      use_dual_stack&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_USE_DUAL_STACK&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-use-accelerate-endpoint&#34;&gt;--s3-use-accelerate-endpoint&lt;/h4&gt;
&lt;p&gt;If true use the AWS S3 accelerated endpoint.&lt;/p&gt;
&lt;p&gt;See: &lt;a href=&#34;https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html&#34;&gt;AWS S3 Transfer acceleration&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      use_accelerate_endpoint&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_USE_ACCELERATE_ENDPOINT&lt;/li&gt;
&lt;li&gt;Provider:    AWS&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-leave-parts-on-error&#34;&gt;--s3-leave-parts-on-error&lt;/h4&gt;
&lt;p&gt;If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.&lt;/p&gt;
&lt;p&gt;It should be set to true for resuming uploads across different sessions.&lt;/p&gt;
&lt;p&gt;WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      leave_parts_on_error&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_LEAVE_PARTS_ON_ERROR&lt;/li&gt;
&lt;li&gt;Provider:    AWS&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-list-chunk&#34;&gt;--s3-list-chunk&lt;/h4&gt;
&lt;p&gt;Size of listing chunk (response list for each ListObject S3 request).&lt;/p&gt;
&lt;p&gt;This option is also known as &amp;quot;MaxKeys&amp;quot;, &amp;quot;max-items&amp;quot;, or &amp;quot;page-size&amp;quot; in the AWS S3 specification.
Most services truncate the response list to 1000 objects even if more than that is requested.
In AWS S3 this is a global maximum and cannot be changed, see &lt;a href=&#34;https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html&#34;&gt;AWS S3&lt;/a&gt;.
In Ceph, this can be increased with the &amp;quot;rgw list buckets max chunk&amp;quot; option.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      list_chunk&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_LIST_CHUNK&lt;/li&gt;
&lt;li&gt;Type:        int&lt;/li&gt;
&lt;li&gt;Default:     1000&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-list-version&#34;&gt;--s3-list-version&lt;/h4&gt;
&lt;p&gt;Version of ListObjects to use: 1, 2, or 0 for auto.&lt;/p&gt;
&lt;p&gt;When S3 originally launched it only provided the ListObjects call to
enumerate objects in a bucket.&lt;/p&gt;
&lt;p&gt;However in May 2016 the ListObjectsV2 call was introduced. This is
much higher performance and should be used if at all possible.&lt;/p&gt;
&lt;p&gt;If set to the default, 0, rclone will guess which ListObjects method
to call according to the provider setting. If it guesses wrong, it may
be set manually here.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      list_version&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_LIST_VERSION&lt;/li&gt;
&lt;li&gt;Type:        int&lt;/li&gt;
&lt;li&gt;Default:     0&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-list-url-encode&#34;&gt;--s3-list-url-encode&lt;/h4&gt;
&lt;p&gt;Whether to URL encode listings: true/false/unset.&lt;/p&gt;
&lt;p&gt;Some providers support URL encoding listings; where this is
available it is more reliable when file names contain control
characters. If this is set to unset (the default) then rclone will choose
what to apply according to the provider setting, but you can override
rclone&#39;s choice here.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      list_url_encode&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_LIST_URL_ENCODE&lt;/li&gt;
&lt;li&gt;Type:        Tristate&lt;/li&gt;
&lt;li&gt;Default:     unset&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-no-check-bucket&#34;&gt;--s3-no-check-bucket&lt;/h4&gt;
&lt;p&gt;If set, don&#39;t attempt to check the bucket exists or create it.&lt;/p&gt;
&lt;p&gt;This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.&lt;/p&gt;
&lt;p&gt;It can also be needed if the user you are using does not have bucket
creation permissions. Before v1.52.0 this would have passed silently
due to a bug.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      no_check_bucket&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_NO_CHECK_BUCKET&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-no-head&#34;&gt;--s3-no-head&lt;/h4&gt;
&lt;p&gt;If set, don&#39;t HEAD uploaded objects to check integrity.&lt;/p&gt;
&lt;p&gt;This can be useful when trying to minimise the number of transactions
rclone does.&lt;/p&gt;
&lt;p&gt;Setting it means that if rclone receives a 200 OK message after
uploading an object with PUT then it will assume that it got uploaded
properly.&lt;/p&gt;
&lt;p&gt;In particular it will assume:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the metadata, including modtime, storage class and content type was as uploaded&lt;/li&gt;
&lt;li&gt;the size was as uploaded&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It reads the following items from the response for a single part PUT:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the MD5SUM&lt;/li&gt;
&lt;li&gt;the uploaded date&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For multipart uploads these items aren&#39;t read.&lt;/p&gt;
&lt;p&gt;If a source object of unknown length is uploaded then rclone &lt;strong&gt;will&lt;/strong&gt; do a
HEAD request.&lt;/p&gt;
&lt;p&gt;Setting this flag increases the chance for undetected upload failures,
in particular an incorrect size, so it isn&#39;t recommended for normal
operation. In practice the chance of an undetected upload failure is
very small even with this flag.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      no_head&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_NO_HEAD&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-no-head-object&#34;&gt;--s3-no-head-object&lt;/h4&gt;
&lt;p&gt;If set, do not do HEAD before GET when getting objects.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      no_head_object&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_NO_HEAD_OBJECT&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-encoding&#34;&gt;--s3-encoding&lt;/h4&gt;
&lt;p&gt;The encoding for the backend.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/overview/#encoding&#34;&gt;encoding section in the overview&lt;/a&gt; for more info.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      encoding&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_ENCODING&lt;/li&gt;
&lt;li&gt;Type:        Encoding&lt;/li&gt;
&lt;li&gt;Default:     Slash,InvalidUtf8,Dot&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-memory-pool-flush-time&#34;&gt;--s3-memory-pool-flush-time&lt;/h4&gt;
&lt;p&gt;How often internal memory buffer pools will be flushed. (no longer used)&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      memory_pool_flush_time&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_MEMORY_POOL_FLUSH_TIME&lt;/li&gt;
&lt;li&gt;Type:        Duration&lt;/li&gt;
&lt;li&gt;Default:     1m0s&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-memory-pool-use-mmap&#34;&gt;--s3-memory-pool-use-mmap&lt;/h4&gt;
&lt;p&gt;Whether to use mmap buffers in internal memory pool. (no longer used)&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      memory_pool_use_mmap&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_MEMORY_POOL_USE_MMAP&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-disable-http2&#34;&gt;--s3-disable-http2&lt;/h4&gt;
&lt;p&gt;Disable usage of http2 for S3 backends.&lt;/p&gt;
&lt;p&gt;There is currently an unsolved issue with the s3 (specifically minio) backend
and HTTP/2.  HTTP/2 is enabled by default for the s3 backend but can be
disabled here.  When the issue is solved this flag will be removed.&lt;/p&gt;
&lt;p&gt;See: &lt;a href=&#34;https://github.com/rclone/rclone/issues/4673&#34;&gt;https://github.com/rclone/rclone/issues/4673&lt;/a&gt;, &lt;a href=&#34;https://github.com/rclone/rclone/issues/3631&#34;&gt;https://github.com/rclone/rclone/issues/3631&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      disable_http2&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_DISABLE_HTTP2&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-download-url&#34;&gt;--s3-download-url&lt;/h4&gt;
&lt;p&gt;Custom endpoint for downloads.
This is usually set to a CloudFront CDN URL as AWS S3 offers
cheaper egress for data downloaded through the CloudFront network.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      download_url&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_DOWNLOAD_URL&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-directory-markers&#34;&gt;--s3-directory-markers&lt;/h4&gt;
&lt;p&gt;Upload an empty object with a trailing slash when a new directory is created.&lt;/p&gt;
&lt;p&gt;Empty folders are unsupported for bucket-based remotes; this option creates an empty
object ending with &amp;quot;/&amp;quot; to persist the folder.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      directory_markers&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_DIRECTORY_MARKERS&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-use-multipart-etag&#34;&gt;--s3-use-multipart-etag&lt;/h4&gt;
&lt;p&gt;Whether to use ETag in multipart uploads for verification.&lt;/p&gt;
&lt;p&gt;This should be true, false or left unset to use the default for the provider.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      use_multipart_etag&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_USE_MULTIPART_ETAG&lt;/li&gt;
&lt;li&gt;Type:        Tristate&lt;/li&gt;
&lt;li&gt;Default:     unset&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-use-unsigned-payload&#34;&gt;--s3-use-unsigned-payload&lt;/h4&gt;
&lt;p&gt;Whether to use an unsigned payload in PutObject.&lt;/p&gt;
&lt;p&gt;Rclone has to avoid the AWS SDK seeking the body when calling
PutObject. The AWS provider can add checksums in the trailer to avoid
seeking but other providers can&#39;t.&lt;/p&gt;
&lt;p&gt;This should be true, false or left unset to use the default for the provider.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      use_unsigned_payload&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_USE_UNSIGNED_PAYLOAD&lt;/li&gt;
&lt;li&gt;Type:        Tristate&lt;/li&gt;
&lt;li&gt;Default:     unset&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-use-presigned-request&#34;&gt;--s3-use-presigned-request&lt;/h4&gt;
&lt;p&gt;Whether to use a presigned request or PutObject for single part uploads.&lt;/p&gt;
&lt;p&gt;If this is false rclone will use PutObject from the AWS SDK to upload
an object.&lt;/p&gt;
&lt;p&gt;Versions of rclone &amp;lt; 1.59 use presigned requests to upload a single
part object and setting this flag to true will re-enable that
functionality. This shouldn&#39;t be necessary except in exceptional
circumstances or for testing.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      use_presigned_request&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_USE_PRESIGNED_REQUEST&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-versions&#34;&gt;--s3-versions&lt;/h4&gt;
&lt;p&gt;Include old versions in directory listings.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      versions&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_VERSIONS&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-version-at&#34;&gt;--s3-version-at&lt;/h4&gt;
&lt;p&gt;Show file versions as they were at the specified time.&lt;/p&gt;
&lt;p&gt;The parameter should be a date, &amp;quot;2006-01-02&amp;quot;, datetime &amp;quot;2006-01-02
15:04:05&amp;quot; or a duration for that long ago, eg &amp;quot;100d&amp;quot; or &amp;quot;1h&amp;quot;.&lt;/p&gt;
&lt;p&gt;Note that when using this no file write operations are permitted,
so you can&#39;t upload files or delete them.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;https://rclone.org/docs/#time-option&#34;&gt;the time option docs&lt;/a&gt; for valid formats.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      version_at&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_VERSION_AT&lt;/li&gt;
&lt;li&gt;Type:        Time&lt;/li&gt;
&lt;li&gt;Default:     off&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-version-deleted&#34;&gt;--s3-version-deleted&lt;/h4&gt;
&lt;p&gt;Show deleted file markers when using versions.&lt;/p&gt;
&lt;p&gt;This shows deleted file markers in the listing when using versions. These will appear
as 0 size files. The only operation which can be performed on them is deletion.&lt;/p&gt;
&lt;p&gt;Deleting a delete marker will reveal the previous version.&lt;/p&gt;
&lt;p&gt;Deleted files will always show with a timestamp.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      version_deleted&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_VERSION_DELETED&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-decompress&#34;&gt;--s3-decompress&lt;/h4&gt;
&lt;p&gt;If set, this will decompress gzip-encoded objects.&lt;/p&gt;
&lt;p&gt;It is possible to upload objects to S3 with &amp;quot;Content-Encoding: gzip&amp;quot;
set. Normally rclone will download these files as compressed objects.&lt;/p&gt;
&lt;p&gt;If this flag is set then rclone will decompress these files with
&amp;quot;Content-Encoding: gzip&amp;quot; as they are received. This means that rclone
can&#39;t check the size and hash but the file contents will be decompressed.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      decompress&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_DECOMPRESS&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-might-gzip&#34;&gt;--s3-might-gzip&lt;/h4&gt;
&lt;p&gt;Set this if the backend might gzip objects.&lt;/p&gt;
&lt;p&gt;Normally providers will not alter objects when they are downloaded. If
an object was not uploaded with &lt;code&gt;Content-Encoding: gzip&lt;/code&gt; then it won&#39;t
be set on download.&lt;/p&gt;
&lt;p&gt;However some providers may gzip objects even if they weren&#39;t uploaded
with &lt;code&gt;Content-Encoding: gzip&lt;/code&gt; (eg Cloudflare).&lt;/p&gt;
&lt;p&gt;A symptom of this would be receiving errors like&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ERROR corrupted on transfer: sizes differ NNN vs MMM
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you set this flag and rclone downloads an object with
Content-Encoding: gzip set and chunked transfer encoding, then rclone
will decompress the object on the fly.&lt;/p&gt;
&lt;p&gt;If this is set to unset (the default) then rclone will choose
what to apply according to the provider setting, but you can override
rclone&#39;s choice here.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      might_gzip&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_MIGHT_GZIP&lt;/li&gt;
&lt;li&gt;Type:        Tristate&lt;/li&gt;
&lt;li&gt;Default:     unset&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-use-accept-encoding-gzip&#34;&gt;--s3-use-accept-encoding-gzip&lt;/h4&gt;
&lt;p&gt;Whether to send &lt;code&gt;Accept-Encoding: gzip&lt;/code&gt; header.&lt;/p&gt;
&lt;p&gt;By default, rclone will append &lt;code&gt;Accept-Encoding: gzip&lt;/code&gt; to the request to download
compressed objects whenever possible.&lt;/p&gt;
&lt;p&gt;However some providers such as Google Cloud Storage may alter the HTTP headers, breaking
the signature of the request.&lt;/p&gt;
&lt;p&gt;A symptom of this would be receiving errors like&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this case, you might want to try disabling this option.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      use_accept_encoding_gzip&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_USE_ACCEPT_ENCODING_GZIP&lt;/li&gt;
&lt;li&gt;Type:        Tristate&lt;/li&gt;
&lt;li&gt;Default:     unset&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-no-system-metadata&#34;&gt;--s3-no-system-metadata&lt;/h4&gt;
&lt;p&gt;Suppress setting and reading of system metadata.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      no_system_metadata&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_NO_SYSTEM_METADATA&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-sts-endpoint&#34;&gt;--s3-sts-endpoint&lt;/h4&gt;
&lt;p&gt;Endpoint for STS (deprecated).&lt;/p&gt;
&lt;p&gt;Leave blank if using AWS to use the default endpoint for the region.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      sts_endpoint&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_STS_ENDPOINT&lt;/li&gt;
&lt;li&gt;Provider:    AWS&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-use-already-exists&#34;&gt;--s3-use-already-exists&lt;/h4&gt;
&lt;p&gt;Set if rclone should report BucketAlreadyExists errors on bucket creation.&lt;/p&gt;
&lt;p&gt;At some point during the evolution of the s3 protocol, AWS started
returning an &lt;code&gt;AlreadyOwnedByYou&lt;/code&gt; error when attempting to create a
bucket that the user already owned, rather than a
&lt;code&gt;BucketAlreadyExists&lt;/code&gt; error.&lt;/p&gt;
&lt;p&gt;Unfortunately exactly what s3 clones have implemented is a
little inconsistent: some return &lt;code&gt;AlreadyOwnedByYou&lt;/code&gt;, some return
&lt;code&gt;BucketAlreadyExists&lt;/code&gt;, and some return no error at all.&lt;/p&gt;
&lt;p&gt;This is important to rclone because it ensures the bucket exists by
creating it on quite a lot of operations (unless
&lt;code&gt;--s3-no-check-bucket&lt;/code&gt; is used).&lt;/p&gt;
&lt;p&gt;If rclone knows the provider can return &lt;code&gt;AlreadyOwnedByYou&lt;/code&gt; or returns
no error then it can report &lt;code&gt;BucketAlreadyExists&lt;/code&gt; errors when the user
attempts to create a bucket not owned by them. Otherwise rclone
ignores the &lt;code&gt;BucketAlreadyExists&lt;/code&gt; error which can lead to confusion.&lt;/p&gt;
&lt;p&gt;This should be automatically set correctly for all providers rclone
knows about - please make a bug report if not.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      use_already_exists&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_USE_ALREADY_EXISTS&lt;/li&gt;
&lt;li&gt;Type:        Tristate&lt;/li&gt;
&lt;li&gt;Default:     unset&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-use-multipart-uploads&#34;&gt;--s3-use-multipart-uploads&lt;/h4&gt;
&lt;p&gt;Set if rclone should use multipart uploads.&lt;/p&gt;
&lt;p&gt;You can change this if you want to disable the use of multipart uploads.
This shouldn&#39;t be necessary in normal operation.&lt;/p&gt;
&lt;p&gt;This should be automatically set correctly for all providers rclone
knows about - please make a bug report if not.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      use_multipart_uploads&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_USE_MULTIPART_UPLOADS&lt;/li&gt;
&lt;li&gt;Type:        Tristate&lt;/li&gt;
&lt;li&gt;Default:     unset&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-sdk-log-mode&#34;&gt;--s3-sdk-log-mode&lt;/h4&gt;
&lt;p&gt;Set this to debug the SDK.&lt;/p&gt;
&lt;p&gt;This can be set to a comma separated list of the following functions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Signing&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Retries&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Request&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;RequestWithBody&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Response&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ResponseWithBody&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;DeprecatedUsage&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;RequestEventMessage&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ResponseEventMessage&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Use &lt;code&gt;Off&lt;/code&gt; to disable and &lt;code&gt;All&lt;/code&gt; to set all log levels. You will need to
use &lt;code&gt;-vv&lt;/code&gt; to see the debug level logs.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      sdk_log_mode&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_SDK_LOG_MODE&lt;/li&gt;
&lt;li&gt;Type:        Bits&lt;/li&gt;
&lt;li&gt;Default:     Off&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;s3-description&#34;&gt;--s3-description&lt;/h4&gt;
&lt;p&gt;Description of the remote.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      description&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_S3_DESCRIPTION&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;metadata&#34;&gt;Metadata&lt;/h3&gt;
&lt;p&gt;User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.&lt;/p&gt;
&lt;p&gt;Here are the possible system metadata items for the s3 backend.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Help&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;th&gt;Read Only&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;btime&lt;/td&gt;
&lt;td&gt;Time of file birth (creation) read from Last-Modified header&lt;/td&gt;
&lt;td&gt;RFC 3339&lt;/td&gt;
&lt;td&gt;2006-01-02T15:04:05.999999999Z07:00&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Y&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cache-control&lt;/td&gt;
&lt;td&gt;Cache-Control header&lt;/td&gt;
&lt;td&gt;string&lt;/td&gt;
&lt;td&gt;no-cache&lt;/td&gt;
&lt;td&gt;N&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;content-disposition&lt;/td&gt;
&lt;td&gt;Content-Disposition header&lt;/td&gt;
&lt;td&gt;string&lt;/td&gt;
&lt;td&gt;inline&lt;/td&gt;
&lt;td&gt;N&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;content-encoding&lt;/td&gt;
&lt;td&gt;Content-Encoding header&lt;/td&gt;
&lt;td&gt;string&lt;/td&gt;
&lt;td&gt;gzip&lt;/td&gt;
&lt;td&gt;N&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;content-language&lt;/td&gt;
&lt;td&gt;Content-Language header&lt;/td&gt;
&lt;td&gt;string&lt;/td&gt;
&lt;td&gt;en-US&lt;/td&gt;
&lt;td&gt;N&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;content-type&lt;/td&gt;
&lt;td&gt;Content-Type header&lt;/td&gt;
&lt;td&gt;string&lt;/td&gt;
&lt;td&gt;text/plain&lt;/td&gt;
&lt;td&gt;N&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mtime&lt;/td&gt;
&lt;td&gt;Time of last modification, read from rclone metadata&lt;/td&gt;
&lt;td&gt;RFC 3339&lt;/td&gt;
&lt;td&gt;2006-01-02T15:04:05.999999999Z07:00&lt;/td&gt;
&lt;td&gt;N&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;tier&lt;/td&gt;
&lt;td&gt;Tier of the object&lt;/td&gt;
&lt;td&gt;string&lt;/td&gt;
&lt;td&gt;GLACIER&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Y&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/docs/#metadata&#34;&gt;metadata&lt;/a&gt; docs for more info.&lt;/p&gt;
&lt;h2 id=&#34;backend-commands&#34;&gt;Backend commands&lt;/h2&gt;
&lt;p&gt;Here are the commands specific to the s3 backend.&lt;/p&gt;
&lt;p&gt;Run them with&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend COMMAND remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The help below will explain what arguments each command takes.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/commands/rclone_backend/&#34;&gt;backend&lt;/a&gt; command for more
info on how to pass options and arguments.&lt;/p&gt;
&lt;p&gt;These can be run on a running backend using the rc command
&lt;a href=&#34;https://rclone.org/rc/#backend-command&#34;&gt;backend/command&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;restore&#34;&gt;restore&lt;/h3&gt;
&lt;p&gt;Restore objects from GLACIER or INTELLIGENT-TIERING archive tier&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend restore remote: [options] [&amp;lt;arguments&amp;gt;+]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command can be used to restore one or more objects from GLACIER to normal storage
or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier.&lt;/p&gt;
&lt;p&gt;Usage Examples:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command also obeys the filters. Test first with the --interactive/-i or --dry-run flags.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone --interactive backend restore --include &amp;quot;*.txt&amp;quot; s3:bucket/path -o priority=Standard -o lifetime=1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;All the objects shown will be marked for restore; then run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend restore --include &amp;quot;*.txt&amp;quot; s3:bucket/path -o priority=Standard -o lifetime=1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It returns a list of status dictionaries with Remote and Status
keys. The Status will be OK if it was successful or an error message
if not.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[
    {
        &amp;quot;Status&amp;quot;: &amp;quot;OK&amp;quot;,
        &amp;quot;Remote&amp;quot;: &amp;quot;test.txt&amp;quot;
    },
    {
        &amp;quot;Status&amp;quot;: &amp;quot;OK&amp;quot;,
        &amp;quot;Remote&amp;quot;: &amp;quot;test/file4.txt&amp;quot;
    }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;quot;description&amp;quot;: The optional description for the job.&lt;/li&gt;
&lt;li&gt;&amp;quot;lifetime&amp;quot;: Lifetime of the active copy in days, ignored for INTELLIGENT-TIERING storage&lt;/li&gt;
&lt;li&gt;&amp;quot;priority&amp;quot;: Priority of restore: Standard|Expedited|Bulk&lt;/li&gt;
&lt;/ul&gt;
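The returned status list is plain JSON, so the result is easy to check programmatically. A minimal Python sketch using the sample output shown above (this is an illustration of consuming the output, not part of rclone itself):

```python
import json

# Output of `rclone backend restore` as shown above.
restore_result = json.loads("""
[
    {"Status": "OK", "Remote": "test.txt"},
    {"Status": "OK", "Remote": "test/file4.txt"}
]
""")

# Any entry whose Status is not "OK" carries an error message instead.
failed = [r["Remote"] for r in restore_result if r["Status"] != "OK"]
print(f"{len(restore_result) - len(failed)} restored, {len(failed)} failed")
# prints: 2 restored, 0 failed
```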
&lt;h3 id=&#34;restore-status&#34;&gt;restore-status&lt;/h3&gt;
&lt;p&gt;Show the restore status for objects being restored from GLACIER or INTELLIGENT-TIERING storage&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend restore-status remote: [options] [&amp;lt;arguments&amp;gt;+]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command can be used to show the status for objects being restored from GLACIER to normal storage
or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier.&lt;/p&gt;
&lt;p&gt;Usage Examples:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend restore-status s3:bucket/path/to/object
rclone backend restore-status s3:bucket/path/to/directory
rclone backend restore-status -o all s3:bucket/path/to/directory
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command does not obey the filters.&lt;/p&gt;
&lt;p&gt;It returns a list of status dictionaries.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[
    {
        &amp;quot;Remote&amp;quot;: &amp;quot;file.txt&amp;quot;,
        &amp;quot;VersionID&amp;quot;: null,
        &amp;quot;RestoreStatus&amp;quot;: {
            &amp;quot;IsRestoreInProgress&amp;quot;: true,
            &amp;quot;RestoreExpiryDate&amp;quot;: &amp;quot;2023-09-06T12:29:19+01:00&amp;quot;
        },
        &amp;quot;StorageClass&amp;quot;: &amp;quot;GLACIER&amp;quot;
    },
    {
        &amp;quot;Remote&amp;quot;: &amp;quot;test.pdf&amp;quot;,
        &amp;quot;VersionID&amp;quot;: null,
        &amp;quot;RestoreStatus&amp;quot;: {
            &amp;quot;IsRestoreInProgress&amp;quot;: false,
            &amp;quot;RestoreExpiryDate&amp;quot;: &amp;quot;2023-09-06T12:29:19+01:00&amp;quot;
        },
        &amp;quot;StorageClass&amp;quot;: &amp;quot;DEEP_ARCHIVE&amp;quot;
    },
    {
        &amp;quot;Remote&amp;quot;: &amp;quot;test.gz&amp;quot;,
        &amp;quot;VersionID&amp;quot;: null,
        &amp;quot;RestoreStatus&amp;quot;: {
            &amp;quot;IsRestoreInProgress&amp;quot;: true,
            &amp;quot;RestoreExpiryDate&amp;quot;: &amp;quot;null&amp;quot;
        },
        &amp;quot;StorageClass&amp;quot;: &amp;quot;INTELLIGENT_TIERING&amp;quot;
    }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;quot;all&amp;quot;: if set then show all objects, not just ones with restore status&lt;/li&gt;
&lt;/ul&gt;
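Since a restore is complete once `IsRestoreInProgress` is false, the status list can be filtered to find objects that are ready to read. A small Python sketch over two of the sample entries above (illustrative only; the data is the example output, not live rclone output):

```python
import json

# Two entries from the `rclone backend restore-status` output shown above.
status = json.loads("""
[
    {"Remote": "file.txt",
     "RestoreStatus": {"IsRestoreInProgress": true,
                       "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"},
     "StorageClass": "GLACIER"},
    {"Remote": "test.pdf",
     "RestoreStatus": {"IsRestoreInProgress": false,
                       "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"},
     "StorageClass": "DEEP_ARCHIVE"}
]
""")

# An object is ready once its restore is no longer in progress.
ready = [o["Remote"] for o in status
         if o["RestoreStatus"] and not o["RestoreStatus"]["IsRestoreInProgress"]]
print(ready)  # ['test.pdf']
```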
&lt;h3 id=&#34;list-multipart-uploads&#34;&gt;list-multipart-uploads&lt;/h3&gt;
&lt;p&gt;List the unfinished multipart uploads&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend list-multipart-uploads remote: [options] [&amp;lt;arguments&amp;gt;+]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command lists the unfinished multipart uploads in JSON format.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend list-multipart-uploads s3:bucket/path/to/object
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It returns a dictionary of buckets with values as lists of unfinished
multipart uploads.&lt;/p&gt;
&lt;p&gt;You can call it with no bucket (in which case it lists all buckets),
with a bucket, or with a bucket and path.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &amp;quot;rclone&amp;quot;: [
    {
      &amp;quot;Initiated&amp;quot;: &amp;quot;2020-06-26T14:20:36Z&amp;quot;,
      &amp;quot;Initiator&amp;quot;: {
        &amp;quot;DisplayName&amp;quot;: &amp;quot;XXX&amp;quot;,
        &amp;quot;ID&amp;quot;: &amp;quot;arn:aws:iam::XXX:user/XXX&amp;quot;
      },
      &amp;quot;Key&amp;quot;: &amp;quot;KEY&amp;quot;,
      &amp;quot;Owner&amp;quot;: {
        &amp;quot;DisplayName&amp;quot;: null,
        &amp;quot;ID&amp;quot;: &amp;quot;XXX&amp;quot;
      },
      &amp;quot;StorageClass&amp;quot;: &amp;quot;STANDARD&amp;quot;,
      &amp;quot;UploadId&amp;quot;: &amp;quot;XXX&amp;quot;
    }
  ],
  &amp;quot;rclone-1000files&amp;quot;: [],
  &amp;quot;rclone-dst&amp;quot;: []
}
&lt;/code&gt;&lt;/pre&gt;
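Because the result is a dictionary keyed by bucket, empty buckets can be skipped when looking for stale uploads. A Python sketch over a trimmed version of the sample output above (illustrative; field values are the placeholders from the example):

```python
import json

# Trimmed output of `rclone backend list-multipart-uploads` as shown above.
uploads = json.loads("""
{
  "rclone": [
    {"Initiated": "2020-06-26T14:20:36Z", "Key": "KEY", "UploadId": "XXX"}
  ],
  "rclone-1000files": [],
  "rclone-dst": []
}
""")

# Report only buckets that actually have unfinished uploads.
for bucket, pending in uploads.items():
    if pending:
        print(f"{bucket}: {len(pending)} unfinished upload(s)")
# prints: rclone: 1 unfinished upload(s)
```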
&lt;h3 id=&#34;cleanup-1&#34;&gt;cleanup&lt;/h3&gt;
&lt;p&gt;Remove unfinished multipart uploads.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend cleanup remote: [options] [&amp;lt;arguments&amp;gt;+]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command removes unfinished multipart uploads older than
max-age, which defaults to 24 hours.&lt;/p&gt;
&lt;p&gt;Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Durations are parsed as in the rest of rclone, e.g. 2h, 7d, 7w.&lt;/p&gt;
&lt;p&gt;Options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;quot;max-age&amp;quot;: Max age of upload to delete&lt;/li&gt;
&lt;/ul&gt;
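To make the suffixes above concrete, here is a simplified Python illustration of the single-suffix durations such as 2h, 7d, 7w. This is not rclone's actual parser, which also accepts combined forms like 1h30m and fractional values:

```python
# Seconds per duration suffix (2h, 7d, 7w, ...).
UNIT_SECONDS = {"ms": 0.001, "s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}

def parse_duration(text: str) -> float:
    """Convert a single-suffix duration like '7w' to seconds."""
    # Check "ms" before "s" and "m" so it is not misread as either.
    for suffix in ("ms", "s", "m", "h", "d", "w"):
        if text.endswith(suffix):
            return float(text[: -len(suffix)]) * UNIT_SECONDS[suffix]
    raise ValueError(f"unrecognized duration: {text!r}")

print(parse_duration("2h"))  # 7200.0
print(parse_duration("7w"))  # 4233600.0
```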
&lt;h3 id=&#34;cleanup-hidden&#34;&gt;cleanup-hidden&lt;/h3&gt;
&lt;p&gt;Remove old versions of files.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend cleanup-hidden remote: [options] [&amp;lt;arguments&amp;gt;+]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command removes any old hidden versions of files
on a versions-enabled bucket.&lt;/p&gt;
&lt;p&gt;Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend cleanup-hidden s3:bucket/path/to/dir
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;versioning&#34;&gt;versioning&lt;/h3&gt;
&lt;p&gt;Set/get versioning support for a bucket.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend versioning remote: [options] [&amp;lt;arguments&amp;gt;+]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command sets versioning support if a parameter is
passed and then returns the current versioning status for the bucket
supplied.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend versioning s3:bucket # read status only
rclone backend versioning s3:bucket Enabled
rclone backend versioning s3:bucket Suspended
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It may return &amp;quot;Enabled&amp;quot;, &amp;quot;Suspended&amp;quot; or &amp;quot;Unversioned&amp;quot;. Note that once versioning
has been enabled the status can&#39;t be set back to &amp;quot;Unversioned&amp;quot;.&lt;/p&gt;
&lt;h3 id=&#34;set&#34;&gt;set&lt;/h3&gt;
&lt;p&gt;Set command for updating the config parameters.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend set remote: [options] [&amp;lt;arguments&amp;gt;+]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This set command can be used to update the config parameters
for a running s3 backend.&lt;/p&gt;
&lt;p&gt;Usage Examples:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The option keys are named as they are in the config file.&lt;/p&gt;
&lt;p&gt;This rebuilds the connection to the s3 backend when it is called with
the new parameters. Only new parameters need be passed as the values
will default to those currently in use.&lt;/p&gt;
&lt;p&gt;It doesn&#39;t return anything.&lt;/p&gt;

&lt;h3 id=&#34;anonymous-access&#34;&gt;Anonymous access to public buckets&lt;/h3&gt;
&lt;p&gt;If you want to use rclone to access a public bucket, configure with a
blank &lt;code&gt;access_key_id&lt;/code&gt; and &lt;code&gt;secret_access_key&lt;/code&gt;.  Your config should end
up looking like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[anons3]
type = s3
provider = AWS
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Then use it as normal with the name of the public bucket, e.g.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone lsd anons3:1000genomes
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You will be able to list and copy data but not upload it.&lt;/p&gt;
&lt;p&gt;You can also do this entirely on the command line&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone lsd :s3,provider=AWS:1000genomes
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&#34;providers&#34;&gt;Providers&lt;/h2&gt;
&lt;h3 id=&#34;aws-s3&#34;&gt;AWS S3&lt;/h3&gt;
&lt;p&gt;This is the provider used as main example and described in the &lt;a href=&#34;#configuration&#34;&gt;configuration&lt;/a&gt; section above.&lt;/p&gt;
&lt;h3 id=&#34;aws-snowball-edge&#34;&gt;AWS Snowball Edge&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://aws.amazon.com/snowball/&#34;&gt;AWS Snowball&lt;/a&gt; is a hardware
appliance used for transferring bulk data back to AWS. Its main
software interface is S3 object storage.&lt;/p&gt;
&lt;p&gt;To use rclone with AWS Snowball Edge devices, configure as standard
for an &#39;S3 Compatible Service&#39;.&lt;/p&gt;
&lt;p&gt;If using rclone pre v1.59 be sure to set &lt;code&gt;upload_cutoff = 0&lt;/code&gt; otherwise
you will run into authentication header issues as the snowball device
does not support query parameter based authentication.&lt;/p&gt;
&lt;p&gt;With rclone v1.59 or later setting &lt;code&gt;upload_cutoff&lt;/code&gt; should not be necessary.&lt;/p&gt;
&lt;p&gt;E.g.:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[snowball]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = http://[IP of Snowball]:8080
upload_cutoff = 0
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;ceph&#34;&gt;Ceph&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://ceph.com/&#34;&gt;Ceph&lt;/a&gt; is an open-source, unified, distributed
storage system designed for excellent performance, reliability and
scalability.  It has an S3 compatible object storage interface.&lt;/p&gt;
&lt;p&gt;To use rclone with Ceph, configure as above but leave the region blank
and set the endpoint.  You should end up with something like this in
your config:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[ceph]
type = s3
provider = Ceph
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region =
endpoint = https://ceph.endpoint.example.com
location_constraint =
acl =
server_side_encryption =
storage_class =
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you are using an older version of Ceph (e.g. 10.2.x Jewel) and a
version of rclone before v1.59 then you may need to supply the
parameter &lt;code&gt;--s3-upload-cutoff 0&lt;/code&gt; or put &lt;code&gt;upload_cutoff = 0&lt;/code&gt;
in the config file to work around a bug which causes uploads of small
files to fail.&lt;/p&gt;
&lt;p&gt;Note also that Ceph sometimes puts &lt;code&gt;/&lt;/code&gt; in the passwords it gives
users.  If you read the secret access key using the command line tools
you will get a JSON blob with the &lt;code&gt;/&lt;/code&gt; escaped as &lt;code&gt;\/&lt;/code&gt;.  Make sure you
only write &lt;code&gt;/&lt;/code&gt; in the secret access key.&lt;/p&gt;
&lt;p&gt;E.g. the dump from Ceph looks something like this (irrelevant keys
removed):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;{
    &amp;#34;user_id&amp;#34;: &amp;#34;xxx&amp;#34;,
    &amp;#34;display_name&amp;#34;: &amp;#34;xxxx&amp;#34;,
    &amp;#34;keys&amp;#34;: [
        {
            &amp;#34;user&amp;#34;: &amp;#34;xxx&amp;#34;,
            &amp;#34;access_key&amp;#34;: &amp;#34;xxxxxx&amp;#34;,
            &amp;#34;secret_key&amp;#34;: &amp;#34;xxxxxx\/xxxx&amp;#34;
        }
    ],
}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Because this is a JSON dump, it encodes the &lt;code&gt;/&lt;/code&gt; as &lt;code&gt;\/&lt;/code&gt;, so if you
use the secret key as &lt;code&gt;xxxxxx/xxxx&lt;/code&gt; it will work fine.&lt;/p&gt;
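The unescaping happens automatically in any JSON parser. A quick Python check using the placeholder values from the dump above:

```python
import json

# The escaped secret from the Ceph dump above: a JSON parser turns
# the "\/" back into a plain "/" when decoding.
dump = '{"access_key": "xxxxxx", "secret_key": "xxxxxx\\/xxxx"}'
key = json.loads(dump)["secret_key"]
print(key)  # xxxxxx/xxxx
```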
&lt;h3 id=&#34;cloudflare-r2&#34;&gt;Cloudflare R2&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.cloudflare.com/r2-open-beta/&#34;&gt;Cloudflare R2&lt;/a&gt; Storage
allows developers to store large amounts of unstructured data without
the costly egress bandwidth fees associated with typical cloud storage
services.&lt;/p&gt;
&lt;p&gt;Here is an example of making a Cloudflare R2 configuration. First run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process.&lt;/p&gt;
&lt;p&gt;Note that all buckets are private, and all are stored in the same
&amp;quot;auto&amp;quot; region. It is necessary to use Cloudflare workers to share the
content of a bucket publicly.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; r2
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
...
XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Magalu, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi
   \ (s3)
...
Storage&amp;gt; s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
...
XX / Cloudflare R2 Storage
   \ (Cloudflare)
...
provider&amp;gt; Cloudflare
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth&amp;gt; 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id&amp;gt; ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key&amp;gt; SECRET_ACCESS_KEY
Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / R2 buckets are automatically distributed across Cloudflare&amp;#39;s data centers for low latency.
   \ (auto)
region&amp;gt; 1
Option endpoint.
Endpoint for S3 API.
Required when using an S3 clone.
Enter a value. Press Enter to leave empty.
endpoint&amp;gt; https://ACCOUNT_ID.r2.cloudflarestorage.com
Edit advanced config?
y) Yes
n) No (default)
y/n&amp;gt; n
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This will leave your config looking something like:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[r2]
type = s3
provider = Cloudflare
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
region = auto
endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
acl = private
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Now run &lt;code&gt;rclone lsf r2:&lt;/code&gt; to see your buckets and &lt;code&gt;rclone lsf r2:bucket&lt;/code&gt; to look within a bucket.&lt;/p&gt;
&lt;p&gt;For R2 tokens with the &amp;quot;Object Read &amp;amp; Write&amp;quot; permission, you may also
need to add &lt;code&gt;no_check_bucket = true&lt;/code&gt; for object uploads to work
correctly.&lt;/p&gt;
&lt;p&gt;Note that Cloudflare decompresses files uploaded with
&lt;code&gt;Content-Encoding: gzip&lt;/code&gt; by default which is a deviation from what AWS
does. If this is causing a problem then upload the files with
&lt;code&gt;--header-upload &amp;quot;Cache-Control: no-transform&amp;quot;&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id=&#34;dreamhost&#34;&gt;Dreamhost&lt;/h3&gt;
&lt;p&gt;Dreamhost &lt;a href=&#34;https://www.dreamhost.com/cloud/storage/&#34;&gt;DreamObjects&lt;/a&gt; is
an object storage system based on CEPH.&lt;/p&gt;
&lt;p&gt;To use rclone with Dreamhost, configure as above but leave the region blank
and set the endpoint.  You should end up with something like this in
your config:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[dreamobjects]
type = s3
provider = DreamHost
env_auth = false
access_key_id = your_access_key
secret_access_key = your_secret_key
region =
endpoint = objects-us-west-1.dream.io
location_constraint =
acl = private
server_side_encryption =
storage_class =
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;google-cloud-storage&#34;&gt;Google Cloud Storage&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://cloud.google.com/storage/docs&#34;&gt;Google Cloud Storage&lt;/a&gt; is an &lt;a href=&#34;https://cloud.google.com/storage/docs/interoperability&#34;&gt;S3-interoperable&lt;/a&gt; object storage service from Google Cloud Platform.&lt;/p&gt;
&lt;p&gt;To connect to Google Cloud Storage you will need an access key and secret key. These can be retrieved by creating an &lt;a href=&#34;https://cloud.google.com/storage/docs/authentication/managing-hmackeys&#34;&gt;HMAC key&lt;/a&gt;.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[gs]
type = s3
provider = GCS
access_key_id = your_access_key
secret_access_key = your_secret_key
endpoint = https://storage.googleapis.com
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt; that &lt;code&gt;--s3-versions&lt;/code&gt; does not work with GCS when it needs to do directory paging. Rclone will return the error:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is Google bug &lt;a href=&#34;https://issuetracker.google.com/u/0/issues/312292516&#34;&gt;#312292516&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;digitalocean-spaces&#34;&gt;DigitalOcean Spaces&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://www.digitalocean.com/products/object-storage/&#34;&gt;Spaces&lt;/a&gt; is an &lt;a href=&#34;https://developers.digitalocean.com/documentation/spaces/&#34;&gt;S3-interoperable&lt;/a&gt; object storage service from cloud provider DigitalOcean.&lt;/p&gt;
&lt;p&gt;To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the &amp;quot;&lt;a href=&#34;https://cloud.digitalocean.com/settings/api/tokens&#34;&gt;Applications &amp;amp; API&lt;/a&gt;&amp;quot; page of the DigitalOcean control panel. They will be needed when prompted by &lt;code&gt;rclone config&lt;/code&gt; for your &lt;code&gt;access_key_id&lt;/code&gt; and &lt;code&gt;secret_access_key&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;When prompted for a &lt;code&gt;region&lt;/code&gt; or &lt;code&gt;location_constraint&lt;/code&gt;, press enter to use the default value. The region must be included in the &lt;code&gt;endpoint&lt;/code&gt; setting (e.g. &lt;code&gt;nyc3.digitaloceanspaces.com&lt;/code&gt;). The default values can be used for other settings.&lt;/p&gt;
&lt;p&gt;Going through the whole process of creating a new remote by running &lt;code&gt;rclone config&lt;/code&gt;, each prompt should be answered as shown below:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Storage&amp;gt; s3
env_auth&amp;gt; 1
access_key_id&amp;gt; YOUR_ACCESS_KEY
secret_access_key&amp;gt; YOUR_SECRET_KEY
region&amp;gt;
endpoint&amp;gt; nyc3.digitaloceanspaces.com
location_constraint&amp;gt;
acl&amp;gt;
storage_class&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The resulting configuration file should look like:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[spaces]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region =
endpoint = nyc3.digitaloceanspaces.com
location_constraint =
acl =
server_side_encryption =
storage_class =
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once configured, you can create a new Space and begin copying files. For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;huawei-obs&#34;&gt;Huawei OBS&lt;/h3&gt;
&lt;p&gt;Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere.&lt;/p&gt;
&lt;p&gt;OBS provides an S3 interface; you can copy and modify the following configuration and add it to your rclone configuration file.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[obs]
type = s3
provider = HuaweiOBS
access_key_id = your-access-key-id
secret_access_key = your-secret-access-key
region = af-south-1
endpoint = obs.af-south-1.myhuaweicloud.com
acl = private
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Or you can also configure via the interactive command line:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; obs
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ (s3)
[snip]
Storage&amp;gt; s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
 9 / Huawei Object Storage Service
   \ (HuaweiOBS)
[snip]
provider&amp;gt; 9
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth&amp;gt; 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id&amp;gt; your-access-key-id
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key&amp;gt; your-secret-access-key
Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / AF-Johannesburg
   \ (af-south-1)
 2 / AP-Bangkok
   \ (ap-southeast-2)
[snip]
region&amp;gt; 1
Option endpoint.
Endpoint for OBS API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / AF-Johannesburg
   \ (obs.af-south-1.myhuaweicloud.com)
 2 / AP-Bangkok
   \ (obs.ap-southeast-2.myhuaweicloud.com)
[snip]
endpoint&amp;gt; 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn&amp;#39;t set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn&amp;#39;t copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
[snip]
acl&amp;gt; 1
Edit advanced config?
y) Yes
n) No (default)
y/n&amp;gt;
--------------------
[obs]
type = s3
provider = HuaweiOBS
access_key_id = your-access-key-id
secret_access_key = your-secret-access-key
region = af-south-1
endpoint = obs.af-south-1.myhuaweicloud.com
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
Current remotes:

Name                 Type
====                 ====
obs                  s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q&amp;gt; q
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;ibm-cos-s3&#34;&gt;IBM COS (S3)&lt;/h3&gt;
&lt;p&gt;Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit &lt;a href=&#34;http://www.ibm.com/cloud/object-storage&#34;&gt;http://www.ibm.com/cloud/object-storage&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To configure access to IBM COS S3, follow the steps below:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Run rclone config and select n for a new remote.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;	2018/02/14 14:13:11 NOTICE: Config file &amp;#34;C:\\Users\\a\\.config\\rclone\\rclone.conf&amp;#34; not found - using defaults
	No remotes found, make a new one?
	n) New remote
	s) Set configuration password
	q) Quit config
	n/s/q&amp;gt; n
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;Enter the name for the configuration&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;	name&amp;gt; &amp;lt;YOUR NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Select &amp;quot;s3&amp;quot; storage.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ &amp;#34;s3&amp;#34;
[snip]
Storage&amp;gt; s3
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;4&#34;&gt;
&lt;li&gt;Select IBM COS as the S3 Storage Provider.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Choose the S3 provider.
Choose a number from below, or type in your own value
	 1 / Choose this option to configure Storage to AWS S3
	   \ &amp;#34;AWS&amp;#34;
	 2 / Choose this option to configure Storage to Ceph Systems
	   \ &amp;#34;Ceph&amp;#34;
	 3 / Choose this option to configure Storage to Dreamhost
	   \ &amp;#34;Dreamhost&amp;#34;
	 4 / Choose this option to configure Storage to IBM COS S3
	   \ &amp;#34;IBMCOS&amp;#34;
	 5 / Choose this option to configure Storage to Minio
	   \ &amp;#34;Minio&amp;#34;
	provider&amp;gt; 4
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;5&#34;&gt;
&lt;li&gt;Enter the Access Key and Secret.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;	AWS Access Key ID - leave blank for anonymous access or runtime credentials.
	access_key_id&amp;gt; &amp;lt;&amp;gt;
	AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
	secret_access_key&amp;gt; &amp;lt;&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;6&#34;&gt;
&lt;li&gt;Specify the endpoint for IBM COS. For Public IBM COS, choose from the option below. For On Premise IBM COS, enter an endpoint address.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;	Endpoint for IBM COS S3 API.
	Specify if using an IBM COS On Premise.
	Choose a number from below, or type in your own value
	 1 / US Cross Region Endpoint
   	   \ &amp;#34;s3-api.us-geo.objectstorage.softlayer.net&amp;#34;
	 2 / US Cross Region Dallas Endpoint
   	   \ &amp;#34;s3-api.dal.us-geo.objectstorage.softlayer.net&amp;#34;
 	 3 / US Cross Region Washington DC Endpoint
   	   \ &amp;#34;s3-api.wdc-us-geo.objectstorage.softlayer.net&amp;#34;
	 4 / US Cross Region San Jose Endpoint
	   \ &amp;#34;s3-api.sjc-us-geo.objectstorage.softlayer.net&amp;#34;
	 5 / US Cross Region Private Endpoint
	   \ &amp;#34;s3-api.us-geo.objectstorage.service.networklayer.com&amp;#34;
	 6 / US Cross Region Dallas Private Endpoint
	   \ &amp;#34;s3-api.dal-us-geo.objectstorage.service.networklayer.com&amp;#34;
	 7 / US Cross Region Washington DC Private Endpoint
	   \ &amp;#34;s3-api.wdc-us-geo.objectstorage.service.networklayer.com&amp;#34;
	 8 / US Cross Region San Jose Private Endpoint
	   \ &amp;#34;s3-api.sjc-us-geo.objectstorage.service.networklayer.com&amp;#34;
	 9 / US Region East Endpoint
	   \ &amp;#34;s3.us-east.objectstorage.softlayer.net&amp;#34;
	10 / US Region East Private Endpoint
	   \ &amp;#34;s3.us-east.objectstorage.service.networklayer.com&amp;#34;
	11 / US Region South Endpoint
[snip]
	34 / Toronto Single Site Private Endpoint
	   \ &amp;#34;s3.tor01.objectstorage.service.networklayer.com&amp;#34;
	endpoint&amp;gt;1
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;7&#34;&gt;
&lt;li&gt;Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-premise COS, do not make a selection from this list; just hit enter.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;	 1 / US Cross Region Standard
	   \ &amp;#34;us-standard&amp;#34;
	 2 / US Cross Region Vault
	   \ &amp;#34;us-vault&amp;#34;
	 3 / US Cross Region Cold
	   \ &amp;#34;us-cold&amp;#34;
	 4 / US Cross Region Flex
	   \ &amp;#34;us-flex&amp;#34;
	 5 / US East Region Standard
	   \ &amp;#34;us-east-standard&amp;#34;
	 6 / US East Region Vault
	   \ &amp;#34;us-east-vault&amp;#34;
	 7 / US East Region Cold
	   \ &amp;#34;us-east-cold&amp;#34;
	 8 / US East Region Flex
	   \ &amp;#34;us-east-flex&amp;#34;
	 9 / US South Region Standard
	   \ &amp;#34;us-south-standard&amp;#34;
	10 / US South Region Vault
	   \ &amp;#34;us-south-vault&amp;#34;
[snip]
	32 / Toronto Flex
	   \ &amp;#34;tor01-flex&amp;#34;
location_constraint&amp;gt;1
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;8&#34;&gt;
&lt;li&gt;Specify a canned ACL. IBM Cloud (Storage) supports &amp;quot;public-read&amp;quot; and &amp;quot;private&amp;quot;. IBM Cloud (Infra) and On-Premise COS support all the canned ACLs.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
      1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
      \ &amp;#34;private&amp;#34;
      2  / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
      \ &amp;#34;public-read&amp;#34;
      3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
      \ &amp;#34;public-read-write&amp;#34;
      4  / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
      \ &amp;#34;authenticated-read&amp;#34;
acl&amp;gt; 1
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;12&#34;&gt;
&lt;li&gt;Review the displayed configuration, accept to save the &amp;quot;remote&amp;quot;, and then quit. The config file should look like this:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;	[xxx]
	type = s3
	Provider = IBMCOS
	access_key_id = xxx
	secret_access_key = yyy
	endpoint = s3-api.us-geo.objectstorage.softlayer.net
	location_constraint = us-standard
	acl = private
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;13&#34;&gt;
&lt;li&gt;Execute rclone commands&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;	1)	Create a bucket.
		rclone mkdir IBM-COS-XREGION:newbucket
	2)	List available buckets.
		rclone lsd IBM-COS-XREGION:
		-1 2017-11-08 21:16:22        -1 test
		-1 2018-02-14 20:16:39        -1 newbucket
	3)	List contents of a bucket.
		rclone ls IBM-COS-XREGION:newbucket
		18685952 test.exe
	4)	Copy a file from local to remote.
		rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
	5)	Copy a file from remote to local.
		rclone copy IBM-COS-XREGION:newbucket/file.txt .
	6)	Delete a file on remote.
		rclone delete IBM-COS-XREGION:newbucket/file.txt
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;idrive-e2&#34;&gt;IDrive e2&lt;/h3&gt;
&lt;p&gt;Here is an example of making an &lt;a href=&#34;https://www.idrive.com/e2/&#34;&gt;IDrive e2&lt;/a&gt;
configuration.  First run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n

Enter name for new remote.
name&amp;gt; e2

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ (s3)
[snip]
Storage&amp;gt; s3

Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / IDrive e2
   \ (IDrive)
[snip]
provider&amp;gt; IDrive

Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth&amp;gt; 

Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id&amp;gt; YOUR_ACCESS_KEY

Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key&amp;gt; YOUR_SECRET_KEY

Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn&amp;#39;t set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn&amp;#39;t copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
   / Object owner gets FULL_CONTROL.
 5 | Bucket owner gets READ access.
   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-read)
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-full-control)
acl&amp;gt; 

Edit advanced config?
y) Yes
n) No (default)
y/n&amp;gt; 

Configuration complete.
Options:
- type: s3
- provider: IDrive
- access_key_id: YOUR_ACCESS_KEY
- secret_access_key: YOUR_SECRET_KEY
- endpoint: q9d9.la12.idrivee2-5.com
Keep this &amp;#34;e2&amp;#34; remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;ionos&#34;&gt;IONOS Cloud&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://cloud.ionos.com/storage/object-storage&#34;&gt;IONOS S3 Object Storage&lt;/a&gt; is a service offered by IONOS for storing and accessing unstructured data.
To connect to the service, you will need an access key and a secret key. These can be found in the &lt;a href=&#34;https://dcd.ionos.com/&#34;&gt;Data Center Designer&lt;/a&gt;, by selecting &lt;strong&gt;Manager resources&lt;/strong&gt; &amp;gt; &lt;strong&gt;Object Storage Key Manager&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Here is an example of a configuration. First, run &lt;code&gt;rclone config&lt;/code&gt;. This will walk you through an interactive setup process. Type &lt;code&gt;n&lt;/code&gt; to add the new remote, and then enter a name:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Enter name for new remote.
name&amp;gt; ionos-fra
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Type &lt;code&gt;s3&lt;/code&gt; to choose the connection type:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ (s3)
[snip]
Storage&amp;gt; s3
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Type &lt;code&gt;IONOS&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / IONOS Cloud
   \ (IONOS)
[snip]
provider&amp;gt; IONOS
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Press Enter to choose the default option &lt;code&gt;Enter AWS credentials in the next step&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Enter your Access Key and Secret Key. These can be retrieved in the &lt;a href=&#34;https://dcd.ionos.com/&#34;&gt;Data Center Designer&lt;/a&gt; by selecting &lt;strong&gt;Manager resources&lt;/strong&gt; &amp;gt; &lt;strong&gt;Object Storage Key Manager&lt;/strong&gt;.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id&amp;gt; YOUR_ACCESS_KEY

Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key&amp;gt; YOUR_SECRET_KEY
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Choose the region where your bucket is located:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Option region.
Region where your bucket will be created and your data stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Frankfurt, Germany
   \ (de)
 2 / Berlin, Germany
   \ (eu-central-2)
 3 / Logrono, Spain
   \ (eu-south-2)
region&amp;gt; 2
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Choose the endpoint from the same region:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Option endpoint.
Endpoint for IONOS S3 Object Storage.
Specify the endpoint from the same region.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Frankfurt, Germany
   \ (s3-eu-central-1.ionoscloud.com)
 2 / Berlin, Germany
   \ (s3-eu-central-2.ionoscloud.com)
 3 / Logrono, Spain
   \ (s3-eu-south-2.ionoscloud.com)
endpoint&amp;gt; 1
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Press Enter to choose the default option or choose the desired ACL setting:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn&amp;#39;t set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn&amp;#39;t copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
[snip]
acl&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Press Enter to skip the advanced config:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Edit advanced config?
y) Yes
n) No (default)
y/n&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Press Enter to save the configuration, and then &lt;code&gt;q&lt;/code&gt; to quit the configuration process:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Configuration complete.
Options:
- type: s3
- provider: IONOS
- access_key_id: YOUR_ACCESS_KEY
- secret_access_key: YOUR_SECRET_KEY
- endpoint: s3-eu-central-1.ionoscloud.com
Keep this &amp;#34;ionos-fra&amp;#34; remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Done! Now you can try some commands (for macOS, use &lt;code&gt;./rclone&lt;/code&gt; instead of &lt;code&gt;rclone&lt;/code&gt;).&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a bucket (the name must be unique within the whole IONOS S3)&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone mkdir ionos-fra:my-bucket
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;List available buckets&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone lsd ionos-fra:
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Copy a file from local to remote&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone copy /Users/file.txt ionos-fra:my-bucket
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;4&#34;&gt;
&lt;li&gt;List contents of a bucket&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone ls ionos-fra:my-bucket
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;5&#34;&gt;
&lt;li&gt;Copy a file from remote to local&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone copy ionos-fra:my-bucket/file.txt .
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;minio&#34;&gt;Minio&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://minio.io/&#34;&gt;Minio&lt;/a&gt; is an object storage server built for cloud application developers and devops.&lt;/p&gt;
&lt;p&gt;It is very easy to install and provides an S3 compatible server which can be used by rclone.&lt;/p&gt;
&lt;p&gt;To use it, install Minio following the instructions &lt;a href=&#34;https://docs.minio.io/docs/minio-quickstart-guide&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;
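&lt;p&gt;For a quick local test you can, for example, run the server against a data directory (the path here is illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;minio server /data
&lt;/code&gt;&lt;/pre&gt;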
&lt;p&gt;When it configures itself, Minio will print something like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region:    us-east-1
SQS ARNs:  arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis

Browser Access:
   http://192.168.1.106:9000  http://172.23.0.1:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide

Drive Capacity: 26 GiB Free, 165 GiB Total
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;These details need to go into &lt;code&gt;rclone config&lt;/code&gt; like this.  Note that it
is important to put the region in as stated above.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;env_auth&amp;gt; 1
access_key_id&amp;gt; USWUXHGYZQYFYFFIT3RE
secret_access_key&amp;gt; MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region&amp;gt; us-east-1
endpoint&amp;gt; http://192.168.1.106:9000
location_constraint&amp;gt;
server_side_encryption&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Which makes the config file look like this&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;So once set up, for example, to copy files into a bucket&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone copy /path/to/files minio:bucket
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;qiniu&#34;&gt;Qiniu Cloud Object Storage (Kodo)&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://www.qiniu.com/en/products/kodo&#34;&gt;Qiniu Cloud Object Storage (Kodo)&lt;/a&gt; is an S3-compatible object storage service built on Qiniu&#39;s independently developed core technology and proven by extensive customer use. Kodo can be widely applied to mass data management.&lt;/p&gt;
&lt;p&gt;To configure access to Qiniu Kodo, follow the steps below:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;rclone config&lt;/code&gt; and select &lt;code&gt;n&lt;/code&gt; for a new remote.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;Give the name of the configuration. For example, name it &#39;qiniu&#39;.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;name&amp;gt; qiniu
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Select &lt;code&gt;s3&lt;/code&gt; storage.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ (s3)
[snip]
Storage&amp;gt; s3
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;4&#34;&gt;
&lt;li&gt;Select &lt;code&gt;Qiniu&lt;/code&gt; provider.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
   \ &amp;#34;AWS&amp;#34;
[snip]
22 / Qiniu Object Storage (Kodo)
   \ (Qiniu)
[snip]
provider&amp;gt; Qiniu
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;5&#34;&gt;
&lt;li&gt;Enter your SecretId and SecretKey of Qiniu Kodo.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default (&amp;#34;false&amp;#34;).
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ &amp;#34;false&amp;#34;
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ &amp;#34;true&amp;#34;
env_auth&amp;gt; 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
access_key_id&amp;gt; AKIDxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
secret_access_key&amp;gt; xxxxxxxxxxx
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;6&#34;&gt;
&lt;li&gt;Select the region, endpoint and location constraint for Qiniu Kodo. These are the standard endpoints for the different regions.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;   / The default endpoint - a good choice if you are unsure.
 1 | East China Region 1.
   | Needs location constraint cn-east-1.
   \ (cn-east-1)
   / East China Region 2.
 2 | Needs location constraint cn-east-2.
   \ (cn-east-2)
   / North China Region 1.
 3 | Needs location constraint cn-north-1.
   \ (cn-north-1)
   / South China Region 1.
 4 | Needs location constraint cn-south-1.
   \ (cn-south-1)
   / North America Region.
 5 | Needs location constraint us-north-1.
   \ (us-north-1)
   / Southeast Asia Region 1.
 6 | Needs location constraint ap-southeast-1.
   \ (ap-southeast-1)
   / Northeast Asia Region 1.
 7 | Needs location constraint ap-northeast-1.
   \ (ap-northeast-1)
[snip]
region&amp;gt; 1

Option endpoint.
Endpoint for Qiniu Object Storage.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / East China Endpoint 1
   \ (s3-cn-east-1.qiniucs.com)
 2 / East China Endpoint 2
   \ (s3-cn-east-2.qiniucs.com)
 3 / North China Endpoint 1
   \ (s3-cn-north-1.qiniucs.com)
 4 / South China Endpoint 1
   \ (s3-cn-south-1.qiniucs.com)
 5 / North America Endpoint 1
   \ (s3-us-north-1.qiniucs.com)
 6 / Southeast Asia Endpoint 1
   \ (s3-ap-southeast-1.qiniucs.com)
 7 / Northeast Asia Endpoint 1
   \ (s3-ap-northeast-1.qiniucs.com)
endpoint&amp;gt; 1

Option location_constraint.
Location constraint - must be set to match the Region.
Used when creating buckets only.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / East China Region 1
   \ (cn-east-1)
 2 / East China Region 2
   \ (cn-east-2)
 3 / North China Region 1
   \ (cn-north-1)
 4 / South China Region 1
   \ (cn-south-1)
 5 / North America Region 1
   \ (us-north-1)
 6 / Southeast Asia Region 1
   \ (ap-southeast-1)
 7 / Northeast Asia Region 1
   \ (ap-northeast-1)
location_constraint&amp;gt; 1
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;7&#34;&gt;
&lt;li&gt;Choose acl and storage class.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Note that this ACL is applied when server-side copying objects as S3
doesn&amp;#39;t copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
[snip]
acl&amp;gt; 2
The storage class to use when storing new objects in Qiniu Kodo.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
 1 / Standard storage class
   \ (STANDARD)
 2 / Infrequent access storage mode
   \ (LINE)
 3 / Archive storage mode
   \ (GLACIER)
 4 / Deep archive storage mode
   \ (DEEP_ARCHIVE)
[snip]
storage_class&amp;gt; 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n&amp;gt; n
Remote config
--------------------
[qiniu]
- type: s3
- provider: Qiniu
- access_key_id: xxx
- secret_access_key: xxx
- region: cn-east-1
- endpoint: s3-cn-east-1.qiniucs.com
- location_constraint: cn-east-1
- acl: public-read
- storage_class: STANDARD
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
Current remotes:

Name                 Type
====                 ====
qiniu                s3
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;RackCorp&#34;&gt;RackCorp&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://www.rackcorp.com/storage/s3storage&#34;&gt;RackCorp Object Storage&lt;/a&gt; is an S3 compatible object storage platform from your friendly cloud provider RackCorp.
The service is fast, reliable, well priced, and available in many strategic locations not serviced by others, so you can maintain data sovereignty.&lt;/p&gt;
&lt;p&gt;Before you can use RackCorp Object Storage, you&#39;ll need to &lt;a href=&#34;https://www.rackcorp.com/signup&#34;&gt;sign up&lt;/a&gt; for an account on the RackCorp &lt;a href=&#34;https://portal.rackcorp.com&#34;&gt;portal&lt;/a&gt;.
Next, you can easily create an &lt;code&gt;access key&lt;/code&gt;, a &lt;code&gt;secret key&lt;/code&gt; and &lt;code&gt;buckets&lt;/code&gt; in your location of choice.
These details are required in the next steps of configuration, when &lt;code&gt;rclone config&lt;/code&gt; asks for your &lt;code&gt;access_key_id&lt;/code&gt; and &lt;code&gt;secret_access_key&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Your config should end up looking a bit like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[RCS3-demo-config]
type = s3
provider = RackCorp
env_auth = true
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = au-nsw
endpoint = s3.rackcorp.com
location_constraint = au-nsw
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;rclone&#34;&gt;Rclone Serve S3&lt;/h3&gt;
&lt;p&gt;Rclone can serve any remote over the S3 protocol. For details see the
&lt;a href=&#34;https://rclone.org/commands/rclone_serve_s3/&#34;&gt;rclone serve s3&lt;/a&gt; documentation.&lt;/p&gt;
&lt;p&gt;For example, to serve &lt;code&gt;remote:path&lt;/code&gt; over s3, run the server like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This will be compatible with an rclone remote which is defined like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[serves3]
type = s3
provider = Rclone
endpoint = http://127.0.0.1:8080/
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that setting &lt;code&gt;use_multipart_uploads = false&lt;/code&gt; is to work around
&lt;a href=&#34;https://rclone.org/commands/rclone_serve_s3/#bugs&#34;&gt;a bug&lt;/a&gt; which will be fixed in due course.&lt;/p&gt;
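&lt;p&gt;With the server running and the &lt;code&gt;serves3&lt;/code&gt; remote defined as above, it can be used like any other S3 remote; for example (the bucket name is illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone mkdir serves3:my-bucket
rclone copy /path/to/files serves3:my-bucket
&lt;/code&gt;&lt;/pre&gt;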
&lt;h3 id=&#34;scaleway&#34;&gt;Scaleway&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://www.scaleway.com/object-storage/&#34;&gt;Scaleway&lt;/a&gt; Object Storage allows you to store anything from backups, logs and web assets to documents and photos.
Files can be uploaded from the Scaleway console or transferred through the Scaleway API, the CLI, or any S3-compatible tool.&lt;/p&gt;
&lt;p&gt;Scaleway provides an S3 interface which can be configured for use with rclone like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[scaleway]
type = s3
provider = Scaleway
env_auth = false
endpoint = s3.nl-ams.scw.cloud
access_key_id = SCWXXXXXXXXXXXXXX
secret_access_key = 1111111-2222-3333-44444-55555555555555
region = nl-ams
location_constraint = nl-ams
acl = private
upload_cutoff = 5M
chunk_size = 5M
copy_cutoff = 5M
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;a href=&#34;https://www.scaleway.com/en/glacier-cold-storage/&#34;&gt;Scaleway Glacier&lt;/a&gt; is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the &amp;quot;GLACIER&amp;quot; &lt;code&gt;storage_class&lt;/code&gt;.
So you can configure your remote with the &lt;code&gt;storage_class = GLACIER&lt;/code&gt; option to upload directly to Scaleway Glacier. Don&#39;t forget that in this state you can&#39;t read files back directly; you will need to restore them to the &amp;quot;STANDARD&amp;quot; storage class before you can read them (see the &amp;quot;restore&amp;quot; section above).&lt;/p&gt;
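&lt;p&gt;As a sketch, a Glacier-enabled remote could look like the config above with one extra line (the credentials are placeholders); alternatively the class can be overridden per command with &lt;code&gt;--s3-storage-class GLACIER&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[scaleway-glacier]
type = s3
provider = Scaleway
endpoint = s3.nl-ams.scw.cloud
region = nl-ams
location_constraint = nl-ams
access_key_id = SCWXXXXXXXXXXXXXX
secret_access_key = 1111111-2222-3333-44444-55555555555555
storage_class = GLACIER
&lt;/code&gt;&lt;/pre&gt;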
&lt;h3 id=&#34;lyve&#34;&gt;Seagate Lyve Cloud&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://www.seagate.com/gb/en/services/cloud/storage/&#34;&gt;Seagate Lyve Cloud&lt;/a&gt; is an S3
compatible object storage platform from &lt;a href=&#34;https://seagate.com/&#34;&gt;Seagate&lt;/a&gt; intended for enterprise use.&lt;/p&gt;
&lt;p&gt;Here is a config run through for a remote called &lt;code&gt;remote&lt;/code&gt; - you may
choose a different name of course. Note that to create an access key
and secret key you will need to create a service account first.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; remote
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Choose &lt;code&gt;s3&lt;/code&gt; backend&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ (s3)
[snip]
Storage&amp;gt; s3
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Choose &lt;code&gt;LyveCloud&lt;/code&gt; as S3 provider&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Seagate Lyve Cloud
   \ (LyveCloud)
[snip]
provider&amp;gt; LyveCloud
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Take the default (just press enter) to enter access key and secret in the config file.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id&amp;gt; XXX
&lt;/code&gt;&lt;/pre&gt;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key&amp;gt; YYY
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Leave region blank&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Region to connect to.
Leave blank if you are using an S3 clone and you don&amp;#39;t have a region.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Use this if unsure.
 1 | Will use v4 signatures and an empty region.
   \ ()
   / Use this only if v4 signatures don&amp;#39;t work.
 2 | E.g. pre Jewel/v10 CEPH.
   \ (other-v2-signature)
region&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Choose an endpoint from the list&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Endpoint for S3 API.
Required when using an S3 clone.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Seagate Lyve Cloud US East 1 (Virginia)
   \ (s3.us-east-1.lyvecloud.seagate.com)
 2 / Seagate Lyve Cloud US West 1 (California)
   \ (s3.us-west-1.lyvecloud.seagate.com)
 3 / Seagate Lyve Cloud AP Southeast 1 (Singapore)
   \ (s3.ap-southeast-1.lyvecloud.seagate.com)
endpoint&amp;gt; 1
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Leave location constraint blank&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Choose default ACL (&lt;code&gt;private&lt;/code&gt;).&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn&amp;#39;t set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn&amp;#39;t copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
[snip]
acl&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And the config file should end up looking like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[remote]
type = s3
provider = LyveCloud
access_key_id = XXX
secret_access_key = YYY
endpoint = s3.us-east-1.lyvecloud.seagate.com
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;seaweedfs&#34;&gt;SeaweedFS&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/chrislusf/seaweedfs/&#34;&gt;SeaweedFS&lt;/a&gt; is a distributed storage system for
blobs, objects, files and data lakes, with O(1) disk seeks and a scalable file metadata store.
It has an S3 compatible object storage interface. SeaweedFS can also act as a
&lt;a href=&#34;https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remote-Object-Storage&#34;&gt;gateway to remote S3 compatible object stores&lt;/a&gt;,
caching data and metadata with asynchronous write-back for fast local access and minimal access cost.&lt;/p&gt;
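&lt;p&gt;Before running the &lt;code&gt;weed shell&lt;/code&gt; commands below you need a SeaweedFS instance with its S3 gateway listening; assuming a default local setup, one way to start everything is:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;weed server -s3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By default this serves the S3 API on port 8333, matching the endpoint in the config below.&lt;/p&gt;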
&lt;p&gt;Assuming SeaweedFS is configured with &lt;code&gt;weed shell&lt;/code&gt; as follows:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&amp;gt; s3.bucket.create -name foo
&amp;gt; s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
{
  &amp;#34;identities&amp;#34;: [
    {
      &amp;#34;name&amp;#34;: &amp;#34;me&amp;#34;,
      &amp;#34;credentials&amp;#34;: [
        {
          &amp;#34;accessKey&amp;#34;: &amp;#34;any&amp;#34;,
          &amp;#34;secretKey&amp;#34;: &amp;#34;any&amp;#34;
        }
      ],
      &amp;#34;actions&amp;#34;: [
        &amp;#34;Read:foo&amp;#34;,
        &amp;#34;Write:foo&amp;#34;,
        &amp;#34;List:foo&amp;#34;,
        &amp;#34;Tagging:foo&amp;#34;,
        &amp;#34;Admin:foo&amp;#34;
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To use rclone with SeaweedFS, the above configuration should end up with something like this in
your rclone config:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[seaweedfs_s3]
type = s3
provider = SeaweedFS
access_key_id = any
secret_access_key = any
endpoint = localhost:8333
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;So once set up, for example to copy files into a bucket&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone copy /path/to/files seaweedfs_s3:foo
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;wasabi&#34;&gt;Wasabi&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://wasabi.com&#34;&gt;Wasabi&lt;/a&gt; is a cloud-based object storage service for a
broad range of applications and use cases. Wasabi is designed for
individuals and organizations that require a high-performance,
reliable, and secure data storage infrastructure at minimal cost.&lt;/p&gt;
&lt;p&gt;Wasabi provides an S3 interface which can be configured for use with
rclone like this.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s&amp;gt; n
name&amp;gt; wasabi
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, Liara)
   \ &amp;#34;s3&amp;#34;
[snip]
Storage&amp;gt; s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ &amp;#34;false&amp;#34;
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ &amp;#34;true&amp;#34;
env_auth&amp;gt; 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id&amp;gt; YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key&amp;gt; YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia, or Pacific Northwest.
   | Leave location constraint empty.
   \ &amp;#34;us-east-1&amp;#34;
[snip]
region&amp;gt; us-east-1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint&amp;gt; s3.wasabisys.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
   \ &amp;#34;&amp;#34;
[snip]
location_constraint&amp;gt;
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ &amp;#34;private&amp;#34;
[snip]
acl&amp;gt;
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ &amp;#34;&amp;#34;
 2 / AES256
   \ &amp;#34;AES256&amp;#34;
server_side_encryption&amp;gt;
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ &amp;#34;&amp;#34;
 2 / Standard storage class
   \ &amp;#34;STANDARD&amp;#34;
 3 / Reduced redundancy storage class
   \ &amp;#34;REDUCED_REDUNDANCY&amp;#34;
 4 / Standard Infrequent Access storage class
   \ &amp;#34;STANDARD_IA&amp;#34;
storage_class&amp;gt;
Remote config
--------------------
[wasabi]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This will leave the config file looking like this.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
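&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once configured you can use the remote like this, for example to make a new bucket and copy files into it (the bucket name &lt;code&gt;my-bucket&lt;/code&gt; is just an example):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone mkdir wasabi:my-bucket
rclone copy /path/to/files wasabi:my-bucket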
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;alibaba-oss&#34;&gt;Alibaba OSS&lt;/h3&gt;
&lt;p&gt;Here is an example of making an &lt;a href=&#34;https://www.alibabacloud.com/product/oss/&#34;&gt;Alibaba Cloud (Aliyun) OSS&lt;/a&gt;
configuration.  First run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; oss
Type of storage to configure.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ &amp;#34;s3&amp;#34;
[snip]
Storage&amp;gt; s3
Choose your S3 provider.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ &amp;#34;AWS&amp;#34;
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ &amp;#34;Alibaba&amp;#34;
 3 / Ceph Object Storage
   \ &amp;#34;Ceph&amp;#34;
[snip]
provider&amp;gt; Alibaba
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default (&amp;#34;false&amp;#34;).
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ &amp;#34;false&amp;#34;
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ &amp;#34;true&amp;#34;
env_auth&amp;gt; 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
access_key_id&amp;gt; accesskeyid
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
secret_access_key&amp;gt; secretaccesskey
Endpoint for OSS API.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
 1 / East China 1 (Hangzhou)
   \ &amp;#34;oss-cn-hangzhou.aliyuncs.com&amp;#34;
 2 / East China 2 (Shanghai)
   \ &amp;#34;oss-cn-shanghai.aliyuncs.com&amp;#34;
 3 / North China 1 (Qingdao)
   \ &amp;#34;oss-cn-qingdao.aliyuncs.com&amp;#34;
[snip]
endpoint&amp;gt; 1
Canned ACL used when creating buckets and storing or copying objects.

Note that this ACL is applied when server-side copying objects as S3
doesn&amp;#39;t copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ &amp;#34;private&amp;#34;
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ &amp;#34;public-read&amp;#34;
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
[snip]
acl&amp;gt; 1
The storage class to use when storing new objects in OSS.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
 1 / Default
   \ &amp;#34;&amp;#34;
 2 / Standard storage class
   \ &amp;#34;STANDARD&amp;#34;
 3 / Archive storage mode.
   \ &amp;#34;GLACIER&amp;#34;
 4 / Infrequent access storage mode.
   \ &amp;#34;STANDARD_IA&amp;#34;
storage_class&amp;gt; 1
Edit advanced config? (y/n)
y) Yes
n) No
y/n&amp;gt; n
Remote config
--------------------
[oss]
type = s3
provider = Alibaba
env_auth = false
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = oss-cn-hangzhou.aliyuncs.com
acl = private
storage_class = Standard
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
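&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once the remote is saved you can use it right away, for example to list your buckets and copy files into one of them (the bucket name &lt;code&gt;bucket&lt;/code&gt; is just an example):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone lsd oss:
rclone copy /path/to/files oss:bucket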
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;china-mobile-ecloud-eos&#34;&gt;China Mobile Ecloud Elastic Object Storage (EOS)&lt;/h3&gt;
&lt;p&gt;Here is an example of making a &lt;a href=&#34;https://ecloud.10086.cn/home/product-introduction/eos/&#34;&gt;China Mobile Ecloud Elastic Object Storage (EOS)&lt;/a&gt;
configuration.  First run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; ChinaMobile
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 ...
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ (s3)
 ...
Storage&amp;gt; s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 ...
 4 / China Mobile Ecloud Elastic Object Storage (EOS)
   \ (ChinaMobile)
 ...
provider&amp;gt; ChinaMobile
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth&amp;gt;
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id&amp;gt; accesskeyid
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key&amp;gt; secretaccesskey
Option endpoint.
Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / The default endpoint - a good choice if you are unsure.
 1 | East China (Suzhou)
   \ (eos-wuxi-1.cmecloud.cn)
 2 / East China (Jinan)
   \ (eos-jinan-1.cmecloud.cn)
 3 / East China (Hangzhou)
   \ (eos-ningbo-1.cmecloud.cn)
 4 / East China (Shanghai-1)
   \ (eos-shanghai-1.cmecloud.cn)
 5 / Central China (Zhengzhou)
   \ (eos-zhengzhou-1.cmecloud.cn)
 6 / Central China (Changsha-1)
   \ (eos-hunan-1.cmecloud.cn)
 7 / Central China (Changsha-2)
   \ (eos-zhuzhou-1.cmecloud.cn)
 8 / South China (Guangzhou-2)
   \ (eos-guangzhou-1.cmecloud.cn)
 9 / South China (Guangzhou-3)
   \ (eos-dongguan-1.cmecloud.cn)
10 / North China (Beijing-1)
   \ (eos-beijing-1.cmecloud.cn)
11 / North China (Beijing-2)
   \ (eos-beijing-2.cmecloud.cn)
12 / North China (Beijing-3)
   \ (eos-beijing-4.cmecloud.cn)
13 / North China (Huhehaote)
   \ (eos-huhehaote-1.cmecloud.cn)
14 / Southwest China (Chengdu)
   \ (eos-chengdu-1.cmecloud.cn)
15 / Southwest China (Chongqing)
   \ (eos-chongqing-1.cmecloud.cn)
16 / Southwest China (Guiyang)
   \ (eos-guiyang-1.cmecloud.cn)
17 / Northwest China (Xian)
   \ (eos-xian-1.cmecloud.cn)
18 / Yunnan China (Kunming)
   \ (eos-yunnan.cmecloud.cn)
19 / Yunnan China (Kunming-2)
   \ (eos-yunnan-2.cmecloud.cn)
20 / Tianjin China (Tianjin)
   \ (eos-tianjin-1.cmecloud.cn)
21 / Jilin China (Changchun)
   \ (eos-jilin-1.cmecloud.cn)
22 / Hubei China (Xiangyan)
   \ (eos-hubei-1.cmecloud.cn)
23 / Jiangxi China (Nanchang)
   \ (eos-jiangxi-1.cmecloud.cn)
24 / Gansu China (Lanzhou)
   \ (eos-gansu-1.cmecloud.cn)
25 / Shanxi China (Taiyuan)
   \ (eos-shanxi-1.cmecloud.cn)
26 / Liaoning China (Shenyang)
   \ (eos-liaoning-1.cmecloud.cn)
27 / Hebei China (Shijiazhuang)
   \ (eos-hebei-1.cmecloud.cn)
28 / Fujian China (Xiamen)
   \ (eos-fujian-1.cmecloud.cn)
29 / Guangxi China (Nanning)
   \ (eos-guangxi-1.cmecloud.cn)
30 / Anhui China (Huainan)
   \ (eos-anhui-1.cmecloud.cn)
endpoint&amp;gt; 1
Option location_constraint.
Location constraint - must match endpoint.
Used when creating buckets only.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / East China (Suzhou)
   \ (wuxi1)
 2 / East China (Jinan)
   \ (jinan1)
 3 / East China (Hangzhou)
   \ (ningbo1)
 4 / East China (Shanghai-1)
   \ (shanghai1)
 5 / Central China (Zhengzhou)
   \ (zhengzhou1)
 6 / Central China (Changsha-1)
   \ (hunan1)
 7 / Central China (Changsha-2)
   \ (zhuzhou1)
 8 / South China (Guangzhou-2)
   \ (guangzhou1)
 9 / South China (Guangzhou-3)
   \ (dongguan1)
10 / North China (Beijing-1)
   \ (beijing1)
11 / North China (Beijing-2)
   \ (beijing2)
12 / North China (Beijing-3)
   \ (beijing4)
13 / North China (Huhehaote)
   \ (huhehaote1)
14 / Southwest China (Chengdu)
   \ (chengdu1)
15 / Southwest China (Chongqing)
   \ (chongqing1)
16 / Southwest China (Guiyang)
   \ (guiyang1)
17 / Northwest China (Xian)
   \ (xian1)
18 / Yunnan China (Kunming)
   \ (yunnan)
19 / Yunnan China (Kunming-2)
   \ (yunnan2)
20 / Tianjin China (Tianjin)
   \ (tianjin1)
21 / Jilin China (Changchun)
   \ (jilin1)
22 / Hubei China (Xiangyan)
   \ (hubei1)
23 / Jiangxi China (Nanchang)
   \ (jiangxi1)
24 / Gansu China (Lanzhou)
   \ (gansu1)
25 / Shanxi China (Taiyuan)
   \ (shanxi1)
26 / Liaoning China (Shenyang)
   \ (liaoning1)
27 / Hebei China (Shijiazhuang)
   \ (hebei1)
28 / Fujian China (Xiamen)
   \ (fujian1)
29 / Guangxi China (Nanning)
   \ (guangxi1)
30 / Anhui China (Huainan)
   \ (anhui1)
location_constraint&amp;gt; 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn&amp;#39;t set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn&amp;#39;t copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
   / Object owner gets FULL_CONTROL.
acl&amp;gt; private
Option server_side_encryption.
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / None
   \ ()
 2 / AES256
   \ (AES256)
server_side_encryption&amp;gt;
Option storage_class.
The storage class to use when storing new objects in ChinaMobile.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Default
   \ ()
 2 / Standard storage class
   \ (STANDARD)
 3 / Archive storage mode
   \ (GLACIER)
 4 / Infrequent access storage mode
   \ (STANDARD_IA)
storage_class&amp;gt;
Edit advanced config?
y) Yes
n) No (default)
y/n&amp;gt; n
--------------------
[ChinaMobile]
type = s3
provider = ChinaMobile
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = eos-wuxi-1.cmecloud.cn
location_constraint = wuxi1
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
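&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once configured you can use the remote as usual, for example to copy a local directory into a bucket and then list the result (the bucket name &lt;code&gt;mybucket&lt;/code&gt; is just an example):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone copy /path/to/files ChinaMobile:mybucket
rclone ls ChinaMobile:mybucket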
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;leviia&#34;&gt;Leviia Cloud Object Storage&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://www.leviia.com/object-storage/&#34;&gt;Leviia Object Storage&lt;/a&gt; lets you back up and secure your data in a 100% French cloud, independent of GAFAM.&lt;/p&gt;
&lt;p&gt;To configure access to Leviia, follow the steps below:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;rclone config&lt;/code&gt; and select &lt;code&gt;n&lt;/code&gt; for a new remote.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;Give a name to the new remote. For example, name it &#39;leviia&#39;.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;name&amp;gt; leviia
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Select &lt;code&gt;s3&lt;/code&gt; storage.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ (s3)
[snip]
Storage&amp;gt; s3
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;4&#34;&gt;
&lt;li&gt;Select &lt;code&gt;Leviia&lt;/code&gt; provider.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
   \ &amp;#34;AWS&amp;#34;
[snip]
15 / Leviia Object Storage
   \ (Leviia)
[snip]
provider&amp;gt; Leviia
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;5&#34;&gt;
&lt;li&gt;Enter your Leviia access key ID and secret access key.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default (&amp;#34;false&amp;#34;).
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ &amp;#34;false&amp;#34;
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ &amp;#34;true&amp;#34;
env_auth&amp;gt; 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
access_key_id&amp;gt; ZnIx.xxxxxxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
secret_access_key&amp;gt; xxxxxxxxxxx
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;6&#34;&gt;
&lt;li&gt;Select endpoint for Leviia.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;   / The default endpoint
 1 | Leviia.
   \ (s3.leviia.com)
[snip]
endpoint&amp;gt; 1
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;7&#34;&gt;
&lt;li&gt;Choose acl.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Note that this ACL is applied when server-side copying objects as S3
doesn&amp;#39;t copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
[snip]
acl&amp;gt; 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n&amp;gt; n
Remote config
--------------------
[leviia]
- type: s3
- provider: Leviia
- access_key_id: ZnIx.xxxxxxx
- secret_access_key: xxxxxxxx
- endpoint: s3.leviia.com
- acl: private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
Current remotes:

Name                 Type
====                 ====
leviia                s3
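&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can now use the remote, for example to make a bucket and sync a local directory into it (the bucket name &lt;code&gt;mybucket&lt;/code&gt; is just an example; &lt;code&gt;--interactive&lt;/code&gt; asks before deleting anything on the destination):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone mkdir leviia:mybucket
rclone sync --interactive /path/to/files leviia:mybucket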
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;liara-cloud&#34;&gt;Liara&lt;/h3&gt;
&lt;p&gt;Here is an example of making a &lt;a href=&#34;https://liara.ir/landing/object-storage&#34;&gt;Liara Object Storage&lt;/a&gt;
configuration.  First run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s&amp;gt; n
name&amp;gt; Liara
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio)
   \ &amp;#34;s3&amp;#34;
[snip]
Storage&amp;gt; s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ &amp;#34;false&amp;#34;
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ &amp;#34;true&amp;#34;
env_auth&amp;gt; 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id&amp;gt; YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key&amp;gt; YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint
 1 | US Region, Northern Virginia, or Pacific Northwest.
   | Leave location constraint empty.
   \ &amp;#34;us-east-1&amp;#34;
[snip]
region&amp;gt;
Endpoint for S3 API.
Leave blank if using Liara to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint&amp;gt; storage.iran.liara.space
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ &amp;#34;private&amp;#34;
[snip]
acl&amp;gt;
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ &amp;#34;&amp;#34;
 2 / AES256
   \ &amp;#34;AES256&amp;#34;
server_side_encryption&amp;gt;
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ &amp;#34;&amp;#34;
 2 / Standard storage class
   \ &amp;#34;STANDARD&amp;#34;
storage_class&amp;gt;
Remote config
--------------------
[Liara]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
endpoint = storage.iran.liara.space
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This will leave the config file looking like this.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[Liara]
type = s3
provider = Liara
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = storage.iran.liara.space
location_constraint =
acl =
server_side_encryption =
storage_class =
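&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once configured you can, for example, copy files into a bucket and check the total size stored (the bucket name &lt;code&gt;mybucket&lt;/code&gt; is just an example):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone copy /path/to/files Liara:mybucket
rclone size Liara:mybucket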
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;linode&#34;&gt;Linode&lt;/h3&gt;
&lt;p&gt;Here is an example of making a &lt;a href=&#34;https://www.linode.com/products/object-storage/&#34;&gt;Linode Object Storage&lt;/a&gt;
configuration.  First run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n

Enter name for new remote.
name&amp;gt; linode

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others
   \ (s3)
[snip]
Storage&amp;gt; s3

Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Linode Object Storage
   \ (Linode)
[snip]
provider&amp;gt; Linode

Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth&amp;gt; 

Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id&amp;gt; ACCESS_KEY

Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key&amp;gt; SECRET_ACCESS_KEY

Option endpoint.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Atlanta, GA (USA), us-southeast-1
   \ (us-southeast-1.linodeobjects.com)
 2 / Chicago, IL (USA), us-ord-1
   \ (us-ord-1.linodeobjects.com)
 3 / Frankfurt (Germany), eu-central-1
   \ (eu-central-1.linodeobjects.com)
 4 / Milan (Italy), it-mil-1
   \ (it-mil-1.linodeobjects.com)
 5 / Newark, NJ (USA), us-east-1
   \ (us-east-1.linodeobjects.com)
 6 / Paris (France), fr-par-1
   \ (fr-par-1.linodeobjects.com)
 7 / Seattle, WA (USA), us-sea-1
   \ (us-sea-1.linodeobjects.com)
 8 / Singapore ap-south-1
   \ (ap-south-1.linodeobjects.com)
 9 / Stockholm (Sweden), se-sto-1
   \ (se-sto-1.linodeobjects.com)
10 / Washington, DC, (USA), us-iad-1
   \ (us-iad-1.linodeobjects.com)
endpoint&amp;gt; 3

Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn&amp;#39;t set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn&amp;#39;t copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
[snip]
acl&amp;gt; 

Edit advanced config?
y) Yes
n) No (default)
y/n&amp;gt; n

Configuration complete.
Options:
- type: s3
- provider: Linode
- access_key_id: ACCESS_KEY
- secret_access_key: SECRET_ACCESS_KEY
- endpoint: eu-central-1.linodeobjects.com
Keep this &amp;#34;linode&amp;#34; remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This will leave the config file looking like this.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[linode]
type = s3
provider = Linode
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = eu-central-1.linodeobjects.com
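&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can then use the remote as normal, for example to list your buckets and copy files into one (the bucket name &lt;code&gt;mybucket&lt;/code&gt; is just an example):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone lsd linode:
rclone copy /path/to/files linode:mybucket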
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;magalu&#34;&gt;Magalu&lt;/h3&gt;
&lt;p&gt;Here is an example of making a &lt;a href=&#34;https://magalu.cloud/object-storage/&#34;&gt;Magalu Object Storage&lt;/a&gt;
configuration.  First run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n

Enter name for new remote.
name&amp;gt; magalu

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...Magalu, ...and others
   \ (s3)
[snip]
Storage&amp;gt; s3

Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Magalu Object Storage
   \ (Magalu)
[snip]
provider&amp;gt; Magalu

Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth&amp;gt; 1

Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id&amp;gt; ACCESS_KEY

Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key&amp;gt; SECRET_ACCESS_KEY

Option endpoint.
Endpoint for Magalu Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / São Paulo, SP (BR), br-se1
   \ (br-se1.magaluobjects.com)
 2 / Fortaleza, CE (BR), br-ne1
   \ (br-ne1.magaluobjects.com)
endpoint&amp;gt; 2

Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn&amp;#39;t set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn&amp;#39;t copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
[snip]
acl&amp;gt; 

Edit advanced config?
y) Yes
n) No (default)
y/n&amp;gt; n

Configuration complete.
Options:
- type: s3
- provider: Magalu
- access_key_id: ACCESS_KEY
- secret_access_key: SECRET_ACCESS_KEY
- endpoint: br-ne1.magaluobjects.com
Keep this &amp;#34;magalu&amp;#34; remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This will leave the config file looking like this.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[magalu]
type = s3
provider = Magalu
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = br-ne1.magaluobjects.com
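&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once configured you can, for example, make a bucket and copy files into it (the bucket name &lt;code&gt;mybucket&lt;/code&gt; is just an example):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone mkdir magalu:mybucket
rclone copy /path/to/files magalu:mybucket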
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;arvan-cloud&#34;&gt;ArvanCloud&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://www.arvancloud.com/en/products/cloud-storage&#34;&gt;ArvanCloud&lt;/a&gt; Object Storage goes beyond limited traditional file storage.
It gives you access to backup and archived files and allows sharing.
Files such as profile images, images sent by users, or scanned documents can be stored securely and easily in the Object Storage service.&lt;/p&gt;
&lt;p&gt;ArvanCloud provides an S3 interface which can be configured for use with
rclone like this.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s&amp;gt; n
name&amp;gt; ArvanCloud
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio)
   \ &amp;#34;s3&amp;#34;
[snip]
Storage&amp;gt; s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ &amp;#34;false&amp;#34;
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ &amp;#34;true&amp;#34;
env_auth&amp;gt; 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id&amp;gt; YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key&amp;gt; YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia, or Pacific Northwest.
   | Leave location constraint empty.
   \ &amp;#34;us-east-1&amp;#34;
[snip]
region&amp;gt; 
Endpoint for S3 API.
Leave blank if using ArvanCloud to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint&amp;gt; s3.arvanstorage.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for Iran-Tehran Region.
   \ &amp;#34;&amp;#34;
[snip]
location_constraint&amp;gt;
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ &amp;#34;private&amp;#34;
[snip]
acl&amp;gt;
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ &amp;#34;&amp;#34;
 2 / AES256
   \ &amp;#34;AES256&amp;#34;
server_side_encryption&amp;gt;
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ &amp;#34;&amp;#34;
 2 / Standard storage class
   \ &amp;#34;STANDARD&amp;#34;
storage_class&amp;gt;
Remote config
--------------------
[ArvanCloud]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = ir-thr-at1
endpoint = s3.arvanstorage.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This will leave the config file looking like this.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[ArvanCloud]
type = s3
provider = ArvanCloud
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.arvanstorage.com
location_constraint =
acl =
server_side_encryption =
storage_class =
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;tencent-cos&#34;&gt;Tencent COS&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://intl.cloud.tencent.com/product/cos&#34;&gt;Tencent Cloud Object Storage (COS)&lt;/a&gt; is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, convenient, and low-cost, with massive capacity and low latency.&lt;/p&gt;
&lt;p&gt;To configure access to Tencent COS, follow the steps below:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;rclone config&lt;/code&gt; and select &lt;code&gt;n&lt;/code&gt; for a new remote.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;Give the configuration a name. For example, name it &#39;cos&#39;.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;name&amp;gt; cos
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Select &lt;code&gt;s3&lt;/code&gt; storage.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ &amp;#34;s3&amp;#34;
[snip]
Storage&amp;gt; s3
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;4&#34;&gt;
&lt;li&gt;Select the &lt;code&gt;TencentCOS&lt;/code&gt; provider.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
   \ &amp;#34;AWS&amp;#34;
[snip]
11 / Tencent Cloud Object Storage (COS)
   \ &amp;#34;TencentCOS&amp;#34;
[snip]
provider&amp;gt; TencentCOS
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;5&#34;&gt;
&lt;li&gt;Enter your Tencent Cloud SecretId and SecretKey.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default (&amp;#34;false&amp;#34;).
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ &amp;#34;false&amp;#34;
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ &amp;#34;true&amp;#34;
env_auth&amp;gt; 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
access_key_id&amp;gt; AKIDxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
secret_access_key&amp;gt; xxxxxxxxxxx
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;6&#34;&gt;
&lt;li&gt;Select the endpoint for Tencent COS. This is the standard endpoint for the chosen region.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt; 1 / Beijing Region.
   \ &amp;#34;cos.ap-beijing.myqcloud.com&amp;#34;
 2 / Nanjing Region.
   \ &amp;#34;cos.ap-nanjing.myqcloud.com&amp;#34;
 3 / Shanghai Region.
   \ &amp;#34;cos.ap-shanghai.myqcloud.com&amp;#34;
 4 / Guangzhou Region.
   \ &amp;#34;cos.ap-guangzhou.myqcloud.com&amp;#34;
[snip]
endpoint&amp;gt; 4
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;7&#34;&gt;
&lt;li&gt;Choose the ACL and storage class.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Note that this ACL is applied when server-side copying objects as S3
doesn&amp;#39;t copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
 1 / Owner gets Full_CONTROL. No one else has access rights (default).
   \ &amp;#34;default&amp;#34;
[snip]
acl&amp;gt; 1
The storage class to use when storing new objects in Tencent COS.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
 1 / Default
   \ &amp;#34;&amp;#34;
[snip]
storage_class&amp;gt; 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n&amp;gt; n
Remote config
--------------------
[cos]
type = s3
provider = TencentCOS
env_auth = false
access_key_id = xxx
secret_access_key = xxx
endpoint = cos.ap-guangzhou.myqcloud.com
acl = default
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
Current remotes:

Name                 Type
====                 ====
cos                  s3
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;netease-nos&#34;&gt;Netease NOS&lt;/h3&gt;
&lt;p&gt;For Netease NOS, configure as usual with the configurator &lt;code&gt;rclone config&lt;/code&gt;,
setting the provider to &lt;code&gt;Netease&lt;/code&gt;.  This will automatically set
&lt;code&gt;force_path_style = false&lt;/code&gt;, which is necessary for it to run properly.&lt;/p&gt;
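&lt;p&gt;The resulting config file entry might look something like this (the keys and endpoint shown are placeholders, not real values):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[nos]
type = s3
provider = Netease
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = YOUR_NOS_ENDPOINT
&lt;/code&gt;&lt;/pre&gt;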
&lt;h3 id=&#34;petabox&#34;&gt;Petabox&lt;/h3&gt;
&lt;p&gt;Here is an example of making a &lt;a href=&#34;https://petabox.io/&#34;&gt;Petabox&lt;/a&gt;
configuration. First run:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;rclone config
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;This will guide you through an interactive setup process.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s&amp;gt; n

Enter name for new remote.
name&amp;gt; My Petabox Storage

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ &amp;#34;s3&amp;#34;
[snip]
Storage&amp;gt; s3

Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Petabox Object Storage
   \ (Petabox)
[snip]
provider&amp;gt; Petabox

Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth&amp;gt; 1

Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id&amp;gt; YOUR_ACCESS_KEY_ID

Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key&amp;gt; YOUR_SECRET_ACCESS_KEY

Option region.
Region where your bucket will be created and your data stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / US East (N. Virginia)
   \ (us-east-1)
 2 / Europe (Frankfurt)
   \ (eu-central-1)
 3 / Asia Pacific (Singapore)
   \ (ap-southeast-1)
 4 / Middle East (Bahrain)
   \ (me-south-1)
 5 / South America (São Paulo)
   \ (sa-east-1)
region&amp;gt; 1

Option endpoint.
Endpoint for Petabox S3 Object Storage.
Specify the endpoint from the same region.
Choose a number from below, or type in your own value.
 1 / US East (N. Virginia)
   \ (s3.petabox.io)
 2 / US East (N. Virginia)
   \ (s3.us-east-1.petabox.io)
 3 / Europe (Frankfurt)
   \ (s3.eu-central-1.petabox.io)
 4 / Asia Pacific (Singapore)
   \ (s3.ap-southeast-1.petabox.io)
 5 / Middle East (Bahrain)
   \ (s3.me-south-1.petabox.io)
 6 / South America (São Paulo)
   \ (s3.sa-east-1.petabox.io)
endpoint&amp;gt; 1

Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn&amp;#39;t set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn&amp;#39;t copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
   / Object owner gets FULL_CONTROL.
 5 | Bucket owner gets READ access.
   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-read)
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-full-control)
acl&amp;gt; 1

Edit advanced config?
y) Yes
n) No (default)
y/n&amp;gt; n

Configuration complete.
Options:
- type: s3
- provider: Petabox
- access_key_id: YOUR_ACCESS_KEY_ID
- secret_access_key: YOUR_SECRET_ACCESS_KEY
- region: us-east-1
- endpoint: s3.petabox.io
Keep this &amp;#34;My Petabox Storage&amp;#34; remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This will leave the config file looking like this.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[My Petabox Storage]
type = s3
provider = Petabox
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-east-1
endpoint = s3.petabox.io
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;storj&#34;&gt;Storj&lt;/h3&gt;
&lt;p&gt;Storj is a decentralized cloud storage service which can be used through its
native protocol or an S3 compatible gateway.&lt;/p&gt;
&lt;p&gt;The S3 compatible gateway is configured using &lt;code&gt;rclone config&lt;/code&gt; with a
type of &lt;code&gt;s3&lt;/code&gt; and with a provider name of &lt;code&gt;Storj&lt;/code&gt;. Here is an example
run of the configurator.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Type of storage to configure.
Storage&amp;gt; s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth&amp;gt; 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id&amp;gt; XXXX (as shown when creating the access grant)
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key&amp;gt; XXXX (as shown when creating the access grant)
Option endpoint.
Endpoint of the Shared Gateway.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / EU1 Shared Gateway
   \ (gateway.eu1.storjshare.io)
 2 / US1 Shared Gateway
   \ (gateway.us1.storjshare.io)
 3 / Asia-Pacific Shared Gateway
   \ (gateway.ap1.storjshare.io)
endpoint&amp;gt; 1 (as shown when creating the access grant)
Edit advanced config?
y) Yes
n) No (default)
y/n&amp;gt; n
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that S3 credentials are generated when you &lt;a href=&#34;https://docs.storj.io/dcs/api-reference/s3-compatible-gateway#usage&#34;&gt;create an access
grant&lt;/a&gt;.&lt;/p&gt;
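&lt;p&gt;Once configured, the gateway remote can be used like any other S3 remote. For example (the remote name &lt;code&gt;storj-gw&lt;/code&gt; and the bucket name here are placeholders):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone mkdir storj-gw:mybucket
rclone copy /local/path storj-gw:mybucket
rclone ls storj-gw:mybucket
&lt;/code&gt;&lt;/pre&gt;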
&lt;h4 id=&#34;backend-quirks&#34;&gt;Backend quirks&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;--chunk-size&lt;/code&gt; is forced to be 64 MiB or greater. This will use more
memory than the default of 5 MiB.&lt;/li&gt;
&lt;li&gt;Server side copy is disabled as it isn&#39;t currently supported in the
gateway.&lt;/li&gt;
&lt;li&gt;GetTier and SetTier are not supported.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;backend-bugs&#34;&gt;Backend bugs&lt;/h4&gt;
&lt;p&gt;Due to &lt;a href=&#34;https://github.com/storj/gateway-mt/issues/39&#34;&gt;issue #39&lt;/a&gt;
uploading multipart files via the S3 gateway causes them to lose their
metadata. For rclone&#39;s purpose this means that the modification time
is not stored, nor is any MD5SUM (if one is available from the
source).&lt;/p&gt;
&lt;p&gt;This has the following consequences:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Using &lt;code&gt;rclone rcat&lt;/code&gt; will fail as the metadata doesn&#39;t match after upload&lt;/li&gt;
&lt;li&gt;Uploading files with &lt;code&gt;rclone mount&lt;/code&gt; will fail for the same reason
&lt;ul&gt;
&lt;li&gt;This can be worked around by using &lt;code&gt;--vfs-cache-mode writes&lt;/code&gt; or &lt;code&gt;--vfs-cache-mode full&lt;/code&gt; or setting &lt;code&gt;--s3-upload-cutoff&lt;/code&gt; large&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Files uploaded via a multipart upload won&#39;t have their modtimes stored
&lt;ul&gt;
&lt;li&gt;This will mean that &lt;code&gt;rclone sync&lt;/code&gt; will likely keep trying to upload files bigger than &lt;code&gt;--s3-upload-cutoff&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;This can be worked around with &lt;code&gt;--checksum&lt;/code&gt; or &lt;code&gt;--size-only&lt;/code&gt; or setting &lt;code&gt;--s3-upload-cutoff&lt;/code&gt; large&lt;/li&gt;
&lt;li&gt;Note that the maximum value for &lt;code&gt;--s3-upload-cutoff&lt;/code&gt; is 5 GiB&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;One general purpose workaround is to set &lt;code&gt;--s3-upload-cutoff 5G&lt;/code&gt;. This
means that rclone will upload files smaller than 5GiB as single parts.
Note that this can be set in the config file with &lt;code&gt;upload_cutoff = 5G&lt;/code&gt;
or configured in the advanced settings. If you regularly transfer
files larger than 5G then using &lt;code&gt;--checksum&lt;/code&gt; or &lt;code&gt;--size-only&lt;/code&gt; in
&lt;code&gt;rclone sync&lt;/code&gt; is the recommended workaround.&lt;/p&gt;
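&lt;p&gt;For example, either of these invocations (the remote name and paths are placeholders) applies the workaround to a sync:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# upload files smaller than 5 GiB as single parts
rclone sync --s3-upload-cutoff 5G /local/path remote:bucket

# or compare by checksum instead of modification time
rclone sync --checksum /local/path remote:bucket
&lt;/code&gt;&lt;/pre&gt;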
&lt;h4 id=&#34;comparison-with-the-native-protocol&#34;&gt;Comparison with the native protocol&lt;/h4&gt;
&lt;p&gt;Use &lt;a href=&#34;https://rclone.org/storj&#34;&gt;the native protocol&lt;/a&gt; to take advantage of
client-side encryption as well as to achieve the best possible
download performance. Uploads will be erasure-coded locally, thus a
1 GB upload will result in 2.68 GB of data being uploaded to storage
nodes across the network.&lt;/p&gt;
&lt;p&gt;Use this backend and the S3 compatible Hosted Gateway to increase
upload performance and reduce the load on your systems and network.
Uploads will be encrypted and erasure-coded server-side, thus a 1 GB
upload will result in only 1 GB of data being uploaded to storage
nodes across the network.&lt;/p&gt;
&lt;p&gt;For a more detailed comparison please check the documentation of the
&lt;a href=&#34;https://rclone.org/storj&#34;&gt;storj&lt;/a&gt; backend.&lt;/p&gt;
&lt;h2 id=&#34;memory-usage-memory&#34;&gt;Memory usage&lt;/h2&gt;
&lt;p&gt;The most common cause of rclone using lots of memory is a single
directory containing millions of files. Although S3 doesn&#39;t really have the
concept of directories, rclone syncs on a directory-by-directory
basis to be compatible with normal filing systems.&lt;/p&gt;
&lt;p&gt;Rclone loads each directory into memory as rclone objects. Each rclone
object takes 0.5k-1k of memory, so approximately 1GB per 1,000,000
files, and the sync for that directory does not begin until it is
entirely loaded in memory. So the sync can take a long time to start
for large directories.&lt;/p&gt;
&lt;p&gt;To sync a directory with 100,000,000 files in it you would need approximately
100 GB of memory. At some point the amount of memory becomes difficult
to provide so there is
&lt;a href=&#34;https://github.com/rclone/rclone/wiki/Big-syncs-with-millions-of-files&#34;&gt;a workaround for this&lt;/a&gt;
which involves a bit of scripting.&lt;/p&gt;
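&lt;p&gt;A much simplified sketch of that workaround is to list the top level directories and sync them one at a time, so that only one directory&#39;s worth of objects is held in memory at once (the paths and remote name are placeholders):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;#!/bin/sh
# sync each top level directory separately
rclone lsf --dirs-only /local/path | while read dir; do
    rclone sync &#34;/local/path/$dir&#34; &#34;remote:bucket/$dir&#34;
done
&lt;/code&gt;&lt;/pre&gt;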
&lt;p&gt;At some point rclone will gain a sync mode which is effectively this
workaround but built into rclone.&lt;/p&gt;
&lt;h2 id=&#34;limitations&#34;&gt;Limitations&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;rclone about&lt;/code&gt; is not supported by the S3 backend. Backends without
this capability cannot determine free space for an rclone mount or
use policy &lt;code&gt;mfs&lt;/code&gt; (most free space) as a member of an rclone union
remote.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;https://rclone.org/overview/#optional-features&#34;&gt;List of backends that do not support rclone about&lt;/a&gt; and &lt;a href=&#34;https://rclone.org/commands/rclone_about/&#34;&gt;rclone about&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;synology-c2&#34;&gt;Synology C2 Object Storage&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://c2.synology.com/en-global/object-storage/overview&#34;&gt;Synology C2 Object Storage&lt;/a&gt; provides a secure, S3-compatible, and cost-effective cloud storage solution with no API request fees, download fees, or deletion penalties.&lt;/p&gt;
&lt;p&gt;The S3 compatible gateway is configured using &lt;code&gt;rclone config&lt;/code&gt; with a
type of &lt;code&gt;s3&lt;/code&gt; and with a provider name of &lt;code&gt;Synology&lt;/code&gt;. Here is an example
run of the configurator.&lt;/p&gt;
&lt;p&gt;First run:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone config
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This will guide you through an interactive setup process.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config

n/s/q&amp;gt; n

Enter name for new remote.
name&amp;gt; syno

Type of storage to configure.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value

XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ &amp;#34;s3&amp;#34;

Storage&amp;gt; s3

Choose your S3 provider.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
 24 / Synology C2 Object Storage
   \ (Synology)

provider&amp;gt; Synology

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default (&amp;#34;false&amp;#34;).
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ &amp;#34;false&amp;#34;
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ &amp;#34;true&amp;#34;

env_auth&amp;gt; 1

AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).

access_key_id&amp;gt; accesskeyid

AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).

secret_access_key&amp;gt; secretaccesskey

Region where your data stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Europe Region 1
   \ (eu-001)
 2 / Europe Region 2
   \ (eu-002)
 3 / US Region 1
   \ (us-001)
 4 / US Region 2
   \ (us-002)
 5 / Asia (Taiwan)
   \ (tw-001)

region&amp;gt; 1

Option endpoint.
Endpoint for Synology C2 Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / EU Endpoint 1
   \ (eu-001.s3.synologyc2.net)
 2 / US Endpoint 1
   \ (us-001.s3.synologyc2.net)
 3 / TW Endpoint 1
   \ (tw-001.s3.synologyc2.net)

endpoint&amp;gt; 1

Option location_constraint.
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint&amp;gt;

Edit advanced config? (y/n)
y) Yes
n) No
y/n&amp;gt; y

Option no_check_bucket.
If set, don&amp;#39;t attempt to check the bucket exists or create it.
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
It can also be needed if the user you are using does not have bucket
creation permissions. Before v1.52.0 this would have passed silently
due to a bug.
Enter a boolean value (true or false). Press Enter for the default (true).

no_check_bucket&amp;gt; true

Configuration complete.
Options:
- type: s3
- provider: Synology
- region: eu-001
- endpoint: eu-001.s3.synologyc2.net
- no_check_bucket: true
Keep this &amp;#34;syno&amp;#34; remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote

y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;</description>
    </item>
    
    <item>
      <title>Authors</title>
      <link>https://rclone.org/authors/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/authors/</guid>
      <description>&lt;h1 id=&#34;authors-and-contributors&#34;&gt;Authors and contributors&lt;/h1&gt;
&lt;h2 id=&#34;authors&#34;&gt;Authors&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Nick Craig-Wood &lt;a href=&#34;mailto:nick@craig-wood.com&#34;&gt;nick@craig-wood.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;contributors&#34;&gt;Contributors&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Alex Couper &lt;a href=&#34;mailto:amcouper@gmail.com&#34;&gt;amcouper@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Leonid Shalupov &lt;a href=&#34;mailto:leonid@shalupov.com&#34;&gt;leonid@shalupov.com&lt;/a&gt; &lt;a href=&#34;mailto:shalupov@diverse.org.ru&#34;&gt;shalupov@diverse.org.ru&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Shimon Doodkin &lt;a href=&#34;mailto:helpmepro1@gmail.com&#34;&gt;helpmepro1@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Colin Nicholson &lt;a href=&#34;mailto:colin@colinn.com&#34;&gt;colin@colinn.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Klaus Post &lt;a href=&#34;mailto:klauspost@gmail.com&#34;&gt;klauspost@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sergey Tolmachev &lt;a href=&#34;mailto:tolsi.ru@gmail.com&#34;&gt;tolsi.ru@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Adriano Aurélio Meirelles &lt;a href=&#34;mailto:adriano@atinge.com&#34;&gt;adriano@atinge.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;C. Bess &lt;a href=&#34;mailto:cbess@users.noreply.github.com&#34;&gt;cbess@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dmitry Burdeev &lt;a href=&#34;mailto:dibu28@gmail.com&#34;&gt;dibu28@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Joseph Spurrier &lt;a href=&#34;mailto:github@josephspurrier.com&#34;&gt;github@josephspurrier.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Björn Harrtell &lt;a href=&#34;mailto:bjorn@wololo.org&#34;&gt;bjorn@wololo.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Xavier Lucas &lt;a href=&#34;mailto:xavier.lucas@corp.ovh.com&#34;&gt;xavier.lucas@corp.ovh.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Werner Beroux &lt;a href=&#34;mailto:werner@beroux.com&#34;&gt;werner@beroux.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Brian Stengaard &lt;a href=&#34;mailto:brian@stengaard.eu&#34;&gt;brian@stengaard.eu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jakub Gedeon &lt;a href=&#34;mailto:jgedeon@sofi.com&#34;&gt;jgedeon@sofi.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jim Tittsler &lt;a href=&#34;mailto:jwt@onjapan.net&#34;&gt;jwt@onjapan.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Michal Witkowski &lt;a href=&#34;mailto:michal@improbable.io&#34;&gt;michal@improbable.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Fabian Ruff &lt;a href=&#34;mailto:fabian.ruff@sap.com&#34;&gt;fabian.ruff@sap.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Leigh Klotz &lt;a href=&#34;mailto:klotz@quixey.com&#34;&gt;klotz@quixey.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Romain Lapray &lt;a href=&#34;mailto:lapray.romain@gmail.com&#34;&gt;lapray.romain@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Justin R. Wilson &lt;a href=&#34;mailto:jrw972@gmail.com&#34;&gt;jrw972@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Antonio Messina &lt;a href=&#34;mailto:antonio.s.messina@gmail.com&#34;&gt;antonio.s.messina@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Stefan G. Weichinger &lt;a href=&#34;mailto:office@oops.co.at&#34;&gt;office@oops.co.at&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Per Cederberg &lt;a href=&#34;mailto:cederberg@gmail.com&#34;&gt;cederberg@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Radek Šenfeld &lt;a href=&#34;mailto:rush@logic.cz&#34;&gt;rush@logic.cz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Fredrik Fornwall &lt;a href=&#34;mailto:fredrik@fornwall.net&#34;&gt;fredrik@fornwall.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Asko Tamm &lt;a href=&#34;mailto:asko@deekit.net&#34;&gt;asko@deekit.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;xor-zz &lt;a href=&#34;mailto:xor@gstocco.com&#34;&gt;xor@gstocco.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tomasz Mazur &lt;a href=&#34;mailto:tmazur90@gmail.com&#34;&gt;tmazur90@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Marco Paganini &lt;a href=&#34;mailto:paganini@paganini.net&#34;&gt;paganini@paganini.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Felix Bünemann &lt;a href=&#34;mailto:buenemann@louis.info&#34;&gt;buenemann@louis.info&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Durval Menezes &lt;a href=&#34;mailto:jmrclone@durval.com&#34;&gt;jmrclone@durval.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Luiz Carlos Rumbelsperger Viana &lt;a href=&#34;mailto:maxd13_luiz_carlos@hotmail.com&#34;&gt;maxd13_luiz_carlos@hotmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Stefan Breunig &lt;a href=&#34;mailto:stefan-github@yrden.de&#34;&gt;stefan-github@yrden.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alishan Ladhani &lt;a href=&#34;mailto:ali-l@users.noreply.github.com&#34;&gt;ali-l@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;0xJAKE &lt;a href=&#34;mailto:0xJAKE@users.noreply.github.com&#34;&gt;0xJAKE@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Thibault Molleman &lt;a href=&#34;mailto:thibaultmol@users.noreply.github.com&#34;&gt;thibaultmol@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Scott McGillivray &lt;a href=&#34;mailto:scott.mcgillivray@gmail.com&#34;&gt;scott.mcgillivray@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Bjørn Erik Pedersen &lt;a href=&#34;mailto:bjorn.erik.pedersen@gmail.com&#34;&gt;bjorn.erik.pedersen@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Lukas Loesche &lt;a href=&#34;mailto:lukas@mesosphere.io&#34;&gt;lukas@mesosphere.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;emyarod &lt;a href=&#34;mailto:emyarod@users.noreply.github.com&#34;&gt;emyarod@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;T.C. Ferguson &lt;a href=&#34;mailto:tcf909@gmail.com&#34;&gt;tcf909@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Brandur &lt;a href=&#34;mailto:brandur@mutelight.org&#34;&gt;brandur@mutelight.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dario Giovannetti &lt;a href=&#34;mailto:dev@dariogiovannetti.net&#34;&gt;dev@dariogiovannetti.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Károly Oláh &lt;a href=&#34;mailto:okaresz@aol.com&#34;&gt;okaresz@aol.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jon Yergatian &lt;a href=&#34;mailto:jon@macfanatic.ca&#34;&gt;jon@macfanatic.ca&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jack Schmidt &lt;a href=&#34;mailto:github@mowsey.org&#34;&gt;github@mowsey.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dedsec1 &lt;a href=&#34;mailto:Dedsec1@users.noreply.github.com&#34;&gt;Dedsec1@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Hisham Zarka &lt;a href=&#34;mailto:hzarka@gmail.com&#34;&gt;hzarka@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jérôme Vizcaino &lt;a href=&#34;mailto:jerome.vizcaino@gmail.com&#34;&gt;jerome.vizcaino@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mike Tesch &lt;a href=&#34;mailto:mjt6129@rit.edu&#34;&gt;mjt6129@rit.edu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Marvin Watson &lt;a href=&#34;mailto:marvwatson@users.noreply.github.com&#34;&gt;marvwatson@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Danny Tsai &lt;a href=&#34;mailto:danny8376@gmail.com&#34;&gt;danny8376@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Yoni Jah &lt;a href=&#34;mailto:yonjah+git@gmail.com&#34;&gt;yonjah+git@gmail.com&lt;/a&gt; &lt;a href=&#34;mailto:yonjah+github@gmail.com&#34;&gt;yonjah+github@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Stephen Harris &lt;a href=&#34;mailto:github@spuddy.org&#34;&gt;github@spuddy.org&lt;/a&gt; &lt;a href=&#34;mailto:sweharris@users.noreply.github.com&#34;&gt;sweharris@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ihor Dvoretskyi &lt;a href=&#34;mailto:ihor.dvoretskyi@gmail.com&#34;&gt;ihor.dvoretskyi@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jon Craton &lt;a href=&#34;mailto:jncraton@gmail.com&#34;&gt;jncraton@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Hraban Luyat &lt;a href=&#34;mailto:hraban@0brg.net&#34;&gt;hraban@0brg.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Michael Ledin &lt;a href=&#34;mailto:mledin89@gmail.com&#34;&gt;mledin89@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Martin Kristensen &lt;a href=&#34;mailto:me@azgul.com&#34;&gt;me@azgul.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Too Much IO &lt;a href=&#34;mailto:toomuchio@users.noreply.github.com&#34;&gt;toomuchio@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Anisse Astier &lt;a href=&#34;mailto:anisse@astier.eu&#34;&gt;anisse@astier.eu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Zahiar Ahmed &lt;a href=&#34;mailto:zahiar@live.com&#34;&gt;zahiar@live.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Igor Kharin &lt;a href=&#34;mailto:igorkharin@gmail.com&#34;&gt;igorkharin@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Bill Zissimopoulos &lt;a href=&#34;mailto:billziss@navimatics.com&#34;&gt;billziss@navimatics.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Bob Potter &lt;a href=&#34;mailto:bobby.potter@gmail.com&#34;&gt;bobby.potter@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Steven Lu &lt;a href=&#34;mailto:tacticalazn@gmail.com&#34;&gt;tacticalazn@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sjur Fredriksen &lt;a href=&#34;mailto:sjurtf@ifi.uio.no&#34;&gt;sjurtf@ifi.uio.no&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ruwbin &lt;a href=&#34;mailto:hubus12345@gmail.com&#34;&gt;hubus12345@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Fabian Möller &lt;a href=&#34;mailto:fabianm88@gmail.com&#34;&gt;fabianm88@gmail.com&lt;/a&gt; &lt;a href=&#34;mailto:f.moeller@nynex.de&#34;&gt;f.moeller@nynex.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Edward Q. Bridges &lt;a href=&#34;mailto:github@eqbridges.com&#34;&gt;github@eqbridges.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vasiliy Tolstov &lt;a href=&#34;mailto:v.tolstov@selfip.ru&#34;&gt;v.tolstov@selfip.ru&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Harshavardhana &lt;a href=&#34;mailto:harsha@minio.io&#34;&gt;harsha@minio.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;sainaen &lt;a href=&#34;mailto:sainaen@gmail.com&#34;&gt;sainaen@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;gdm85 &lt;a href=&#34;mailto:gdm85@users.noreply.github.com&#34;&gt;gdm85@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Yaroslav Halchenko &lt;a href=&#34;mailto:debian@onerussian.com&#34;&gt;debian@onerussian.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;John Papandriopoulos &lt;a href=&#34;mailto:jpap@users.noreply.github.com&#34;&gt;jpap@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Zhiming Wang &lt;a href=&#34;mailto:zmwangx@gmail.com&#34;&gt;zmwangx@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andy Pilate &lt;a href=&#34;mailto:cubox@cubox.me&#34;&gt;cubox@cubox.me&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Oliver Heyme &lt;a href=&#34;mailto:olihey@googlemail.com&#34;&gt;olihey@googlemail.com&lt;/a&gt; &lt;a href=&#34;mailto:olihey@users.noreply.github.com&#34;&gt;olihey@users.noreply.github.com&lt;/a&gt; &lt;a href=&#34;mailto:de8olihe@lego.com&#34;&gt;de8olihe@lego.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;wuyu &lt;a href=&#34;mailto:wuyu@yunify.com&#34;&gt;wuyu@yunify.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andrei Dragomir &lt;a href=&#34;mailto:adragomi@adobe.com&#34;&gt;adragomi@adobe.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Christian Brüggemann &lt;a href=&#34;mailto:mail@cbruegg.com&#34;&gt;mail@cbruegg.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alex McGrath Kraak &lt;a href=&#34;mailto:amkdude@gmail.com&#34;&gt;amkdude@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;bpicode &lt;a href=&#34;mailto:bjoern.pirnay@googlemail.com&#34;&gt;bjoern.pirnay@googlemail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Daniel Jagszent &lt;a href=&#34;mailto:daniel@jagszent.de&#34;&gt;daniel@jagszent.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Josiah White &lt;a href=&#34;mailto:thegenius2009@gmail.com&#34;&gt;thegenius2009@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ishuah Kariuki &lt;a href=&#34;mailto:kariuki@ishuah.com&#34;&gt;kariuki@ishuah.com&lt;/a&gt; &lt;a href=&#34;mailto:ishuah91@gmail.com&#34;&gt;ishuah91@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jan Varho &lt;a href=&#34;mailto:jan@varho.org&#34;&gt;jan@varho.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Girish Ramakrishnan &lt;a href=&#34;mailto:girish@cloudron.io&#34;&gt;girish@cloudron.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;LingMan &lt;a href=&#34;mailto:LingMan@users.noreply.github.com&#34;&gt;LingMan@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jacob McNamee &lt;a href=&#34;mailto:jacobmcnamee@gmail.com&#34;&gt;jacobmcnamee@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;jersou &lt;a href=&#34;mailto:jertux@gmail.com&#34;&gt;jertux@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;thierry &lt;a href=&#34;mailto:thierry@substantiel.fr&#34;&gt;thierry@substantiel.fr&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Simon Leinen &lt;a href=&#34;mailto:simon.leinen@gmail.com&#34;&gt;simon.leinen@gmail.com&lt;/a&gt; &lt;a href=&#34;mailto:ubuntu@s3-test.novalocal&#34;&gt;ubuntu@s3-test.novalocal&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dan Dascalescu &lt;a href=&#34;mailto:ddascalescu+github@gmail.com&#34;&gt;ddascalescu+github@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jason Rose &lt;a href=&#34;mailto:jason@jro.io&#34;&gt;jason@jro.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andrew Starr-Bochicchio &lt;a href=&#34;mailto:a.starr.b@gmail.com&#34;&gt;a.starr.b@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;John Leach &lt;a href=&#34;mailto:john@johnleach.co.uk&#34;&gt;john@johnleach.co.uk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Corban Raun &lt;a href=&#34;mailto:craun@instructure.com&#34;&gt;craun@instructure.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Pierre Carlson &lt;a href=&#34;mailto:mpcarl@us.ibm.com&#34;&gt;mpcarl@us.ibm.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ernest Borowski &lt;a href=&#34;mailto:er.borowski@gmail.com&#34;&gt;er.borowski@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Remus Bunduc &lt;a href=&#34;mailto:remus.bunduc@gmail.com&#34;&gt;remus.bunduc@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Iakov Davydov &lt;a href=&#34;mailto:iakov.davydov@unil.ch&#34;&gt;iakov.davydov@unil.ch&lt;/a&gt; &lt;a href=&#34;mailto:dav05.gith@myths.ru&#34;&gt;dav05.gith@myths.ru&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jakub Tasiemski &lt;a href=&#34;mailto:tasiemski@gmail.com&#34;&gt;tasiemski@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David Minor &lt;a href=&#34;mailto:dminor@saymedia.com&#34;&gt;dminor@saymedia.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tim Cooijmans &lt;a href=&#34;mailto:cooijmans.tim@gmail.com&#34;&gt;cooijmans.tim@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Laurence &lt;a href=&#34;mailto:liuxy6@gmail.com&#34;&gt;liuxy6@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Giovanni Pizzi &lt;a href=&#34;mailto:gio.piz@gmail.com&#34;&gt;gio.piz@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Filip Bartodziej &lt;a href=&#34;mailto:filipbartodziej@gmail.com&#34;&gt;filipbartodziej@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jon Fautley &lt;a href=&#34;mailto:jon@dead.li&#34;&gt;jon@dead.li&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;lewapm &lt;a href=&#34;mailto:32110057+lewapm@users.noreply.github.com&#34;&gt;32110057+lewapm@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Yassine Imounachen &lt;a href=&#34;mailto:yassine256@gmail.com&#34;&gt;yassine256@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chris Redekop &lt;a href=&#34;mailto:chris-redekop@users.noreply.github.com&#34;&gt;chris-redekop@users.noreply.github.com&lt;/a&gt; &lt;a href=&#34;mailto:chris.redekop@gmail.com&#34;&gt;chris.redekop@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jon Fautley &lt;a href=&#34;mailto:jon@adenoid.appstal.co.uk&#34;&gt;jon@adenoid.appstal.co.uk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Will Gunn &lt;a href=&#34;mailto:WillGunn@users.noreply.github.com&#34;&gt;WillGunn@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Lucas Bremgartner &lt;a href=&#34;mailto:lucas@bremis.ch&#34;&gt;lucas@bremis.ch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jody Frankowski &lt;a href=&#34;mailto:jody.frankowski@gmail.com&#34;&gt;jody.frankowski@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andreas Roussos &lt;a href=&#34;mailto:arouss1980@gmail.com&#34;&gt;arouss1980@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;nbuchanan &lt;a href=&#34;mailto:nbuchanan@utah.gov&#34;&gt;nbuchanan@utah.gov&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Durval Menezes &lt;a href=&#34;mailto:rclone@durval.com&#34;&gt;rclone@durval.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Victor &lt;a href=&#34;mailto:vb-github@viblo.se&#34;&gt;vb-github@viblo.se&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mateusz &lt;a href=&#34;mailto:pabian.mateusz@gmail.com&#34;&gt;pabian.mateusz@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Daniel Loader &lt;a href=&#34;mailto:spicypixel@gmail.com&#34;&gt;spicypixel@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David0rk &lt;a href=&#34;mailto:davidork@gmail.com&#34;&gt;davidork@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alexander Neumann &lt;a href=&#34;mailto:alexander@bumpern.de&#34;&gt;alexander@bumpern.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Giri Badanahatti &amp;lt;gbadanahatti@us.ibm.com@Giris-MacBook-Pro.local&amp;gt;&lt;/li&gt;
&lt;li&gt;Leo R. Lundgren &lt;a href=&#34;mailto:leo@finalresort.org&#34;&gt;leo@finalresort.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;wolfv &lt;a href=&#34;mailto:wolfv6@users.noreply.github.com&#34;&gt;wolfv6@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dave Pedu &lt;a href=&#34;mailto:dave@davepedu.com&#34;&gt;dave@davepedu.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Stefan Lindblom &lt;a href=&#34;mailto:lindblom@spotify.com&#34;&gt;lindblom@spotify.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;seuffert &lt;a href=&#34;mailto:oliver@seuffert.biz&#34;&gt;oliver@seuffert.biz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;gbadanahatti &lt;a href=&#34;mailto:37121690+gbadanahatti@users.noreply.github.com&#34;&gt;37121690+gbadanahatti@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Keith Goldfarb &lt;a href=&#34;mailto:barkofdelight@gmail.com&#34;&gt;barkofdelight@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Steve Kriss &lt;a href=&#34;mailto:steve@heptio.com&#34;&gt;steve@heptio.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chih-Hsuan Yen &lt;a href=&#34;mailto:yan12125@gmail.com&#34;&gt;yan12125@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alexander Neumann &lt;a href=&#34;mailto:fd0@users.noreply.github.com&#34;&gt;fd0@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Matt Holt &lt;a href=&#34;mailto:mholt@users.noreply.github.com&#34;&gt;mholt@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Eri Bastos &lt;a href=&#34;mailto:bastos.eri@gmail.com&#34;&gt;bastos.eri@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Michael P. Dubner &lt;a href=&#34;mailto:pywebmail@list.ru&#34;&gt;pywebmail@list.ru&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Antoine GIRARD &lt;a href=&#34;mailto:sapk@users.noreply.github.com&#34;&gt;sapk@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mateusz Piotrowski &lt;a href=&#34;mailto:mpp302@gmail.com&#34;&gt;mpp302@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Animosity022 &lt;a href=&#34;mailto:animosity22@users.noreply.github.com&#34;&gt;animosity22@users.noreply.github.com&lt;/a&gt; &lt;a href=&#34;mailto:earl.texter@gmail.com&#34;&gt;earl.texter@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Peter Baumgartner &lt;a href=&#34;mailto:pete@lincolnloop.com&#34;&gt;pete@lincolnloop.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Craig Rachel &lt;a href=&#34;mailto:craig@craigrachel.com&#34;&gt;craig@craigrachel.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Michael G. Noll &lt;a href=&#34;mailto:miguno@users.noreply.github.com&#34;&gt;miguno@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;hensur &lt;a href=&#34;mailto:me@hensur.de&#34;&gt;me@hensur.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Oliver Heyme &lt;a href=&#34;mailto:de8olihe@lego.com&#34;&gt;de8olihe@lego.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Richard Yang &lt;a href=&#34;mailto:richard@yenforyang.com&#34;&gt;richard@yenforyang.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Piotr Oleszczyk &lt;a href=&#34;mailto:piotr.oleszczyk@gmail.com&#34;&gt;piotr.oleszczyk@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Rodrigo &lt;a href=&#34;mailto:rodarima@gmail.com&#34;&gt;rodarima@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;NoLooseEnds &lt;a href=&#34;mailto:NoLooseEnds@users.noreply.github.com&#34;&gt;NoLooseEnds@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jakub Karlicek &lt;a href=&#34;mailto:jakub@karlicek.me&#34;&gt;jakub@karlicek.me&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;John Clayton &lt;a href=&#34;mailto:john@codemonkeylabs.com&#34;&gt;john@codemonkeylabs.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kasper Byrdal Nielsen &lt;a href=&#34;mailto:byrdal76@gmail.com&#34;&gt;byrdal76@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Benjamin Joseph Dag &lt;a href=&#34;mailto:bjdag1234@users.noreply.github.com&#34;&gt;bjdag1234@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;themylogin &lt;a href=&#34;mailto:themylogin@gmail.com&#34;&gt;themylogin@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Onno Zweers &lt;a href=&#34;mailto:onno.zweers@surfsara.nl&#34;&gt;onno.zweers@surfsara.nl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jasper Lievisse Adriaanse &lt;a href=&#34;mailto:jasper@humppa.nl&#34;&gt;jasper@humppa.nl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;sandeepkru &lt;a href=&#34;mailto:sandeep.ummadi@gmail.com&#34;&gt;sandeep.ummadi@gmail.com&lt;/a&gt; &lt;a href=&#34;mailto:sandeepkru@users.noreply.github.com&#34;&gt;sandeepkru@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;HerrH &lt;a href=&#34;mailto:atomtigerzoo@users.noreply.github.com&#34;&gt;atomtigerzoo@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andrew &lt;a href=&#34;mailto:4030760+sparkyman215@users.noreply.github.com&#34;&gt;4030760+sparkyman215@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;dan smith &lt;a href=&#34;mailto:XX1011@gmail.com&#34;&gt;XX1011@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Oleg Kovalov &lt;a href=&#34;mailto:iamolegkovalov@gmail.com&#34;&gt;iamolegkovalov@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ruben Vandamme &lt;a href=&#34;mailto:github-com-00ff86@vandamme.email&#34;&gt;github-com-00ff86@vandamme.email&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Cnly &lt;a href=&#34;mailto:minecnly@gmail.com&#34;&gt;minecnly@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andres Alvarez &lt;a href=&#34;mailto:1671935+kir4h@users.noreply.github.com&#34;&gt;1671935+kir4h@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;reddi1 &lt;a href=&#34;mailto:xreddi@gmail.com&#34;&gt;xreddi@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Matt Tucker &lt;a href=&#34;mailto:matthewtckr@gmail.com&#34;&gt;matthewtckr@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sebastian Bünger &lt;a href=&#34;mailto:buengese@gmail.com&#34;&gt;buengese@gmail.com&lt;/a&gt; &lt;a href=&#34;mailto:buengese@protonmail.com&#34;&gt;buengese@protonmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Martin Polden &lt;a href=&#34;mailto:mpolden@mpolden.no&#34;&gt;mpolden@mpolden.no&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alex Chen &lt;a href=&#34;mailto:Cnly@users.noreply.github.com&#34;&gt;Cnly@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Denis &lt;a href=&#34;mailto:deniskovpen@gmail.com&#34;&gt;deniskovpen@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;bsteiss &lt;a href=&#34;mailto:35940619+bsteiss@users.noreply.github.com&#34;&gt;35940619+bsteiss@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Cédric Connes &lt;a href=&#34;mailto:cedric.connes@gmail.com&#34;&gt;cedric.connes@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dr. Tobias Quathamer &lt;a href=&#34;mailto:toddy15@users.noreply.github.com&#34;&gt;toddy15@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;dcpu &lt;a href=&#34;mailto:42736967+dcpu@users.noreply.github.com&#34;&gt;42736967+dcpu@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sheldon Rupp &lt;a href=&#34;mailto:me@shel.io&#34;&gt;me@shel.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;albertony &lt;a href=&#34;mailto:12441419+albertony@users.noreply.github.com&#34;&gt;12441419+albertony@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;cron410 &lt;a href=&#34;mailto:cron410@gmail.com&#34;&gt;cron410@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Anagh Kumar Baranwal &lt;a href=&#34;mailto:6824881+darthShadow@users.noreply.github.com&#34;&gt;6824881+darthShadow@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Felix Brucker &lt;a href=&#34;mailto:felix@felixbrucker.com&#34;&gt;felix@felixbrucker.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Santiago Rodríguez &lt;a href=&#34;mailto:scollazo@users.noreply.github.com&#34;&gt;scollazo@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Craig Miskell &lt;a href=&#34;mailto:craig.miskell@fluxfederation.com&#34;&gt;craig.miskell@fluxfederation.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Antoine GIRARD &lt;a href=&#34;mailto:sapk@sapk.fr&#34;&gt;sapk@sapk.fr&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Joanna Marek &lt;a href=&#34;mailto:joanna.marek@u2i.com&#34;&gt;joanna.marek@u2i.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;frenos &lt;a href=&#34;mailto:frenos@users.noreply.github.com&#34;&gt;frenos@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ssaqua &lt;a href=&#34;mailto:ssaqua@users.noreply.github.com&#34;&gt;ssaqua@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;xnaas &lt;a href=&#34;mailto:me@xnaas.info&#34;&gt;me@xnaas.info&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Frantisek Fuka &lt;a href=&#34;mailto:fuka@fuxoft.cz&#34;&gt;fuka@fuxoft.cz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Paul Kohout &lt;a href=&#34;mailto:pauljkohout@yahoo.com&#34;&gt;pauljkohout@yahoo.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;dcpu &lt;a href=&#34;mailto:43330287+dcpu@users.noreply.github.com&#34;&gt;43330287+dcpu@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;jackyzy823 &lt;a href=&#34;mailto:jackyzy823@gmail.com&#34;&gt;jackyzy823@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David Haguenauer &lt;a href=&#34;mailto:ml@kurokatta.org&#34;&gt;ml@kurokatta.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;teresy &lt;a href=&#34;mailto:hi.teresy@gmail.com&#34;&gt;hi.teresy@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;buergi &lt;a href=&#34;mailto:patbuergi@gmx.de&#34;&gt;patbuergi@gmx.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Florian Gamboeck &lt;a href=&#34;mailto:mail@floga.de&#34;&gt;mail@floga.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ralf Hemberger &lt;a href=&#34;mailto:10364191+rhemberger@users.noreply.github.com&#34;&gt;10364191+rhemberger@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Scott Edlund &lt;a href=&#34;mailto:sedlund@users.noreply.github.com&#34;&gt;sedlund@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Erik Swanson &lt;a href=&#34;mailto:erik@retailnext.net&#34;&gt;erik@retailnext.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jake Coggiano &lt;a href=&#34;mailto:jake@stripe.com&#34;&gt;jake@stripe.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;brused27 &lt;a href=&#34;mailto:brused27@noemailaddress&#34;&gt;brused27@noemailaddress&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Peter Kaminski &lt;a href=&#34;mailto:kaminski@istori.com&#34;&gt;kaminski@istori.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Henry Ptasinski &lt;a href=&#34;mailto:henry@logout.com&#34;&gt;henry@logout.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alexander &lt;a href=&#34;mailto:kharkovalexander@gmail.com&#34;&gt;kharkovalexander@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Garry McNulty &lt;a href=&#34;mailto:garrmcnu@gmail.com&#34;&gt;garrmcnu@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mathieu Carbou &lt;a href=&#34;mailto:mathieu.carbou@gmail.com&#34;&gt;mathieu.carbou@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mark Otway &lt;a href=&#34;mailto:mark@otway.com&#34;&gt;mark@otway.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;William Cocker &lt;a href=&#34;mailto:37018962+WilliamCocker@users.noreply.github.com&#34;&gt;37018962+WilliamCocker@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;François Leurent &lt;a href=&#34;mailto:131.js@cloudyks.org&#34;&gt;131.js@cloudyks.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Arkadius Stefanski &lt;a href=&#34;mailto:arkste@gmail.com&#34;&gt;arkste@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jay &lt;a href=&#34;mailto:dev@jaygoel.com&#34;&gt;dev@jaygoel.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;andrea rota &lt;a href=&#34;mailto:a@xelera.eu&#34;&gt;a@xelera.eu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;nicolov &lt;a href=&#34;mailto:nicolov@users.noreply.github.com&#34;&gt;nicolov@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Matt Joiner &lt;a href=&#34;mailto:anacrolix@gmail.com&#34;&gt;anacrolix@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dario Guzik &lt;a href=&#34;mailto:dario@guzik.com.ar&#34;&gt;dario@guzik.com.ar&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;qip &lt;a href=&#34;mailto:qip@users.noreply.github.com&#34;&gt;qip@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;yair@unicorn &lt;a href=&#34;mailto:yair@unicorn&#34;&gt;yair@unicorn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Matt Robinson &lt;a href=&#34;mailto:brimstone@the.narro.ws&#34;&gt;brimstone@the.narro.ws&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;kayrus &lt;a href=&#34;mailto:kay.diam@gmail.com&#34;&gt;kay.diam@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Rémy Léone &lt;a href=&#34;mailto:remy.leone@gmail.com&#34;&gt;remy.leone@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Wojciech Smigielski &lt;a href=&#34;mailto:wojciech.hieronim.smigielski@gmail.com&#34;&gt;wojciech.hieronim.smigielski@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;weetmuts &lt;a href=&#34;mailto:oehrstroem@gmail.com&#34;&gt;oehrstroem@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jonathan &lt;a href=&#34;mailto:vanillajonathan@users.noreply.github.com&#34;&gt;vanillajonathan@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;James Carpenter &lt;a href=&#34;mailto:orbsmiv@users.noreply.github.com&#34;&gt;orbsmiv@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vince &lt;a href=&#34;mailto:vince0villamora@gmail.com&#34;&gt;vince0villamora@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Nestar47 &lt;a href=&#34;mailto:47841759+Nestar47@users.noreply.github.com&#34;&gt;47841759+Nestar47@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Six &lt;a href=&#34;mailto:brbsix@gmail.com&#34;&gt;brbsix@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alexandru Bumbacea &lt;a href=&#34;mailto:alexandru.bumbacea@booking.com&#34;&gt;alexandru.bumbacea@booking.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;calisro &lt;a href=&#34;mailto:robert.calistri@gmail.com&#34;&gt;robert.calistri@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dr.Rx &lt;a href=&#34;mailto:david.rey@nventive.com&#34;&gt;david.rey@nventive.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;marcintustin &lt;a href=&#34;mailto:marcintustin@users.noreply.github.com&#34;&gt;marcintustin@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;jaKa Močnik &lt;a href=&#34;mailto:jaka@koofr.net&#34;&gt;jaka@koofr.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Fionera &lt;a href=&#34;mailto:fionera@fionera.de&#34;&gt;fionera@fionera.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dan Walters &lt;a href=&#34;mailto:dan@walters.io&#34;&gt;dan@walters.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Danil Semelenov &lt;a href=&#34;mailto:sgtpep@users.noreply.github.com&#34;&gt;sgtpep@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;xopez &lt;a href=&#34;mailto:28950736+xopez@users.noreply.github.com&#34;&gt;28950736+xopez@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ben Boeckel &lt;a href=&#34;mailto:mathstuf@gmail.com&#34;&gt;mathstuf@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Manu &lt;a href=&#34;mailto:manu@snapdragon.cc&#34;&gt;manu@snapdragon.cc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kyle E. Mitchell &lt;a href=&#34;mailto:kyle@kemitchell.com&#34;&gt;kyle@kemitchell.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Gary Kim &lt;a href=&#34;mailto:gary@garykim.dev&#34;&gt;gary@garykim.dev&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jon &lt;a href=&#34;mailto:jonathn@github.com&#34;&gt;jonathn@github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jeff Quinn &lt;a href=&#34;mailto:jeffrey.quinn@bluevoyant.com&#34;&gt;jeffrey.quinn@bluevoyant.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Peter Berbec &lt;a href=&#34;mailto:peter@berbec.com&#34;&gt;peter@berbec.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;didil &lt;a href=&#34;mailto:1284255+didil@users.noreply.github.com&#34;&gt;1284255+didil@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;id01 &lt;a href=&#34;mailto:gaviniboom@gmail.com&#34;&gt;gaviniboom@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Robert Marko &lt;a href=&#34;mailto:robimarko@gmail.com&#34;&gt;robimarko@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Philip Harvey &lt;a href=&#34;mailto:32467456+pharveybattelle@users.noreply.github.com&#34;&gt;32467456+pharveybattelle@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;JorisE &lt;a href=&#34;mailto:JorisE@users.noreply.github.com&#34;&gt;JorisE@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;garry415 &lt;a href=&#34;mailto:garry.415@gmail.com&#34;&gt;garry.415@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;forgems &lt;a href=&#34;mailto:forgems@gmail.com&#34;&gt;forgems@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Florian Apolloner &lt;a href=&#34;mailto:florian@apolloner.eu&#34;&gt;florian@apolloner.eu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Aleksandar Janković &lt;a href=&#34;mailto:office@ajankovic.com&#34;&gt;office@ajankovic.com&lt;/a&gt; &lt;a href=&#34;mailto:ajankovic@users.noreply.github.com&#34;&gt;ajankovic@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Maran &lt;a href=&#34;mailto:maran@protonmail.com&#34;&gt;maran@protonmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;nguyenhuuluan434 &lt;a href=&#34;mailto:nguyenhuuluan434@gmail.com&#34;&gt;nguyenhuuluan434@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Laura Hausmann &lt;a href=&#34;mailto:zotan@zotan.pw&#34;&gt;zotan@zotan.pw&lt;/a&gt; &lt;a href=&#34;mailto:laura@hausmann.dev&#34;&gt;laura@hausmann.dev&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;yparitcher &lt;a href=&#34;mailto:y@paritcher.com&#34;&gt;y@paritcher.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;AbelThar &lt;a href=&#34;mailto:abela.tharen@gmail.com&#34;&gt;abela.tharen@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Matti Niemenmaa &lt;a href=&#34;mailto:matti.niemenmaa+git@iki.fi&#34;&gt;matti.niemenmaa+git@iki.fi&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Russell Davis &lt;a href=&#34;mailto:russelldavis@users.noreply.github.com&#34;&gt;russelldavis@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Yi FU &lt;a href=&#34;mailto:yi.fu@tink.se&#34;&gt;yi.fu@tink.se&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Paul Millar &lt;a href=&#34;mailto:paul.millar@desy.de&#34;&gt;paul.millar@desy.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;justinalin &lt;a href=&#34;mailto:justinalin@qnap.com&#34;&gt;justinalin@qnap.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;EliEron &lt;a href=&#34;mailto:subanimehd@gmail.com&#34;&gt;subanimehd@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;justina777 &lt;a href=&#34;mailto:chiahuei.lin@gmail.com&#34;&gt;chiahuei.lin@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chaitanya Bankanhal &lt;a href=&#34;mailto:bchaitanya15@gmail.com&#34;&gt;bchaitanya15@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Michał Matczuk &lt;a href=&#34;mailto:michal@scylladb.com&#34;&gt;michal@scylladb.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Macavirus &lt;a href=&#34;mailto:macavirus@zoho.com&#34;&gt;macavirus@zoho.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Abhinav Sharma &lt;a href=&#34;mailto:abhi18av@outlook.com&#34;&gt;abhi18av@outlook.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ginvine &lt;a href=&#34;mailto:34869051+ginvine@users.noreply.github.com&#34;&gt;34869051+ginvine@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Patrick Wang &lt;a href=&#34;mailto:mail6543210@yahoo.com.tw&#34;&gt;mail6543210@yahoo.com.tw&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Cenk Alti &lt;a href=&#34;mailto:cenkalti@gmail.com&#34;&gt;cenkalti@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andreas Chlupka &lt;a href=&#34;mailto:andy@chlupka.com&#34;&gt;andy@chlupka.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alfonso Montero &lt;a href=&#34;mailto:amontero@tinet.org&#34;&gt;amontero@tinet.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ivan Andreev &lt;a href=&#34;mailto:ivandeex@gmail.com&#34;&gt;ivandeex@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David Baumgold &lt;a href=&#34;mailto:david@davidbaumgold.com&#34;&gt;david@davidbaumgold.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Lars Lehtonen &lt;a href=&#34;mailto:lars.lehtonen@gmail.com&#34;&gt;lars.lehtonen@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Matei David &lt;a href=&#34;mailto:matei.david@gmail.com&#34;&gt;matei.david@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David &lt;a href=&#34;mailto:david.bramwell@endemolshine.com&#34;&gt;david.bramwell@endemolshine.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Anthony Rusdi &lt;a href=&#34;mailto:33247310+antrusd@users.noreply.github.com&#34;&gt;33247310+antrusd@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Richard Patel &lt;a href=&#34;mailto:me@terorie.dev&#34;&gt;me@terorie.dev&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;庄天翼 &lt;a href=&#34;mailto:zty0826@gmail.com&#34;&gt;zty0826@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;SwitchJS &lt;a href=&#34;mailto:dev@switchjs.com&#34;&gt;dev@switchjs.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Raphael &lt;a href=&#34;mailto:PowershellNinja@users.noreply.github.com&#34;&gt;PowershellNinja@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sezal Agrawal &lt;a href=&#34;mailto:sezalagrawal@gmail.com&#34;&gt;sezalagrawal@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tyler &lt;a href=&#34;mailto:TylerNakamura@users.noreply.github.com&#34;&gt;TylerNakamura@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Brett Dutro &lt;a href=&#34;mailto:brett.dutro@gmail.com&#34;&gt;brett.dutro@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vighnesh SK &lt;a href=&#34;mailto:booterror99@gmail.com&#34;&gt;booterror99@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Arijit Biswas &lt;a href=&#34;mailto:dibbyo456@gmail.com&#34;&gt;dibbyo456@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Michele Caci &lt;a href=&#34;mailto:michele.caci@gmail.com&#34;&gt;michele.caci@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;AlexandrBoltris &lt;a href=&#34;mailto:ua2fgb@gmail.com&#34;&gt;ua2fgb@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Bryce Larson &lt;a href=&#34;mailto:blarson@saltstack.com&#34;&gt;blarson@saltstack.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Carlos Ferreyra &lt;a href=&#34;mailto:crypticmind@gmail.com&#34;&gt;crypticmind@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Saksham Khanna &lt;a href=&#34;mailto:sakshamkhanna@outlook.com&#34;&gt;sakshamkhanna@outlook.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;dausruddin &lt;a href=&#34;mailto:5763466+dausruddin@users.noreply.github.com&#34;&gt;5763466+dausruddin@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;zero-24 &lt;a href=&#34;mailto:zero-24@users.noreply.github.com&#34;&gt;zero-24@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Xiaoxing Ye &lt;a href=&#34;mailto:ye@xiaoxing.us&#34;&gt;ye@xiaoxing.us&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Barry Muldrey &lt;a href=&#34;mailto:barry@muldrey.net&#34;&gt;barry@muldrey.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sebastian Brandt &lt;a href=&#34;mailto:sebastian.brandt@friday.de&#34;&gt;sebastian.brandt@friday.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Marco Molteni &lt;a href=&#34;mailto:marco.molteni@mailbox.org&#34;&gt;marco.molteni@mailbox.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ankur Gupta &lt;a href=&#34;mailto:7876747+ankur0493@users.noreply.github.com&#34;&gt;7876747+ankur0493@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Maciej Zimnoch &lt;a href=&#34;mailto:maciej@scylladb.com&#34;&gt;maciej@scylladb.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;anuar45 &lt;a href=&#34;mailto:serdaliyev.anuar@gmail.com&#34;&gt;serdaliyev.anuar@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Fernando &lt;a href=&#34;mailto:ferferga@users.noreply.github.com&#34;&gt;ferferga@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David Cole &lt;a href=&#34;mailto:david.cole@sohonet.com&#34;&gt;david.cole@sohonet.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Wei He &lt;a href=&#34;mailto:git@weispot.com&#34;&gt;git@weispot.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Outvi V &lt;a href=&#34;mailto:19144373+outloudvi@users.noreply.github.com&#34;&gt;19144373+outloudvi@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Thomas Kriechbaumer &lt;a href=&#34;mailto:thomas@kriechbaumer.name&#34;&gt;thomas@kriechbaumer.name&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tennix &lt;a href=&#34;mailto:tennix@users.noreply.github.com&#34;&gt;tennix@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ole Schütt &lt;a href=&#34;mailto:ole@schuett.name&#34;&gt;ole@schuett.name&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kuang-che Wu &lt;a href=&#34;mailto:kcwu@csie.org&#34;&gt;kcwu@csie.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Thomas Eales &lt;a href=&#34;mailto:wingsuit@users.noreply.github.com&#34;&gt;wingsuit@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Paul Tinsley &lt;a href=&#34;mailto:paul.tinsley@vitalsource.com&#34;&gt;paul.tinsley@vitalsource.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Felix Hungenberg &lt;a href=&#34;mailto:git@shiftgeist.com&#34;&gt;git@shiftgeist.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Benjamin Richter &lt;a href=&#34;mailto:github@dev.telepath.de&#34;&gt;github@dev.telepath.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;landall &lt;a href=&#34;mailto:cst_zf@qq.com&#34;&gt;cst_zf@qq.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;thestigma &lt;a href=&#34;mailto:thestigma@gmail.com&#34;&gt;thestigma@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;jtagcat &lt;a href=&#34;mailto:38327267+jtagcat@users.noreply.github.com&#34;&gt;38327267+jtagcat@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Damon Permezel &lt;a href=&#34;mailto:permezel@me.com&#34;&gt;permezel@me.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;boosh &lt;a href=&#34;mailto:boosh@users.noreply.github.com&#34;&gt;boosh@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;unbelauscht &lt;a href=&#34;mailto:58393353+unbelauscht@users.noreply.github.com&#34;&gt;58393353+unbelauscht@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Motonori IWAMURO &lt;a href=&#34;mailto:vmi@nifty.com&#34;&gt;vmi@nifty.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Benjapol Worakan &lt;a href=&#34;mailto:benwrk@live.com&#34;&gt;benwrk@live.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dave Koston &lt;a href=&#34;mailto:dave.koston@stackpath.com&#34;&gt;dave.koston@stackpath.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Durval Menezes &lt;a href=&#34;mailto:DurvalMenezes@users.noreply.github.com&#34;&gt;DurvalMenezes@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tim Gallant &lt;a href=&#34;mailto:me@timgallant.us&#34;&gt;me@timgallant.us&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Frederick Zhang &lt;a href=&#34;mailto:frederick888@tsundere.moe&#34;&gt;frederick888@tsundere.moe&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;valery1707 &lt;a href=&#34;mailto:valery1707@gmail.com&#34;&gt;valery1707@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Yves G &lt;a href=&#34;mailto:theYinYeti@yalis.fr&#34;&gt;theYinYeti@yalis.fr&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Shing Kit Chan &lt;a href=&#34;mailto:chanshingkit@gmail.com&#34;&gt;chanshingkit@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Franklyn Tackitt &lt;a href=&#34;mailto:franklyn@tackitt.net&#34;&gt;franklyn@tackitt.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Robert-André Mauchin &lt;a href=&#34;mailto:zebob.m@gmail.com&#34;&gt;zebob.m@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;evileye &lt;a href=&#34;mailto:48332831+ibiruai@users.noreply.github.com&#34;&gt;48332831+ibiruai@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Joachim Brandon LeBlanc &lt;a href=&#34;mailto:brandon@leblanc.codes&#34;&gt;brandon@leblanc.codes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Patryk Jakuszew &lt;a href=&#34;mailto:patryk.jakuszew@gmail.com&#34;&gt;patryk.jakuszew@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;fishbullet &lt;a href=&#34;mailto:shindu666@gmail.com&#34;&gt;shindu666@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;greatroar &amp;lt;@&amp;gt;&lt;/li&gt;
&lt;li&gt;Bernd Schoolmann &lt;a href=&#34;mailto:mail@quexten.com&#34;&gt;mail@quexten.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Elan Ruusamäe &lt;a href=&#34;mailto:glen@pld-linux.org&#34;&gt;glen@pld-linux.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Max Sum &lt;a href=&#34;mailto:max@lolyculture.com&#34;&gt;max@lolyculture.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mark Spieth &lt;a href=&#34;mailto:mspieth@users.noreply.github.com&#34;&gt;mspieth@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;harry &lt;a href=&#34;mailto:me@harry.plus&#34;&gt;me@harry.plus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Samantha McVey &lt;a href=&#34;mailto:samantham@posteo.net&#34;&gt;samantham@posteo.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jack Anderson &lt;a href=&#34;mailto:jack.anderson@metaswitch.com&#34;&gt;jack.anderson@metaswitch.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Michael G &lt;a href=&#34;mailto:draget@speciesm.net&#34;&gt;draget@speciesm.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Brandon Philips &lt;a href=&#34;mailto:brandon@ifup.org&#34;&gt;brandon@ifup.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Daven &lt;a href=&#34;mailto:dooven@users.noreply.github.com&#34;&gt;dooven@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Martin Stone &lt;a href=&#34;mailto:martin@d7415.co.uk&#34;&gt;martin@d7415.co.uk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David Bramwell &lt;a href=&#34;mailto:13053834+dbramwell@users.noreply.github.com&#34;&gt;13053834+dbramwell@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sunil Patra &lt;a href=&#34;mailto:snl_su@live.com&#34;&gt;snl_su@live.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Adam Stroud &lt;a href=&#34;mailto:adam.stroud@gmail.com&#34;&gt;adam.stroud@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kush &lt;a href=&#34;mailto:kushsharma@users.noreply.github.com&#34;&gt;kushsharma@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Matan Rosenberg &lt;a href=&#34;mailto:matan129@gmail.com&#34;&gt;matan129@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;gitch1 &lt;a href=&#34;mailto:63495046+gitch1@users.noreply.github.com&#34;&gt;63495046+gitch1@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ElonH &lt;a href=&#34;mailto:elonhhuang@gmail.com&#34;&gt;elonhhuang@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Fred &lt;a href=&#34;mailto:fred@creativeprojects.tech&#34;&gt;fred@creativeprojects.tech&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sébastien Gross &lt;a href=&#34;mailto:renard@users.noreply.github.com&#34;&gt;renard@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Maxime Suret &lt;a href=&#34;mailto:11944422+msuret@users.noreply.github.com&#34;&gt;11944422+msuret@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Caleb Case &lt;a href=&#34;mailto:caleb@storj.io&#34;&gt;caleb@storj.io&lt;/a&gt; &lt;a href=&#34;mailto:calebcase@gmail.com&#34;&gt;calebcase@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ben Zenker &lt;a href=&#34;mailto:imbenzenker@gmail.com&#34;&gt;imbenzenker@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Martin Michlmayr &lt;a href=&#34;mailto:tbm@cyrius.com&#34;&gt;tbm@cyrius.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Brandon McNama &lt;a href=&#34;mailto:bmcnama@pagerduty.com&#34;&gt;bmcnama@pagerduty.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Daniel Slyman &lt;a href=&#34;mailto:github@skylayer.eu&#34;&gt;github@skylayer.eu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alex Guerrero &lt;a href=&#34;mailto:guerrero@users.noreply.github.com&#34;&gt;guerrero@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Matteo Pietro Dazzi &lt;a href=&#34;mailto:matteopietro.dazzi@gft.com&#34;&gt;matteopietro.dazzi@gft.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;edwardxml &lt;a href=&#34;mailto:56691903+edwardxml@users.noreply.github.com&#34;&gt;56691903+edwardxml@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Roman Kredentser &lt;a href=&#34;mailto:shareed2k@gmail.com&#34;&gt;shareed2k@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kamil Trzciński &lt;a href=&#34;mailto:ayufan@ayufan.eu&#34;&gt;ayufan@ayufan.eu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Zac Rubin &lt;a href=&#34;mailto:z-0@users.noreply.github.com&#34;&gt;z-0@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vincent Feltz&lt;/li&gt;
&lt;li&gt;Heiko Bornholdt &lt;a href=&#34;mailto:bornholdt@informatik.uni-hamburg.de&#34;&gt;bornholdt@informatik.uni-hamburg.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Matteo Pietro Dazzi &lt;a href=&#34;mailto:matteopietro.dazzi@gmail.com&#34;&gt;matteopietro.dazzi@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;jtagcat &lt;a href=&#34;mailto:gitlab@c7.ee&#34;&gt;gitlab@c7.ee&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Petri Salminen &lt;a href=&#34;mailto:petri@salminen.dev&#34;&gt;petri@salminen.dev&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tim Burke &lt;a href=&#34;mailto:tim.burke@gmail.com&#34;&gt;tim.burke@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kai Lüke &lt;a href=&#34;mailto:kai@kinvolk.io&#34;&gt;kai@kinvolk.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Garrett Squire &lt;a href=&#34;mailto:github@garrettsquire.com&#34;&gt;github@garrettsquire.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Evan Harris &lt;a href=&#34;mailto:eharris@puremagic.com&#34;&gt;eharris@puremagic.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kevin &lt;a href=&#34;mailto:keyam@microsoft.com&#34;&gt;keyam@microsoft.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Morten Linderud &lt;a href=&#34;mailto:morten@linderud.pw&#34;&gt;morten@linderud.pw&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dmitry Ustalov &lt;a href=&#34;mailto:dmitry.ustalov@gmail.com&#34;&gt;dmitry.ustalov@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jack &lt;a href=&#34;mailto:196648+jdeng@users.noreply.github.com&#34;&gt;196648+jdeng@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;kcris &lt;a href=&#34;mailto:cristian.tarsoaga@gmail.com&#34;&gt;cristian.tarsoaga@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;tyhuber1 &lt;a href=&#34;mailto:68970760+tyhuber1@users.noreply.github.com&#34;&gt;68970760+tyhuber1@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David Ibarra &lt;a href=&#34;mailto:david.ibarra@realty.com&#34;&gt;david.ibarra@realty.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tim Gallant &lt;a href=&#34;mailto:tim@lilt.com&#34;&gt;tim@lilt.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kaloyan Raev &lt;a href=&#34;mailto:kaloyan@storj.io&#34;&gt;kaloyan@storj.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jay McEntire &lt;a href=&#34;mailto:jay.mcentire@gmail.com&#34;&gt;jay.mcentire@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Leo Luan &lt;a href=&#34;mailto:leoluan@us.ibm.com&#34;&gt;leoluan@us.ibm.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;aus &lt;a href=&#34;mailto:549081+aus@users.noreply.github.com&#34;&gt;549081+aus@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Aaron Gokaslan &lt;a href=&#34;mailto:agokaslan@fb.com&#34;&gt;agokaslan@fb.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Egor Margineanu &lt;a href=&#34;mailto:egmar@users.noreply.github.com&#34;&gt;egmar@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Lucas Kanashiro &lt;a href=&#34;mailto:lucas.kanashiro@canonical.com&#34;&gt;lucas.kanashiro@canonical.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;WarpedPixel &lt;a href=&#34;mailto:WarpedPixel@users.noreply.github.com&#34;&gt;WarpedPixel@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sam Edwards &lt;a href=&#34;mailto:sam@samedwards.ca&#34;&gt;sam@samedwards.ca&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;wjielai &lt;a href=&#34;mailto:gouki0123@gmail.com&#34;&gt;gouki0123@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Muffin King &lt;a href=&#34;mailto:jinxz_k@live.com&#34;&gt;jinxz_k@live.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Christopher Stewart &lt;a href=&#34;mailto:6573710+1f47a@users.noreply.github.com&#34;&gt;6573710+1f47a@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Russell Cattelan &lt;a href=&#34;mailto:cattelan@digitalelves.com&#34;&gt;cattelan@digitalelves.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;gyutw &lt;a href=&#34;mailto:30371241+gyutw@users.noreply.github.com&#34;&gt;30371241+gyutw@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Hekmon &lt;a href=&#34;mailto:edouardhur@gmail.com&#34;&gt;edouardhur@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;LaSombra &lt;a href=&#34;mailto:lasombra@users.noreply.github.com&#34;&gt;lasombra@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dov Murik &lt;a href=&#34;mailto:dov.murik@gmail.com&#34;&gt;dov.murik@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ameer Dawood &lt;a href=&#34;mailto:ameer1234567890@gmail.com&#34;&gt;ameer1234567890@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dan Hipschman &lt;a href=&#34;mailto:dan.hipschman@opendoor.com&#34;&gt;dan.hipschman@opendoor.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Josh Soref &lt;a href=&#34;mailto:jsoref@users.noreply.github.com&#34;&gt;jsoref@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David &lt;a href=&#34;mailto:david@staron.nl&#34;&gt;david@staron.nl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ingo &lt;a href=&#34;mailto:ingo@hoffmann.cx&#34;&gt;ingo@hoffmann.cx&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Adam Plánský &lt;a href=&#34;mailto:adamplansky@users.noreply.github.com&#34;&gt;adamplansky@users.noreply.github.com&lt;/a&gt; &lt;a href=&#34;mailto:adamplansky@gmail.com&#34;&gt;adamplansky@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Manish Gupta &lt;a href=&#34;mailto:manishgupta.ait@gmail.com&#34;&gt;manishgupta.ait@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Deepak Sah &lt;a href=&#34;mailto:sah.sslpu@gmail.com&#34;&gt;sah.sslpu@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Marcin Zelent &lt;a href=&#34;mailto:marcin@zelent.net&#34;&gt;marcin@zelent.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;zhucan &lt;a href=&#34;mailto:zhucan.k8s@gmail.com&#34;&gt;zhucan.k8s@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;James Lim &lt;a href=&#34;mailto:james.lim@samsara.com&#34;&gt;james.lim@samsara.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Laurens Janssen &lt;a href=&#34;mailto:BD69BM@insim.biz&#34;&gt;BD69BM@insim.biz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Bob Bagwill &lt;a href=&#34;mailto:bobbagwill@gmail.com&#34;&gt;bobbagwill@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Nathan Collins &lt;a href=&#34;mailto:colli372@msu.edu&#34;&gt;colli372@msu.edu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;lostheli&lt;/li&gt;
&lt;li&gt;kelv &lt;a href=&#34;mailto:kelvin@acks.org&#34;&gt;kelvin@acks.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Milly &lt;a href=&#34;mailto:milly.ca@gmail.com&#34;&gt;milly.ca@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;gtorelly &lt;a href=&#34;mailto:gtorelly@gmail.com&#34;&gt;gtorelly@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Brad Ackerman &lt;a href=&#34;mailto:brad@facefault.org&#34;&gt;brad@facefault.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mitsuo Heijo &lt;a href=&#34;mailto:mitsuo.heijo@gmail.com&#34;&gt;mitsuo.heijo@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Claudio Bantaloukas &lt;a href=&#34;mailto:rockdreamer@gmail.com&#34;&gt;rockdreamer@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Benjamin Gustin &lt;a href=&#34;mailto:gustin.ben@gmail.com&#34;&gt;gustin.ben@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ingo Weiss &lt;a href=&#34;mailto:ingo@redhat.com&#34;&gt;ingo@redhat.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kerry Su &lt;a href=&#34;mailto:me@sshockwave.net&#34;&gt;me@sshockwave.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ilyess Bachiri &lt;a href=&#34;mailto:ilyess.bachiri@sonder.com&#34;&gt;ilyess.bachiri@sonder.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Yury Stankevich &lt;a href=&#34;mailto:urykhy@gmail.com&#34;&gt;urykhy@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;kice &lt;a href=&#34;mailto:wslikerqs@gmail.com&#34;&gt;wslikerqs@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Denis Neuling &lt;a href=&#34;mailto:denisneuling@gmail.com&#34;&gt;denisneuling@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Janne Johansson &lt;a href=&#34;mailto:icepic.dz@gmail.com&#34;&gt;icepic.dz@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Patrik Nordlén &lt;a href=&#34;mailto:patriki@gmail.com&#34;&gt;patriki@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;CokeMine &lt;a href=&#34;mailto:aptx4561@gmail.com&#34;&gt;aptx4561@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sơn Trần-Nguyễn &lt;a href=&#34;mailto:github@sntran.com&#34;&gt;github@sntran.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;lluuaapp &lt;a href=&#34;mailto:266615+lluuaapp@users.noreply.github.com&#34;&gt;266615+lluuaapp@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Zach Kipp &lt;a href=&#34;mailto:kipp.zach@gmail.com&#34;&gt;kipp.zach@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Riccardo Iaconelli &lt;a href=&#34;mailto:riccardo@kde.org&#34;&gt;riccardo@kde.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sakuragawa Misty &lt;a href=&#34;mailto:gyc990326@gmail.com&#34;&gt;gyc990326@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Nicolas Rueff &lt;a href=&#34;mailto:nicolas@rueff.fr&#34;&gt;nicolas@rueff.fr&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Pau Rodriguez-Estivill &lt;a href=&#34;mailto:prodrigestivill@gmail.com&#34;&gt;prodrigestivill@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Bob Pusateri &lt;a href=&#34;mailto:BobPusateri@users.noreply.github.com&#34;&gt;BobPusateri@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alex JOST &lt;a href=&#34;mailto:25005220+dimejo@users.noreply.github.com&#34;&gt;25005220+dimejo@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alexey Tabakman &lt;a href=&#34;mailto:samosad.ru@gmail.com&#34;&gt;samosad.ru@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David Sze &lt;a href=&#34;mailto:sze.david@gmail.com&#34;&gt;sze.david@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;cynthia kwok &lt;a href=&#34;mailto:cynthia.m.kwok@gmail.com&#34;&gt;cynthia.m.kwok@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Miron Veryanskiy &lt;a href=&#34;mailto:MironVeryanskiy@gmail.com&#34;&gt;MironVeryanskiy@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;K265 &lt;a href=&#34;mailto:k.265@qq.com&#34;&gt;k.265@qq.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vesnyx &lt;a href=&#34;mailto:Vesnyx@users.noreply.github.com&#34;&gt;Vesnyx@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dmitry Chepurovskiy &lt;a href=&#34;mailto:me@dm3ch.net&#34;&gt;me@dm3ch.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Rauno Ots &lt;a href=&#34;mailto:rauno.ots@cgi.com&#34;&gt;rauno.ots@cgi.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Georg Neugschwandtner &lt;a href=&#34;mailto:georg.neugschwandtner@gmx.net&#34;&gt;georg.neugschwandtner@gmx.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;pvalls &lt;a href=&#34;mailto:polvallsrue@gmail.com&#34;&gt;polvallsrue@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Robert Thomas &lt;a href=&#34;mailto:31854736+wolveix@users.noreply.github.com&#34;&gt;31854736+wolveix@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Romeo Kienzler &lt;a href=&#34;mailto:romeo.kienzler@gmail.com&#34;&gt;romeo.kienzler@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;tYYGH &lt;a href=&#34;mailto:tYYGH@users.noreply.github.com&#34;&gt;tYYGH@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;georne &lt;a href=&#34;mailto:77802995+georne@users.noreply.github.com&#34;&gt;77802995+georne@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Maxwell Calman &lt;a href=&#34;mailto:mcalman@MacBook-Pro.local&#34;&gt;mcalman@MacBook-Pro.local&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Naveen Honest Raj &lt;a href=&#34;mailto:naveendurai19@gmail.com&#34;&gt;naveendurai19@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Lucas Messenger &lt;a href=&#34;mailto:lmesseng@cisco.com&#34;&gt;lmesseng@cisco.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Manish Kumar &lt;a href=&#34;mailto:krmanish260@gmail.com&#34;&gt;krmanish260@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;x0b &lt;a href=&#34;mailto:x0bdev@gmail.com&#34;&gt;x0bdev@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;CERN through the CS3MESH4EOSC Project&lt;/li&gt;
&lt;li&gt;Nick Gaya &lt;a href=&#34;mailto:nicholasgaya+github@gmail.com&#34;&gt;nicholasgaya+github@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ashok Gelal &lt;a href=&#34;mailto:401055+ashokgelal@users.noreply.github.com&#34;&gt;401055+ashokgelal@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dominik Mydlil &lt;a href=&#34;mailto:dominik.mydlil@outlook.com&#34;&gt;dominik.mydlil@outlook.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Nazar Mishturak &lt;a href=&#34;mailto:nazarmx@gmail.com&#34;&gt;nazarmx@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ansh Mittal &lt;a href=&#34;mailto:iamAnshMittal@gmail.com&#34;&gt;iamAnshMittal@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;noabody &lt;a href=&#34;mailto:noabody@yahoo.com&#34;&gt;noabody@yahoo.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;OleFrost &lt;a href=&#34;mailto:82263101+olefrost@users.noreply.github.com&#34;&gt;82263101+olefrost@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kenny Parsons &lt;a href=&#34;mailto:kennyparsons93@gmail.com&#34;&gt;kennyparsons93@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jeffrey Tolar &lt;a href=&#34;mailto:tolar.jeffrey@gmail.com&#34;&gt;tolar.jeffrey@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;jtagcat &lt;a href=&#34;mailto:git-514635f7@jtag.cat&#34;&gt;git-514635f7@jtag.cat&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tatsuya Noyori &lt;a href=&#34;mailto:63089076+public-tatsuya-noyori@users.noreply.github.com&#34;&gt;63089076+public-tatsuya-noyori@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;lewisxy &lt;a href=&#34;mailto:lewisxy@users.noreply.github.com&#34;&gt;lewisxy@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Nolan Woods &lt;a href=&#34;mailto:nolan_w@sfu.ca&#34;&gt;nolan_w@sfu.ca&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Gautam Kumar &lt;a href=&#34;mailto:25435568+gautamajay52@users.noreply.github.com&#34;&gt;25435568+gautamajay52@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chris Macklin &lt;a href=&#34;mailto:chris.macklin@10xgenomics.com&#34;&gt;chris.macklin@10xgenomics.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Antoon Prins &lt;a href=&#34;mailto:antoon.prins@surfsara.nl&#34;&gt;antoon.prins@surfsara.nl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alexey Ivanov &lt;a href=&#34;mailto:rbtz@dropbox.com&#34;&gt;rbtz@dropbox.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Serge Pouliquen &lt;a href=&#34;mailto:sp31415@free.fr&#34;&gt;sp31415@free.fr&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;acsfer &lt;a href=&#34;mailto:carlos@reendex.com&#34;&gt;carlos@reendex.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tom &lt;a href=&#34;mailto:tom@tom-fitzhenry.me.uk&#34;&gt;tom@tom-fitzhenry.me.uk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tyson Moore &lt;a href=&#34;mailto:tyson@tyson.me&#34;&gt;tyson@tyson.me&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;database64128 &lt;a href=&#34;mailto:free122448@hotmail.com&#34;&gt;free122448@hotmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chris Lu &lt;a href=&#34;mailto:chrislusf@users.noreply.github.com&#34;&gt;chrislusf@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Reid Buzby &lt;a href=&#34;mailto:reid@rethink.software&#34;&gt;reid@rethink.software&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;darrenrhs &lt;a href=&#34;mailto:darrenrhs@gmail.com&#34;&gt;darrenrhs@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Florian Penzkofer &lt;a href=&#34;mailto:fp@nullptr.de&#34;&gt;fp@nullptr.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Xuanchen Wu &lt;a href=&#34;mailto:117010292@link.cuhk.edu.cn&#34;&gt;117010292@link.cuhk.edu.cn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;partev &lt;a href=&#34;mailto:petrosyan@gmail.com&#34;&gt;petrosyan@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dmitry Sitnikov &lt;a href=&#34;mailto:fo2@inbox.ru&#34;&gt;fo2@inbox.ru&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Haochen Tong &lt;a href=&#34;mailto:i@hexchain.org&#34;&gt;i@hexchain.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Michael Hanselmann &lt;a href=&#34;mailto:public@hansmi.ch&#34;&gt;public@hansmi.ch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chuan Zh &lt;a href=&#34;mailto:zhchuan7@gmail.com&#34;&gt;zhchuan7@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Antoine GIRARD &lt;a href=&#34;mailto:antoine.girard@sapk.fr&#34;&gt;antoine.girard@sapk.fr&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Justin Winokur (Jwink3101) &lt;a href=&#34;mailto:Jwink3101@users.noreply.github.com&#34;&gt;Jwink3101@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mariano Absatz (git) &lt;a href=&#34;mailto:scm@baby.com.ar&#34;&gt;scm@baby.com.ar&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Greg Sadetsky &lt;a href=&#34;mailto:lepetitg@gmail.com&#34;&gt;lepetitg@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;yedamo &lt;a href=&#34;mailto:logindaveye@gmail.com&#34;&gt;logindaveye@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;hota &lt;a href=&#34;mailto:lindwurm.q@gmail.com&#34;&gt;lindwurm.q@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;vinibali &lt;a href=&#34;mailto:vinibali1@gmail.com&#34;&gt;vinibali1@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ken Enrique Morel &lt;a href=&#34;mailto:ken.morel.santana@gmail.com&#34;&gt;ken.morel.santana@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Justin Hellings &lt;a href=&#34;mailto:justin.hellings@gmail.com&#34;&gt;justin.hellings@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Parth Shukla &lt;a href=&#34;mailto:pparth@pparth.net&#34;&gt;pparth@pparth.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;wzl &lt;a href=&#34;mailto:wangzl31@outlook.com&#34;&gt;wangzl31@outlook.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;HNGamingUK &lt;a href=&#34;mailto:connor@earnshawhome.co.uk&#34;&gt;connor@earnshawhome.co.uk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jonta &lt;a href=&#34;mailto:359397+Jonta@users.noreply.github.com&#34;&gt;359397+Jonta@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;YenForYang &lt;a href=&#34;mailto:YenForYang@users.noreply.github.com&#34;&gt;YenForYang@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;SimJoSt / Joda Stößer &lt;a href=&#34;mailto:git@simjo.st&#34;&gt;git@simjo.st&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Logeshwaran &lt;a href=&#34;mailto:waranlogesh@gmail.com&#34;&gt;waranlogesh@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Rajat Goel &lt;a href=&#34;mailto:rajat@dropbox.com&#34;&gt;rajat@dropbox.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;r0kk3rz &lt;a href=&#34;mailto:r0kk3rz@gmail.com&#34;&gt;r0kk3rz@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Matthew Sevey &lt;a href=&#34;mailto:mjsevey@gmail.com&#34;&gt;mjsevey@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Filip Rysavy &lt;a href=&#34;mailto:fil@siasky.net&#34;&gt;fil@siasky.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ian Levesque &lt;a href=&#34;mailto:ian@ianlevesque.org&#34;&gt;ian@ianlevesque.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Thomas Stachl &lt;a href=&#34;mailto:thomas@stachl.me&#34;&gt;thomas@stachl.me&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dmitry Bogatov &lt;a href=&#34;mailto:git#v1@kaction.cc&#34;&gt;git#v1@kaction.cc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;thomae &lt;a href=&#34;mailto:4493560+thomae@users.noreply.github.com&#34;&gt;4493560+thomae@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;trevyn &lt;a href=&#34;mailto:trevyn-git@protonmail.com&#34;&gt;trevyn-git@protonmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David Liu &lt;a href=&#34;mailto:david.yx.liu@oracle.com&#34;&gt;david.yx.liu@oracle.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chris Nelson &lt;a href=&#34;mailto:stuff@cjnaz.com&#34;&gt;stuff@cjnaz.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Felix Bünemann &lt;a href=&#34;mailto:felix.buenemann@gmail.com&#34;&gt;felix.buenemann@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Atílio Antônio &lt;a href=&#34;mailto:atiliodadalto@hotmail.com&#34;&gt;atiliodadalto@hotmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Carlo Mion &lt;a href=&#34;mailto:mion00@gmail.com&#34;&gt;mion00@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chris Lu &lt;a href=&#34;mailto:chris.lu@gmail.com&#34;&gt;chris.lu@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vitor Arruda &lt;a href=&#34;mailto:vitor.pimenta.arruda@gmail.com&#34;&gt;vitor.pimenta.arruda@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;bbabich &lt;a href=&#34;mailto:bbabich@datamossa.com&#34;&gt;bbabich@datamossa.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David &lt;a href=&#34;mailto:dp.davide.palma@gmail.com&#34;&gt;dp.davide.palma@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Borna Butkovic &lt;a href=&#34;mailto:borna@favicode.net&#34;&gt;borna@favicode.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Fredric Arklid &lt;a href=&#34;mailto:fredric.arklid@consid.se&#34;&gt;fredric.arklid@consid.se&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andy Jackson &lt;a href=&#34;mailto:Andrew.Jackson@bl.uk&#34;&gt;Andrew.Jackson@bl.uk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sinan Tan &lt;a href=&#34;mailto:i@tinytangent.com&#34;&gt;i@tinytangent.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;deinferno &lt;a href=&#34;mailto:14363193+deinferno@users.noreply.github.com&#34;&gt;14363193+deinferno@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;rsapkf &lt;a href=&#34;mailto:rsapkfff@pm.me&#34;&gt;rsapkfff@pm.me&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Will Holtz &lt;a href=&#34;mailto:wholtz@gmail.com&#34;&gt;wholtz@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;GGG KILLER &lt;a href=&#34;mailto:gggkiller2@gmail.com&#34;&gt;gggkiller2@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Logeshwaran Murugesan &lt;a href=&#34;mailto:logeshwaran@testpress.in&#34;&gt;logeshwaran@testpress.in&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Lu Wang &lt;a href=&#34;mailto:coolwanglu@gmail.com&#34;&gt;coolwanglu@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Bumsu Hyeon &lt;a href=&#34;mailto:ksitht@gmail.com&#34;&gt;ksitht@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Shmz Ozggrn &lt;a href=&#34;mailto:98463324+ShmzOzggrn@users.noreply.github.com&#34;&gt;98463324+ShmzOzggrn@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kim &lt;a href=&#34;mailto:kim@jotta.no&#34;&gt;kim@jotta.no&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Niels van de Weem &lt;a href=&#34;mailto:n.van.de.weem@smile.nl&#34;&gt;n.van.de.weem@smile.nl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Koopa &lt;a href=&#34;mailto:codingkoopa@gmail.com&#34;&gt;codingkoopa@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Yunhai Luo &lt;a href=&#34;mailto:yunhai-luo@hotmail.com&#34;&gt;yunhai-luo@hotmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Charlie Jiang &lt;a href=&#34;mailto:w@chariri.moe&#34;&gt;w@chariri.moe&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alain Nussbaumer &lt;a href=&#34;mailto:alain.nussbaumer@alleluia.ch&#34;&gt;alain.nussbaumer@alleluia.ch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vanessasaurus &lt;a href=&#34;mailto:814322+vsoch@users.noreply.github.com&#34;&gt;814322+vsoch@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Isaac Levy &lt;a href=&#34;mailto:isaac.r.levy@gmail.com&#34;&gt;isaac.r.levy@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Gourav T &lt;a href=&#34;mailto:workflowautomation@protonmail.com&#34;&gt;workflowautomation@protonmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Paulo Martins &lt;a href=&#34;mailto:paulo.pontes.m@gmail.com&#34;&gt;paulo.pontes.m@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;viveknathani &lt;a href=&#34;mailto:viveknathani2402@gmail.com&#34;&gt;viveknathani2402@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Eng Zer Jun &lt;a href=&#34;mailto:engzerjun@gmail.com&#34;&gt;engzerjun@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Abhiraj &lt;a href=&#34;mailto:abhiraj.official15@gmail.com&#34;&gt;abhiraj.official15@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Márton Elek &lt;a href=&#34;mailto:elek@apache.org&#34;&gt;elek@apache.org&lt;/a&gt; &lt;a href=&#34;mailto:elek@users.noreply.github.com&#34;&gt;elek@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vincent Murphy &lt;a href=&#34;mailto:vdm@vdm.ie&#34;&gt;vdm@vdm.ie&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ctrl-q &lt;a href=&#34;mailto:34975747+ctrl-q@users.noreply.github.com&#34;&gt;34975747+ctrl-q@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Nil Alexandrov &lt;a href=&#34;mailto:nalexand@akamai.com&#34;&gt;nalexand@akamai.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;GuoXingbin &lt;a href=&#34;mailto:101376330+guoxingbin@users.noreply.github.com&#34;&gt;101376330+guoxingbin@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Berkan Teber &lt;a href=&#34;mailto:berkan@berkanteber.com&#34;&gt;berkan@berkanteber.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tobias Klauser &lt;a href=&#34;mailto:tklauser@distanz.ch&#34;&gt;tklauser@distanz.ch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;KARBOWSKI Piotr &lt;a href=&#34;mailto:piotr.karbowski@gmail.com&#34;&gt;piotr.karbowski@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;GH &lt;a href=&#34;mailto:geeklihui@foxmail.com&#34;&gt;geeklihui@foxmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;rafma0 &lt;a href=&#34;mailto:int.main@gmail.com&#34;&gt;int.main@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Adrien Rey-Jarthon &lt;a href=&#34;mailto:jobs@adrienjarthon.com&#34;&gt;jobs@adrienjarthon.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Nick Gooding &lt;a href=&#34;mailto:73336146+nickgooding@users.noreply.github.com&#34;&gt;73336146+nickgooding@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Leroy van Logchem &lt;a href=&#34;mailto:lr.vanlogchem@gmail.com&#34;&gt;lr.vanlogchem@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Zsolt Ero &lt;a href=&#34;mailto:zsolt.ero@gmail.com&#34;&gt;zsolt.ero@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Lesmiscore &lt;a href=&#34;mailto:nao20010128@gmail.com&#34;&gt;nao20010128@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ehsantdy &lt;a href=&#34;mailto:ehsan.tadayon@arvancloud.com&#34;&gt;ehsan.tadayon@arvancloud.com&lt;/a&gt; &lt;a href=&#34;mailto:ehsantadayon85@gmail.com&#34;&gt;ehsantadayon85@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;SwazRGB &lt;a href=&#34;mailto:65694696+swazrgb@users.noreply.github.com&#34;&gt;65694696+swazrgb@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mateusz Puczyński &lt;a href=&#34;mailto:mati6095@gmail.com&#34;&gt;mati6095@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Michael C Tiernan - MIT-Research Computing Project &lt;a href=&#34;mailto:mtiernan@mit.edu&#34;&gt;mtiernan@mit.edu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kaspian &lt;a href=&#34;mailto:34658474+KaspianDev@users.noreply.github.com&#34;&gt;34658474+KaspianDev@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Werner &lt;a href=&#34;mailto:EvilOlaf@users.noreply.github.com&#34;&gt;EvilOlaf@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Hugal31 &lt;a href=&#34;mailto:hugo.laloge@gmail.com&#34;&gt;hugo.laloge@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Christian Galo &lt;a href=&#34;mailto:36752715+cgalo5758@users.noreply.github.com&#34;&gt;36752715+cgalo5758@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Erik van Velzen &lt;a href=&#34;mailto:erik@evanv.nl&#34;&gt;erik@evanv.nl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Derek Battams &lt;a href=&#34;mailto:derek@battams.ca&#34;&gt;derek@battams.ca&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Paul &lt;a href=&#34;mailto:devnoname120@gmail.com&#34;&gt;devnoname120@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;SimonLiu &lt;a href=&#34;mailto:simonliu009@users.noreply.github.com&#34;&gt;simonliu009@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Hugo Laloge &lt;a href=&#34;mailto:hla@lescompanions.com&#34;&gt;hla@lescompanions.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mr-Kanister &lt;a href=&#34;mailto:68117355+Mr-Kanister@users.noreply.github.com&#34;&gt;68117355+Mr-Kanister@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Rob Pickerill &lt;a href=&#34;mailto:r.pickerill@gmail.com&#34;&gt;r.pickerill@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andrey &lt;a href=&#34;mailto:to.merge@gmail.com&#34;&gt;to.merge@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Eric Wolf &lt;a href=&#34;mailto:19wolf@gmail.com&#34;&gt;19wolf@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Nick &lt;a href=&#34;mailto:nick.naumann@mailbox.tu-dresden.de&#34;&gt;nick.naumann@mailbox.tu-dresden.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jason Zheng &lt;a href=&#34;mailto:jszheng17@gmail.com&#34;&gt;jszheng17@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Matthew Vernon &lt;a href=&#34;mailto:mvernon@wikimedia.org&#34;&gt;mvernon@wikimedia.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Noah Hsu &lt;a href=&#34;mailto:i@nn.ci&#34;&gt;i@nn.ci&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;m00594701 &lt;a href=&#34;mailto:mengpengbo@huawei.com&#34;&gt;mengpengbo@huawei.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Art M. Gallagher &lt;a href=&#34;mailto:artmg50@gmail.com&#34;&gt;artmg50@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sven Gerber &lt;a href=&#34;mailto:49589423+svengerber@users.noreply.github.com&#34;&gt;49589423+svengerber@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;CrossR &lt;a href=&#34;mailto:r.cross@lancaster.ac.uk&#34;&gt;r.cross@lancaster.ac.uk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Maciej Radzikowski &lt;a href=&#34;mailto:maciej@radzikowski.com.pl&#34;&gt;maciej@radzikowski.com.pl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Scott Grimes &lt;a href=&#34;mailto:scott.grimes@spaciq.com&#34;&gt;scott.grimes@spaciq.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Phil Shackleton &lt;a href=&#34;mailto:71221528+philshacks@users.noreply.github.com&#34;&gt;71221528+philshacks@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;eNV25 &lt;a href=&#34;mailto:env252525@gmail.com&#34;&gt;env252525@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Caleb &lt;a href=&#34;mailto:inventor96@users.noreply.github.com&#34;&gt;inventor96@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;J-P Treen &lt;a href=&#34;mailto:jp@wraptious.com&#34;&gt;jp@wraptious.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Martin Czygan &lt;a href=&#34;mailto:53705+miku@users.noreply.github.com&#34;&gt;53705+miku@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;buda &lt;a href=&#34;mailto:sandrojijavadze@protonmail.com&#34;&gt;sandrojijavadze@protonmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;mirekphd &lt;a href=&#34;mailto:36706320+mirekphd@users.noreply.github.com&#34;&gt;36706320+mirekphd@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;vyloy &lt;a href=&#34;mailto:vyloy@qq.com&#34;&gt;vyloy@qq.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Anthrazz &lt;a href=&#34;mailto:25553648+Anthrazz@users.noreply.github.com&#34;&gt;25553648+Anthrazz@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;zzr93 &lt;a href=&#34;mailto:34027824+zzr93@users.noreply.github.com&#34;&gt;34027824+zzr93@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Paul Norman &lt;a href=&#34;mailto:penorman@mac.com&#34;&gt;penorman@mac.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Lorenzo Maiorfi &lt;a href=&#34;mailto:maiorfi@gmail.com&#34;&gt;maiorfi@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Claudio Maradonna &lt;a href=&#34;mailto:penguyman@stronzi.org&#34;&gt;penguyman@stronzi.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ovidiu Victor Tatar &lt;a href=&#34;mailto:ovi.tatar@googlemail.com&#34;&gt;ovi.tatar@googlemail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Evan Spensley &lt;a href=&#34;mailto:epspensley@gmail.com&#34;&gt;epspensley@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Yen Hu &lt;a href=&#34;mailto:61753151+0x59656e@users.noreply.github.com&#34;&gt;61753151+0x59656e@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Steve Kowalik &lt;a href=&#34;mailto:steven@wedontsleep.org&#34;&gt;steven@wedontsleep.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jordi Gonzalez Muñoz &lt;a href=&#34;mailto:jordigonzm@gmail.com&#34;&gt;jordigonzm@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Joram Schrijver &lt;a href=&#34;mailto:i@joram.io&#34;&gt;i@joram.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mark Trolley &lt;a href=&#34;mailto:marktrolley@gmail.com&#34;&gt;marktrolley@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;João Henrique Franco &lt;a href=&#34;mailto:joaohenrique.franco@gmail.com&#34;&gt;joaohenrique.franco@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;anonion &lt;a href=&#34;mailto:aman207@users.noreply.github.com&#34;&gt;aman207@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ryan Morey &lt;a href=&#34;mailto:4590343+rmorey@users.noreply.github.com&#34;&gt;4590343+rmorey@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Simon Bos &lt;a href=&#34;mailto:simonbos9@gmail.com&#34;&gt;simonbos9@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;YFdyh000 &lt;a href=&#34;mailto:yfdyh000@gmail.com&#34;&gt;yfdyh000@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Josh Soref &lt;a href=&#34;mailto:2119212+jsoref@users.noreply.github.com&#34;&gt;2119212+jsoref@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Øyvind Heddeland Instefjord &lt;a href=&#34;mailto:instefjord@outlook.com&#34;&gt;instefjord@outlook.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dmitry Deniskin &lt;a href=&#34;mailto:110819396+ddeniskin@users.noreply.github.com&#34;&gt;110819396+ddeniskin@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alexander Knorr &lt;a href=&#34;mailto:106825+opexxx@users.noreply.github.com&#34;&gt;106825+opexxx@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Richard Bateman &lt;a href=&#34;mailto:richard@batemansr.us&#34;&gt;richard@batemansr.us&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dimitri Papadopoulos Orfanos &lt;a href=&#34;mailto:3234522+DimitriPapadopoulos@users.noreply.github.com&#34;&gt;3234522+DimitriPapadopoulos@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Lorenzo Milesi &lt;a href=&#34;mailto:lorenzo.milesi@yetopen.com&#34;&gt;lorenzo.milesi@yetopen.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Isaac Aymerich &lt;a href=&#34;mailto:isaac.aymerich@gmail.com&#34;&gt;isaac.aymerich@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;YanceyChiew &lt;a href=&#34;mailto:35898533+YanceyChiew@users.noreply.github.com&#34;&gt;35898533+YanceyChiew@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Manoj Ghosh &lt;a href=&#34;mailto:msays2000@gmail.com&#34;&gt;msays2000@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Bachue Zhou &lt;a href=&#34;mailto:bachue.shu@gmail.com&#34;&gt;bachue.shu@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Manoj Ghosh &lt;a href=&#34;mailto:manoj.ghosh@oracle.com&#34;&gt;manoj.ghosh@oracle.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tom Mombourquette &lt;a href=&#34;mailto:tom@devnode.com&#34;&gt;tom@devnode.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Robert Newson &lt;a href=&#34;mailto:rnewson@apache.org&#34;&gt;rnewson@apache.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Samuel Johnson &lt;a href=&#34;mailto:esamueljohnson@gmail.com&#34;&gt;esamueljohnson@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;coultonluke &lt;a href=&#34;mailto:luke@luke.org.uk&#34;&gt;luke@luke.org.uk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Anthony Pessy &lt;a href=&#34;mailto:anthony@cogniteev.com&#34;&gt;anthony@cogniteev.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Philip Harvey &lt;a href=&#34;mailto:pharvey@battelleecology.org&#34;&gt;pharvey@battelleecology.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;dgouju &lt;a href=&#34;mailto:dgouju@users.noreply.github.com&#34;&gt;dgouju@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Clément Notin &lt;a href=&#34;mailto:clement.notin@gmail.com&#34;&gt;clement.notin@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;x3-apptech &lt;a href=&#34;mailto:66947598+x3-apptech@users.noreply.github.com&#34;&gt;66947598+x3-apptech@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Arnie97 &lt;a href=&#34;mailto:arnie97@gmail.com&#34;&gt;arnie97@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Roel Arents &lt;a href=&#34;mailto:2691308+roelarents@users.noreply.github.com&#34;&gt;2691308+roelarents@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Aaron Gokaslan &lt;a href=&#34;mailto:aaronGokaslan@gmail.com&#34;&gt;aaronGokaslan@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;techknowlogick &lt;a href=&#34;mailto:matti@mdranta.net&#34;&gt;matti@mdranta.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;rkettelerij &lt;a href=&#34;mailto:richard@mindloops.nl&#34;&gt;richard@mindloops.nl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kamui &lt;a href=&#34;mailto:fin-kamui@pm.me&#34;&gt;fin-kamui@pm.me&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;asdffdsazqqq &lt;a href=&#34;mailto:90116442+asdffdsazqqq@users.noreply.github.com&#34;&gt;90116442+asdffdsazqqq@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Nathaniel Wesley Filardo &lt;a href=&#34;mailto:nfilardo@microsoft.com&#34;&gt;nfilardo@microsoft.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ycdtosa &lt;a href=&#34;mailto:ycdtosa@users.noreply.github.com&#34;&gt;ycdtosa@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Erik Agterdenbos &lt;a href=&#34;mailto:agterdenbos@users.noreply.github.com&#34;&gt;agterdenbos@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kevin Verstaen &lt;a href=&#34;mailto:48050031+kverstae@users.noreply.github.com&#34;&gt;48050031+kverstae@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;MohammadReza &lt;a href=&#34;mailto:mrvashian@gmail.com&#34;&gt;mrvashian@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;vanplus &lt;a href=&#34;mailto:60313789+vanplus@users.noreply.github.com&#34;&gt;60313789+vanplus@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jack &lt;a href=&#34;mailto:16779171+jkpe@users.noreply.github.com&#34;&gt;16779171+jkpe@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Abdullah Saglam &lt;a href=&#34;mailto:abdullah.saglam@stonebranch.com&#34;&gt;abdullah.saglam@stonebranch.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Marks Polakovs &lt;a href=&#34;mailto:github@markspolakovs.me&#34;&gt;github@markspolakovs.me&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;piyushgarg &lt;a href=&#34;mailto:piyushgarg80@gmail.com&#34;&gt;piyushgarg80@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kaloyan Raev &lt;a href=&#34;mailto:kaloyan-raev@users.noreply.github.com&#34;&gt;kaloyan-raev@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;IMTheNachoMan &lt;a href=&#34;mailto:imthenachoman@gmail.com&#34;&gt;imthenachoman@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;alankrit &lt;a href=&#34;mailto:alankrit@google.com&#34;&gt;alankrit@google.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Bryan Kaplan &lt;a href=&#34;mailto:#@bryankaplan.com&#34;&gt;#@bryankaplan.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;LXY &lt;a href=&#34;mailto:767763591@qq.com&#34;&gt;767763591@qq.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Simmon Li (he/him) &lt;a href=&#34;mailto:li.simmon@gmail.com&#34;&gt;li.simmon@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;happyxhw &lt;a href=&#34;mailto:44490504+happyxhw@users.noreply.github.com&#34;&gt;44490504+happyxhw@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Simmon Li (he/him) &lt;a href=&#34;mailto:hello@crespire.dev&#34;&gt;hello@crespire.dev&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Matthias Baur &lt;a href=&#34;mailto:baurmatt@users.noreply.github.com&#34;&gt;baurmatt@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Hunter Wittenborn &lt;a href=&#34;mailto:hunter@hunterwittenborn.com&#34;&gt;hunter@hunterwittenborn.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;logopk &lt;a href=&#34;mailto:peter@kreuser.name&#34;&gt;peter@kreuser.name&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Gerard Bosch &lt;a href=&#34;mailto:30733556+gerardbosch@users.noreply.github.com&#34;&gt;30733556+gerardbosch@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ToBeFree &lt;a href=&#34;mailto:github@tfrei.de&#34;&gt;github@tfrei.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;NodudeWasTaken &lt;a href=&#34;mailto:75137537+NodudeWasTaken@users.noreply.github.com&#34;&gt;75137537+NodudeWasTaken@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Peter Brunner &lt;a href=&#34;mailto:peter@lugoues.net&#34;&gt;peter@lugoues.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ninh Pham &lt;a href=&#34;mailto:dongian.rapclubkhtn@gmail.com&#34;&gt;dongian.rapclubkhtn@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ryan Caezar Itang &lt;a href=&#34;mailto:sitiom@proton.me&#34;&gt;sitiom@proton.me&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Peter Brunner &lt;a href=&#34;mailto:peter@psykhe.com&#34;&gt;peter@psykhe.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Leandro Sacchet &lt;a href=&#34;mailto:leandro.sacchet@animati.com.br&#34;&gt;leandro.sacchet@animati.com.br&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;dependabot[bot] &amp;lt;49699333+dependabot[bot]@users.noreply.github.com&amp;gt;&lt;/li&gt;
&lt;li&gt;cycneuramus &lt;a href=&#34;mailto:56681631+cycneuramus@users.noreply.github.com&#34;&gt;56681631+cycneuramus@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Arnavion &lt;a href=&#34;mailto:me@arnavion.dev&#34;&gt;me@arnavion.dev&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Christopher Merry &lt;a href=&#34;mailto:christopher.merry@mlb.com&#34;&gt;christopher.merry@mlb.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Thibault Coupin &lt;a href=&#34;mailto:thibault.coupin@gmail.com&#34;&gt;thibault.coupin@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Richard Tweed &lt;a href=&#34;mailto:RichardoC@users.noreply.github.com&#34;&gt;RichardoC@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Zach Kipp &lt;a href=&#34;mailto:Zacho2@users.noreply.github.com&#34;&gt;Zacho2@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;yuudi &lt;a href=&#34;mailto:26199752+yuudi@users.noreply.github.com&#34;&gt;26199752+yuudi@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;NickIAm &lt;a href=&#34;mailto:NickIAm@users.noreply.github.com&#34;&gt;NickIAm@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Juang, Yi-Lin &lt;a href=&#34;mailto:frankyjuang@gmail.com&#34;&gt;frankyjuang@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;jumbi77 &lt;a href=&#34;mailto:jumbi77@users.noreply.github.com&#34;&gt;jumbi77@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Aditya Basu &lt;a href=&#34;mailto:ab.aditya.basu@gmail.com&#34;&gt;ab.aditya.basu@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ed &lt;a href=&#34;mailto:s@ocv.me&#34;&gt;s@ocv.me&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Drew Parsons &lt;a href=&#34;mailto:dparsons@emerall.com&#34;&gt;dparsons@emerall.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Joel &lt;a href=&#34;mailto:joelnb@users.noreply.github.com&#34;&gt;joelnb@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;wiserain &lt;a href=&#34;mailto:mail275@gmail.com&#34;&gt;mail275@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Roel Arents &lt;a href=&#34;mailto:roel.arents@kadaster.nl&#34;&gt;roel.arents@kadaster.nl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Shyim &lt;a href=&#34;mailto:github@shyim.de&#34;&gt;github@shyim.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Rintze Zelle &lt;a href=&#34;mailto:78232505+rzelle-lallemand@users.noreply.github.com&#34;&gt;78232505+rzelle-lallemand@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Damo &lt;a href=&#34;mailto:damoclark@users.noreply.github.com&#34;&gt;damoclark@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;WeidiDeng &lt;a href=&#34;mailto:weidi_deng@icloud.com&#34;&gt;weidi_deng@icloud.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Brian Starkey &lt;a href=&#34;mailto:stark3y@gmail.com&#34;&gt;stark3y@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;jladbrook &lt;a href=&#34;mailto:jhladbrook@gmail.com&#34;&gt;jhladbrook@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Loren Gordon &lt;a href=&#34;mailto:lorengordon@users.noreply.github.com&#34;&gt;lorengordon@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;dlitster &lt;a href=&#34;mailto:davidlitster@gmail.com&#34;&gt;davidlitster@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tobias Gion &lt;a href=&#34;mailto:tobias@gion.io&#34;&gt;tobias@gion.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jānis Bebrītis &lt;a href=&#34;mailto:janis.bebritis@wunder.io&#34;&gt;janis.bebritis@wunder.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Adam K &lt;a href=&#34;mailto:github.com@ak.tidy.email&#34;&gt;github.com@ak.tidy.email&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andrei Smirnov &lt;a href=&#34;mailto:smirnov.captain@gmail.com&#34;&gt;smirnov.captain@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Janne Hellsten &lt;a href=&#34;mailto:jjhellst@gmail.com&#34;&gt;jjhellst@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;cc &lt;a href=&#34;mailto:12904584+shvc@users.noreply.github.com&#34;&gt;12904584+shvc@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tareq Sharafy &lt;a href=&#34;mailto:tareq.sha@gmail.com&#34;&gt;tareq.sha@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;kapitainsky &lt;a href=&#34;mailto:dariuszb@me.com&#34;&gt;dariuszb@me.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;douchen &lt;a href=&#34;mailto:playgoobug@gmail.com&#34;&gt;playgoobug@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sam Lai &lt;a href=&#34;mailto:70988+slai@users.noreply.github.com&#34;&gt;70988+slai@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;URenko &lt;a href=&#34;mailto:18209292+URenko@users.noreply.github.com&#34;&gt;18209292+URenko@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Stanislav Gromov &lt;a href=&#34;mailto:kullfar@gmail.com&#34;&gt;kullfar@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Paulo Schreiner &lt;a href=&#34;mailto:paulo.schreiner@delivion.de&#34;&gt;paulo.schreiner@delivion.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mariusz Suchodolski &lt;a href=&#34;mailto:mariusz@suchodol.ski&#34;&gt;mariusz@suchodol.ski&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;danielkrajnik &lt;a href=&#34;mailto:dan94kra@gmail.com&#34;&gt;dan94kra@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Peter Fern &lt;a href=&#34;mailto:github@0xc0dedbad.com&#34;&gt;github@0xc0dedbad.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;zzq &lt;a href=&#34;mailto:i@zhangzqs.cn&#34;&gt;i@zhangzqs.cn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;mac-15 &lt;a href=&#34;mailto:usman.ilamdin@phpstudios.com&#34;&gt;usman.ilamdin@phpstudios.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sawada Tsunayoshi &lt;a href=&#34;mailto:34431649+TsunayoshiSawada@users.noreply.github.com&#34;&gt;34431649+TsunayoshiSawada@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dean Attali &lt;a href=&#34;mailto:daattali@gmail.com&#34;&gt;daattali@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Fjodor42 &lt;a href=&#34;mailto:molgaard@gmail.com&#34;&gt;molgaard@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;BakaWang &lt;a href=&#34;mailto:wa11579@hotmail.com&#34;&gt;wa11579@hotmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mahad &lt;a href=&#34;mailto:56235065+Mahad-lab@users.noreply.github.com&#34;&gt;56235065+Mahad-lab@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vladislav Vorobev &lt;a href=&#34;mailto:x.miere@gmail.com&#34;&gt;x.miere@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;darix &lt;a href=&#34;mailto:darix@users.noreply.github.com&#34;&gt;darix@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Benjamin &lt;a href=&#34;mailto:36415086+bbenjamin-sys@users.noreply.github.com&#34;&gt;36415086+bbenjamin-sys@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chun-Hung Tseng &lt;a href=&#34;mailto:henrybear327@users.noreply.github.com&#34;&gt;henrybear327@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ricardo D&#39;O. Albanus &lt;a href=&#34;mailto:rdalbanus@users.noreply.github.com&#34;&gt;rdalbanus@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;gabriel-suela &lt;a href=&#34;mailto:gscsuela@gmail.com&#34;&gt;gscsuela@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tiago Boeing &lt;a href=&#34;mailto:contato@tiagoboeing.com&#34;&gt;contato@tiagoboeing.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Edwin Mackenzie-Owen &lt;a href=&#34;mailto:edwin.mowen@gmail.com&#34;&gt;edwin.mowen@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Niklas Hambüchen &lt;a href=&#34;mailto:mail@nh2.me&#34;&gt;mail@nh2.me&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;yuudi &lt;a href=&#34;mailto:yuudi@users.noreply.github.com&#34;&gt;yuudi@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Zach &lt;a href=&#34;mailto:github@prozach.org&#34;&gt;github@prozach.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;nielash &lt;a href=&#34;mailto:31582349+nielash@users.noreply.github.com&#34;&gt;31582349+nielash@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Julian Lepinski &lt;a href=&#34;mailto:lepinsk@users.noreply.github.com&#34;&gt;lepinsk@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Raymond Berger &lt;a href=&#34;mailto:RayBB@users.noreply.github.com&#34;&gt;RayBB@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Nihaal Sangha &lt;a href=&#34;mailto:nihaal.git@gmail.com&#34;&gt;nihaal.git@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Masamune3210 &lt;a href=&#34;mailto:1053504+Masamune3210@users.noreply.github.com&#34;&gt;1053504+Masamune3210@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;James Braza &lt;a href=&#34;mailto:jamesbraza@gmail.com&#34;&gt;jamesbraza@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;antoinetran &lt;a href=&#34;mailto:antoinetran@users.noreply.github.com&#34;&gt;antoinetran@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;alexia &lt;a href=&#34;mailto:me@alexia.lol&#34;&gt;me@alexia.lol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;nielash &lt;a href=&#34;mailto:nielronash@gmail.com&#34;&gt;nielronash@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vitor Gomes &lt;a href=&#34;mailto:vitor.gomes@delivion.de&#34;&gt;vitor.gomes@delivion.de&lt;/a&gt; &lt;a href=&#34;mailto:mail@vitorgomes.com&#34;&gt;mail@vitorgomes.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jacob Hands &lt;a href=&#34;mailto:jacob@gogit.io&#34;&gt;jacob@gogit.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;hideo aoyama &lt;a href=&#34;mailto:100831251+boukendesho@users.noreply.github.com&#34;&gt;100831251+boukendesho@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Roberto Ricci &lt;a href=&#34;mailto:io@r-ricci.it&#34;&gt;io@r-ricci.it&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Bjørn Smith &lt;a href=&#34;mailto:bjornsmith@gmail.com&#34;&gt;bjornsmith@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alishan Ladhani &lt;a href=&#34;mailto:8869764+aladh@users.noreply.github.com&#34;&gt;8869764+aladh@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;zjx20 &lt;a href=&#34;mailto:zhoujianxiong2@gmail.com&#34;&gt;zhoujianxiong2@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Oksana &lt;a href=&#34;mailto:142890647+oks-maytech@users.noreply.github.com&#34;&gt;142890647+oks-maytech@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Volodymyr Kit &lt;a href=&#34;mailto:v.kit@maytech.net&#34;&gt;v.kit@maytech.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David Pedersen &lt;a href=&#34;mailto:limero@me.com&#34;&gt;limero@me.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Drew Stinnett &lt;a href=&#34;mailto:drew@drewlink.com&#34;&gt;drew@drewlink.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Pat Patterson &lt;a href=&#34;mailto:pat@backblaze.com&#34;&gt;pat@backblaze.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Herby Gillot &lt;a href=&#34;mailto:herby.gillot@gmail.com&#34;&gt;herby.gillot@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Nikita Shoshin &lt;a href=&#34;mailto:shoshin_nikita@fastmail.com&#34;&gt;shoshin_nikita@fastmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;rinsuki &lt;a href=&#34;mailto:428rinsuki+git@gmail.com&#34;&gt;428rinsuki+git@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Beyond Meat &lt;a href=&#34;mailto:51850644+beyondmeat@users.noreply.github.com&#34;&gt;51850644+beyondmeat@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Saleh Dindar &lt;a href=&#34;mailto:salh@fb.com&#34;&gt;salh@fb.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Volodymyr &lt;a href=&#34;mailto:142890760+vkit-maytech@users.noreply.github.com&#34;&gt;142890760+vkit-maytech@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Gabriel Espinoza &lt;a href=&#34;mailto:31670639+gspinoza@users.noreply.github.com&#34;&gt;31670639+gspinoza@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Keigo Imai &lt;a href=&#34;mailto:keigo.imai@gmail.com&#34;&gt;keigo.imai@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ivan Yanitra &lt;a href=&#34;mailto:iyanitra@tesla-consulting.com&#34;&gt;iyanitra@tesla-consulting.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;alfish2000 &lt;a href=&#34;mailto:alfish2000@gmail.com&#34;&gt;alfish2000@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;wuxingzhong &lt;a href=&#34;mailto:qq330332812@gmail.com&#34;&gt;qq330332812@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Adithya Kumar &lt;a href=&#34;mailto:akumar42@protonmail.com&#34;&gt;akumar42@protonmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tayo-pasedaRJ &lt;a href=&#34;mailto:138471223+Tayo-pasedaRJ@users.noreply.github.com&#34;&gt;138471223+Tayo-pasedaRJ@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Peter Kreuser &lt;a href=&#34;mailto:logo@kreuser.name&#34;&gt;logo@kreuser.name&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Piyush&lt;/li&gt;
&lt;li&gt;fotile96 &lt;a href=&#34;mailto:fotile96@users.noreply.github.com&#34;&gt;fotile96@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Luc Ritchie &lt;a href=&#34;mailto:luc.ritchie@gmail.com&#34;&gt;luc.ritchie@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;cynful &lt;a href=&#34;mailto:cynful@users.noreply.github.com&#34;&gt;cynful@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;wjielai &lt;a href=&#34;mailto:wjielai@tencent.com&#34;&gt;wjielai@tencent.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jack Deng &lt;a href=&#34;mailto:jackdeng@gmail.com&#34;&gt;jackdeng@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mikubill &lt;a href=&#34;mailto:31246794+Mikubill@users.noreply.github.com&#34;&gt;31246794+Mikubill@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Artur Neumann &lt;a href=&#34;mailto:artur@jankaritech.com&#34;&gt;artur@jankaritech.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Saw-jan &lt;a href=&#34;mailto:saw.jan.grg3e@gmail.com&#34;&gt;saw.jan.grg3e@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Oksana Zhykina &lt;a href=&#34;mailto:o.zhykina@maytech.net&#34;&gt;o.zhykina@maytech.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;karan &lt;a href=&#34;mailto:karan.gupta92@gmail.com&#34;&gt;karan.gupta92@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;viktor &lt;a href=&#34;mailto:viktor@yakovchuk.net&#34;&gt;viktor@yakovchuk.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;moongdal &lt;a href=&#34;mailto:moongdal@tutanota.com&#34;&gt;moongdal@tutanota.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mina Galić &lt;a href=&#34;mailto:freebsd@igalic.co&#34;&gt;freebsd@igalic.co&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alen Šiljak &lt;a href=&#34;mailto:dev@alensiljak.eu.org&#34;&gt;dev@alensiljak.eu.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;你知道未来吗 &lt;a href=&#34;mailto:rkonfj@gmail.com&#34;&gt;rkonfj@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Abhinav Dhiman &lt;a href=&#34;mailto:8640877+ahnv@users.noreply.github.com&#34;&gt;8640877+ahnv@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;halms &lt;a href=&#34;mailto:7513146+halms@users.noreply.github.com&#34;&gt;7513146+halms@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ben-ba &lt;a href=&#34;mailto:benjamin.brauner@gmx.de&#34;&gt;benjamin.brauner@gmx.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Eli Orzitzer &lt;a href=&#34;mailto:e_orz@yahoo.com&#34;&gt;e_orz@yahoo.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Anthony Metzidis &lt;a href=&#34;mailto:anthony.metzidis@gmail.com&#34;&gt;anthony.metzidis@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;emyarod &lt;a href=&#34;mailto:afw5059@gmail.com&#34;&gt;afw5059@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;keongalvin &lt;a href=&#34;mailto:keongalvin@gmail.com&#34;&gt;keongalvin@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;rarspace01 &lt;a href=&#34;mailto:rarspace01@users.noreply.github.com&#34;&gt;rarspace01@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Paul Stern &lt;a href=&#34;mailto:paulstern45@gmail.com&#34;&gt;paulstern45@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Nikhil Ahuja &lt;a href=&#34;mailto:nikhilahuja@live.com&#34;&gt;nikhilahuja@live.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Harshit Budhraja &lt;a href=&#34;mailto:52413945+harshit-budhraja@users.noreply.github.com&#34;&gt;52413945+harshit-budhraja@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tera &lt;a href=&#34;mailto:24725862+teraa@users.noreply.github.com&#34;&gt;24725862+teraa@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kyle Reynolds &lt;a href=&#34;mailto:kylereynoldsdev@gmail.com&#34;&gt;kylereynoldsdev@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Michael Eischer &lt;a href=&#34;mailto:michael.eischer@gmx.de&#34;&gt;michael.eischer@gmx.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Thomas Müller &lt;a href=&#34;mailto:1005065+DeepDiver1975@users.noreply.github.com&#34;&gt;1005065+DeepDiver1975@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;DanielEgbers &lt;a href=&#34;mailto:27849724+DanielEgbers@users.noreply.github.com&#34;&gt;27849724+DanielEgbers@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Jack Provance &lt;a href=&#34;mailto:49460795+njprov@users.noreply.github.com&#34;&gt;49460795+njprov@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Gabriel Ramos &lt;a href=&#34;mailto:109390599+gabrielramos02@users.noreply.github.com&#34;&gt;109390599+gabrielramos02@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dan McArdle &lt;a href=&#34;mailto:d@nmcardle.com&#34;&gt;d@nmcardle.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Joe Cai &lt;a href=&#34;mailto:joe.cai@bigcommerce.com&#34;&gt;joe.cai@bigcommerce.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Anders Swanson &lt;a href=&#34;mailto:anders.swanson@oracle.com&#34;&gt;anders.swanson@oracle.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;huajin tong &lt;a href=&#34;mailto:137764712+thirdkeyword@users.noreply.github.com&#34;&gt;137764712+thirdkeyword@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;John-Paul Smith &lt;a href=&#34;mailto:john-paulsmith@users.noreply.github.com&#34;&gt;john-paulsmith@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;racerole &lt;a href=&#34;mailto:148756161+racerole@users.noreply.github.com&#34;&gt;148756161+racerole@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Gachoud Philippe &lt;a href=&#34;mailto:ph.gachoud@gmail.com&#34;&gt;ph.gachoud@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;YukiUnHappy &lt;a href=&#34;mailto:saberhana@yandex.com&#34;&gt;saberhana@yandex.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kyle Reynolds &lt;a href=&#34;mailto:kyle.reynolds@bridgerphotonics.com&#34;&gt;kyle.reynolds@bridgerphotonics.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Lewis Hook &lt;a href=&#34;mailto:lewis@hook.im&#34;&gt;lewis@hook.im&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;hoyho &lt;a href=&#34;mailto:luohaihao@gmail.com&#34;&gt;luohaihao@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vitaly &lt;a href=&#34;mailto:9034218+gvitali@users.noreply.github.com&#34;&gt;9034218+gvitali@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;iotmaestro &lt;a href=&#34;mailto:iotmaestro@proton.me&#34;&gt;iotmaestro@proton.me&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;psychopatt &lt;a href=&#34;mailto:66741203+psychopatt@users.noreply.github.com&#34;&gt;66741203+psychopatt@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alex Garel &lt;a href=&#34;mailto:alex@garel.org&#34;&gt;alex@garel.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Warrentheo &lt;a href=&#34;mailto:warrentheo@hotmail.com&#34;&gt;warrentheo@hotmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alexandre Lavigne &lt;a href=&#34;mailto:lavigne958@gmail.com&#34;&gt;lavigne958@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;yoelvini &lt;a href=&#34;mailto:134453420+yoelvini@users.noreply.github.com&#34;&gt;134453420+yoelvini@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Erisa A &lt;a href=&#34;mailto:erisa@cloudflare.com&#34;&gt;erisa@cloudflare.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Pieter van Oostrum &lt;a href=&#34;mailto:pieter@vanoostrum.org&#34;&gt;pieter@vanoostrum.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;jakzoe &lt;a href=&#34;mailto:155812065+jakzoe@users.noreply.github.com&#34;&gt;155812065+jakzoe@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;guangwu &lt;a href=&#34;mailto:guoguangwu@magic-shield.com&#34;&gt;guoguangwu@magic-shield.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;static-moonlight &lt;a href=&#34;mailto:107991124+static-moonlight@users.noreply.github.com&#34;&gt;107991124+static-moonlight@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;yudrywet &lt;a href=&#34;mailto:yudeyao@yeah.net&#34;&gt;yudeyao@yeah.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Butanediol &lt;a href=&#34;mailto:git@xnh.app&#34;&gt;git@xnh.app&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dave Nicolson &lt;a href=&#34;mailto:david.nicolson@gmail.com&#34;&gt;david.nicolson@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Katia Esposito &lt;a href=&#34;mailto:katia@linux.com&#34;&gt;katia@linux.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;pawsey-kbuckley &lt;a href=&#34;mailto:36438302+pawsey-kbuckley@users.noreply.github.com&#34;&gt;36438302+pawsey-kbuckley@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;hidewrong &lt;a href=&#34;mailto:167099254+hidewrong@users.noreply.github.com&#34;&gt;167099254+hidewrong@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Michael Terry &lt;a href=&#34;mailto:mike@mterry.name&#34;&gt;mike@mterry.name&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sunny &lt;a href=&#34;mailto:25066078+LoSunny@users.noreply.github.com&#34;&gt;25066078+LoSunny@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;overallteach &lt;a href=&#34;mailto:cricis@foxmail.com&#34;&gt;cricis@foxmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;JT Olio &lt;a href=&#34;mailto:jt@olio.lol&#34;&gt;jt@olio.lol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Evan McBeth &lt;a href=&#34;mailto:64177332+AtomicRobotMan0101@users.noreply.github.com&#34;&gt;64177332+AtomicRobotMan0101@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dominik Joe Pantůček &lt;a href=&#34;mailto:dominik.pantucek@trustica.cz&#34;&gt;dominik.pantucek@trustica.cz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;yumeiyin &lt;a href=&#34;mailto:155420652+yumeiyin@users.noreply.github.com&#34;&gt;155420652+yumeiyin@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Bruno Fernandes &lt;a href=&#34;mailto:54373093+folkzb@users.noreply.github.com&#34;&gt;54373093+folkzb@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Thomas Schneider &lt;a href=&#34;mailto:tspam.github@brainfuck.space&#34;&gt;tspam.github@brainfuck.space&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Charles Hamilton &lt;a href=&#34;mailto:52973156+chamilton-ccn@users.noreply.github.com&#34;&gt;52973156+chamilton-ccn@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tomasz Melcer &lt;a href=&#34;mailto:tomasz@melcer.pl&#34;&gt;tomasz@melcer.pl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Michał Dzienisiewicz &lt;a href=&#34;mailto:michal.piotr.dz@gmail.com&#34;&gt;michal.piotr.dz@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Florian Klink &lt;a href=&#34;mailto:flokli@flokli.de&#34;&gt;flokli@flokli.de&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Bill Fraser &lt;a href=&#34;mailto:bill@wfraser.dev&#34;&gt;bill@wfraser.dev&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Thearas &lt;a href=&#34;mailto:thearas850@gmail.com&#34;&gt;thearas850@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Filipe Herculano &lt;a href=&#34;mailto:fifo_@live.com&#34;&gt;fifo_@live.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Russ Bubley &lt;a href=&#34;mailto:russ.bubley@googlemail.com&#34;&gt;russ.bubley@googlemail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Paul Collins &lt;a href=&#34;mailto:paul.collins@canonical.com&#34;&gt;paul.collins@canonical.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tomasz Melcer &lt;a href=&#34;mailto:liori@exroot.org&#34;&gt;liori@exroot.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;itsHenry &lt;a href=&#34;mailto:2671230065@qq.com&#34;&gt;2671230065@qq.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ke Wang &lt;a href=&#34;mailto:me@ke.wang&#34;&gt;me@ke.wang&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;AThePeanut4 &lt;a href=&#34;mailto:49614525+AThePeanut4@users.noreply.github.com&#34;&gt;49614525+AThePeanut4@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tobias Markus &lt;a href=&#34;mailto:tobbi.bugs@googlemail.com&#34;&gt;tobbi.bugs@googlemail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ernie Hershey &lt;a href=&#34;mailto:github@ernie.org&#34;&gt;github@ernie.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Will Miles &lt;a href=&#34;mailto:wmiles@sgl.com&#34;&gt;wmiles@sgl.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;David Seifert &lt;a href=&#34;mailto:16636962+SoapGentoo@users.noreply.github.com&#34;&gt;16636962+SoapGentoo@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Fornax &lt;a href=&#34;mailto:wimbrand96@gmail.com&#34;&gt;wimbrand96@gmail.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sam Harrison &lt;a href=&#34;mailto:sam.harrison@files.com&#34;&gt;sam.harrison@files.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Péter Bozsó &lt;a href=&#34;mailto:3806723+peterbozso@users.noreply.github.com&#34;&gt;3806723+peterbozso@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Georg Welzel &lt;a href=&#34;mailto:gwelzel@mailbox.org&#34;&gt;gwelzel@mailbox.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;John Oxley &lt;a href=&#34;mailto:john.oxley@gmail.com&#34;&gt;john.oxley@gmail.com&lt;/a&gt; &lt;a href=&#34;mailto:joxley@meta.com&#34;&gt;joxley@meta.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Pawel Palucha &lt;a href=&#34;mailto:pawel.palucha@aetion.com&#34;&gt;pawel.palucha@aetion.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;crystalstall &lt;a href=&#34;mailto:crystalruby@qq.com&#34;&gt;crystalruby@qq.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;nipil &lt;a href=&#34;mailto:nipil@users.noreply.github.com&#34;&gt;nipil@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;yuval-cloudinary &lt;a href=&#34;mailto:46710068+yuval-cloudinary@users.noreply.github.com&#34;&gt;46710068+yuval-cloudinary@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mathieu Moreau &lt;a href=&#34;mailto:mrx23dot@users.noreply.github.com&#34;&gt;mrx23dot@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;fsantagostinobietti &lt;a href=&#34;mailto:6057026+fsantagostinobietti@users.noreply.github.com&#34;&gt;6057026+fsantagostinobietti@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Oleg Kunitsyn &lt;a href=&#34;mailto:114359669+hiddenmarten@users.noreply.github.com&#34;&gt;114359669+hiddenmarten@users.noreply.github.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>B2</title>
      <link>https://rclone.org/b2/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/b2/</guid>
      <description>&lt;h1 id=&#34;hahahugoshortcode13s0hbhb-backblaze-b2&#34;&gt;&lt;i class=&#34;fa fa-fire&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt;
 Backblaze B2&lt;/h1&gt;
&lt;p&gt;B2 is &lt;a href=&#34;https://www.backblaze.com/cloud-storage&#34;&gt;Backblaze&#39;s cloud storage system&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Paths are specified as &lt;code&gt;remote:bucket&lt;/code&gt; (or &lt;code&gt;remote:&lt;/code&gt; for the &lt;code&gt;lsd&lt;/code&gt;
command).  You may put subdirectories in too, e.g. &lt;code&gt;remote:bucket/path/to/dir&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;Here is an example of making a b2 configuration.  First run&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process.  To authenticate
you will either need your Account ID (a short hex number) and Master
Application Key (a long hex number) OR an Application Key, which is the
recommended method. See below for further details on generating and using
an Application Key.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
q) Quit config
n/q&amp;gt; n
name&amp;gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Backblaze B2
   \ &amp;#34;b2&amp;#34;
[snip]
Storage&amp;gt; b2
Account ID or Application Key ID
account&amp;gt; 123456789abc
Application Key
key&amp;gt; 0123456789abcdef0123456789abcdef0123456789
Endpoint for the service - leave blank normally.
endpoint&amp;gt;
Remote config
Configuration complete.
Options:
- type: b2
- account: 123456789abc
- key: 0123456789abcdef0123456789abcdef0123456789
- endpoint:
Keep this &amp;#34;remote&amp;#34; remote?
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This remote is called &lt;code&gt;remote&lt;/code&gt; and can now be used like this&lt;/p&gt;
&lt;p&gt;See all buckets&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone lsd remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create a new bucket&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone mkdir remote:bucket
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;List the contents of a bucket&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone ls remote:bucket
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Sync &lt;code&gt;/home/local/directory&lt;/code&gt; to the remote bucket, deleting any
excess files in the bucket.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone sync --interactive /home/local/directory remote:bucket
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;application-keys&#34;&gt;Application Keys&lt;/h3&gt;
&lt;p&gt;B2 supports multiple &lt;a href=&#34;https://www.backblaze.com/docs/cloud-storage-application-keys&#34;&gt;Application Keys for different access permission
to B2 Buckets&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can use these with rclone too; you will need to use rclone version 1.43
or later.&lt;/p&gt;
&lt;p&gt;Follow Backblaze&#39;s docs to create an Application Key with the required
permission and add the &lt;code&gt;applicationKeyId&lt;/code&gt; as the &lt;code&gt;account&lt;/code&gt; and the
&lt;code&gt;Application Key&lt;/code&gt; itself as the &lt;code&gt;key&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Note that you must put the &lt;em&gt;applicationKeyId&lt;/em&gt; as the &lt;code&gt;account&lt;/code&gt; – you
can&#39;t use the master Account ID.  If you try then B2 will return 401
errors.&lt;/p&gt;
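&lt;p&gt;For example, to point an existing remote at a restricted Application Key
(the key ID and key below are placeholders, not real credentials), you
could update the config from the command line:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone config update remote account 004abcdef12345670000000001 key K004xxxxxxxxxxxxxxxxxxxxxxxxxxx
&lt;/code&gt;&lt;/pre&gt;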
&lt;h3 id=&#34;fast-list&#34;&gt;--fast-list&lt;/h3&gt;
&lt;p&gt;This remote supports &lt;code&gt;--fast-list&lt;/code&gt; which allows you to use fewer
transactions in exchange for more memory. See the &lt;a href=&#34;https://rclone.org/docs/#fast-list&#34;&gt;rclone
docs&lt;/a&gt; for more details.&lt;/p&gt;
&lt;h3 id=&#34;modification-times&#34;&gt;Modification times&lt;/h3&gt;
&lt;p&gt;The modification time is stored as metadata on the object as
&lt;code&gt;X-Bz-Info-src_last_modified_millis&lt;/code&gt; as milliseconds since 1970-01-01
in the Backblaze standard.  Other tools should be able to use this as
a modified time.&lt;/p&gt;
&lt;p&gt;Modified times are used in syncing and are fully supported. Note that
if a modification time needs to be updated on an object then it will
create a new version of the object.&lt;/p&gt;
&lt;h3 id=&#34;restricted-filename-characters&#34;&gt;Restricted filename characters&lt;/h3&gt;
&lt;p&gt;In addition to the &lt;a href=&#34;https://rclone.org/overview/#restricted-characters&#34;&gt;default restricted characters set&lt;/a&gt;
the following characters are also replaced:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Character&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Value&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Replacement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;\&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x5C&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;＼&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Invalid UTF-8 bytes will also be &lt;a href=&#34;https://rclone.org/overview/#invalid-utf8&#34;&gt;replaced&lt;/a&gt;,
as they can&#39;t be used in JSON strings.&lt;/p&gt;
&lt;p&gt;Note that in 2020-05 Backblaze started allowing \ characters in file
names. Rclone hasn&#39;t changed its encoding as this could cause syncs to
re-transfer files. If you want rclone not to replace \ then see the
&lt;code&gt;--b2-encoding&lt;/code&gt; flag below and remove the &lt;code&gt;BackSlash&lt;/code&gt; from the
string. This can be set in the config.&lt;/p&gt;
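&lt;p&gt;For example, assuming the default b2 encoding is
&lt;code&gt;Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot&lt;/code&gt;, leaving
&lt;code&gt;BackSlash&lt;/code&gt; out of the encoding in your config file would stop
rclone replacing \ characters:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[remote]
type = b2
encoding = Slash,Del,Ctl,InvalidUtf8,Dot
&lt;/code&gt;&lt;/pre&gt;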
&lt;h3 id=&#34;sha1-checksums&#34;&gt;SHA1 checksums&lt;/h3&gt;
&lt;p&gt;The SHA1 checksums of the files are checked on upload and download and
will be used in the syncing process.&lt;/p&gt;
&lt;p&gt;Large files (bigger than the limit in &lt;code&gt;--b2-upload-cutoff&lt;/code&gt;) which are
uploaded in chunks will store their SHA1 on the object as
&lt;code&gt;X-Bz-Info-large_file_sha1&lt;/code&gt; as recommended by Backblaze.&lt;/p&gt;
&lt;p&gt;For a large file to be uploaded with an SHA1 checksum, the source
needs to support SHA1 checksums. The local disk supports SHA1
checksums so large file transfers from local disk will have an SHA1.
See &lt;a href=&#34;https://rclone.org/overview/#features&#34;&gt;the overview&lt;/a&gt; for exactly which remotes
support SHA1.&lt;/p&gt;
&lt;p&gt;Sources which don&#39;t support SHA1, in particular &lt;code&gt;crypt&lt;/code&gt; will upload
large files without SHA1 checksums.  This may be fixed in the future
(see &lt;a href=&#34;https://github.com/rclone/rclone/issues/1767&#34;&gt;#1767&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;Files sizes below &lt;code&gt;--b2-upload-cutoff&lt;/code&gt; will always have an SHA1
regardless of the source.&lt;/p&gt;
&lt;h3 id=&#34;transfers&#34;&gt;Transfers&lt;/h3&gt;
&lt;p&gt;Backblaze recommends that you do lots of transfers simultaneously for
maximum speed.  In tests from my SSD-equipped laptop the optimum
setting is about &lt;code&gt;--transfers 32&lt;/code&gt; though higher numbers may be used
for a slight speed improvement. The optimum number for you may vary
depending on your hardware, how big the files are, how much you want
to load your computer, etc.  The default of &lt;code&gt;--transfers 4&lt;/code&gt; is
definitely too low for Backblaze B2 though.&lt;/p&gt;
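&lt;p&gt;For example, to sync with a higher transfer count (32 is only a
starting point; tune it for your own hardware and files):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone sync --transfers 32 /home/local/directory remote:bucket
&lt;/code&gt;&lt;/pre&gt;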
&lt;p&gt;Note that uploading big files (bigger than 200 MiB by default) will use
a 96 MiB RAM buffer by default.  There can be at most &lt;code&gt;--transfers&lt;/code&gt; of
these in use at any moment, so this sets the upper limit on the memory
used.&lt;/p&gt;
&lt;h3 id=&#34;versions&#34;&gt;Versions&lt;/h3&gt;
&lt;p&gt;The default setting of B2 is to keep old versions of files. This means
when rclone uploads a new version of a file it creates a &lt;a href=&#34;https://www.backblaze.com/docs/cloud-storage-file-versions&#34;&gt;new version
of it&lt;/a&gt;.
Likewise when you delete a file, the old version will be marked hidden
and still be available.&lt;/p&gt;
&lt;p&gt;Whether B2 keeps old versions of files or not can be adjusted on a per
bucket basis using the &amp;quot;Lifecycle settings&amp;quot; on the B2 control panel or
when creating the bucket using the &lt;a href=&#34;#b2-lifecycle&#34;&gt;--b2-lifecycle&lt;/a&gt;
flag or after creation using the &lt;a href=&#34;#lifecycle&#34;&gt;rclone backend lifecycle&lt;/a&gt;
command.&lt;/p&gt;
&lt;p&gt;You may opt in to a &amp;quot;hard delete&amp;quot; of files with the &lt;code&gt;--b2-hard-delete&lt;/code&gt;
flag which permanently removes files on deletion instead of hiding
them.&lt;/p&gt;
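&lt;p&gt;For example, to delete files permanently rather than hiding them (this
cannot be undone, so consider checking first with &lt;code&gt;--dry-run&lt;/code&gt;):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone delete --b2-hard-delete remote:bucket/path/to/stuff
&lt;/code&gt;&lt;/pre&gt;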
&lt;p&gt;Old versions of files, where available, are visible using the
&lt;code&gt;--b2-versions&lt;/code&gt; flag.&lt;/p&gt;
&lt;p&gt;It is also possible to view a bucket as it was at a certain point in time,
using the &lt;code&gt;--b2-version-at&lt;/code&gt; flag. This will show the file versions as they
were at that time, showing files that have been deleted afterwards, and
hiding files that were created since.&lt;/p&gt;
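&lt;p&gt;For example, to list a bucket as it was at a particular moment (the
timestamp below is illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone -q --b2-version-at 2023-07-17T12:00:00Z ls remote:bucket
&lt;/code&gt;&lt;/pre&gt;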
&lt;p&gt;If you wish to remove all the old versions, and unfinished large file
uploads, then you can use the &lt;code&gt;rclone cleanup remote:bucket&lt;/code&gt; command
which will delete all the old versions of files, leaving the current ones
intact.  You can also supply a path and only old versions under that path
will be deleted, e.g. &lt;code&gt;rclone cleanup remote:bucket/path/to/stuff&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Note that &lt;code&gt;cleanup&lt;/code&gt; will remove partially uploaded files from the bucket
if they are more than a day old. If you want more control over the
expiry date then run &lt;code&gt;rclone backend cleanup b2:bucket -o max-age=1h&lt;/code&gt;
to remove all unfinished large file uploads older than one hour, leaving
old versions intact.&lt;/p&gt;
&lt;p&gt;If you wish to remove all the old versions, leaving current files and
unfinished large files intact, then you can use the
&lt;a href=&#34;#cleanup-hidden&#34;&gt;&lt;code&gt;rclone backend cleanup-hidden remote:bucket&lt;/code&gt;&lt;/a&gt;
command. You can also supply a path and only old versions under that
path will be deleted, e.g.
&lt;code&gt;rclone backend cleanup-hidden remote:bucket/path/to/stuff&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;When you &lt;code&gt;purge&lt;/code&gt; a bucket, the current and the old versions will be
deleted then the bucket will be deleted.&lt;/p&gt;
&lt;p&gt;However &lt;code&gt;delete&lt;/code&gt; will cause the current versions of the files to
become hidden old versions.&lt;/p&gt;
&lt;p&gt;Here is a session showing the listing and retrieval of an old
version followed by a &lt;code&gt;cleanup&lt;/code&gt; of the old versions.&lt;/p&gt;
&lt;p&gt;Show current version and all the versions with &lt;code&gt;--b2-versions&lt;/code&gt; flag.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone -q ls b2:cleanup-test
        9 one.txt

$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
        8 one-v2016-07-04-141032-000.txt
       16 one-v2016-07-04-141003-000.txt
       15 one-v2016-07-02-155621-000.txt
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Retrieve an old version&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp

$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Clean up all the old versions and show that they&#39;ve gone.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone -q cleanup b2:cleanup-test

$ rclone -q ls b2:cleanup-test
        9 one.txt

$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
&lt;/code&gt;&lt;/pre&gt;&lt;h4 id=&#34;versions-naming-caveat&#34;&gt;Versions naming caveat&lt;/h4&gt;
&lt;p&gt;When using the &lt;code&gt;--b2-versions&lt;/code&gt; flag rclone relies on the file name
to work out whether the objects are versions or not. Version names are
created by inserting a timestamp between the file name and its extension.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;        9 file.txt
        8 file-v2023-07-17-161032-000.txt
       16 file-v2023-06-15-141003-000.txt
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If there are real files present with the same names as versions, then
the behaviour of &lt;code&gt;--b2-versions&lt;/code&gt; can be unpredictable.&lt;/p&gt;
&lt;h3 id=&#34;data-usage&#34;&gt;Data usage&lt;/h3&gt;
&lt;p&gt;It is useful to know how many requests are sent to the server in different scenarios.&lt;/p&gt;
&lt;p&gt;All copy commands send the following 4 requests:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;/b2api/v1/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The &lt;code&gt;b2_list_file_names&lt;/code&gt; request will be sent once for every 1k files
in the remote path, providing the checksum and modification time of
the listed files. As of version 1.33 issue
&lt;a href=&#34;https://github.com/rclone/rclone/issues/818&#34;&gt;#818&lt;/a&gt; causes extra requests
to be sent when using B2 with Crypt. When a copy operation does not
require any files to be uploaded, no more requests will be sent.&lt;/p&gt;
&lt;p&gt;Uploading files that do not require chunking will send 2 requests per
file upload:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;/b2api/v1/b2_get_upload_url
/b2api/v1/b2_upload_file/
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Uploading files requiring chunking will send 2 requests (one each to
start and finish the upload) and another 2 requests for each chunk:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;/b2api/v1/b2_start_large_file
/b2api/v1/b2_get_upload_part_url
/b2api/v1/b2_upload_part/
/b2api/v1/b2_finish_large_file
&lt;/code&gt;&lt;/pre&gt;&lt;h4 id=&#34;versions-1&#34;&gt;Versions&lt;/h4&gt;
&lt;p&gt;Versions can be viewed with the &lt;code&gt;--b2-versions&lt;/code&gt; flag. When it is set
rclone will show and act on older versions of files.  For example&lt;/p&gt;
&lt;p&gt;Listing without &lt;code&gt;--b2-versions&lt;/code&gt;&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone -q ls b2:cleanup-test
        9 one.txt
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And with&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
        8 one-v2016-07-04-141032-000.txt
       16 one-v2016-07-04-141003-000.txt
       15 one-v2016-07-02-155621-000.txt
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This shows that the current version is unchanged but older versions can
be seen.  These have the UTC date at which they were uploaded to the
server, to the nearest millisecond, appended to their names.&lt;/p&gt;
&lt;p&gt;Note that when using &lt;code&gt;--b2-versions&lt;/code&gt; no file write operations are
permitted, so you can&#39;t upload files or delete them.&lt;/p&gt;
&lt;h3 id=&#34;b2-and-rclone-link&#34;&gt;B2 and rclone link&lt;/h3&gt;
&lt;p&gt;Rclone supports generating file share links for private B2 buckets.
They can either be for a file for example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;./rclone link B2:bucket/path/to/file.txt
https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;or if run on a directory you will get:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;./rclone link B2:bucket/path
https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;you can then use the authorization token (the part of the url from the
&lt;code&gt;?Authorization=&lt;/code&gt; on) on any file path under that directory. For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;standard-options&#34;&gt;Standard options&lt;/h3&gt;
&lt;p&gt;Here are the Standard options specific to b2 (Backblaze B2).&lt;/p&gt;
&lt;h4 id=&#34;b2-account&#34;&gt;--b2-account&lt;/h4&gt;
&lt;p&gt;Account ID or Application Key ID.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      account&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_ACCOUNT&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    true&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-key&#34;&gt;--b2-key&lt;/h4&gt;
&lt;p&gt;Application Key.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      key&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_KEY&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    true&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-hard-delete&#34;&gt;--b2-hard-delete&lt;/h4&gt;
&lt;p&gt;Permanently delete files on remote removal, otherwise hide files.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      hard_delete&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_HARD_DELETE&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;advanced-options&#34;&gt;Advanced options&lt;/h3&gt;
&lt;p&gt;Here are the Advanced options specific to b2 (Backblaze B2).&lt;/p&gt;
&lt;h4 id=&#34;b2-endpoint&#34;&gt;--b2-endpoint&lt;/h4&gt;
&lt;p&gt;Endpoint for the service.&lt;/p&gt;
&lt;p&gt;Leave blank normally.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      endpoint&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_ENDPOINT&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-test-mode&#34;&gt;--b2-test-mode&lt;/h4&gt;
&lt;p&gt;A flag string for X-Bz-Test-Mode header for debugging.&lt;/p&gt;
&lt;p&gt;This is for debugging purposes only. Setting it to one of the strings
below will cause b2 to return specific errors:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;quot;fail_some_uploads&amp;quot;&lt;/li&gt;
&lt;li&gt;&amp;quot;expire_some_account_authorization_tokens&amp;quot;&lt;/li&gt;
&lt;li&gt;&amp;quot;force_cap_exceeded&amp;quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These will be set in the &amp;quot;X-Bz-Test-Mode&amp;quot; header which is documented
in the &lt;a href=&#34;https://www.backblaze.com/docs/cloud-storage-integration-checklist&#34;&gt;b2 integrations checklist&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      test_mode&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_TEST_MODE&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-versions&#34;&gt;--b2-versions&lt;/h4&gt;
&lt;p&gt;Include old versions in directory listings.&lt;/p&gt;
&lt;p&gt;Note that when using this no file write operations are permitted,
so you can&#39;t upload files or delete them.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      versions&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_VERSIONS&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-version-at&#34;&gt;--b2-version-at&lt;/h4&gt;
&lt;p&gt;Show file versions as they were at the specified time.&lt;/p&gt;
&lt;p&gt;Note that when using this no file write operations are permitted,
so you can&#39;t upload files or delete them.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      version_at&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_VERSION_AT&lt;/li&gt;
&lt;li&gt;Type:        Time&lt;/li&gt;
&lt;li&gt;Default:     off&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-upload-cutoff&#34;&gt;--b2-upload-cutoff&lt;/h4&gt;
&lt;p&gt;Cutoff for switching to chunked upload.&lt;/p&gt;
&lt;p&gt;Files above this size will be uploaded in chunks of &amp;quot;--b2-chunk-size&amp;quot;.&lt;/p&gt;
&lt;p&gt;This value should be set no larger than 4.657 GiB (== 5 GB).&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      upload_cutoff&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_UPLOAD_CUTOFF&lt;/li&gt;
&lt;li&gt;Type:        SizeSuffix&lt;/li&gt;
&lt;li&gt;Default:     200Mi&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-copy-cutoff&#34;&gt;--b2-copy-cutoff&lt;/h4&gt;
&lt;p&gt;Cutoff for switching to multipart copy.&lt;/p&gt;
&lt;p&gt;Any files larger than this that need to be server-side copied will be
copied in chunks of this size.&lt;/p&gt;
&lt;p&gt;The minimum is 0 and the maximum is 4.6 GiB.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      copy_cutoff&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_COPY_CUTOFF&lt;/li&gt;
&lt;li&gt;Type:        SizeSuffix&lt;/li&gt;
&lt;li&gt;Default:     4Gi&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-chunk-size&#34;&gt;--b2-chunk-size&lt;/h4&gt;
&lt;p&gt;Upload chunk size.&lt;/p&gt;
&lt;p&gt;When uploading large files, chunk the file into this size.&lt;/p&gt;
&lt;p&gt;Must fit in memory. These chunks are buffered in memory and there
may be a maximum of &amp;quot;--transfers&amp;quot; chunks in progress at once.&lt;/p&gt;
&lt;p&gt;5,000,000 Bytes is the minimum size.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      chunk_size&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_CHUNK_SIZE&lt;/li&gt;
&lt;li&gt;Type:        SizeSuffix&lt;/li&gt;
&lt;li&gt;Default:     96Mi&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-upload-concurrency&#34;&gt;--b2-upload-concurrency&lt;/h4&gt;
&lt;p&gt;Concurrency for multipart uploads.&lt;/p&gt;
&lt;p&gt;This is the number of chunks of the same file that are uploaded
concurrently.&lt;/p&gt;
&lt;p&gt;Note that chunks are stored in memory and there may be up to
&amp;quot;--transfers&amp;quot; * &amp;quot;--b2-upload-concurrency&amp;quot; chunks stored at once
in memory.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      upload_concurrency&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_UPLOAD_CONCURRENCY&lt;/li&gt;
&lt;li&gt;Type:        int&lt;/li&gt;
&lt;li&gt;Default:     4&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-disable-checksum&#34;&gt;--b2-disable-checksum&lt;/h4&gt;
&lt;p&gt;Disable checksums for large (&amp;gt; upload cutoff) files.&lt;/p&gt;
&lt;p&gt;Normally rclone will calculate the SHA1 checksum of the input before
uploading it so it can add it to metadata on the object. This is great
for data integrity checking but can cause long delays for large files
to start uploading.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      disable_checksum&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_DISABLE_CHECKSUM&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-download-url&#34;&gt;--b2-download-url&lt;/h4&gt;
&lt;p&gt;Custom endpoint for downloads.&lt;/p&gt;
&lt;p&gt;This is usually set to a Cloudflare CDN URL as Backblaze offers
free egress for data downloaded through the Cloudflare network.
Rclone works with private buckets by sending an &amp;quot;Authorization&amp;quot; header.
If the custom endpoint rewrites the requests for authentication,
e.g., in Cloudflare Workers, this header needs to be handled properly.
Leave blank if you want to use the endpoint provided by Backblaze.&lt;/p&gt;
&lt;p&gt;The URL provided here SHOULD have the protocol and SHOULD NOT have
a trailing slash or specify the /file/bucket subpath as rclone will
request files with &amp;quot;{download_url}/file/{bucket_name}/{path}&amp;quot;.&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://mysubdomain.mydomain.tld&#34;&gt;https://mysubdomain.mydomain.tld&lt;/a&gt;
(No trailing &amp;quot;/&amp;quot;, &amp;quot;file&amp;quot; or &amp;quot;bucket&amp;quot;)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      download_url&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_DOWNLOAD_URL&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-download-auth-duration&#34;&gt;--b2-download-auth-duration&lt;/h4&gt;
&lt;p&gt;Time before the public link authorization token will expire in s or suffix ms|s|m|h|d.&lt;/p&gt;
&lt;p&gt;This is used in combination with &amp;quot;rclone link&amp;quot; for making files
accessible to the public and sets the duration before the download
authorization token will expire.&lt;/p&gt;
&lt;p&gt;The minimum value is 1 second. The maximum value is one week.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      download_auth_duration&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_DOWNLOAD_AUTH_DURATION&lt;/li&gt;
&lt;li&gt;Type:        Duration&lt;/li&gt;
&lt;li&gt;Default:     1w&lt;/li&gt;
&lt;/ul&gt;
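&lt;p&gt;For example, to generate a link that expires after one day instead of
the default week:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone link --b2-download-auth-duration 1d remote:bucket/path/to/file.txt
&lt;/code&gt;&lt;/pre&gt;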
&lt;h4 id=&#34;b2-memory-pool-flush-time&#34;&gt;--b2-memory-pool-flush-time&lt;/h4&gt;
&lt;p&gt;How often internal memory buffer pools will be flushed. (no longer used)&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      memory_pool_flush_time&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_MEMORY_POOL_FLUSH_TIME&lt;/li&gt;
&lt;li&gt;Type:        Duration&lt;/li&gt;
&lt;li&gt;Default:     1m0s&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-memory-pool-use-mmap&#34;&gt;--b2-memory-pool-use-mmap&lt;/h4&gt;
&lt;p&gt;Whether to use mmap buffers in internal memory pool. (no longer used)&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      memory_pool_use_mmap&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_MEMORY_POOL_USE_MMAP&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-lifecycle&#34;&gt;--b2-lifecycle&lt;/h4&gt;
&lt;p&gt;Set the number of days deleted files should be kept when creating a bucket.&lt;/p&gt;
&lt;p&gt;On bucket creation, this parameter is used to create a lifecycle rule
for the entire bucket.&lt;/p&gt;
&lt;p&gt;If lifecycle is 0 (the default) no lifecycle rule is created, so the
default B2 behaviour applies: versions of files are created on delete and
overwrite, and are kept indefinitely.&lt;/p&gt;
&lt;p&gt;If lifecycle is &amp;gt;0 then a single rule is created setting the number of
days before a file that has been deleted or overwritten is permanently
removed. This is known as daysFromHidingToDeleting in the b2 docs.&lt;/p&gt;
&lt;p&gt;The minimum value for this parameter is 1 day.&lt;/p&gt;
&lt;p&gt;You can also enable hard_delete in the config, which means deletions
won&#39;t create versions, although overwrites still will.&lt;/p&gt;
&lt;p&gt;See: &lt;a href=&#34;#lifecycle&#34;&gt;rclone backend lifecycle&lt;/a&gt; for setting lifecycles after bucket creation.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      lifecycle&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_LIFECYCLE&lt;/li&gt;
&lt;li&gt;Type:        int&lt;/li&gt;
&lt;li&gt;Default:     0&lt;/li&gt;
&lt;/ul&gt;
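&lt;p&gt;As a sketch, the flag could be supplied when creating a bucket (the bucket
name is a placeholder), after which deleted and overwritten versions would be
removed after 30 days:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone mkdir b2:new-bucket --b2-lifecycle 30
&lt;/code&gt;&lt;/pre&gt;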
&lt;h4 id=&#34;b2-encoding&#34;&gt;--b2-encoding&lt;/h4&gt;
&lt;p&gt;The encoding for the backend.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/overview/#encoding&#34;&gt;encoding section in the overview&lt;/a&gt; for more info.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      encoding&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_ENCODING&lt;/li&gt;
&lt;li&gt;Type:        Encoding&lt;/li&gt;
&lt;li&gt;Default:     Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;b2-description&#34;&gt;--b2-description&lt;/h4&gt;
&lt;p&gt;Description of the remote.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      description&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_B2_DESCRIPTION&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;backend-commands&#34;&gt;Backend commands&lt;/h2&gt;
&lt;p&gt;Here are the commands specific to the b2 backend.&lt;/p&gt;
&lt;p&gt;Run them with&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend COMMAND remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The help below will explain what arguments each command takes.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/commands/rclone_backend/&#34;&gt;backend&lt;/a&gt; command for more
info on how to pass options and arguments.&lt;/p&gt;
&lt;p&gt;These can be run on a running backend using the rc command
&lt;a href=&#34;https://rclone.org/rc/#backend-command&#34;&gt;backend/command&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;lifecycle&#34;&gt;lifecycle&lt;/h3&gt;
&lt;p&gt;Read or set the lifecycle for a bucket&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend lifecycle remote: [options] [&amp;lt;arguments&amp;gt;+]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command can be used to read or set the lifecycle for a bucket.&lt;/p&gt;
&lt;p&gt;Usage Examples:&lt;/p&gt;
&lt;p&gt;To show the current lifecycle rules:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend lifecycle b2:bucket
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will dump output like the following, showing the lifecycle rules:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[
    {
        &amp;quot;daysFromHidingToDeleting&amp;quot;: 1,
        &amp;quot;daysFromUploadingToHiding&amp;quot;: null,
        &amp;quot;fileNamePrefix&amp;quot;: &amp;quot;&amp;quot;
    }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If there are no lifecycle rules (the default) then it will just return [].&lt;/p&gt;
&lt;p&gt;To set new lifecycle rules:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will run and then print the new lifecycle rules as above.&lt;/p&gt;
&lt;p&gt;Rclone only lets you set lifecycles for the whole bucket with the
fileNamePrefix = &amp;quot;&amp;quot;.&lt;/p&gt;
&lt;p&gt;You can&#39;t disable versioning with B2. The best you can do is to set
daysFromHidingToDeleting to 1 day. You can also enable hard_delete in
the config, which means deletions won&#39;t create versions, although
overwrites still will.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;See: &lt;a href=&#34;https://www.backblaze.com/docs/cloud-storage-lifecycle-rules&#34;&gt;https://www.backblaze.com/docs/cloud-storage-lifecycle-rules&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;quot;daysFromHidingToDeleting&amp;quot;: After a file has been hidden for this many days it is deleted. 0 is off.&lt;/li&gt;
&lt;li&gt;&amp;quot;daysFromUploadingToHiding&amp;quot;: A file is hidden this many days after it is uploaded&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;cleanup&#34;&gt;cleanup&lt;/h3&gt;
&lt;p&gt;Remove unfinished large file uploads.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend cleanup remote: [options] [&amp;lt;arguments&amp;gt;+]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command removes unfinished large file uploads of age greater than
max-age, which defaults to 24 hours.&lt;/p&gt;
&lt;p&gt;Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend cleanup b2:bucket/path/to/object
rclone backend cleanup -o max-age=7w b2:bucket/path/to/object
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Durations are parsed as in the rest of rclone, e.g. 2h, 7d, 7w.&lt;/p&gt;
&lt;p&gt;Options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;quot;max-age&amp;quot;: Max age of upload to delete&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;cleanup-hidden&#34;&gt;cleanup-hidden&lt;/h3&gt;
&lt;p&gt;Remove old versions of files.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend cleanup-hidden remote: [options] [&amp;lt;arguments&amp;gt;+]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command removes any old hidden versions of files.&lt;/p&gt;
&lt;p&gt;Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend cleanup-hidden b2:bucket/path/to/dir
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&#34;limitations&#34;&gt;Limitations&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;rclone about&lt;/code&gt; is not supported by the B2 backend. Backends without
this capability cannot determine free space for an rclone mount or
use policy &lt;code&gt;mfs&lt;/code&gt; (most free space) as a member of an rclone union
remote.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;https://rclone.org/overview/#optional-features&#34;&gt;List of backends that do not support rclone about&lt;/a&gt; and &lt;a href=&#34;https://rclone.org/commands/rclone_about/&#34;&gt;rclone about&lt;/a&gt;.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Bisync</title>
      <link>https://rclone.org/bisync/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/bisync/</guid>
      <description>&lt;h2 id=&#34;bisync&#34;&gt;Bisync&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;bisync&lt;/code&gt; is &lt;strong&gt;in beta&lt;/strong&gt; and is considered an &lt;strong&gt;advanced command&lt;/strong&gt;, so use with care.
Make sure you have read and understood the entire &lt;a href=&#34;https://rclone.org/bisync&#34;&gt;manual&lt;/a&gt; (especially the &lt;a href=&#34;#limitations&#34;&gt;Limitations&lt;/a&gt; section) before using, or data loss can result. Questions can be asked in the &lt;a href=&#34;https://forum.rclone.org/&#34;&gt;Rclone Forum&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;getting-started&#34;&gt;Getting started&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://rclone.org/install/&#34;&gt;Install rclone&lt;/a&gt; and setup your remotes.&lt;/li&gt;
&lt;li&gt;Bisync will create its working directory
at &lt;code&gt;~/.cache/rclone/bisync&lt;/code&gt; on Linux, &lt;code&gt;/Users/yourusername/Library/Caches/rclone/bisync&lt;/code&gt; on Mac,
or &lt;code&gt;C:\Users\MyLogin\AppData\Local\rclone\bisync&lt;/code&gt; on Windows.
Make sure that this location is writable.&lt;/li&gt;
&lt;li&gt;Run bisync with the &lt;code&gt;--resync&lt;/code&gt; flag, specifying the paths
to the local and remote sync directory roots.&lt;/li&gt;
&lt;li&gt;For successive sync runs, leave off the &lt;code&gt;--resync&lt;/code&gt; flag. (&lt;strong&gt;Important!&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;Consider using a &lt;a href=&#34;#filtering&#34;&gt;filters file&lt;/a&gt; for excluding
unnecessary files and directories from the sync.&lt;/li&gt;
&lt;li&gt;Consider setting up the &lt;a href=&#34;#check-access&#34;&gt;--check-access&lt;/a&gt; feature
for safety.&lt;/li&gt;
&lt;li&gt;On Linux or Mac, consider setting up a &lt;a href=&#34;#cron&#34;&gt;crontab entry&lt;/a&gt;. bisync can
safely run in concurrent cron jobs thanks to lock files it maintains.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example, your first command might look like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If all looks good, run it again without &lt;code&gt;--dry-run&lt;/code&gt;. After that, remove &lt;code&gt;--resync&lt;/code&gt; as well.&lt;/p&gt;
&lt;p&gt;Here is a typical run log (with timestamps removed for clarity):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync /testdir/path1/ /testdir/path2/ --verbose
INFO  : Synching Path1 &amp;#34;/testdir/path1/&amp;#34; with Path2 &amp;#34;/testdir/path2/&amp;#34;
INFO  : Path1 checking for diffs
INFO  : - Path1    File is new                         - file11.txt
INFO  : - Path1    File is newer                       - file2.txt
INFO  : - Path1    File is newer                       - file5.txt
INFO  : - Path1    File is newer                       - file7.txt
INFO  : - Path1    File was deleted                    - file4.txt
INFO  : - Path1    File was deleted                    - file6.txt
INFO  : - Path1    File was deleted                    - file8.txt
INFO  : Path1:    7 changes:    1 new,    3 newer,    0 older,    3 deleted
INFO  : Path2 checking for diffs
INFO  : - Path2    File is new                         - file10.txt
INFO  : - Path2    File is newer                       - file1.txt
INFO  : - Path2    File is newer                       - file5.txt
INFO  : - Path2    File is newer                       - file6.txt
INFO  : - Path2    File was deleted                    - file3.txt
INFO  : - Path2    File was deleted                    - file7.txt
INFO  : - Path2    File was deleted                    - file8.txt
INFO  : Path2:    7 changes:    1 new,    3 newer,    0 older,    3 deleted
INFO  : Applying changes
INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file11.txt
INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file2.txt
INFO  : - Path2    Queue delete                        - /testdir/path2/file4.txt
NOTICE: - WARNING  New or changed in both paths        - file5.txt
NOTICE: - Path1    Renaming Path1 copy                 - /testdir/path1/file5.txt..path1
NOTICE: - Path1    Queue copy to Path2                 - /testdir/path2/file5.txt..path1
NOTICE: - Path2    Renaming Path2 copy                 - /testdir/path2/file5.txt..path2
NOTICE: - Path2    Queue copy to Path1                 - /testdir/path1/file5.txt..path2
INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file6.txt
INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file7.txt
INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file1.txt
INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file10.txt
INFO  : - Path1    Queue delete                        - /testdir/path1/file3.txt
INFO  : - Path2    Do queued copies to                 - Path1
INFO  : - Path1    Do queued copies to                 - Path2
INFO  : -          Do queued deletes on                - Path1
INFO  : -          Do queued deletes on                - Path2
INFO  : Updating listings
INFO  : Validating listings for Path1 &amp;#34;/testdir/path1/&amp;#34; vs Path2 &amp;#34;/testdir/path2/&amp;#34;
INFO  : Bisync successful
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;command-line-syntax&#34;&gt;Command line syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone bisync --help
Usage:
  rclone bisync remote1:path1 remote2:path2 [flags]

Positional arguments:
  Path1, Path2  Local path, or remote storage with &amp;#39;:&amp;#39; plus optional path.
                Type &amp;#39;rclone listremotes&amp;#39; for list of configured remotes.

Optional Flags:
      --backup-dir1 string                   --backup-dir for Path1. Must be a non-overlapping path on the same remote.
      --backup-dir2 string                   --backup-dir for Path2. Must be a non-overlapping path on the same remote.
      --check-access                         Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
      --check-filename string                Filename for --check-access (default: RCLONE_TEST)
      --check-sync string                    Controls comparison of final listings: true|false|only (default: true) (default &amp;#34;true&amp;#34;)
      --compare string                       Comma-separated list of bisync-specific compare options ex. &amp;#39;size,modtime,checksum&amp;#39; (default: &amp;#39;size,modtime&amp;#39;)
      --conflict-loser ConflictLoserAction   Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num)
      --conflict-resolve string              Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default &amp;#34;none&amp;#34;)
      --conflict-suffix string               Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: &amp;#39;conflict&amp;#39;)
      --create-empty-src-dirs                Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs)
      --download-hash                        Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!)
      --filters-file string                  Read filtering patterns from a file
      --force                                Bypass --max-delete safety check and run the sync. Consider using with --verbose
  -h, --help                                 help for bisync
      --ignore-listing-checksum              Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)
      --max-lock Duration                    Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s)
      --no-cleanup                           Retain working files (useful for troubleshooting and testing).
      --no-slow-hash                         Ignore listing checksums only on backends where they are slow
      --recover                              Automatically recover from interruptions without requiring --resync.
      --remove-empty-dirs                    Remove ALL empty directories at the final cleanup step.
      --resilient                            Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk!
  -1, --resync                               Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first.
      --resync-mode string                   During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default &amp;#34;none&amp;#34;)
      --retries int                          Retry operations this many times if they fail (requires --resilient). (default 3)
      --retries-sleep Duration               Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s)
      --slow-hash-sync-only                  Ignore slow checksums for listings and deltas, but still consider them during sync calls.
      --workdir string                       Use custom working dir - useful for testing. (default: {WORKDIR})
      --max-delete PERCENT                   Safety check on maximum percentage of deleted files allowed. If exceeded, the bisync run will abort. (default: 50%)
  -n, --dry-run                              Go through the motions - No files are copied/deleted.
  -v, --verbose                              Increases logging verbosity. May be specified more than once for more details.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Arbitrary rclone flags may be specified on the
&lt;a href=&#34;https://rclone.org/commands/rclone_bisync/&#34;&gt;bisync command line&lt;/a&gt;, for example
&lt;code&gt;rclone bisync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s&lt;/code&gt;.
Note that the interactions of various rclone flags with the bisync process flow
have not been fully tested yet.&lt;/p&gt;
&lt;h3 id=&#34;paths&#34;&gt;Paths&lt;/h3&gt;
&lt;p&gt;Path1 and Path2 arguments may be references to any mix of local directory
paths (absolute or relative), UNC paths (&lt;code&gt;//server/share/path&lt;/code&gt;),
Windows drive paths (with a drive letter and &lt;code&gt;:&lt;/code&gt;) or configured
&lt;a href=&#34;https://rclone.org/docs/#syntax-of-remote-paths&#34;&gt;remotes&lt;/a&gt; with optional subdirectory paths.
Cloud references are distinguished by having a &lt;code&gt;:&lt;/code&gt; in the argument
(see &lt;a href=&#34;#windows&#34;&gt;Windows support&lt;/a&gt; below).&lt;/p&gt;
&lt;p&gt;Path1 and Path2 are treated equally, in that neither has priority for
file changes (except during &lt;a href=&#34;#resync&#34;&gt;&lt;code&gt;--resync&lt;/code&gt;&lt;/a&gt;), and access efficiency does not change whether a remote
is on Path1 or Path2.&lt;/p&gt;
&lt;p&gt;The listings in bisync working directory (default: &lt;code&gt;~/.cache/rclone/bisync&lt;/code&gt;)
are named based on the Path1 and Path2 arguments so that separate syncs
to individual directories within the tree may be set up, e.g.:
&lt;code&gt;path_to_local_tree..dropbox_subdir.lst&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;By default, directories that are empty after the sync are not deleted on
either the Path1 or Path2 filesystem, unless &lt;code&gt;--create-empty-src-dirs&lt;/code&gt; is specified.
If the &lt;code&gt;--remove-empty-dirs&lt;/code&gt; flag is specified, then both paths will have ALL empty directories purged
as the last step in the process.&lt;/p&gt;
&lt;h2 id=&#34;command-line-flags&#34;&gt;Command-line flags&lt;/h2&gt;
&lt;h3 id=&#34;resync&#34;&gt;--resync&lt;/h3&gt;
&lt;p&gt;This will effectively make both Path1 and Path2 filesystems contain a
matching superset of all files. By default, Path2 files that do not exist in Path1 will
be copied to Path1, and the process will then copy the Path1 tree to Path2.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;--resync&lt;/code&gt; sequence is roughly equivalent to the following (but see &lt;a href=&#34;#resync-mode&#34;&gt;&lt;code&gt;--resync-mode&lt;/code&gt;&lt;/a&gt; for other options):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs]
rclone copy Path1 Path2 [--create-empty-src-dirs]
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The base directories on both Path1 and Path2 filesystems must exist
or bisync will fail. This is required for safety, so that bisync can verify
that both paths are valid.&lt;/p&gt;
&lt;p&gt;When using &lt;code&gt;--resync&lt;/code&gt;, a newer version of a file on the Path2 filesystem
will (by default) be overwritten by the Path1 filesystem version.
(Note that this is &lt;a href=&#34;https://github.com/rclone/rclone/issues/5681#issuecomment-938761815&#34;&gt;NOT entirely symmetrical&lt;/a&gt;, and more symmetrical options can be specified with the &lt;a href=&#34;#resync-mode&#34;&gt;&lt;code&gt;--resync-mode&lt;/code&gt;&lt;/a&gt; flag.)
Carefully evaluate deltas using &lt;a href=&#34;https://rclone.org/flags/#non-backend-flags&#34;&gt;--dry-run&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For a resync run, one of the paths may be empty (no files in the path tree).
The resync run should result in files on both paths, else a normal non-resync
run will fail.&lt;/p&gt;
&lt;p&gt;For a non-resync run, either path being empty (no files in the tree) fails with
&lt;code&gt;Empty current PathN listing. Cannot sync to an empty directory: X.pathN.lst&lt;/code&gt;.
This is a safety check that an unexpected empty path does not result in
deleting &lt;strong&gt;everything&lt;/strong&gt; in the other path.&lt;/p&gt;
&lt;p&gt;Note that &lt;code&gt;--resync&lt;/code&gt; implies &lt;code&gt;--resync-mode path1&lt;/code&gt; unless a different
&lt;a href=&#34;#resync-mode&#34;&gt;&lt;code&gt;--resync-mode&lt;/code&gt;&lt;/a&gt; is explicitly specified.
It is not necessary to use both the &lt;code&gt;--resync&lt;/code&gt; and &lt;code&gt;--resync-mode&lt;/code&gt; flags --
either one is sufficient without the other.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;code&gt;--resync&lt;/code&gt; (including &lt;code&gt;--resync-mode&lt;/code&gt;) should only be used under three specific (rare) circumstances:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;It is your &lt;em&gt;first&lt;/em&gt; bisync run (between these two paths)&lt;/li&gt;
&lt;li&gt;You&#39;ve just made changes to your bisync settings (such as editing the contents of your &lt;code&gt;--filters-file&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;There was an error on the prior run, and as a result, bisync now requires &lt;code&gt;--resync&lt;/code&gt; to recover&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The rest of the time, you should &lt;em&gt;omit&lt;/em&gt; &lt;code&gt;--resync&lt;/code&gt;. This is because &lt;code&gt;--resync&lt;/code&gt; will only &lt;em&gt;copy&lt;/em&gt; (not &lt;em&gt;sync&lt;/em&gt;) each side to the other.
Therefore, if you included &lt;code&gt;--resync&lt;/code&gt; for every bisync run, it would never be possible to delete a file --
the deleted file would always keep reappearing at the end of every run (because it&#39;s being copied from the other side where it still exists).
Similarly, renaming a file would always result in a duplicate copy (both old and new name) on both sides.&lt;/p&gt;
&lt;p&gt;If you find that frequent interruptions from #3 are an issue, rather than
automatically running &lt;code&gt;--resync&lt;/code&gt;, the recommended alternative is to use the
&lt;a href=&#34;#resilient&#34;&gt;&lt;code&gt;--resilient&lt;/code&gt;&lt;/a&gt;, &lt;a href=&#34;#recover&#34;&gt;&lt;code&gt;--recover&lt;/code&gt;&lt;/a&gt;, and
&lt;a href=&#34;#conflict-resolve&#34;&gt;&lt;code&gt;--conflict-resolve&lt;/code&gt;&lt;/a&gt; flags, (along with &lt;a href=&#34;#graceful-shutdown&#34;&gt;Graceful
Shutdown&lt;/a&gt; mode, when needed) for a very robust
&amp;quot;set-it-and-forget-it&amp;quot; bisync setup that can automatically bounce back from
almost any interruption it might encounter. Consider adding something like the
following:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;--resilient --recover --max-lock 2m --conflict-resolve newer
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;resync-mode&#34;&gt;--resync-mode CHOICE&lt;/h3&gt;
&lt;p&gt;In the event that a file differs on both sides during a &lt;code&gt;--resync&lt;/code&gt;,
&lt;code&gt;--resync-mode&lt;/code&gt; controls which version will overwrite the other. The supported
options are similar to &lt;a href=&#34;#conflict-resolve&#34;&gt;&lt;code&gt;--conflict-resolve&lt;/code&gt;&lt;/a&gt;. For all of
the following options, the version that is kept is referred to as the &amp;quot;winner&amp;quot;,
and the version that is overwritten (deleted) is referred to as the &amp;quot;loser&amp;quot;.
The options are named after the &amp;quot;winner&amp;quot;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;path1&lt;/code&gt; - (the default) - the version from Path1 is unconditionally
considered the winner (regardless of &lt;code&gt;modtime&lt;/code&gt; and &lt;code&gt;size&lt;/code&gt;, if any). This can be
useful if one side is more trusted or up-to-date than the other, at the time of
the &lt;code&gt;--resync&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;path2&lt;/code&gt; - same as &lt;code&gt;path1&lt;/code&gt;, except the path2 version is considered the winner.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;newer&lt;/code&gt; - the newer file (by &lt;code&gt;modtime&lt;/code&gt;) is considered the winner, regardless
of which side it came from. This may result in having a mix of some winners
from Path1, and some winners from Path2. (The implementation is analogous to
running &lt;code&gt;rclone copy --update&lt;/code&gt; in both directions.)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;older&lt;/code&gt; - same as &lt;code&gt;newer&lt;/code&gt;, except the older file is considered the winner,
and the newer file is considered the loser.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;larger&lt;/code&gt; - the larger file (by &lt;code&gt;size&lt;/code&gt;) is considered the winner (regardless
of &lt;code&gt;modtime&lt;/code&gt;, if any). This can be a useful option for remotes without
&lt;code&gt;modtime&lt;/code&gt; support, or with the kinds of files (such as logs) that tend to grow
but not shrink, over time.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;smaller&lt;/code&gt; - the smaller file (by &lt;code&gt;size&lt;/code&gt;) is considered the winner (regardless
of &lt;code&gt;modtime&lt;/code&gt;, if any).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For all of the above options, note the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If either of the underlying remotes lacks support for the chosen method, it
will be ignored and will fall back to the default of &lt;code&gt;path1&lt;/code&gt;. (For example, if
&lt;code&gt;--resync-mode newer&lt;/code&gt; is set, but one of the paths uses a remote that doesn&#39;t
support &lt;code&gt;modtime&lt;/code&gt;.)&lt;/li&gt;
&lt;li&gt;If a winner can&#39;t be determined because the chosen method&#39;s attribute is
missing or equal, it will be ignored, and bisync will instead try to determine
whether the files differ by looking at the other &lt;code&gt;--compare&lt;/code&gt; methods in effect.
(For example, if &lt;code&gt;--resync-mode newer&lt;/code&gt; is set, but the Path1 and Path2 modtimes
are identical, bisync will compare the sizes.) If bisync concludes that they
differ, preference is given to whichever is the &amp;quot;source&amp;quot; at that moment. (In
practice, this gives a slight advantage to Path2, as the 2to1 copy comes before
the 1to2 copy.) If the files &lt;em&gt;do not&lt;/em&gt; differ, nothing is copied (as both sides
are already correct).&lt;/li&gt;
&lt;li&gt;These options apply only to files that exist on both sides (with the same
name and relative path). Files that exist &lt;em&gt;only&lt;/em&gt; on one side and not the other
are &lt;em&gt;always&lt;/em&gt; copied to the other during &lt;code&gt;--resync&lt;/code&gt; (this is one of the main
differences between resync and non-resync runs).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;--conflict-resolve&lt;/code&gt;, &lt;code&gt;--conflict-loser&lt;/code&gt;, and &lt;code&gt;--conflict-suffix&lt;/code&gt; do not
apply during &lt;code&gt;--resync&lt;/code&gt;, and unlike these flags, nothing is renamed during
&lt;code&gt;--resync&lt;/code&gt;. When a file differs on both sides during &lt;code&gt;--resync&lt;/code&gt;, one version
always overwrites the other (much like in &lt;code&gt;rclone copy&lt;/code&gt;.) (Consider using
&lt;a href=&#34;#backup-dir1-and-backup-dir2&#34;&gt;&lt;code&gt;--backup-dir&lt;/code&gt;&lt;/a&gt; to retain a backup of the losing
version.)&lt;/li&gt;
&lt;li&gt;Unlike for &lt;code&gt;--conflict-resolve&lt;/code&gt;, &lt;code&gt;--resync-mode none&lt;/code&gt; is not a valid option
(or rather, it will be interpreted as &amp;quot;no resync&amp;quot;, unless &lt;code&gt;--resync&lt;/code&gt; has also
been specified, in which case it will be ignored.)&lt;/li&gt;
&lt;li&gt;Winners and losers are decided at the individual file-level only (there is
not currently an option to pick an entire winning directory atomically,
although the &lt;code&gt;path1&lt;/code&gt; and &lt;code&gt;path2&lt;/code&gt; options typically produce a similar result.)&lt;/li&gt;
&lt;li&gt;To maintain backward-compatibility, the &lt;code&gt;--resync&lt;/code&gt; flag implies
&lt;code&gt;--resync-mode path1&lt;/code&gt; unless a different &lt;code&gt;--resync-mode&lt;/code&gt; is explicitly
specified. Similarly, all &lt;code&gt;--resync-mode&lt;/code&gt; options (except &lt;code&gt;none&lt;/code&gt;) imply
&lt;code&gt;--resync&lt;/code&gt;, so it is not necessary to use both the &lt;code&gt;--resync&lt;/code&gt; and
&lt;code&gt;--resync-mode&lt;/code&gt; flags simultaneously -- either one is sufficient without the
other.&lt;/li&gt;
&lt;/ul&gt;
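&lt;p&gt;For example, to resync two trees keeping the most recently modified
version of any file that differs on both sides (remote name and paths are
placeholders; per the note above, &lt;code&gt;--resync-mode newer&lt;/code&gt; implies
&lt;code&gt;--resync&lt;/code&gt;):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync /local/path remote:path --resync-mode newer
&lt;/code&gt;&lt;/pre&gt;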
&lt;h3 id=&#34;check-access&#34;&gt;--check-access&lt;/h3&gt;
&lt;p&gt;Access check files are an additional safety measure against data loss.
bisync will ensure it can find matching &lt;code&gt;RCLONE_TEST&lt;/code&gt; files in the same places
in the Path1 and Path2 filesystems.
&lt;code&gt;RCLONE_TEST&lt;/code&gt; files are not generated automatically.
For &lt;code&gt;--check-access&lt;/code&gt; to succeed, you must first either:
&lt;strong&gt;A)&lt;/strong&gt; Place one or more &lt;code&gt;RCLONE_TEST&lt;/code&gt; files in both systems, or
&lt;strong&gt;B)&lt;/strong&gt; Set &lt;code&gt;--check-filename&lt;/code&gt; to a filename already in use in various locations
throughout your sync&#39;d fileset. Recommended methods for &lt;strong&gt;A)&lt;/strong&gt; include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;rclone touch Path1/RCLONE_TEST&lt;/code&gt; (create a new file)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rclone copyto Path1/RCLONE_TEST Path2/RCLONE_TEST&lt;/code&gt; (copy an existing file)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rclone copy Path1 Path2 --include &amp;quot;RCLONE_TEST&amp;quot;&lt;/code&gt; (copy multiple files at once, recursively)&lt;/li&gt;
&lt;li&gt;create the files manually (outside of rclone)&lt;/li&gt;
&lt;li&gt;running &lt;code&gt;bisync&lt;/code&gt; once &lt;em&gt;without&lt;/em&gt; &lt;code&gt;--check-access&lt;/code&gt; to set matching files on both filesystems
will also work, but is not preferred, due to the potential for user error
(you are temporarily disabling the safety feature).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Note that &lt;code&gt;--check-access&lt;/code&gt; is still enforced on &lt;code&gt;--resync&lt;/code&gt;, so &lt;code&gt;bisync --resync --check-access&lt;/code&gt;
will not work as a method of initially setting the files (this is to ensure that bisync can&#39;t
&lt;a href=&#34;https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=3.%20%2D%2Dcheck%2Daccess%20doesn%27t%20always%20fail%20when%20it%20should&#34;&gt;inadvertently circumvent its own safety switch&lt;/a&gt;.)&lt;/p&gt;
&lt;p&gt;Time stamps and file contents for &lt;code&gt;RCLONE_TEST&lt;/code&gt; files are not important, just the names and locations.
If you have symbolic links in your sync tree, it is recommended to place
&lt;code&gt;RCLONE_TEST&lt;/code&gt; files in the linked-to directory tree, to protect against
bisync assuming a bunch of deleted files if the linked-to tree becomes
inaccessible.
See also the &lt;a href=&#34;#check-filename&#34;&gt;--check-filename&lt;/a&gt; flag.&lt;/p&gt;
&lt;h3 id=&#34;check-filename&#34;&gt;--check-filename&lt;/h3&gt;
&lt;p&gt;Name of the file(s) used in access health validation.
The default &lt;code&gt;--check-filename&lt;/code&gt; is &lt;code&gt;RCLONE_TEST&lt;/code&gt;.
One or more files having this filename must exist, synchronized between your
source and destination filesets, in order for &lt;code&gt;--check-access&lt;/code&gt; to succeed.
See &lt;a href=&#34;#check-access&#34;&gt;--check-access&lt;/a&gt; for additional details.&lt;/p&gt;
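&lt;p&gt;For example, a run using a custom check filename (here &lt;code&gt;.checkaccess&lt;/code&gt;, an
arbitrary illustrative name) might look like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync Path1 Path2 --check-access --check-filename .checkaccess
&lt;/code&gt;&lt;/pre&gt;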
&lt;h3 id=&#34;compare&#34;&gt;--compare&lt;/h3&gt;
&lt;p&gt;As of &lt;code&gt;v1.66&lt;/code&gt;, bisync fully supports comparing based on any combination of
size, modtime, and checksum (lifting the prior restriction on backends without
modtime support.)&lt;/p&gt;
&lt;p&gt;By default (without the &lt;code&gt;--compare&lt;/code&gt; flag), bisync inherits the same comparison
options as &lt;code&gt;sync&lt;/code&gt;
(that is: &lt;code&gt;size&lt;/code&gt; and &lt;code&gt;modtime&lt;/code&gt; by default, unless modified with flags such as
&lt;a href=&#34;https://rclone.org/docs/#c-checksum&#34;&gt;&lt;code&gt;--checksum&lt;/code&gt;&lt;/a&gt; or &lt;a href=&#34;https://rclone.org/docs/#size-only&#34;&gt;&lt;code&gt;--size-only&lt;/code&gt;&lt;/a&gt;.)&lt;/p&gt;
&lt;p&gt;If the &lt;code&gt;--compare&lt;/code&gt; flag is set, it will override these defaults. This can be
useful if you wish to compare based on combinations not currently supported in
&lt;code&gt;sync&lt;/code&gt;, such as comparing all three of &lt;code&gt;size&lt;/code&gt; AND &lt;code&gt;modtime&lt;/code&gt; AND &lt;code&gt;checksum&lt;/code&gt;
simultaneously (or just &lt;code&gt;modtime&lt;/code&gt; AND &lt;code&gt;checksum&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;&lt;code&gt;--compare&lt;/code&gt; takes a comma-separated list, with the currently supported values
being &lt;code&gt;size&lt;/code&gt;, &lt;code&gt;modtime&lt;/code&gt;, and &lt;code&gt;checksum&lt;/code&gt;. For example, if you want to compare
size and checksum, but not modtime, you would do:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;--compare size,checksum
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Or if you want to compare all three:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;--compare size,modtime,checksum
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;code&gt;--compare&lt;/code&gt; overrides any conflicting flags. For example, if you set the
conflicting flags &lt;code&gt;--compare checksum --size-only&lt;/code&gt;, &lt;code&gt;--size-only&lt;/code&gt; will be
ignored, and bisync will compare checksum and not size. To avoid confusion, it
is recommended to use &lt;em&gt;either&lt;/em&gt; &lt;code&gt;--compare&lt;/code&gt; or the normal &lt;code&gt;sync&lt;/code&gt; flags, but not
both.&lt;/p&gt;
&lt;p&gt;If &lt;code&gt;--compare&lt;/code&gt; includes &lt;code&gt;checksum&lt;/code&gt; and both remotes support checksums but have
no hash types in common with each other, checksums will be considered &lt;em&gt;only&lt;/em&gt;
for comparisons within the same side (to determine what has changed since the
prior sync), but not for comparisons against the opposite side. If one side
supports checksums and the other does not, checksums will only be considered on
the side that supports them.&lt;/p&gt;
&lt;p&gt;When comparing with &lt;code&gt;checksum&lt;/code&gt; and/or &lt;code&gt;size&lt;/code&gt; without &lt;code&gt;modtime&lt;/code&gt;, bisync cannot
determine whether a file is &lt;code&gt;newer&lt;/code&gt; or &lt;code&gt;older&lt;/code&gt; -- only whether it is &lt;code&gt;changed&lt;/code&gt;
or &lt;code&gt;unchanged&lt;/code&gt;. (If it is &lt;code&gt;changed&lt;/code&gt; on both sides, bisync still does the
standard equality-check to avoid declaring a sync conflict unless it absolutely
has to.)&lt;/p&gt;
&lt;p&gt;It is recommended to do a &lt;code&gt;--resync&lt;/code&gt; when changing &lt;code&gt;--compare&lt;/code&gt; settings, as
otherwise your prior listing files may not contain the attributes you wish to
compare (for example, they will not have stored checksums if you were not
previously comparing checksums.)&lt;/p&gt;
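&lt;p&gt;For example, to switch to comparing all three attributes, an illustrative
sequence (assuming &lt;code&gt;Path1&lt;/code&gt; and &lt;code&gt;Path2&lt;/code&gt; are your configured bisync paths) would be:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync Path1 Path2 --compare size,modtime,checksum --resync
rclone bisync Path1 Path2 --compare size,modtime,checksum
&lt;/code&gt;&lt;/pre&gt;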
&lt;h3 id=&#34;ignore-listing-checksum&#34;&gt;--ignore-listing-checksum&lt;/h3&gt;
&lt;p&gt;When &lt;code&gt;--checksum&lt;/code&gt; or &lt;code&gt;--compare checksum&lt;/code&gt; is set, bisync will retrieve (or
generate) checksums (for backends that support them) when creating the listings
for both paths, and store the checksums in the listing files.
&lt;code&gt;--ignore-listing-checksum&lt;/code&gt; will disable this behavior, which may speed things
up considerably, especially on backends (such as &lt;a href=&#34;https://rclone.org/local/&#34;&gt;local&lt;/a&gt;) where hashes
must be computed on the fly instead of retrieved. Please note the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;As of &lt;code&gt;v1.66&lt;/code&gt;, &lt;code&gt;--ignore-listing-checksum&lt;/code&gt; is now automatically set when
neither &lt;code&gt;--checksum&lt;/code&gt; nor &lt;code&gt;--compare checksum&lt;/code&gt; are in use (as the checksums
would not be used for anything.)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;--ignore-listing-checksum&lt;/code&gt; is NOT the same as
&lt;a href=&#34;https://rclone.org/docs/#ignore-checksum&#34;&gt;&lt;code&gt;--ignore-checksum&lt;/code&gt;&lt;/a&gt;,
and you may wish to use one or the other, or both. In a nutshell:
&lt;code&gt;--ignore-listing-checksum&lt;/code&gt; controls whether checksums are considered when
scanning for diffs,
while &lt;code&gt;--ignore-checksum&lt;/code&gt; controls whether checksums are considered during the
copy/sync operations that follow,
if there ARE diffs.&lt;/li&gt;
&lt;li&gt;Unless &lt;code&gt;--ignore-listing-checksum&lt;/code&gt; is passed, bisync currently computes
hashes for one path
&lt;em&gt;even when there&#39;s no common hash with the other path&lt;/em&gt;
(for example, a &lt;a href=&#34;https://rclone.org/crypt/#modification-times-and-hashes&#34;&gt;crypt&lt;/a&gt; remote.)
This can still be beneficial, as the hashes will still be used to detect
changes within the same side
(if &lt;code&gt;--checksum&lt;/code&gt; or &lt;code&gt;--compare checksum&lt;/code&gt; is set), even if they can&#39;t be used to
compare against the opposite side.&lt;/li&gt;
&lt;li&gt;If you wish to ignore listing checksums &lt;em&gt;only&lt;/em&gt; on remotes where they are slow
to compute, consider using
&lt;a href=&#34;#no-slow-hash&#34;&gt;&lt;code&gt;--no-slow-hash&lt;/code&gt;&lt;/a&gt; (or
&lt;a href=&#34;#slow-hash-sync-only&#34;&gt;&lt;code&gt;--slow-hash-sync-only&lt;/code&gt;&lt;/a&gt;) instead of
&lt;code&gt;--ignore-listing-checksum&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If &lt;code&gt;--ignore-listing-checksum&lt;/code&gt; is used simultaneously with &lt;code&gt;--compare checksum&lt;/code&gt; (or &lt;code&gt;--checksum&lt;/code&gt;), checksums will be ignored for bisync deltas,
but still considered during the sync operations that follow (if deltas are
detected based on modtime and/or size.)&lt;/li&gt;
&lt;/ul&gt;
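&lt;p&gt;As an illustrative example of the last point above, the following would ignore
checksums when scanning for deltas, but still consider them during the sync
operations that follow:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync Path1 Path2 --checksum --ignore-listing-checksum
&lt;/code&gt;&lt;/pre&gt;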
&lt;h3 id=&#34;no-slow-hash&#34;&gt;--no-slow-hash&lt;/h3&gt;
&lt;p&gt;On some remotes (notably &lt;code&gt;local&lt;/code&gt;), checksums can dramatically slow down a
bisync run, because hashes cannot be stored and need to be computed in
real-time when they are requested. On other remotes (such as &lt;code&gt;drive&lt;/code&gt;), they add
practically no time at all. The &lt;code&gt;--no-slow-hash&lt;/code&gt; flag will automatically skip
checksums on remotes where they are slow, while still comparing them on others
(assuming &lt;a href=&#34;#compare&#34;&gt;&lt;code&gt;--compare&lt;/code&gt;&lt;/a&gt; includes &lt;code&gt;checksum&lt;/code&gt;.) This can be useful when one of your
bisync paths is slow but you still want to check checksums on the other, for a more
robust sync.&lt;/p&gt;
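&lt;p&gt;For example, if one path is a local disk (where hashes are slow) and the other
is a cloud remote with stored hashes, a run like the following (illustrative
paths) would skip checksums on the local side only:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync /local/path remote:path --compare size,modtime,checksum --no-slow-hash
&lt;/code&gt;&lt;/pre&gt;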
&lt;h3 id=&#34;slow-hash-sync-only&#34;&gt;--slow-hash-sync-only&lt;/h3&gt;
&lt;p&gt;Same as &lt;a href=&#34;#no-slow-hash&#34;&gt;&lt;code&gt;--no-slow-hash&lt;/code&gt;&lt;/a&gt;, except slow hashes are still
considered during sync calls. They are still NOT considered for determining
deltas, nor are they included in listings. They are also skipped during
&lt;code&gt;--resync&lt;/code&gt;. The main use case for this flag is when you have a large number of
files, but relatively few of them change from run to run -- so you don&#39;t want
to check your entire tree every time (it would take too long), but you still
want to consider checksums for the smaller group of files for which a &lt;code&gt;modtime&lt;/code&gt;
or &lt;code&gt;size&lt;/code&gt; change was detected. Keep in mind that this speed savings comes with
a safety trade-off: if a file&#39;s content were to change without a change to its
&lt;code&gt;modtime&lt;/code&gt; or &lt;code&gt;size&lt;/code&gt;, bisync would not detect it, and it would not be synced.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;--slow-hash-sync-only&lt;/code&gt; is only useful if both remotes share a common hash
type (if they don&#39;t, bisync will automatically fall back to &lt;code&gt;--no-slow-hash&lt;/code&gt;.)
Both &lt;code&gt;--no-slow-hash&lt;/code&gt; and &lt;code&gt;--slow-hash-sync-only&lt;/code&gt; have no effect without
&lt;code&gt;--compare checksum&lt;/code&gt; (or &lt;code&gt;--checksum&lt;/code&gt;).&lt;/p&gt;
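&lt;p&gt;An illustrative example, computing slow local hashes only for files whose
&lt;code&gt;size&lt;/code&gt; or &lt;code&gt;modtime&lt;/code&gt; changed:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync /local/path remote:path --compare size,modtime,checksum --slow-hash-sync-only
&lt;/code&gt;&lt;/pre&gt;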
&lt;h3 id=&#34;download-hash&#34;&gt;--download-hash&lt;/h3&gt;
&lt;p&gt;If &lt;code&gt;--download-hash&lt;/code&gt; is set, bisync will use best efforts to obtain an MD5
checksum by downloading and computing on-the-fly, when checksums are not
otherwise available (for example, a remote that doesn&#39;t support them.) Note
that since rclone has to download the entire file, this may dramatically slow
down your bisync runs, and is also likely to use a lot of data, so it is
probably not practical for bisync paths with a large total file size. However,
it can be a good option for syncing small-but-important files with maximum
accuracy (for example, a source code repo on a &lt;code&gt;crypt&lt;/code&gt; remote.) An additional
advantage over methods like &lt;a href=&#34;https://rclone.org/commands/rclone_cryptcheck/&#34;&gt;&lt;code&gt;cryptcheck&lt;/code&gt;&lt;/a&gt; is
that the original file is not required for comparison (for example,
&lt;code&gt;--download-hash&lt;/code&gt; can be used to bisync two different crypt remotes with
different passwords.)&lt;/p&gt;
&lt;p&gt;When &lt;code&gt;--download-hash&lt;/code&gt; is set, bisync still looks for more efficient checksums
first, and falls back to downloading only when none are found. It takes
priority over conflicting flags such as &lt;code&gt;--no-slow-hash&lt;/code&gt;. &lt;code&gt;--download-hash&lt;/code&gt; is
not suitable for &lt;a href=&#34;#gdocs&#34;&gt;Google Docs&lt;/a&gt; and other files of unknown size, as
their checksums would change from run to run (due to small variances in the
internals of the generated export file.) Therefore, bisync automatically skips
&lt;code&gt;--download-hash&lt;/code&gt; for files with a size less than 0.&lt;/p&gt;
&lt;p&gt;See also: &lt;a href=&#34;https://rclone.org/hasher/&#34;&gt;&lt;code&gt;Hasher&lt;/code&gt;&lt;/a&gt; backend,
&lt;a href=&#34;https://rclone.org/commands/rclone_cryptcheck/&#34;&gt;&lt;code&gt;cryptcheck&lt;/code&gt;&lt;/a&gt; command, &lt;a href=&#34;https://rclone.org/commands/rclone_check/&#34;&gt;&lt;code&gt;rclone check --download&lt;/code&gt;&lt;/a&gt; option,
&lt;a href=&#34;https://rclone.org/commands/rclone_md5sum/&#34;&gt;&lt;code&gt;md5sum&lt;/code&gt;&lt;/a&gt; command&lt;/p&gt;
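&lt;p&gt;For example, to bisync two crypt remotes (with different passwords) while
comparing content via downloaded MD5s, an illustrative command would be:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync crypt1:path crypt2:path --compare size,modtime,checksum --download-hash
&lt;/code&gt;&lt;/pre&gt;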
&lt;h3 id=&#34;max-delete&#34;&gt;--max-delete&lt;/h3&gt;
&lt;p&gt;As a safety check, if greater than the &lt;code&gt;--max-delete&lt;/code&gt; percent of files were
deleted on either the Path1 or Path2 filesystem, then bisync will abort with
a warning message, without making any changes.
The default &lt;code&gt;--max-delete&lt;/code&gt; is &lt;code&gt;50%&lt;/code&gt;.
One way to trigger this limit is to rename a directory that contains more
than half of your files. This will appear to bisync as a bunch of deleted
files and a bunch of new files.
This safety check is intended to block bisync from deleting all of the
files on both filesystems due to a temporary network access issue, or if
the user had inadvertently deleted the files on one side or the other.
To force the sync, either set a different delete percentage limit,
e.g. &lt;code&gt;--max-delete 75&lt;/code&gt; (allows up to 75% deletion), or use &lt;code&gt;--force&lt;/code&gt;
to bypass the check.&lt;/p&gt;
&lt;p&gt;Also see the &lt;a href=&#34;#all-files-changed&#34;&gt;all files changed&lt;/a&gt; check.&lt;/p&gt;
&lt;h3 id=&#34;filters-file&#34;&gt;--filters-file&lt;/h3&gt;
&lt;p&gt;By using rclone filter features you can exclude file types or directory
sub-trees from the sync.
See the &lt;a href=&#34;#filtering&#34;&gt;bisync filters&lt;/a&gt; section and generic
&lt;a href=&#34;https://rclone.org/filtering/#filter-from-read-filtering-patterns-from-a-file&#34;&gt;--filter-from&lt;/a&gt;
documentation.
An &lt;a href=&#34;#example-filters-file&#34;&gt;example filters file&lt;/a&gt; contains filters
excluding files that are not allowed when syncing with Dropbox.&lt;/p&gt;
&lt;p&gt;If you make changes to your filters file then bisync requires a run
with &lt;code&gt;--resync&lt;/code&gt;. This is a safety feature, which prevents existing files
on the Path1 and/or Path2 side from seeming to disappear from view
(since they are excluded in the new listings), which would fool bisync
into seeing them as deleted (as compared to the prior run listings),
and then bisync would proceed to delete them for real.&lt;/p&gt;
&lt;p&gt;To block this from happening, bisync calculates an MD5 hash of the filters file
and stores the hash in a &lt;code&gt;.md5&lt;/code&gt; file in the same place as your filters file.
On the next run with &lt;code&gt;--filters-file&lt;/code&gt; set, bisync re-calculates the MD5 hash
of the current filters file and compares it to the hash stored in the &lt;code&gt;.md5&lt;/code&gt; file.
If they don&#39;t match, the run aborts with a critical error and thus forces you
to do a &lt;code&gt;--resync&lt;/code&gt;, likely avoiding a disaster.&lt;/p&gt;
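&lt;p&gt;For example, after editing your filters file, a typical next run (with
illustrative paths) would be:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync Path1 Path2 --filters-file /path/to/filters.txt --resync
&lt;/code&gt;&lt;/pre&gt;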
&lt;h3 id=&#34;conflict-resolve&#34;&gt;--conflict-resolve CHOICE&lt;/h3&gt;
&lt;p&gt;In bisync, a &amp;quot;conflict&amp;quot; is a file that is &lt;em&gt;new&lt;/em&gt; or &lt;em&gt;changed&lt;/em&gt; on &lt;em&gt;both sides&lt;/em&gt;
(relative to the prior run) AND is &lt;em&gt;not currently identical&lt;/em&gt; on both sides.
&lt;code&gt;--conflict-resolve&lt;/code&gt; controls how bisync handles such a scenario. The currently
supported options are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;none&lt;/code&gt; - (the default) - do not attempt to pick a winner, keep and rename
both files according to &lt;a href=&#34;#conflict-loser&#34;&gt;&lt;code&gt;--conflict-loser&lt;/code&gt;&lt;/a&gt; and
&lt;a href=&#34;#conflict-suffix&#34;&gt;&lt;code&gt;--conflict-suffix&lt;/code&gt;&lt;/a&gt; settings. For example, with the default
settings, &lt;code&gt;file.txt&lt;/code&gt; on Path1 is renamed &lt;code&gt;file.txt.conflict1&lt;/code&gt; and &lt;code&gt;file.txt&lt;/code&gt; on
Path2 is renamed &lt;code&gt;file.txt.conflict2&lt;/code&gt;. Both are copied to the opposite path
during the run, so both sides end up with a copy of both files. (As &lt;code&gt;none&lt;/code&gt; is
the default, it is not necessary to specify &lt;code&gt;--conflict-resolve none&lt;/code&gt; -- you
can just omit the flag.)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;newer&lt;/code&gt; - the newer file (by &lt;code&gt;modtime&lt;/code&gt;) is considered the winner and is
copied without renaming. The older file (the &amp;quot;loser&amp;quot;) is handled according to
&lt;code&gt;--conflict-loser&lt;/code&gt; and &lt;code&gt;--conflict-suffix&lt;/code&gt; settings (either renamed or
deleted.) For example, if &lt;code&gt;file.txt&lt;/code&gt; on Path1 is newer than &lt;code&gt;file.txt&lt;/code&gt; on
Path2, the result on both sides (with other default settings) will be &lt;code&gt;file.txt&lt;/code&gt;
(winner from Path1) and &lt;code&gt;file.txt.conflict1&lt;/code&gt; (loser from Path2).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;older&lt;/code&gt; - same as &lt;code&gt;newer&lt;/code&gt;, except the older file is considered the winner,
and the newer file is considered the loser.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;larger&lt;/code&gt; - the larger file (by &lt;code&gt;size&lt;/code&gt;) is considered the winner (regardless
of &lt;code&gt;modtime&lt;/code&gt;, if any).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;smaller&lt;/code&gt; - the smaller file (by &lt;code&gt;size&lt;/code&gt;) is considered the winner (regardless
of &lt;code&gt;modtime&lt;/code&gt;, if any).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;path1&lt;/code&gt; - the version from Path1 is unconditionally considered the winner
(regardless of &lt;code&gt;modtime&lt;/code&gt; and &lt;code&gt;size&lt;/code&gt;, if any). This can be useful if one side is
usually more trusted or up-to-date than the other.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;path2&lt;/code&gt; - same as &lt;code&gt;path1&lt;/code&gt;, except the path2 version is considered the
winner.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For all of the above options, note the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If either of the underlying remotes lacks support for the chosen method, it
will be ignored and fall back to &lt;code&gt;none&lt;/code&gt;. (For example, if &lt;code&gt;--conflict-resolve newer&lt;/code&gt; is set, but one of the paths uses a remote that doesn&#39;t support
&lt;code&gt;modtime&lt;/code&gt;.)&lt;/li&gt;
&lt;li&gt;If a winner can&#39;t be determined because the chosen method&#39;s attribute is
missing or equal, it will be ignored and fall back to &lt;code&gt;none&lt;/code&gt;. (For example, if
&lt;code&gt;--conflict-resolve newer&lt;/code&gt; is set, but the Path1 and Path2 modtimes are
identical, even if the sizes may differ.)&lt;/li&gt;
&lt;li&gt;If the file&#39;s content is currently identical on both sides, it is not
considered a &amp;quot;conflict&amp;quot;, even if new or changed on both sides since the prior
sync. (For example, if you made a change on one side and then synced it to the
other side by other means.) Therefore, none of the conflict resolution flags
apply in this scenario.&lt;/li&gt;
&lt;li&gt;The conflict resolution flags do not apply during a &lt;code&gt;--resync&lt;/code&gt;, as there is
no &amp;quot;prior run&amp;quot; to speak of (but see &lt;a href=&#34;#resync-mode&#34;&gt;&lt;code&gt;--resync-mode&lt;/code&gt;&lt;/a&gt; for similar
options.)&lt;/li&gt;
&lt;/ul&gt;
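&lt;p&gt;For example, to automatically prefer the most recently modified version of a
conflicted file (an illustrative invocation):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync Path1 Path2 --conflict-resolve newer
&lt;/code&gt;&lt;/pre&gt;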
&lt;h3 id=&#34;conflict-loser&#34;&gt;--conflict-loser CHOICE&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;--conflict-loser&lt;/code&gt; determines what happens to the &amp;quot;loser&amp;quot; of a sync conflict
(when &lt;a href=&#34;#conflict-resolve&#34;&gt;&lt;code&gt;--conflict-resolve&lt;/code&gt;&lt;/a&gt; determines a winner) or to both
files (when there is no winner.) The currently supported options are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;num&lt;/code&gt; - (the default) - auto-number the conflicts by automatically appending
the next available number to the &lt;code&gt;--conflict-suffix&lt;/code&gt;, in chronological order.
For example, with the default settings, the first conflict for &lt;code&gt;file.txt&lt;/code&gt; will
be renamed &lt;code&gt;file.txt.conflict1&lt;/code&gt;. If &lt;code&gt;file.txt.conflict1&lt;/code&gt; already exists,
&lt;code&gt;file.txt.conflict2&lt;/code&gt; will be used instead (etc., up to a maximum of
9223372036854775807 conflicts.)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pathname&lt;/code&gt; - rename the conflicts according to which side they came from,
which was the default behavior prior to &lt;code&gt;v1.66&lt;/code&gt;. For example, with
&lt;code&gt;--conflict-suffix path&lt;/code&gt;, &lt;code&gt;file.txt&lt;/code&gt; from Path1 will be renamed
&lt;code&gt;file.txt.path1&lt;/code&gt;, and &lt;code&gt;file.txt&lt;/code&gt; from Path2 will be renamed &lt;code&gt;file.txt.path2&lt;/code&gt;.
If two non-identical suffixes are provided (ex. &lt;code&gt;--conflict-suffix cloud,local&lt;/code&gt;), the trailing digit is omitted. Importantly, note that with
&lt;code&gt;pathname&lt;/code&gt;, there is no auto-numbering beyond &lt;code&gt;2&lt;/code&gt;, so if &lt;code&gt;file.txt.path2&lt;/code&gt;
somehow already exists, it will be overwritten. Using a dynamic date variable
in your &lt;code&gt;--conflict-suffix&lt;/code&gt; (see below) is one possible way to avoid this. Note
also that conflicts-of-conflicts are possible, if the original conflict is not
manually resolved -- for example, if for some reason you edited
&lt;code&gt;file.txt.path1&lt;/code&gt; on both sides, and those edits were different, the result
would be &lt;code&gt;file.txt.path1.path1&lt;/code&gt; and &lt;code&gt;file.txt.path1.path2&lt;/code&gt; (in addition to
&lt;code&gt;file.txt.path2&lt;/code&gt;.)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;delete&lt;/code&gt; - keep the winner only and delete the loser, instead of renaming it.
If a winner cannot be determined (see &lt;code&gt;--conflict-resolve&lt;/code&gt; for details on how
this could happen), &lt;code&gt;delete&lt;/code&gt; is ignored and the default &lt;code&gt;num&lt;/code&gt; is used instead
(i.e. both versions are kept and renamed, and neither is deleted.) &lt;code&gt;delete&lt;/code&gt; is
inherently the most destructive option, so use it only with care.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For all of the above options, note that if a winner cannot be determined (see
&lt;code&gt;--conflict-resolve&lt;/code&gt; for details on how this could happen), or if
&lt;code&gt;--conflict-resolve&lt;/code&gt; is not in use, &lt;em&gt;both&lt;/em&gt; files will be renamed.&lt;/p&gt;
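&lt;p&gt;For example, to keep only the newer version of each conflicted file and delete
the older one (the most destructive combination, so illustrative only -- use
with care):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync Path1 Path2 --conflict-resolve newer --conflict-loser delete
&lt;/code&gt;&lt;/pre&gt;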
&lt;h3 id=&#34;conflict-suffix&#34;&gt;--conflict-suffix STRING[,STRING]&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;--conflict-suffix&lt;/code&gt; controls the suffix that is appended when bisync renames a
&lt;a href=&#34;#conflict-loser&#34;&gt;&lt;code&gt;--conflict-loser&lt;/code&gt;&lt;/a&gt; (default: &lt;code&gt;conflict&lt;/code&gt;).
&lt;code&gt;--conflict-suffix&lt;/code&gt; will accept either one string or two comma-separated
strings to assign different suffixes to Path1 vs. Path2. This may be helpful
later in identifying the source of the conflict. (For example,
&lt;code&gt;--conflict-suffix dropboxconflict,laptopconflict&lt;/code&gt;)&lt;/p&gt;
&lt;p&gt;With &lt;code&gt;--conflict-loser num&lt;/code&gt;, a number is always appended to the suffix. With
&lt;code&gt;--conflict-loser pathname&lt;/code&gt;, a number is appended only when one suffix is
specified (or when two identical suffixes are specified.) That is, with
&lt;code&gt;--conflict-loser pathname&lt;/code&gt;, all of the following would produce exactly the
same result:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;--conflict-suffix path
--conflict-suffix path,path
--conflict-suffix path1,path2
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Suffixes may be as short as 1 character. By default, the suffix is appended
after any other extensions (ex. &lt;code&gt;file.jpg.conflict1&lt;/code&gt;), however, this can be
changed with the &lt;a href=&#34;https://rclone.org/docs/#suffix-keep-extension&#34;&gt;&lt;code&gt;--suffix-keep-extension&lt;/code&gt;&lt;/a&gt; flag
(i.e. to instead result in &lt;code&gt;file.conflict1.jpg&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;&lt;code&gt;--conflict-suffix&lt;/code&gt; supports several &lt;em&gt;dynamic date variables&lt;/em&gt; when enclosed in
curly braces as globs. This can be helpful to track the date and/or time that
each conflict was handled by bisync. For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;--conflict-suffix {DateOnly}-conflict
// result: myfile.txt.2006-01-02-conflict1
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;All of the formats described &lt;a href=&#34;https://pkg.go.dev/time#pkg-constants&#34;&gt;here&lt;/a&gt; and
&lt;a href=&#34;https://pkg.go.dev/time#example-Time.Format&#34;&gt;here&lt;/a&gt; are supported, but take
care to ensure that your chosen format does not use any characters that are
illegal on your remotes (for example, macOS does not allow colons in
filenames, and slashes are also best avoided as they are often interpreted as
directory separators.) To address this particular issue, an additional
&lt;code&gt;{MacFriendlyTime}&lt;/code&gt; (or just &lt;code&gt;{mac}&lt;/code&gt;) option is supported, which results in
&lt;code&gt;2006-01-02 0304PM&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Note that &lt;code&gt;--conflict-suffix&lt;/code&gt; is entirely separate from rclone&#39;s main
&lt;a href=&#34;https://rclone.org/docs/#suffix-suffix&#34;&gt;&lt;code&gt;--suffix&lt;/code&gt;&lt;/a&gt; flag. This is intentional, as users may wish
to use both flags simultaneously, if also using
&lt;a href=&#34;#backup-dir1-and-backup-dir2&#34;&gt;&lt;code&gt;--backup-dir&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Finally, note that the default in bisync prior to &lt;code&gt;v1.66&lt;/code&gt; was to rename
conflicts with &lt;code&gt;..path1&lt;/code&gt; and &lt;code&gt;..path2&lt;/code&gt; (with two periods, and &lt;code&gt;path&lt;/code&gt; instead of
&lt;code&gt;conflict&lt;/code&gt;.) Bisync now defaults to a single dot instead of a double dot, but
additional dots can be added by including them in the specified suffix string.
For example, for behavior equivalent to the previous default, use:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;check-sync&#34;&gt;--check-sync&lt;/h3&gt;
&lt;p&gt;Enabled by default, the check-sync function checks that all of the same
files exist in both the Path1 and Path2 history listings. This &lt;em&gt;check-sync&lt;/em&gt;
integrity check is performed at the end of the sync run by default.
Any untrapped failing copies or deletes between the two paths might result
in differences between the two listings, and in untracked differences in file
content between the two paths. A &lt;code&gt;--resync&lt;/code&gt; run would correct the error.&lt;/p&gt;
&lt;p&gt;Note that the default-enabled integrity check locally executes a load of both
the final Path1 and Path2 listings, and thus adds to the run time of a sync.
Using &lt;code&gt;--check-sync=false&lt;/code&gt; will disable it and may significantly reduce the
sync run times for very large numbers of files.&lt;/p&gt;
&lt;p&gt;The check may be run manually with &lt;code&gt;--check-sync=only&lt;/code&gt;. It runs only the
integrity check and terminates without actually syncing.&lt;/p&gt;
&lt;p&gt;Note that currently, &lt;code&gt;--check-sync&lt;/code&gt; &lt;strong&gt;only checks listing snapshots and NOT the
actual files on the remotes.&lt;/strong&gt; Note also that the listing snapshots will not
know about any changes that happened during or after the latest bisync run, as
those will be discovered on the next run. Therefore, while listings should
always match &lt;em&gt;each other&lt;/em&gt; at the end of a bisync run, it is &lt;em&gt;expected&lt;/em&gt; that
they will not match the underlying remotes, nor will the remotes match each
other, if there were changes during or after the run. This is normal, and any
differences will be detected and synced on the next run.&lt;/p&gt;
&lt;p&gt;For a robust integrity check of the current state of the remotes (as opposed to just their listing snapshots), consider using &lt;a href=&#34;https://rclone.org/commands/rclone_check/&#34;&gt;&lt;code&gt;check&lt;/code&gt;&lt;/a&gt;
(or &lt;a href=&#34;https://rclone.org/commands/rclone_cryptcheck/&#34;&gt;&lt;code&gt;cryptcheck&lt;/code&gt;&lt;/a&gt;, if at least one path is a &lt;code&gt;crypt&lt;/code&gt; remote) instead of &lt;code&gt;--check-sync&lt;/code&gt;,
keeping in mind that differences are expected if files changed during or after your last bisync run.&lt;/p&gt;
&lt;p&gt;For example, a possible sequence could look like this:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Normally scheduled bisync run:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;Periodic independent integrity check (perhaps scheduled nightly or weekly):&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;If diffs are found, you have some choices to correct them.
If one side is more up-to-date and you want to make the other side match it, you could run:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;(or switch Path1 and Path2 to make Path2 the source-of-truth)&lt;/p&gt;
&lt;p&gt;Or, if neither side is totally up-to-date, you could run a &lt;code&gt;--resync&lt;/code&gt; to bring them back into agreement
(but remember that this could cause deleted files to re-appear.)&lt;/p&gt;
&lt;p&gt;Note also that &lt;code&gt;rclone check&lt;/code&gt; does not currently include empty directories,
so if you want to know if any empty directories are out of sync,
consider alternatively running the above &lt;code&gt;rclone sync&lt;/code&gt; command with &lt;code&gt;--dry-run&lt;/code&gt; added.&lt;/p&gt;
&lt;p&gt;See also: &lt;a href=&#34;#concurrent-modifications&#34;&gt;Concurrent modifications&lt;/a&gt;, &lt;a href=&#34;#resilient&#34;&gt;&lt;code&gt;--resilient&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;resilient&#34;&gt;--resilient&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Caution: this is an experimental feature. Use at your own risk!&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;By default, most errors or interruptions will cause bisync to abort and
require &lt;a href=&#34;#resync&#34;&gt;&lt;code&gt;--resync&lt;/code&gt;&lt;/a&gt; to recover. This is a safety feature, to prevent
bisync from running again until a user checks things out. However, in some
cases, bisync can go too far and enforce a lockout when one isn&#39;t actually
necessary, like for certain less-serious errors that might resolve themselves
on the next run. When &lt;code&gt;--resilient&lt;/code&gt; is specified, bisync tries its best to
recover and self-correct, and only requires &lt;code&gt;--resync&lt;/code&gt; as a last resort when a
human&#39;s involvement is absolutely necessary. The intended use case is for
running bisync as a background process (such as via scheduled &lt;a href=&#34;#cron&#34;&gt;cron&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;When using &lt;code&gt;--resilient&lt;/code&gt; mode, bisync will still report the error and abort;
however, it will not lock out future runs -- allowing the possibility of
retrying at the next normally scheduled time, without requiring a &lt;code&gt;--resync&lt;/code&gt;
first. Examples of such retryable errors include access test failures, missing
listing files, and filter change detections. These safety features will still
prevent the &lt;em&gt;current&lt;/em&gt; run from proceeding -- the difference is that if
conditions have improved by the time of the &lt;em&gt;next&lt;/em&gt; run, that next run will be
allowed to proceed. Certain more serious errors will still enforce a
&lt;code&gt;--resync&lt;/code&gt; lockout, even in &lt;code&gt;--resilient&lt;/code&gt; mode, to prevent data loss.&lt;/p&gt;
&lt;p&gt;Behavior of &lt;code&gt;--resilient&lt;/code&gt; may change in a future version. (See also:
&lt;a href=&#34;#recover&#34;&gt;&lt;code&gt;--recover&lt;/code&gt;&lt;/a&gt;, &lt;a href=&#34;#max-lock&#34;&gt;&lt;code&gt;--max-lock&lt;/code&gt;&lt;/a&gt;, &lt;a href=&#34;#graceful-shutdown&#34;&gt;Graceful
Shutdown&lt;/a&gt;)&lt;/p&gt;
&lt;h3 id=&#34;recover&#34;&gt;--recover&lt;/h3&gt;
&lt;p&gt;If &lt;code&gt;--recover&lt;/code&gt; is set, in the event of a sudden interruption or other
un-graceful shutdown, bisync will attempt to automatically recover on the next
run, instead of requiring &lt;code&gt;--resync&lt;/code&gt;. Bisync is able to recover robustly by
keeping one &amp;quot;backup&amp;quot; listing at all times, representing the state of both paths
after the last known successful sync. Bisync can then compare the current state
with this snapshot to determine which changes it needs to retry. Changes that
were synced after this snapshot (during the run that was later interrupted)
will appear to bisync as if they are &amp;quot;new or changed on both sides&amp;quot;, but in
most cases this is not a problem, as bisync will simply do its usual &amp;quot;equality
check&amp;quot; and learn that no action needs to be taken on these files, since they
are already identical on both sides.&lt;/p&gt;
&lt;p&gt;In the rare event that a file is synced successfully during a run that later
aborts, and then that same file changes AGAIN before the next run, bisync will
think it is a sync conflict, and handle it accordingly. (From bisync&#39;s
perspective, the file has changed on both sides since the last trusted sync,
and the files on either side are not currently identical.) Therefore,
&lt;code&gt;--recover&lt;/code&gt; carries with it a slightly increased chance of having conflicts --
though in practice this is pretty rare, as the conditions required to cause it
are quite specific. This risk can be reduced by using bisync&#39;s &lt;a href=&#34;#graceful-shutdown&#34;&gt;&amp;quot;Graceful
Shutdown&amp;quot;&lt;/a&gt; mode (triggered by sending &lt;code&gt;SIGINT&lt;/code&gt; or
&lt;code&gt;Ctrl+C&lt;/code&gt;), when you have the choice, instead of forcing a sudden termination.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;--recover&lt;/code&gt; and &lt;code&gt;--resilient&lt;/code&gt; are similar, but distinct -- the main difference
is that &lt;code&gt;--resilient&lt;/code&gt; is about &lt;em&gt;retrying&lt;/em&gt;, while &lt;code&gt;--recover&lt;/code&gt; is about
&lt;em&gt;recovering&lt;/em&gt;. Most users will probably want both. &lt;code&gt;--resilient&lt;/code&gt; allows retrying
when bisync has chosen to abort itself due to safety features such as failing
&lt;code&gt;--check-access&lt;/code&gt; or detecting a filter change. &lt;code&gt;--resilient&lt;/code&gt; does not cover
external interruptions such as a user shutting down their computer in the
middle of a sync -- that is what &lt;code&gt;--recover&lt;/code&gt; is for.&lt;/p&gt;
&lt;h3 id=&#34;max-lock&#34;&gt;--max-lock&lt;/h3&gt;
&lt;p&gt;Bisync uses &lt;a href=&#34;#lock-file&#34;&gt;lock files&lt;/a&gt; as a safety feature to prevent
interference from other bisync runs while it is running. Bisync normally
removes these lock files at the end of a run, but if bisync is abruptly
interrupted, these files will be left behind. By default, they will lock out
all future runs, until the user has a chance to manually check things out and
remove the lock. As an alternative, &lt;code&gt;--max-lock&lt;/code&gt; can be used to make them
automatically expire after a certain period of time, so that future runs are
not locked out forever, and auto-recovery is possible. &lt;code&gt;--max-lock&lt;/code&gt; can be any
duration &lt;code&gt;2m&lt;/code&gt; or greater (or &lt;code&gt;0&lt;/code&gt; to disable). If set, lock files older than
this will be considered &amp;quot;expired&amp;quot;, and future runs will be allowed to disregard
them and proceed. (Note that the &lt;code&gt;--max-lock&lt;/code&gt; duration must be set by the
process that left the lock file -- not the later one interpreting it.)&lt;/p&gt;
&lt;p&gt;If set, bisync will also &amp;quot;renew&amp;quot; these lock files every &lt;code&gt;--max-lock minus one minute&lt;/code&gt; throughout a run, for extra safety. (For example, with &lt;code&gt;--max-lock 5m&lt;/code&gt;,
bisync would renew the lock file (for another 5 minutes) every 4 minutes until
the run has completed.) In other words, it should not be possible for a lock
file to pass its expiration time while the process that created it is still
running -- and you can therefore be reasonably sure that any &lt;em&gt;expired&lt;/em&gt; lock
file you may find was left there by an interrupted run, not one that is still
running and just taking a while.&lt;/p&gt;
&lt;p&gt;If &lt;code&gt;--max-lock&lt;/code&gt; is &lt;code&gt;0&lt;/code&gt; or not set, the default is that lock files will never
expire, and will block future runs (of these same two bisync paths)
indefinitely.&lt;/p&gt;
&lt;p&gt;For maximum resilience from disruptions, consider setting a relatively short
duration like &lt;code&gt;--max-lock 2m&lt;/code&gt; along with &lt;a href=&#34;#resilient&#34;&gt;&lt;code&gt;--resilient&lt;/code&gt;&lt;/a&gt; and
&lt;a href=&#34;#recover&#34;&gt;&lt;code&gt;--recover&lt;/code&gt;&lt;/a&gt;, and a relatively frequent &lt;a href=&#34;#cron&#34;&gt;cron schedule&lt;/a&gt;. The
result will be a very robust &amp;quot;set-it-and-forget-it&amp;quot; bisync run that can
automatically bounce back from almost any interruption it might encounter,
without requiring the user to get involved and run a &lt;code&gt;--resync&lt;/code&gt;. (See also:
&lt;a href=&#34;#graceful-shutdown&#34;&gt;Graceful Shutdown&lt;/a&gt; mode)&lt;/p&gt;
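&lt;p&gt;As a sketch of such a &amp;quot;set-it-and-forget-it&amp;quot; setup (the path, remote name,
schedule, and log location here are all hypothetical), a crontab entry combining
these flags might look like:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# run every 5 minutes; expired locks, retryable errors, and interruptions self-heal
*/5 * * * * rclone bisync /home/user/Sync remote:Sync --resilient --recover --max-lock 2m -v
&lt;/code&gt;&lt;/pre&gt;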
&lt;h3 id=&#34;backup-dir1-and-backup-dir2&#34;&gt;--backup-dir1 and --backup-dir2&lt;/h3&gt;
&lt;p&gt;As of &lt;code&gt;v1.66&lt;/code&gt;, &lt;a href=&#34;https://rclone.org/docs/#backup-dir-dir&#34;&gt;&lt;code&gt;--backup-dir&lt;/code&gt;&lt;/a&gt; is supported in bisync.
Because &lt;code&gt;--backup-dir&lt;/code&gt; must be a non-overlapping path on the same remote,
bisync introduces new &lt;code&gt;--backup-dir1&lt;/code&gt; and &lt;code&gt;--backup-dir2&lt;/code&gt; flags to support
separate backup-dirs for &lt;code&gt;Path1&lt;/code&gt; and &lt;code&gt;Path2&lt;/code&gt; (bisyncing between different
remotes with &lt;code&gt;--backup-dir&lt;/code&gt; would not otherwise be possible). &lt;code&gt;--backup-dir1&lt;/code&gt;
and &lt;code&gt;--backup-dir2&lt;/code&gt; can use different remotes from each other, but
&lt;code&gt;--backup-dir1&lt;/code&gt; must use the same remote as &lt;code&gt;Path1&lt;/code&gt;, and &lt;code&gt;--backup-dir2&lt;/code&gt; must
use the same remote as &lt;code&gt;Path2&lt;/code&gt;. Each backup directory must not overlap its
respective bisync Path without being excluded by a filter rule.&lt;/p&gt;
&lt;p&gt;The standard &lt;code&gt;--backup-dir&lt;/code&gt; will also work, if both paths use the same remote
(but note that deleted files from both paths would be mixed together in the
same dir). If either &lt;code&gt;--backup-dir1&lt;/code&gt; or &lt;code&gt;--backup-dir2&lt;/code&gt; is set, it will
override &lt;code&gt;--backup-dir&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In this example, if the user deletes a file in
&lt;code&gt;/Users/someuser/some/local/path/Bisync&lt;/code&gt;, bisync will propagate the delete to
the other side by moving the corresponding file from &lt;code&gt;gdrive:Bisync&lt;/code&gt; to
&lt;code&gt;gdrive:BackupDir&lt;/code&gt;. If the user deletes a file from &lt;code&gt;gdrive:Bisync&lt;/code&gt;, bisync
moves it from &lt;code&gt;/Users/someuser/some/local/path/Bisync&lt;/code&gt; to
&lt;code&gt;/Users/someuser/some/local/path/BackupDir&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;In the event of a &lt;a href=&#34;#conflict-loser&#34;&gt;rename due to a sync conflict&lt;/a&gt;, the
rename is not considered a delete, unless a previous conflict with the same
name already exists and would get overwritten.&lt;/p&gt;
&lt;p&gt;See also: &lt;a href=&#34;https://rclone.org/docs/#suffix-suffix&#34;&gt;&lt;code&gt;--suffix&lt;/code&gt;&lt;/a&gt;,
&lt;a href=&#34;https://rclone.org/docs/#suffix-keep-extension&#34;&gt;&lt;code&gt;--suffix-keep-extension&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;operation&#34;&gt;Operation&lt;/h2&gt;
&lt;h3 id=&#34;runtime-flow-details&#34;&gt;Runtime flow details&lt;/h3&gt;
&lt;p&gt;bisync retains the listings of the &lt;code&gt;Path1&lt;/code&gt; and &lt;code&gt;Path2&lt;/code&gt; filesystems
from the prior run.
On each successive run it will:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;list files on &lt;code&gt;path1&lt;/code&gt; and &lt;code&gt;path2&lt;/code&gt;, and check for changes on each side.
Changes include &lt;code&gt;New&lt;/code&gt;, &lt;code&gt;Newer&lt;/code&gt;, &lt;code&gt;Older&lt;/code&gt;, and &lt;code&gt;Deleted&lt;/code&gt; files.&lt;/li&gt;
&lt;li&gt;Propagate changes on &lt;code&gt;path1&lt;/code&gt; to &lt;code&gt;path2&lt;/code&gt;, and vice-versa.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;safety-measures&#34;&gt;Safety measures&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;A lock file prevents multiple simultaneous runs while one is still in progress.
This can be particularly useful if bisync is run by a cron scheduler.&lt;/li&gt;
&lt;li&gt;Handle change conflicts non-destructively by creating
&lt;code&gt;.conflict1&lt;/code&gt;, &lt;code&gt;.conflict2&lt;/code&gt;, etc. file versions, according to
&lt;a href=&#34;#conflict-resolve&#34;&gt;&lt;code&gt;--conflict-resolve&lt;/code&gt;&lt;/a&gt;, &lt;a href=&#34;#conflict-loser&#34;&gt;&lt;code&gt;--conflict-loser&lt;/code&gt;&lt;/a&gt;, and &lt;a href=&#34;#conflict-suffix&#34;&gt;&lt;code&gt;--conflict-suffix&lt;/code&gt;&lt;/a&gt; settings.&lt;/li&gt;
&lt;li&gt;File system access health check using &lt;code&gt;RCLONE_TEST&lt;/code&gt; files
(see the &lt;code&gt;--check-access&lt;/code&gt; flag).&lt;/li&gt;
&lt;li&gt;Abort on excessive deletes - protects against a failed listing
being interpreted as all the files were deleted.
See the &lt;code&gt;--max-delete&lt;/code&gt; and &lt;code&gt;--force&lt;/code&gt; flags.&lt;/li&gt;
&lt;li&gt;If something evil happens, bisync goes into a safe state to block
damage by later runs. (See &lt;a href=&#34;#error-handling&#34;&gt;Error Handling&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;normal-sync-checks&#34;&gt;Normal sync checks&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;th&gt;Implementation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Path2 new&lt;/td&gt;
&lt;td&gt;File is new on Path2, does not exist on Path1&lt;/td&gt;
&lt;td&gt;Path2 version survives&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rclone copy&lt;/code&gt; Path2 to Path1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Path2 newer&lt;/td&gt;
&lt;td&gt;File is newer on Path2, unchanged on Path1&lt;/td&gt;
&lt;td&gt;Path2 version survives&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rclone copy&lt;/code&gt; Path2 to Path1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Path2 deleted&lt;/td&gt;
&lt;td&gt;File is deleted on Path2, unchanged on Path1&lt;/td&gt;
&lt;td&gt;File is deleted&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rclone delete&lt;/code&gt; Path1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Path1 new&lt;/td&gt;
&lt;td&gt;File is new on Path1, does not exist on Path2&lt;/td&gt;
&lt;td&gt;Path1 version survives&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rclone copy&lt;/code&gt; Path1 to Path2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Path1 newer&lt;/td&gt;
&lt;td&gt;File is newer on Path1, unchanged on Path2&lt;/td&gt;
&lt;td&gt;Path1 version survives&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rclone copy&lt;/code&gt; Path1 to Path2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Path1 older&lt;/td&gt;
&lt;td&gt;File is older on Path1, unchanged on Path2&lt;/td&gt;
&lt;td&gt;&lt;em&gt;Path1 version survives&lt;/em&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rclone copy&lt;/code&gt; Path1 to Path2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Path2 older&lt;/td&gt;
&lt;td&gt;File is older on Path2, unchanged on Path1&lt;/td&gt;
&lt;td&gt;&lt;em&gt;Path2 version survives&lt;/em&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rclone copy&lt;/code&gt; Path2 to Path1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Path1 deleted&lt;/td&gt;
&lt;td&gt;File no longer exists on Path1&lt;/td&gt;
&lt;td&gt;File is deleted&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rclone delete&lt;/code&gt; Path2&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id=&#34;unusual-sync-checks&#34;&gt;Unusual sync checks&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;th&gt;Implementation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Path1 new/changed AND Path2 new/changed AND Path1 == Path2&lt;/td&gt;
&lt;td&gt;File is new/changed on Path1 AND new/changed on Path2 AND Path1 version is currently identical to Path2&lt;/td&gt;
&lt;td&gt;No change&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Path1 new AND Path2 new&lt;/td&gt;
&lt;td&gt;File is new on Path1 AND new on Path2 (and Path1 version is NOT identical to Path2)&lt;/td&gt;
&lt;td&gt;Conflicts handled according to &lt;a href=&#34;#conflict-resolve&#34;&gt;&lt;code&gt;--conflict-resolve&lt;/code&gt;&lt;/a&gt; &amp;amp; &lt;a href=&#34;#conflict-loser&#34;&gt;&lt;code&gt;--conflict-loser&lt;/code&gt;&lt;/a&gt; settings&lt;/td&gt;
&lt;td&gt;default: &lt;code&gt;rclone copy&lt;/code&gt; renamed &lt;code&gt;Path2.conflict2&lt;/code&gt; file to Path1, &lt;code&gt;rclone copy&lt;/code&gt; renamed &lt;code&gt;Path1.conflict1&lt;/code&gt; file to Path2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Path2 newer AND Path1 changed&lt;/td&gt;
&lt;td&gt;File is newer on Path2 AND also changed (newer/older/size) on Path1 (and Path1 version is NOT identical to Path2)&lt;/td&gt;
&lt;td&gt;Conflicts handled according to &lt;a href=&#34;#conflict-resolve&#34;&gt;&lt;code&gt;--conflict-resolve&lt;/code&gt;&lt;/a&gt; &amp;amp; &lt;a href=&#34;#conflict-loser&#34;&gt;&lt;code&gt;--conflict-loser&lt;/code&gt;&lt;/a&gt; settings&lt;/td&gt;
&lt;td&gt;default: &lt;code&gt;rclone copy&lt;/code&gt; renamed &lt;code&gt;Path2.conflict2&lt;/code&gt; file to Path1, &lt;code&gt;rclone copy&lt;/code&gt; renamed &lt;code&gt;Path1.conflict1&lt;/code&gt; file to Path2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Path2 newer AND Path1 deleted&lt;/td&gt;
&lt;td&gt;File is newer on Path2 AND also deleted on Path1&lt;/td&gt;
&lt;td&gt;Path2 version survives&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rclone copy&lt;/code&gt; Path2 to Path1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Path2 deleted AND Path1 changed&lt;/td&gt;
&lt;td&gt;File is deleted on Path2 AND changed (newer/older/size) on Path1&lt;/td&gt;
&lt;td&gt;Path1 version survives&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rclone copy&lt;/code&gt; Path1 to Path2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Path1 deleted AND Path2 changed&lt;/td&gt;
&lt;td&gt;File is deleted on Path1 AND changed (newer/older/size) on Path2&lt;/td&gt;
&lt;td&gt;Path2 version survives&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rclone copy&lt;/code&gt; Path2 to Path1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;As of &lt;code&gt;rclone v1.64&lt;/code&gt;, bisync is now better at detecting &lt;em&gt;false positive&lt;/em&gt; sync conflicts,
which would previously have resulted in unnecessary renames and duplicates.
Now, when bisync comes to a file that it wants to rename (because it is new/changed on both sides),
it first checks whether the Path1 and Path2 versions are currently &lt;em&gt;identical&lt;/em&gt;
(using the same underlying function as &lt;a href=&#34;commands/rclone_check/&#34;&gt;&lt;code&gt;check&lt;/code&gt;&lt;/a&gt;.)
If bisync concludes that the files are identical, it will skip them and move on.
Otherwise, it will create renamed duplicates, as before.
This behavior also &lt;a href=&#34;https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=Renamed%20directories&#34;&gt;improves the experience of renaming directories&lt;/a&gt;,
as a &lt;code&gt;--resync&lt;/code&gt; is no longer required, so long as the same change has been made on both sides.&lt;/p&gt;
&lt;h3 id=&#34;all-files-changed&#34;&gt;All files changed check&lt;/h3&gt;
&lt;p&gt;If &lt;em&gt;all&lt;/em&gt; prior existing files on either of the filesystems have changed
(e.g. timestamps have changed due to changing the system&#39;s timezone)
then bisync will abort without making any changes.
Any new files are not considered for this check. You could use &lt;code&gt;--force&lt;/code&gt;
to force the sync (whichever side has the changed timestamp files wins).
Alternately, a &lt;code&gt;--resync&lt;/code&gt; may be used (Path1 versions will be pushed
to Path2). Consider the situation carefully and perhaps use &lt;code&gt;--dry-run&lt;/code&gt;
before you commit to the changes.&lt;/p&gt;
&lt;h3 id=&#34;modification-times&#34;&gt;Modification times&lt;/h3&gt;
&lt;p&gt;By default, bisync compares files by modification time and size.
If you or your application should change the content of a file
without changing the modification time and size, then bisync will &lt;em&gt;not&lt;/em&gt;
notice the change, and thus will not copy it to the other side.
As an alternative, consider comparing by checksum (if your remotes support it).
See &lt;a href=&#34;#compare&#34;&gt;&lt;code&gt;--compare&lt;/code&gt;&lt;/a&gt; for details.&lt;/p&gt;
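&lt;p&gt;For example, on remotes that support checksums, checksum can be added to the
comparison criteria (the paths shown are placeholders):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync /path/to/local remote:path --compare size,modtime,checksum
&lt;/code&gt;&lt;/pre&gt;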
&lt;h3 id=&#34;error-handling&#34;&gt;Error handling&lt;/h3&gt;
&lt;p&gt;Certain bisync critical errors, such as file copy/move failing, will result in
a bisync lockout of following runs. The lockout is asserted because the sync
status and history of the Path1 and Path2 filesystems cannot be trusted,
so it is safer to block any further changes until someone checks things out.
The recovery is to do a &lt;code&gt;--resync&lt;/code&gt; again.&lt;/p&gt;
&lt;p&gt;It is recommended to use &lt;code&gt;--resync --dry-run --verbose&lt;/code&gt; initially and
&lt;em&gt;carefully&lt;/em&gt; review what changes will be made before running the &lt;code&gt;--resync&lt;/code&gt;
without &lt;code&gt;--dry-run&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Most of these events come up due to an error status from an internal call.
On such a critical error the &lt;code&gt;{...}.path1.lst&lt;/code&gt; and &lt;code&gt;{...}.path2.lst&lt;/code&gt;
listing files are renamed to extension &lt;code&gt;.lst-err&lt;/code&gt;, which blocks any future
bisync runs (since the normal &lt;code&gt;.lst&lt;/code&gt; files are not found).
Bisync keeps them in the &lt;code&gt;bisync&lt;/code&gt; subdirectory of the rclone cache directory,
typically at &lt;code&gt;${HOME}/.cache/rclone/bisync/&lt;/code&gt; on Linux.&lt;/p&gt;
&lt;p&gt;Some errors are considered temporary, and re-running bisync is not blocked.
The &lt;em&gt;critical return&lt;/em&gt; blocks further bisync runs.&lt;/p&gt;
&lt;p&gt;See also: &lt;a href=&#34;#resilient&#34;&gt;&lt;code&gt;--resilient&lt;/code&gt;&lt;/a&gt;, &lt;a href=&#34;#recover&#34;&gt;&lt;code&gt;--recover&lt;/code&gt;&lt;/a&gt;,
&lt;a href=&#34;#max-lock&#34;&gt;&lt;code&gt;--max-lock&lt;/code&gt;&lt;/a&gt;, &lt;a href=&#34;#graceful-shutdown&#34;&gt;Graceful Shutdown&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;lock-file&#34;&gt;Lock file&lt;/h3&gt;
&lt;p&gt;When bisync is running, a lock file is created in the bisync working directory,
typically at &lt;code&gt;~/.cache/rclone/bisync/PATH1..PATH2.lck&lt;/code&gt; on Linux.
If bisync should crash or hang, the lock file will remain in place and block
any further runs of bisync &lt;em&gt;for the same paths&lt;/em&gt;.
Delete the lock file as part of debugging the situation.
The lock file effectively blocks follow-on (e.g., scheduled by &lt;em&gt;cron&lt;/em&gt;) runs
when the prior invocation is taking a long time.
The lock file contains the &lt;em&gt;PID&lt;/em&gt; of the blocking process, which may help in debugging.
Lock files can be set to automatically expire after a certain amount of time,
using the &lt;a href=&#34;#max-lock&#34;&gt;&lt;code&gt;--max-lock&lt;/code&gt;&lt;/a&gt; flag.&lt;/p&gt;
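&lt;p&gt;For example, to clear a stale lock left behind by a crashed run (the filename
shown is a placeholder following the &lt;code&gt;PATH1..PATH2.lck&lt;/code&gt; pattern; check that no
bisync process is actually running first):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rm ~/.cache/rclone/bisync/PATH1..PATH2.lck
&lt;/code&gt;&lt;/pre&gt;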
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;
that while concurrent bisync runs are allowed, &lt;em&gt;be very cautious&lt;/em&gt; to ensure
that there is no overlap in the trees being synced between concurrent runs,
lest there be replicated files, deleted files and general mayhem.&lt;/p&gt;
&lt;h3 id=&#34;return-codes&#34;&gt;Return codes&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;rclone bisync&lt;/code&gt; returns the following codes to the calling program:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;0&lt;/code&gt; on a successful run,&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1&lt;/code&gt; for a non-critical failing run (a rerun may be successful),&lt;/li&gt;
&lt;li&gt;&lt;code&gt;2&lt;/code&gt; for a critically aborted run (requires a &lt;code&gt;--resync&lt;/code&gt; to recover).&lt;/li&gt;
&lt;/ul&gt;
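&lt;p&gt;A wrapper script could branch on these codes, for example (a sketch with
hypothetical paths, not a complete solution):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync /path1 remote:path2 --resilient --recover
case $? in
  0) echo bisync succeeded ;;
  1) echo bisync soft-failed, a rerun may succeed ;;
  2) echo bisync aborted, a manual --resync is required ;;
esac
&lt;/code&gt;&lt;/pre&gt;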
&lt;h3 id=&#34;graceful-shutdown&#34;&gt;Graceful Shutdown&lt;/h3&gt;
&lt;p&gt;Bisync has a &amp;quot;Graceful Shutdown&amp;quot; mode which is activated by sending &lt;code&gt;SIGINT&lt;/code&gt; or
pressing &lt;code&gt;Ctrl+C&lt;/code&gt; during a run. Once triggered, bisync will use best efforts to
exit cleanly before the timer runs out. If bisync is in the middle of
transferring files, it will attempt to cleanly empty its queue by finishing
what it has started but not taking more. If it cannot do so within 30 seconds,
it will cancel the in-progress transfers at that point and then give itself a
maximum of 60 seconds to wrap up, save its state for next time, and exit. With
the &lt;code&gt;-vP&lt;/code&gt; flags you will see constant status updates and a final confirmation
of whether or not the graceful shutdown was successful.&lt;/p&gt;
&lt;p&gt;At any point during the &amp;quot;Graceful Shutdown&amp;quot; sequence, a second &lt;code&gt;SIGINT&lt;/code&gt; or
&lt;code&gt;Ctrl+C&lt;/code&gt; will trigger an immediate, un-graceful exit, which will leave things
in a messier state. Usually a robust recovery will still be possible if using
&lt;a href=&#34;#recover&#34;&gt;&lt;code&gt;--recover&lt;/code&gt;&lt;/a&gt; mode, otherwise you will need to do a &lt;code&gt;--resync&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;If you plan to use Graceful Shutdown mode, it is recommended to use
&lt;a href=&#34;#resilient&#34;&gt;&lt;code&gt;--resilient&lt;/code&gt;&lt;/a&gt; and &lt;a href=&#34;#recover&#34;&gt;&lt;code&gt;--recover&lt;/code&gt;&lt;/a&gt;, and it is important to
NOT use &lt;a href=&#34;https://rclone.org/docs/#inplace&#34;&gt;&lt;code&gt;--inplace&lt;/code&gt;&lt;/a&gt;, otherwise you risk leaving
partially-written files on one side, which may be confused for real files on
the next run. Note also that in the event of an abrupt interruption, a &lt;a href=&#34;#lock-file&#34;&gt;lock
file&lt;/a&gt; will be left behind to block concurrent runs. You will need
to delete it before you can proceed with the next run (or wait for it to
expire on its own, if using &lt;code&gt;--max-lock&lt;/code&gt;.)&lt;/p&gt;
&lt;h2 id=&#34;limitations&#34;&gt;Limitations&lt;/h2&gt;
&lt;h3 id=&#34;supported-backends&#34;&gt;Supported backends&lt;/h3&gt;
&lt;p&gt;Bisync is considered &lt;em&gt;BETA&lt;/em&gt; and has been tested with the following backends:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Local filesystem&lt;/li&gt;
&lt;li&gt;Google Drive&lt;/li&gt;
&lt;li&gt;Dropbox&lt;/li&gt;
&lt;li&gt;OneDrive&lt;/li&gt;
&lt;li&gt;S3&lt;/li&gt;
&lt;li&gt;SFTP&lt;/li&gt;
&lt;li&gt;Yandex Disk&lt;/li&gt;
&lt;li&gt;Crypt&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It has not been fully tested with other services yet.
If it works, or sorta works, please let us know and we&#39;ll update the list.
Run the test suite to check for proper operation as described below.&lt;/p&gt;
&lt;p&gt;The first release of &lt;code&gt;rclone bisync&lt;/code&gt; required both underlying backends to support
modification times, and refused to run otherwise.
This limitation has been lifted as of &lt;code&gt;v1.66&lt;/code&gt;, as bisync now supports comparing
checksum and/or size instead of (or in addition to) modtime.
See &lt;a href=&#34;#compare&#34;&gt;&lt;code&gt;--compare&lt;/code&gt;&lt;/a&gt; for details.&lt;/p&gt;
&lt;h3 id=&#34;concurrent-modifications&#34;&gt;Concurrent modifications&lt;/h3&gt;
&lt;p&gt;When using &lt;strong&gt;Local, FTP or SFTP&lt;/strong&gt; remotes with &lt;a href=&#34;https://rclone.org/docs/#inplace&#34;&gt;&lt;code&gt;--inplace&lt;/code&gt;&lt;/a&gt;, rclone does not create &lt;em&gt;temporary&lt;/em&gt;
files at the destination when copying, and thus if the connection is lost
the created file may be corrupt, which will likely propagate back to the
original path on the next sync, resulting in data loss.
It is therefore recommended to &lt;em&gt;omit&lt;/em&gt; &lt;code&gt;--inplace&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Files that &lt;strong&gt;change during&lt;/strong&gt; a bisync run may result in data loss.
Prior to &lt;code&gt;rclone v1.66&lt;/code&gt;, this was commonly seen in highly dynamic environments, where the filesystem
was getting hammered by running processes during the sync.
As of &lt;code&gt;rclone v1.66&lt;/code&gt;, bisync was redesigned to use a &amp;quot;snapshot&amp;quot; model,
greatly reducing the risks from changes during a sync.
Changes that are not detected during the current sync will now be detected during the following sync,
and will no longer cause the entire run to throw a critical error.
There is additionally a mechanism to mark files as needing to be internally rechecked next time, for added safety.
It should therefore no longer be necessary to sync only at quiet times --
however, note that an error can still occur if a file happens to change at the exact moment it&#39;s
being read/written by bisync (same as would happen in &lt;code&gt;rclone sync&lt;/code&gt;.)
(See also: &lt;a href=&#34;https://rclone.org/docs/#ignore-checksum&#34;&gt;&lt;code&gt;--ignore-checksum&lt;/code&gt;&lt;/a&gt;,
&lt;a href=&#34;https://rclone.org/local/#local-no-check-updated&#34;&gt;&lt;code&gt;--local-no-check-updated&lt;/code&gt;&lt;/a&gt;)&lt;/p&gt;
&lt;h3 id=&#34;empty-directories&#34;&gt;Empty directories&lt;/h3&gt;
&lt;p&gt;By default, new/deleted empty directories on one path are &lt;em&gt;not&lt;/em&gt; propagated to the other side.
This is because bisync (and rclone) natively works on files, not directories.
However, this can be changed with the &lt;code&gt;--create-empty-src-dirs&lt;/code&gt; flag, which works in
much the same way as in &lt;a href=&#34;https://rclone.org/commands/rclone_sync/&#34;&gt;&lt;code&gt;sync&lt;/code&gt;&lt;/a&gt; and &lt;a href=&#34;https://rclone.org/commands/rclone_copy/&#34;&gt;&lt;code&gt;copy&lt;/code&gt;&lt;/a&gt;.
When used, empty directories created or deleted on one side will also be created or deleted on the other side.
The following should be noted:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;--create-empty-src-dirs&lt;/code&gt; is not compatible with &lt;code&gt;--remove-empty-dirs&lt;/code&gt;. Use only one or the other (or neither).&lt;/li&gt;
&lt;li&gt;It is not recommended to switch back and forth between &lt;code&gt;--create-empty-src-dirs&lt;/code&gt;
and the default (no &lt;code&gt;--create-empty-src-dirs&lt;/code&gt;) without running &lt;code&gt;--resync&lt;/code&gt;.
This is because it may appear as though all directories (not just the empty ones) were created/deleted,
when actually you&#39;ve just toggled between making them visible/invisible to bisync.
It looks scarier than it is, but it&#39;s still probably best to stick to one or the other,
and use &lt;code&gt;--resync&lt;/code&gt; when you need to switch.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;renamed-directories&#34;&gt;Renamed directories&lt;/h3&gt;
&lt;p&gt;By default, renaming a folder on the Path1 side results in deleting all files on
the Path2 side and then copying all files again from Path1 to Path2.
Bisync sees all files in the old directory as deleted and all
files in the new directory as new.&lt;/p&gt;
&lt;p&gt;A recommended solution is to use &lt;a href=&#34;https://rclone.org/docs/#track-renames&#34;&gt;&lt;code&gt;--track-renames&lt;/code&gt;&lt;/a&gt;,
which is now supported in bisync as of &lt;code&gt;rclone v1.66&lt;/code&gt;.
Note that &lt;code&gt;--track-renames&lt;/code&gt; is not available during &lt;code&gt;--resync&lt;/code&gt;,
as &lt;code&gt;--resync&lt;/code&gt; does not delete anything (&lt;code&gt;--track-renames&lt;/code&gt; only supports &lt;code&gt;sync&lt;/code&gt;, not &lt;code&gt;copy&lt;/code&gt;.)&lt;/p&gt;
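&lt;p&gt;A normal (non-&lt;code&gt;--resync&lt;/code&gt;) run using rename tracking might look like
(the paths shown are placeholders):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync /path/to/local remote:path --track-renames
&lt;/code&gt;&lt;/pre&gt;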
&lt;p&gt;Otherwise, the most effective and efficient method of renaming a directory
is to rename it to the same name on both sides. (As of &lt;code&gt;rclone v1.64&lt;/code&gt;,
a &lt;code&gt;--resync&lt;/code&gt; is no longer required after doing so, as bisync will automatically
detect that Path1 and Path2 are in agreement.)&lt;/p&gt;
&lt;h3 id=&#34;fast-list-used-by-default&#34;&gt;&lt;code&gt;--fast-list&lt;/code&gt; used by default&lt;/h3&gt;
&lt;p&gt;Unlike most other rclone commands, bisync uses &lt;a href=&#34;https://rclone.org/docs/#fast-list&#34;&gt;&lt;code&gt;--fast-list&lt;/code&gt;&lt;/a&gt; by default,
for backends that support it. In many cases this is desirable, however,
there are some scenarios in which bisync could be faster &lt;em&gt;without&lt;/em&gt; &lt;code&gt;--fast-list&lt;/code&gt;,
and there is also a &lt;a href=&#34;https://github.com/rclone/rclone/commit/cbf3d4356135814921382dd3285d859d15d0aa77&#34;&gt;known issue concerning Google Drive users with many empty directories&lt;/a&gt;.
For now, the recommended way to avoid using &lt;code&gt;--fast-list&lt;/code&gt; is to add &lt;code&gt;--disable ListR&lt;/code&gt;
to all bisync commands. The default behavior may change in a future version.&lt;/p&gt;
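&lt;p&gt;For example (the paths shown are placeholders):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync /path/to/local remote:path --disable ListR
&lt;/code&gt;&lt;/pre&gt;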
&lt;h3 id=&#34;case-sensitivity&#34;&gt;Case (and unicode) sensitivity&lt;/h3&gt;
&lt;p&gt;As of &lt;code&gt;v1.66&lt;/code&gt;, case and unicode form differences no longer cause critical errors,
and normalization (when comparing between filesystems) is handled according to the same flags and defaults as &lt;code&gt;rclone sync&lt;/code&gt;.
See the following options (all of which are supported by bisync) to control this behavior more granularly:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://rclone.org/docs/#fix-case&#34;&gt;&lt;code&gt;--fix-case&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://rclone.org/docs/#ignore-case-sync&#34;&gt;&lt;code&gt;--ignore-case-sync&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://rclone.org/docs/#no-unicode-normalization&#34;&gt;&lt;code&gt;--no-unicode-normalization&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://rclone.org/local/#local-unicode-normalization&#34;&gt;&lt;code&gt;--local-unicode-normalization&lt;/code&gt;&lt;/a&gt; and
&lt;a href=&#34;https://rclone.org/local/#local-case-sensitive&#34;&gt;&lt;code&gt;--local-case-sensitive&lt;/code&gt;&lt;/a&gt; (caution: these are normally not what you want.)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Note that in the (probably rare) event that &lt;code&gt;--fix-case&lt;/code&gt; is used AND a file is new/changed on both sides
AND the checksums match AND the filename case does not match, the Path1 filename is considered the winner,
for the purposes of &lt;code&gt;--fix-case&lt;/code&gt; (Path2 will be renamed to match it).&lt;/p&gt;
&lt;h2 id=&#34;windows&#34;&gt;Windows support&lt;/h2&gt;
&lt;p&gt;Bisync has been tested on Windows 8.1, Windows 10 Pro 64-bit and on Windows
GitHub runners.&lt;/p&gt;
&lt;p&gt;Drive letters are allowed, including drive letters mapped to network drives
(&lt;code&gt;rclone bisync J:\localsync GDrive:&lt;/code&gt;).
If a drive letter is omitted, the current drive of the shell is the default.
Drive letters are a single character followed by &lt;code&gt;:&lt;/code&gt;, so cloud names
must be more than one character long.&lt;/p&gt;
&lt;p&gt;Absolute paths (with or without a drive letter), and relative paths
(with or without a drive letter) are supported.&lt;/p&gt;
&lt;p&gt;The working directory is created at &lt;code&gt;C:\Users\MyLogin\AppData\Local\rclone\bisync&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Note that bisync output may show a mix of forward &lt;code&gt;/&lt;/code&gt; and back &lt;code&gt;\&lt;/code&gt; slashes.&lt;/p&gt;
&lt;p&gt;Be careful of case-insensitive directory and file naming on Windows
vs. case-sensitive naming on Linux.&lt;/p&gt;
&lt;h2 id=&#34;filtering&#34;&gt;Filtering&lt;/h2&gt;
&lt;p&gt;See &lt;a href=&#34;https://rclone.org/filtering/&#34;&gt;filtering documentation&lt;/a&gt;
for how filter rules are written and interpreted.&lt;/p&gt;
&lt;p&gt;Bisync&#39;s &lt;a href=&#34;#filters-file&#34;&gt;&lt;code&gt;--filters-file&lt;/code&gt;&lt;/a&gt; flag slightly extends rclone&#39;s
&lt;a href=&#34;https://rclone.org/filtering/#filter-from-read-filtering-patterns-from-a-file&#34;&gt;--filter-from&lt;/a&gt;
filtering mechanism.
For a given bisync run you may provide &lt;em&gt;only one&lt;/em&gt; &lt;code&gt;--filters-file&lt;/code&gt;.
The &lt;code&gt;--include*&lt;/code&gt;, &lt;code&gt;--exclude*&lt;/code&gt;, and &lt;code&gt;--filter&lt;/code&gt; flags are also supported.&lt;/p&gt;
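&lt;p&gt;A typical invocation looks like this (the paths and remote name are illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync /local/files remote:files --filters-file /path/to/filters.txt
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Remember that if you change the contents of your filters file, bisync
requires a &lt;code&gt;--resync&lt;/code&gt; run.&lt;/p&gt;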
&lt;h3 id=&#34;how-to-filter-directories&#34;&gt;How to filter directories&lt;/h3&gt;
&lt;p&gt;Filtering portions of the directory tree is a critical feature for synching.&lt;/p&gt;
&lt;p&gt;Examples of directory trees (always beneath the Path1/Path2 root level)
you may want to exclude from your sync:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Directory trees containing only software build intermediate files.&lt;/li&gt;
&lt;li&gt;Directory trees containing application temporary files and data
such as the Windows &lt;code&gt;C:\Users\MyLogin\AppData\&lt;/code&gt; tree.&lt;/li&gt;
&lt;li&gt;Directory trees containing files that are large, less important,
or are getting thrashed continuously by ongoing processes.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;On the other hand, there may be only select directories that you
actually want to sync, and exclude all others. See the
&lt;a href=&#34;#include-filters&#34;&gt;Example include-style filters for Windows user directories&lt;/a&gt;
below.&lt;/p&gt;
&lt;h3 id=&#34;filters-file-writing-guidelines&#34;&gt;Filters file writing guidelines&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Begin with excluding directory trees:
&lt;ul&gt;
&lt;li&gt;e.g. &lt;code&gt;- /AppData/&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;**&lt;/code&gt; on the end is not necessary. Once a given directory level
is excluded then everything beneath it won&#39;t be looked at by rclone.&lt;/li&gt;
&lt;li&gt;Exclude such directories that are unneeded, are big, dynamically thrashed,
or where there may be access permission issues.&lt;/li&gt;
&lt;li&gt;Excluding such dirs first will make rclone operations (much) faster.&lt;/li&gt;
&lt;li&gt;Specific files may also be excluded, as with the Dropbox exclusions
example below.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Decide if it&#39;s easier (or cleaner) to:
&lt;ul&gt;
&lt;li&gt;Include select directories and therefore &lt;em&gt;exclude everything else&lt;/em&gt; -- or --&lt;/li&gt;
&lt;li&gt;Exclude select directories and therefore &lt;em&gt;include everything else&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Include select directories:
&lt;ul&gt;
&lt;li&gt;Add lines like: &lt;code&gt;+ /Documents/PersonalFiles/**&lt;/code&gt; to select which
directories to include in the sync.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;**&lt;/code&gt; on the end specifies to include the full depth of the specified tree.&lt;/li&gt;
&lt;li&gt;With Include-style filters, files at the Path1/Path2 root are not included.
They may be included with &lt;code&gt;+ /*&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Place RCLONE_TEST files within these included directory trees.
They will only be looked for in these directory trees.&lt;/li&gt;
&lt;li&gt;Finish by excluding everything else by adding &lt;code&gt;- **&lt;/code&gt; at the end
of the filters file.&lt;/li&gt;
&lt;li&gt;Disregard step 4.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Exclude select directories:
&lt;ul&gt;
&lt;li&gt;Add more lines like those in step 1.
For example: &lt;code&gt;- /Desktop/tempfiles/&lt;/code&gt;, or &lt;code&gt;- /testdir/&lt;/code&gt;.
Again, a &lt;code&gt;**&lt;/code&gt; on the end is not necessary.&lt;/li&gt;
&lt;li&gt;Do &lt;em&gt;not&lt;/em&gt; add a &lt;code&gt;- **&lt;/code&gt; in the file. Without this line, everything
will be included that has not been explicitly excluded.&lt;/li&gt;
&lt;li&gt;Disregard step 3.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;A few rules for the syntax of a filter file expanding on
&lt;a href=&#34;https://rclone.org/filtering/&#34;&gt;filtering documentation&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Lines may start with spaces and tabs - rclone strips leading whitespace.&lt;/li&gt;
&lt;li&gt;If the first non-whitespace character is a &lt;code&gt;#&lt;/code&gt; then the line is a comment
and will be ignored.&lt;/li&gt;
&lt;li&gt;Blank lines are ignored.&lt;/li&gt;
&lt;li&gt;The first non-whitespace character on a filter line must be a &lt;code&gt;+&lt;/code&gt; or &lt;code&gt;-&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Exactly 1 space is allowed between the &lt;code&gt;+/-&lt;/code&gt; and the path term.&lt;/li&gt;
&lt;li&gt;Only forward slashes (&lt;code&gt;/&lt;/code&gt;) are used in path terms, even on Windows.&lt;/li&gt;
&lt;li&gt;The rest of the line is taken as the path term.
Trailing whitespace is taken literally, and probably is an error.&lt;/li&gt;
&lt;/ul&gt;
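&lt;p&gt;A minimal fragment illustrating these rules (the paths are illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# comment lines and blank lines are ignored

- /AppData/
  + /Documents/**
- **
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The leading spaces before &lt;code&gt;+ /Documents/**&lt;/code&gt; are stripped, so the
line is treated as if it began at the left margin.&lt;/p&gt;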
&lt;h3 id=&#34;include-filters&#34;&gt;Example include-style filters for Windows user directories&lt;/h3&gt;
&lt;p&gt;This Windows &lt;em&gt;include-style&lt;/em&gt; example is based on the sync root (Path1)
set to &lt;code&gt;C:\Users\MyLogin&lt;/code&gt;. The strategy is to select specific directories
to be synched with a network drive (Path2).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;- /AppData/&lt;/code&gt; excludes an entire tree of Windows-stored data
that need not be synched.
In my case, AppData has &amp;gt;11 GB of stuff I don&#39;t care about, and there are
some subdirectories beneath AppData that are not accessible to my
user login, resulting in bisync critical aborts.&lt;/li&gt;
&lt;li&gt;Windows creates cache files starting with both upper and
lowercase &lt;code&gt;NTUSER&lt;/code&gt; at &lt;code&gt;C:\Users\MyLogin&lt;/code&gt;. These files may be dynamic,
locked, and are generally of no interest.&lt;/li&gt;
&lt;li&gt;There are just a few directories with &lt;em&gt;my&lt;/em&gt; data that I do want synched,
in the form of &lt;code&gt;+ /&amp;lt;path&amp;gt;&lt;/code&gt;. By selecting only the directory trees I
want, I avoid the dozen-plus directories that various apps create
at &lt;code&gt;C:\Users\MyLogin\Documents&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Include files in the root of the sync point, &lt;code&gt;C:\Users\MyLogin&lt;/code&gt;,
by adding the &lt;code&gt;+ /*&lt;/code&gt; line.&lt;/li&gt;
&lt;li&gt;This is an Include-style filters file, therefore it ends with &lt;code&gt;- **&lt;/code&gt;
which excludes everything not explicitly included.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;- /AppData/
- NTUSER*
- ntuser*
+ /Documents/Family/**
+ /Documents/Sketchup/**
+ /Documents/Microcapture_Photo/**
+ /Documents/Microcapture_Video/**
+ /Desktop/**
+ /Pictures/**
+ /*
- **
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note also that Windows implements several &amp;quot;library&amp;quot; links such as
&lt;code&gt;C:\Users\MyLogin\My Documents\My Music&lt;/code&gt; pointing to &lt;code&gt;C:\Users\MyLogin\Music&lt;/code&gt;.
rclone sees these as links, so you must add &lt;code&gt;--links&lt;/code&gt; to the
bisync command line if you wish to follow these links. I find that I get
permission errors when trying to follow the links, so I don&#39;t include the
rclone &lt;code&gt;--links&lt;/code&gt; flag; the price is lots of &lt;code&gt;Can&#39;t follow symlink…&lt;/code&gt;
noise from rclone about not following the links. This noise can be
quashed by adding &lt;code&gt;--quiet&lt;/code&gt; to the bisync command line.&lt;/p&gt;
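&lt;p&gt;For instance, to run without &lt;code&gt;--links&lt;/code&gt; and suppress the symlink
warnings (the path and remote name are illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync C:\Users\MyLogin GDrive: --quiet
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that &lt;code&gt;--quiet&lt;/code&gt; suppresses other non-error output as well.&lt;/p&gt;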
&lt;h2 id=&#34;exclude-filters&#34;&gt;Example exclude-style filters files for use with Dropbox&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Dropbox disallows synching the listed temporary and configuration/data files.
The &lt;code&gt;- &amp;lt;filename&amp;gt;&lt;/code&gt; filters exclude these files wherever they may occur
in the sync tree. Consider adding similar exclusions for file types
you don&#39;t need to sync, such as core dump and software build files.&lt;/li&gt;
&lt;li&gt;bisync testing creates &lt;code&gt;/testdir/&lt;/code&gt; at the top level of the sync tree,
and usually deletes the tree after the test. If a normal sync should run
while the &lt;code&gt;/testdir/&lt;/code&gt; tree exists the &lt;code&gt;--check-access&lt;/code&gt; phase may fail
due to unbalanced RCLONE_TEST files.
The &lt;code&gt;- /testdir/&lt;/code&gt; filter blocks this tree from being synched.
You don&#39;t need this exclusion if you are not doing bisync development testing.&lt;/li&gt;
&lt;li&gt;Everything else beneath the Path1/Path2 root will be synched.&lt;/li&gt;
&lt;li&gt;RCLONE_TEST files may be placed anywhere within the tree, including the root.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;example-filters-file&#34;&gt;Example filters file for Dropbox&lt;/h3&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# Filter file for use with bisync
# See https://rclone.org/filtering/ for filtering rules
# NOTICE: If you make changes to this file you MUST do a --resync run.
#         Run with --dry-run to see what changes will be made.

# Dropbox won&amp;#39;t sync some files so filter them away here.
# See https://help.dropbox.com/installs-integrations/sync-uploads/files-not-syncing
- .dropbox.attr
- ~*.tmp
- ~$*
- .~*
- desktop.ini
- .dropbox

# Used for bisync testing, so excluded from normal runs
- /testdir/

# Other example filters
#- /TiBU/
#- /Photos/
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;how-check-access-handles-filters&#34;&gt;How --check-access handles filters&lt;/h3&gt;
&lt;p&gt;At the start of a bisync run, listings are gathered for Path1 and Path2
while using the user&#39;s &lt;code&gt;--filters-file&lt;/code&gt;. During the check access phase,
bisync scans these listings for &lt;code&gt;RCLONE_TEST&lt;/code&gt; files.
Any &lt;code&gt;RCLONE_TEST&lt;/code&gt; files hidden by the &lt;code&gt;--filters-file&lt;/code&gt; are &lt;em&gt;not&lt;/em&gt; in the
listings and thus not checked during the check access phase.&lt;/p&gt;
&lt;h2 id=&#34;troubleshooting&#34;&gt;Troubleshooting&lt;/h2&gt;
&lt;h3 id=&#34;reading-bisync-logs&#34;&gt;Reading bisync logs&lt;/h3&gt;
&lt;p&gt;Here are two normal runs. The first one has a newer file on the remote.
The second has no deltas between local and remote.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;2021/05/16 00:24:38 INFO  : Synching Path1 &amp;#34;/path/to/local/tree/&amp;#34; with Path2 &amp;#34;dropbox:/&amp;#34;
2021/05/16 00:24:38 INFO  : Path1 checking for diffs
2021/05/16 00:24:38 INFO  : - Path1    File is new                         - file.txt
2021/05/16 00:24:38 INFO  : Path1:    1 changes:    1 new,    0 newer,    0 older,    0 deleted
2021/05/16 00:24:38 INFO  : Path2 checking for diffs
2021/05/16 00:24:38 INFO  : Applying changes
2021/05/16 00:24:38 INFO  : - Path1    Queue copy to Path2                 - dropbox:/file.txt
2021/05/16 00:24:38 INFO  : - Path1    Do queued copies to                 - Path2
2021/05/16 00:24:38 INFO  : Updating listings
2021/05/16 00:24:38 INFO  : Validating listings for Path1 &amp;#34;/path/to/local/tree/&amp;#34; vs Path2 &amp;#34;dropbox:/&amp;#34;
2021/05/16 00:24:38 INFO  : Bisync successful

2021/05/16 00:36:52 INFO  : Synching Path1 &amp;#34;/path/to/local/tree/&amp;#34; with Path2 &amp;#34;dropbox:/&amp;#34;
2021/05/16 00:36:52 INFO  : Path1 checking for diffs
2021/05/16 00:36:52 INFO  : Path2 checking for diffs
2021/05/16 00:36:52 INFO  : No changes found
2021/05/16 00:36:52 INFO  : Updating listings
2021/05/16 00:36:52 INFO  : Validating listings for Path1 &amp;#34;/path/to/local/tree/&amp;#34; vs Path2 &amp;#34;dropbox:/&amp;#34;
2021/05/16 00:36:52 INFO  : Bisync successful
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;dry-run-oddity&#34;&gt;Dry run oddity&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;--dry-run&lt;/code&gt; messages may indicate that bisync would try to delete some files.
For example, if a file is new on Path2 and does not exist on Path1 then
it would normally be copied to Path1, but with &lt;code&gt;--dry-run&lt;/code&gt; enabled those
copies don&#39;t happen, which leads to an attempted delete on Path2, itself
blocked by &lt;code&gt;--dry-run&lt;/code&gt;: &lt;code&gt;... Not deleting as --dry-run&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;This whole confusing situation is an artifact of the &lt;code&gt;--dry-run&lt;/code&gt; flag.
Scrutinize the proposed deletes carefully, and if the files would have been
copied to Path1 then the threatened deletes on Path2 may be disregarded.&lt;/p&gt;
&lt;h3 id=&#34;retries&#34;&gt;Retries&lt;/h3&gt;
&lt;p&gt;Rclone has built-in retries. If you run with &lt;code&gt;--verbose&lt;/code&gt; you&#39;ll see
error and retry messages such as shown below. This is usually not a bug.
If at the end of the run, you see &lt;code&gt;Bisync successful&lt;/code&gt; and not
&lt;code&gt;Bisync critical error&lt;/code&gt; or &lt;code&gt;Bisync aborted&lt;/code&gt; then the run was successful,
and you can ignore the error messages.&lt;/p&gt;
&lt;p&gt;The following run shows an intermittent fail. Lines &lt;em&gt;5&lt;/em&gt; and &lt;em&gt;6&lt;/em&gt; are
low-level messages. Line &lt;em&gt;6&lt;/em&gt; is a bubbled-up &lt;em&gt;warning&lt;/em&gt; message, conveying
the error. Rclone normally retries failing commands, so there may be
numerous such messages in the log.&lt;/p&gt;
&lt;p&gt;Since there are no final error/warning messages on line &lt;em&gt;7&lt;/em&gt;, rclone has
recovered from failure after a retry, and the overall sync was successful.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;1: 2021/05/14 00:44:12 INFO  : Synching Path1 &amp;#34;/path/to/local/tree&amp;#34; with Path2 &amp;#34;dropbox:&amp;#34;
2: 2021/05/14 00:44:12 INFO  : Path1 checking for diffs
3: 2021/05/14 00:44:12 INFO  : Path2 checking for diffs
4: 2021/05/14 00:44:12 INFO  : Path2:  113 changes:   22 new,    0 newer,    0 older,   91 deleted
5: 2021/05/14 00:44:12 ERROR : /path/to/local/tree/objects/af: error listing: unexpected end of JSON input
6: 2021/05/14 00:44:12 NOTICE: WARNING  listing try 1 failed.                 - dropbox:
7: 2021/05/14 00:44:12 INFO  : Bisync successful
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This log shows a &lt;em&gt;Critical failure&lt;/em&gt; which requires a &lt;code&gt;--resync&lt;/code&gt; to recover from.
See the &lt;a href=&#34;#error-handling&#34;&gt;Runtime Error Handling&lt;/a&gt; section.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;2021/05/12 00:49:40 INFO  : Google drive root &amp;#39;&amp;#39;: Waiting for checks to finish
2021/05/12 00:49:40 INFO  : Google drive root &amp;#39;&amp;#39;: Waiting for transfers to finish
2021/05/12 00:49:40 INFO  : Google drive root &amp;#39;&amp;#39;: not deleting files as there were IO errors
2021/05/12 00:49:40 ERROR : Attempt 3/3 failed with 3 errors and: not deleting files as there were IO errors
2021/05/12 00:49:40 ERROR : Failed to sync: not deleting files as there were IO errors
2021/05/12 00:49:40 NOTICE: WARNING  rclone sync try 3 failed.           - /path/to/local/tree/
2021/05/12 00:49:40 ERROR : Bisync aborted. Must run --resync to recover.
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;denied-downloads-of-infected-or-abusive-files&#34;&gt;Denied downloads of &amp;quot;infected&amp;quot; or &amp;quot;abusive&amp;quot; files&lt;/h3&gt;
&lt;p&gt;Google Drive has a filter for certain file types (&lt;code&gt;.exe&lt;/code&gt;, &lt;code&gt;.apk&lt;/code&gt;, et cetera)
that by default cannot be copied from Google Drive to the local filesystem.
If you are having problems, run with &lt;code&gt;--verbose&lt;/code&gt; to see specifically which
files are generating complaints. If the error is
&lt;code&gt;This file has been identified as malware or spam and cannot be downloaded&lt;/code&gt;,
consider using the flag
&lt;a href=&#34;https://rclone.org/drive/#drive-acknowledge-abuse&#34;&gt;--drive-acknowledge-abuse&lt;/a&gt;.&lt;/p&gt;
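&lt;p&gt;For example (the local path and remote name are illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync /local/files gdrive: --drive-acknowledge-abuse --verbose
&lt;/code&gt;&lt;/pre&gt;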
&lt;h3 id=&#34;gdocs&#34;&gt;Google Docs (and other files of unknown size)&lt;/h3&gt;
&lt;p&gt;As of &lt;code&gt;v1.66&lt;/code&gt;, &lt;a href=&#34;https://rclone.org/drive/#import-export-of-google-documents&#34;&gt;Google Docs&lt;/a&gt;
(including Google Sheets, Slides, etc.) are now supported in bisync, subject to
the same options, defaults, and limitations as in &lt;code&gt;rclone sync&lt;/code&gt;. When bisyncing
drive with non-drive backends, the drive -&amp;gt; non-drive direction is controlled
by &lt;a href=&#34;https://rclone.org/drive/#drive-export-formats&#34;&gt;&lt;code&gt;--drive-export-formats&lt;/code&gt;&lt;/a&gt; (default
&lt;code&gt;&amp;quot;docx,xlsx,pptx,svg&amp;quot;&lt;/code&gt;) and the non-drive -&amp;gt; drive direction is controlled by
&lt;a href=&#34;https://rclone.org/drive/#drive-import-formats&#34;&gt;&lt;code&gt;--drive-import-formats&lt;/code&gt;&lt;/a&gt; (default none.)&lt;/p&gt;
&lt;p&gt;For example, with the default export/import formats, a Google Sheet on the
drive side will be synced to an &lt;code&gt;.xlsx&lt;/code&gt; file on the non-drive side. In the
reverse direction, &lt;code&gt;.xlsx&lt;/code&gt; files with filenames that match an existing Google
Sheet will be synced to that Google Sheet, while &lt;code&gt;.xlsx&lt;/code&gt; files that do NOT
match an existing Google Sheet will be copied to drive as normal &lt;code&gt;.xlsx&lt;/code&gt; files
(without conversion to Sheets, although the Google Drive web browser UI may
still give you the option to open it as one.)&lt;/p&gt;
&lt;p&gt;If &lt;code&gt;--drive-import-formats&lt;/code&gt; is set (it&#39;s not, by default), then all of the
specified formats will be converted to Google Docs, if there is no existing
Google Doc with a matching name. Caution: such conversion can be quite lossy,
and in most cases it&#39;s probably not what you want!&lt;/p&gt;
&lt;p&gt;To bisync Google Docs as URL shortcut links (in a manner similar to &amp;quot;Drive for
Desktop&amp;quot;), use: &lt;code&gt;--drive-export-formats url&lt;/code&gt; (or
&lt;a href=&#34;https://rclone.org/drive/#exportformats:~:text=available%20Google%20Documents.-,Extension,macOS,-Standard%20options&#34;&gt;alternatives&lt;/a&gt;.)&lt;/p&gt;
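&lt;p&gt;A sketch of such a run (the local path and remote name are illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;rclone bisync /local/files gdrive: --drive-export-formats url
&lt;/code&gt;&lt;/pre&gt;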
&lt;p&gt;Note that these link files cannot be edited on the non-drive side -- you will
get errors if you try to sync an edited link file back to drive. They CAN be
deleted (it will result in deleting the corresponding Google Doc.) If you
create a &lt;code&gt;.url&lt;/code&gt; file on the non-drive side that does not match an existing
Google Doc, bisyncing it will just result in copying the literal &lt;code&gt;.url&lt;/code&gt; file
over to drive (no Google Doc will be created.) So, as a general rule of thumb,
think of them as read-only placeholders on the non-drive side, and make all
your changes on the drive side.&lt;/p&gt;
&lt;p&gt;Likewise, even with other export-formats, it is best to only move/rename Google
Docs on the drive side. This is because otherwise, bisync will interpret this
as a file deleted and another created, and accordingly, it will delete the
Google Doc and create a new file at the new path. (Whether or not that new file
is a Google Doc depends on &lt;code&gt;--drive-import-formats&lt;/code&gt;.)&lt;/p&gt;
&lt;p&gt;Lastly, take note that all Google Docs on the drive side have a size of &lt;code&gt;-1&lt;/code&gt;
and no checksum. Therefore, they cannot be reliably synced with the
&lt;code&gt;--checksum&lt;/code&gt; or &lt;code&gt;--size-only&lt;/code&gt; flags. (To be exact: they will still get
created/deleted, and bisync&#39;s delta engine will notice changes and queue them
for syncing, but the underlying sync function will consider them identical and
skip them.) To work around this, use the default (modtime and size) instead of
&lt;code&gt;--checksum&lt;/code&gt; or &lt;code&gt;--size-only&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;To ignore Google Docs entirely, use
&lt;a href=&#34;https://rclone.org/drive/#drive-skip-gdocs&#34;&gt;&lt;code&gt;--drive-skip-gdocs&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;usage-examples&#34;&gt;Usage examples&lt;/h2&gt;
&lt;h3 id=&#34;cron&#34;&gt;Cron&lt;/h3&gt;
&lt;p&gt;Rclone does not yet have a built-in capability to monitor the local file
system for changes, so bisync must simply be run periodically.
On Windows this can be done using the &lt;em&gt;Task Scheduler&lt;/em&gt;;
on Linux you can use &lt;em&gt;cron&lt;/em&gt;, which is described below.&lt;/p&gt;
&lt;p&gt;The 1st example runs a sync every 5 minutes between a local directory
and an OwnCloud server, with output logged to a runlog file:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# Minute (0-59)
#      Hour (0-23)
#           Day of Month (1-31)
#                Month (1-12 or Jan-Dec)
#                     Day of Week (0-6 or Sun-Sat)
#                         Command
  */5  *    *    *    *   /path/to/rclone bisync /local/files MyCloud: --check-access --filters-file /path/to/bisync-filters.txt --log-file /path/to/bisync.log
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;See &lt;a href=&#34;https://www.man7.org/linux/man-pages/man1/crontab.1p.html#INPUT_FILES&#34;&gt;crontab syntax&lt;/a&gt;
for the details of crontab time interval expressions.&lt;/p&gt;
&lt;p&gt;If you run &lt;code&gt;rclone bisync&lt;/code&gt; as a cron job, redirect stdout/stderr to a file.
The 2nd example runs a sync to Dropbox every hour and logs all stdout (via the &lt;code&gt;&amp;gt;&amp;gt;&lt;/code&gt;)
and stderr (via &lt;code&gt;2&amp;gt;&amp;amp;1&lt;/code&gt;) to a log file.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;0 * * * * /path/to/rclone bisync /path/to/local/dropbox Dropbox: --check-access --filters-file /home/user/filters.txt &amp;gt;&amp;gt; /path/to/logs/dropbox-run.log 2&amp;gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;sharing-an-encrypted-folder-tree-between-hosts&#34;&gt;Sharing an encrypted folder tree between hosts&lt;/h3&gt;
&lt;p&gt;bisync can keep a local folder in sync with a cloud service,
but what if you have some highly sensitive files to be synched?&lt;/p&gt;
&lt;p&gt;A typical use of a cloud service is to exchange both routine and sensitive
personal files between one&#39;s home network, one&#39;s personal notebook when on the
road, and one&#39;s work computer. The routine data is not sensitive.
For the sensitive data, configure an rclone &lt;a href=&#34;https://rclone.org/crypt/&#34;&gt;crypt remote&lt;/a&gt; to point to
a subdirectory within the local disk tree that is bisync&#39;d to Dropbox,
and then set up a bisync for this local crypt directory to a directory
outside of the main sync tree.&lt;/p&gt;
&lt;h3 id=&#34;linux-server-setup&#34;&gt;Linux server setup&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;/path/to/DBoxroot&lt;/code&gt; is the root of my local sync tree.
There are numerous subdirectories.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/path/to/DBoxroot/crypt&lt;/code&gt; is the root subdirectory for files
that are encrypted. This local directory target is set up as an
rclone crypt remote named &lt;code&gt;Dropcrypt:&lt;/code&gt;.
See &lt;a href=&#34;#rclone-conf-snippet&#34;&gt;rclone.conf&lt;/a&gt; snippet below.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/path/to/my/unencrypted/files&lt;/code&gt; is the root of my sensitive
files - not encrypted, not within the tree synched to Dropbox.&lt;/li&gt;
&lt;li&gt;To sync my local unencrypted files with the encrypted Dropbox versions
I manually run &lt;code&gt;rclone bisync /path/to/my/unencrypted/files Dropcrypt:&lt;/code&gt;.
This step could be bundled into a script to run before and after
the full Dropbox tree sync in the last step,
thus actively keeping the sensitive files in sync.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rclone bisync /path/to/DBoxroot Dropbox:&lt;/code&gt; runs periodically via cron,
keeping my full local sync tree in sync with Dropbox.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;windows-notebook-setup&#34;&gt;Windows notebook setup&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;The Dropbox client runs keeping the local tree &lt;code&gt;C:\Users\MyLogin\Dropbox&lt;/code&gt;
always in sync with Dropbox. I could have used &lt;code&gt;rclone bisync&lt;/code&gt; instead.&lt;/li&gt;
&lt;li&gt;A separate directory tree at &lt;code&gt;C:\Users\MyLogin\Documents\DropLocal&lt;/code&gt;
hosts the tree of unencrypted files/folders.&lt;/li&gt;
&lt;li&gt;To sync my local unencrypted files with the encrypted
Dropbox versions I manually run the following command:
&lt;code&gt;rclone bisync C:\Users\MyLogin\Documents\DropLocal Dropcrypt:&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The Dropbox client then syncs the changes with Dropbox.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;rclone-conf-snippet&#34;&gt;rclone.conf snippet&lt;/h3&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[Dropbox]
type = dropbox
...

[Dropcrypt]
type = crypt
remote = /path/to/DBoxroot/crypt          # on the Linux server
remote = C:\Users\MyLogin\Dropbox\crypt   # on the Windows notebook
filename_encryption = standard
directory_name_encryption = true
password = ...
...
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;testing&#34;&gt;Testing&lt;/h2&gt;
&lt;p&gt;You should read this section only if you are developing for rclone.
You need to have rclone source code locally to work with bisync tests.&lt;/p&gt;
&lt;p&gt;Bisync has a dedicated test framework implemented in the &lt;code&gt;bisync_test.go&lt;/code&gt;
file located in the rclone source tree. The test suite is based on the
&lt;code&gt;go test&lt;/code&gt; command. Series of tests are stored in subdirectories below the
&lt;code&gt;cmd/bisync/testdata&lt;/code&gt; directory. Individual tests can be invoked by their
directory name, e.g.
&lt;code&gt;go test . -case basic -remote local -remote2 gdrive: -v&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Tests will create a temporary folder on the remote and purge it afterwards.
If intermittent errors occur during a test run and rclone retries,
these errors will be captured and flagged as invalid MISCOMPAREs.
Rerunning the test will let it pass. Consider such failures as noise.&lt;/p&gt;
&lt;h3 id=&#34;test-command-syntax&#34;&gt;Test command syntax&lt;/h3&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;usage: go test ./cmd/bisync [options...]

Options:
  -case NAME        Name(s) of the test case(s) to run. Multiple names should
                    be separated by commas. You can remove the `test_` prefix
                    and replace `_` by `-` in test name for convenience.
                    If not `all`, the name(s) should map to a directory under
                    `./cmd/bisync/testdata`.
                    Use `all` to run all tests (default: all)
  -remote PATH1     `local` or name of cloud service with `:` (default: local)
  -remote2 PATH2    `local` or name of cloud service with `:` (default: local)
  -no-compare       Disable comparing test results with the golden directory
                    (default: compare)
  -no-cleanup       Disable cleanup of Path1 and Path2 testdirs.
                    Useful for troubleshooting. (default: cleanup)
  -golden           Store results in the golden directory (default: false)
                    This flag can be used with multiple tests.
  -debug            Print debug messages
  -stop-at NUM      Stop test after given step number. (default: run to the end)
                    Implies `-no-compare` and `-no-cleanup`, if the test really
                    ends prematurely. Only meaningful for a single test case.
  -refresh-times    Force refreshing the target modtime, useful for Dropbox
                    (default: false)
  -verbose          Run tests verbosely
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note: unlike rclone flags, which must be prefixed by a double dash (&lt;code&gt;--&lt;/code&gt;), the
test command flags may be prefixed by either a single or a double dash.&lt;/p&gt;
&lt;h3 id=&#34;running-tests&#34;&gt;Running tests&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;go test . -case basic -remote local -remote2 local&lt;/code&gt;
runs the &lt;code&gt;test_basic&lt;/code&gt; test case using only the local filesystem,
synching one local directory with another local directory.
Test script output is to the console, while commands within scenario.txt
have their output sent to the &lt;code&gt;.../workdir/test.log&lt;/code&gt; file,
which is finally compared to the golden copy.&lt;/li&gt;
&lt;li&gt;The first argument after &lt;code&gt;go test&lt;/code&gt; should be a relative name of the
directory containing bisync source code. If you run tests right from there,
the argument will be &lt;code&gt;.&lt;/code&gt; (current directory) as in most examples below.
If you run bisync tests from the rclone source directory, the command
should be &lt;code&gt;go test ./cmd/bisync ...&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The test engine will mangle rclone output to ensure comparability
with golden listings and logs.&lt;/li&gt;
&lt;li&gt;Test scenarios are located in &lt;code&gt;./cmd/bisync/testdata&lt;/code&gt;. The test &lt;code&gt;-case&lt;/code&gt;
argument should match the full name of a subdirectory under that
directory. Every test subdirectory name on disk must start with &lt;code&gt;test_&lt;/code&gt;,
but this prefix can be omitted on the command line for brevity. Also, underscores
in the name can be replaced by dashes for convenience.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;go test . -remote local -remote2 local -case all&lt;/code&gt; runs all tests.&lt;/li&gt;
&lt;li&gt;Path1 and Path2 may either be the keyword &lt;code&gt;local&lt;/code&gt;
or may be names of configured cloud services.
&lt;code&gt;go test . -remote gdrive: -remote2 dropbox: -case basic&lt;/code&gt;
will run the test between these two services, without transferring
any files to the local filesystem.&lt;/li&gt;
&lt;li&gt;Test run stdout and stderr console output may be directed to a file, e.g.
&lt;code&gt;go test . -remote gdrive: -remote2 local -case all &amp;gt; runlog.txt 2&amp;gt;&amp;amp;1&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;test-execution-flow&#34;&gt;Test execution flow&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;The base setup in the &lt;code&gt;initial&lt;/code&gt; directory of the testcase is applied
to the Path1 and Path2 filesystems (via an rclone copy of the initial directory
to Path1, then an rclone sync of Path1 to Path2).&lt;/li&gt;
&lt;li&gt;The commands in the scenario.txt file are applied, with output directed
to the &lt;code&gt;test.log&lt;/code&gt; file in the test working directory.
Typically, the first actual command in the &lt;code&gt;scenario.txt&lt;/code&gt; file is
to do a &lt;code&gt;--resync&lt;/code&gt;, which establishes the baseline
&lt;code&gt;{...}.path1.lst&lt;/code&gt; and &lt;code&gt;{...}.path2.lst&lt;/code&gt; files in the test working
directory (&lt;code&gt;.../workdir/&lt;/code&gt; relative to the temporary test directory).
Various commands and listing snapshots are done within the test.&lt;/li&gt;
&lt;li&gt;Finally, the contents of the test working directory are compared
to the contents of the testcase&#39;s golden directory.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;notes-about-testing&#34;&gt;Notes about testing&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Test cases are in individual directories beneath &lt;code&gt;./cmd/bisync/testdata&lt;/code&gt;.
A command line reference to a test is understood to reference a directory
beneath &lt;code&gt;testdata&lt;/code&gt;. For example,
&lt;code&gt;go test ./cmd/bisync -case dry-run -remote gdrive: -remote2 local&lt;/code&gt;
refers to the test case in &lt;code&gt;./cmd/bisync/testdata/test_dry_run&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The test working directory is located at &lt;code&gt;.../workdir&lt;/code&gt; relative to a
temporary test directory, usually under &lt;code&gt;/tmp&lt;/code&gt; on Linux.&lt;/li&gt;
&lt;li&gt;The local test sync tree is created at a temporary directory named
like &lt;code&gt;bisync.XXX&lt;/code&gt; under the system temporary directory.&lt;/li&gt;
&lt;li&gt;The remote test sync tree is located at a temporary directory
under &lt;code&gt;&amp;lt;remote:&amp;gt;/bisync.XXX/&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;path1&lt;/code&gt; and/or &lt;code&gt;path2&lt;/code&gt; subdirectories are created in a temporary
directory under the respective local or cloud test remote.&lt;/li&gt;
&lt;li&gt;By default, the Path1 and Path2 test dirs and workdir will be deleted
after each test run. The &lt;code&gt;-no-cleanup&lt;/code&gt; flag disables purging these
directories when validating and debugging a given test.
These directories will be flushed before running another test,
independent of the &lt;code&gt;-no-cleanup&lt;/code&gt; usage.&lt;/li&gt;
&lt;li&gt;You will likely want to add &lt;code&gt;- /testdir/&lt;/code&gt; to your normal
bisync &lt;code&gt;--filters-file&lt;/code&gt; so that normal syncs do not attempt to sync
the test temporary directories, which may have &lt;code&gt;RCLONE_TEST&lt;/code&gt; miscompares
in some testcases which would otherwise trip the &lt;code&gt;--check-access&lt;/code&gt; system.
The &lt;code&gt;--check-access&lt;/code&gt; mechanism is hard-coded to ignore &lt;code&gt;RCLONE_TEST&lt;/code&gt;
files beneath &lt;code&gt;bisync/testdata&lt;/code&gt;, so the test cases may reside on the
synched tree even if there are check file mismatches in the test tree.&lt;/li&gt;
&lt;li&gt;Some Dropbox tests can fail, notably printing the following message:
&lt;code&gt;src and dst identical but can&#39;t set mod time without deleting and re-uploading&lt;/code&gt;.
This is expected and happens due to the way Dropbox handles modification times.
Use the &lt;code&gt;-refresh-times&lt;/code&gt; test flag to work around this.&lt;/li&gt;
&lt;li&gt;If Dropbox tests hit the request limit and print the error message
&lt;code&gt;too_many_requests/...: Too many requests or write operations.&lt;/code&gt;
then follow the
&lt;a href=&#34;https://rclone.org/dropbox/#get-your-own-dropbox-app-id&#34;&gt;Dropbox App ID instructions&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;updating-golden-results&#34;&gt;Updating golden results&lt;/h3&gt;
&lt;p&gt;Sometimes even a slight change in the bisync source can cause small changes
spread across many log files. Updating them manually would be a nightmare.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;-golden&lt;/code&gt; flag will store the &lt;code&gt;test.log&lt;/code&gt; and &lt;code&gt;*.lst&lt;/code&gt; listings from each
test case into respective golden directories. Golden results will
automatically contain generic strings instead of local or cloud paths, which
means that they should match when run with a different cloud service.&lt;/p&gt;
&lt;p&gt;Your normal workflow might be as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Git-clone the rclone sources locally&lt;/li&gt;
&lt;li&gt;Modify bisync source and check that it builds&lt;/li&gt;
&lt;li&gt;Run the whole test suite &lt;code&gt;go test ./cmd/bisync -remote local&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;If some tests show log difference, recheck them individually, e.g.:
&lt;code&gt;go test ./cmd/bisync -remote local -case basic&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;If you are satisfied that the differences are expected, goldenize all tests at once:
&lt;code&gt;go test ./cmd/bisync -remote local -golden&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Use word diff: &lt;code&gt;git diff --word-diff ./cmd/bisync/testdata/&lt;/code&gt;.
Please note that normal line-level diff is generally useless here.&lt;/li&gt;
&lt;li&gt;Check the difference &lt;em&gt;carefully&lt;/em&gt;!&lt;/li&gt;
&lt;li&gt;Commit the change (&lt;code&gt;git commit&lt;/code&gt;) &lt;em&gt;only&lt;/em&gt; if you are sure.
If unsure, save your code changes then wipe the log diffs from git:
&lt;code&gt;git reset [--hard]&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;structure-of-test-scenarios&#34;&gt;Structure of test scenarios&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;&amp;lt;testname&amp;gt;/initial/&lt;/code&gt; contains a tree of files that will be set
as the initial condition on both Path1 and Path2 testdirs.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&amp;lt;testname&amp;gt;/modfiles/&lt;/code&gt; contains files that will be used to
modify the Path1 and/or Path2 filesystems.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&amp;lt;testname&amp;gt;/golden/&lt;/code&gt; contains the expected content of the test
working directory (&lt;code&gt;workdir&lt;/code&gt;) at the completion of the testcase.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&amp;lt;testname&amp;gt;/scenario.txt&lt;/code&gt; contains the body of the test, in the form of
various commands to modify files, run bisync, and snapshot listings.
Output from these commands is captured to &lt;code&gt;.../workdir/test.log&lt;/code&gt;
for comparison to the golden files.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;supported-test-commands&#34;&gt;Supported test commands&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;test &amp;lt;some message&amp;gt;&lt;/code&gt;
Print the line to the console and to the &lt;code&gt;test.log&lt;/code&gt;:
&lt;code&gt;test sync is working correctly with options x, y, z&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;copy-listings &amp;lt;prefix&amp;gt;&lt;/code&gt;
Save a copy of all &lt;code&gt;.lst&lt;/code&gt; listings in the test working directory
with the specified prefix:
&lt;code&gt;copy-listings exclude-pass-run&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;move-listings &amp;lt;prefix&amp;gt;&lt;/code&gt;
Similar to &lt;code&gt;copy-listings&lt;/code&gt; but removes the source&lt;/li&gt;
&lt;li&gt;&lt;code&gt;purge-children &amp;lt;dir&amp;gt;&lt;/code&gt;
This will delete all child files and purge all child subdirs under given
directory but keep the parent intact. This behavior is important for tests
with Google Drive because removing and re-creating the parent would change
its ID.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;delete-file &amp;lt;file&amp;gt;&lt;/code&gt;
Delete a single file.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;delete-glob &amp;lt;dir&amp;gt; &amp;lt;pattern&amp;gt;&lt;/code&gt;
Delete a group of files located one level deep in the given directory
with names matching a given glob pattern.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;touch-glob YYYY-MM-DD &amp;lt;dir&amp;gt; &amp;lt;pattern&amp;gt;&lt;/code&gt;
Change modification time on a group of files.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;touch-copy YYYY-MM-DD &amp;lt;source-file&amp;gt; &amp;lt;dest-dir&amp;gt;&lt;/code&gt;
Change file modification time then copy it to destination.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;copy-file &amp;lt;source-file&amp;gt; &amp;lt;dest-dir&amp;gt;&lt;/code&gt;
Copy a single file to given directory.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;copy-as &amp;lt;source-file&amp;gt; &amp;lt;dest-file&amp;gt;&lt;/code&gt;
Similar to above, but the destination must include both the directory
and the new file name.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;copy-dir &amp;lt;src&amp;gt; &amp;lt;dst&amp;gt;&lt;/code&gt; and &lt;code&gt;sync-dir &amp;lt;src&amp;gt; &amp;lt;dst&amp;gt;&lt;/code&gt;
Copy/sync a directory. Equivalent of &lt;code&gt;rclone copy&lt;/code&gt; and &lt;code&gt;rclone sync&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;list-dirs &amp;lt;dir&amp;gt;&lt;/code&gt;
Equivalent to &lt;code&gt;rclone lsf -R --dirs-only &amp;lt;dir&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;bisync [options]&lt;/code&gt;
Runs bisync against &lt;code&gt;-remote&lt;/code&gt; and &lt;code&gt;-remote2&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;supported-substitution-terms&#34;&gt;Supported substitution terms&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;{testdir/}&lt;/code&gt; - the root dir of the testcase&lt;/li&gt;
&lt;li&gt;&lt;code&gt;{datadir/}&lt;/code&gt; - the &lt;code&gt;modfiles&lt;/code&gt; dir under the testcase root&lt;/li&gt;
&lt;li&gt;&lt;code&gt;{workdir/}&lt;/code&gt; - the temporary test working directory&lt;/li&gt;
&lt;li&gt;&lt;code&gt;{path1/}&lt;/code&gt; - the root of the Path1 test directory tree&lt;/li&gt;
&lt;li&gt;&lt;code&gt;{path2/}&lt;/code&gt; - the root of the Path2 test directory tree&lt;/li&gt;
&lt;li&gt;&lt;code&gt;{session}&lt;/code&gt; - base name of the test listings&lt;/li&gt;
&lt;li&gt;&lt;code&gt;{/}&lt;/code&gt; - OS-specific path separator&lt;/li&gt;
&lt;li&gt;&lt;code&gt;{spc}&lt;/code&gt;, &lt;code&gt;{tab}&lt;/code&gt;, &lt;code&gt;{eol}&lt;/code&gt; - whitespace&lt;/li&gt;
&lt;li&gt;&lt;code&gt;{chr:HH}&lt;/code&gt; - raw byte with given hexadecimal code&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Substitution results of the terms named like &lt;code&gt;{dir/}&lt;/code&gt; will end with
&lt;code&gt;/&lt;/code&gt; (or backslash on Windows), so it is not necessary to include
a slash after them, for example &lt;code&gt;delete-file {path1/}file1.txt&lt;/code&gt;.&lt;/p&gt;
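&lt;p&gt;Putting the commands and substitution terms together, a short
hypothetical &lt;code&gt;scenario.txt&lt;/code&gt; fragment (illustrative only, not taken
from a real testcase) might look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;test copy and delete round trip
bisync resync
copy-file {datadir/}file1.txt {path1/}
bisync
delete-file {path2/}file1.txt
bisync
copy-listings after-delete
&lt;/code&gt;&lt;/pre&gt;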
&lt;h2 id=&#34;benchmarks&#34;&gt;Benchmarks&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;This section is work in progress.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Here are a few data points for scale, execution times, and memory usage.&lt;/p&gt;
&lt;p&gt;The first set of data was taken between a local disk to Dropbox.
The &lt;a href=&#34;https://speedtest.net&#34;&gt;speedtest.net&lt;/a&gt; download speed was ~170 Mbps,
and upload speed was ~10 Mbps. 500 files (~9.5 MB each) had already been
synched. 50 files were added in a new directory, each ~9.5 MB, ~475 MB total.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Change&lt;/th&gt;
&lt;th&gt;Operations and times&lt;/th&gt;
&lt;th&gt;Overall run time&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;500 files synched (nothing to move)&lt;/td&gt;
&lt;td&gt;1x listings for Path1 &amp;amp; Path2&lt;/td&gt;
&lt;td&gt;1.5 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500 files synched with --check-access&lt;/td&gt;
&lt;td&gt;1x listings for Path1 &amp;amp; Path2&lt;/td&gt;
&lt;td&gt;1.5 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;50 new files on remote&lt;/td&gt;
&lt;td&gt;Queued 50 copies down: 27 sec&lt;/td&gt;
&lt;td&gt;29 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Moved local dir&lt;/td&gt;
&lt;td&gt;Queued 50 copies up: 410 sec, 50 deletes up: 9 sec&lt;/td&gt;
&lt;td&gt;421 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Moved remote dir&lt;/td&gt;
&lt;td&gt;Queued 50 copies down: 31 sec, 50 deletes down: &amp;lt;1 sec&lt;/td&gt;
&lt;td&gt;33 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Delete local dir&lt;/td&gt;
&lt;td&gt;Queued 50 deletes up: 9 sec&lt;/td&gt;
&lt;td&gt;13 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;This next data is from a user&#39;s application. They had ~400GB of data
over 1.96 million files being sync&#39;ed between a Windows local disk and some
remote cloud. The file full path length was on average 35 characters
(which factors into load time and RAM required).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Loading the prior listing into memory (1.96 million files, listing file
size 140 MB) took ~30 sec and occupied about 1 GB of RAM.&lt;/li&gt;
&lt;li&gt;Getting a fresh listing of the local file system (producing the
140 MB output file) took about XXX sec.&lt;/li&gt;
&lt;li&gt;Getting a fresh listing of the remote file system (producing the 140 MB
output file) took about XXX sec. The network download speed was measured
at XXX Mb/s.&lt;/li&gt;
&lt;li&gt;Once the prior and current Path1 and Path2 listings were loaded (a total
of four to be loaded, two at a time), determining the deltas was pretty
quick (a few seconds for this test case), and the transfer time for any
files to be copied was dominated by the network bandwidth.&lt;/li&gt;
&lt;/ul&gt;
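&lt;p&gt;As a rough sanity check, the per-entry costs implied by the figures above
(1.96 million files, a 140 MB listing file, ~1 GB of RAM when loaded) can
be sketched like this:&lt;/p&gt;

```python
# Back-of-envelope per-entry costs implied by the figures above.
# Assumptions: 1.96 million files, 140 MB listing file, ~1 GB RAM when loaded.
files = 1_960_000
listing_bytes = 140 * 1_000_000
ram_bytes = 1_000_000_000

bytes_per_listing_line = listing_bytes / files  # size of one line in the .lst file
ram_per_entry = ram_bytes / files               # in-memory footprint of one entry

print(round(bytes_per_listing_line))  # ~71 bytes per listing line
print(round(ram_per_entry))           # ~510 bytes per in-memory entry
```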
&lt;h2 id=&#34;references&#34;&gt;References&lt;/h2&gt;
&lt;p&gt;rclone&#39;s bisync implementation was derived from
the &lt;a href=&#34;https://github.com/cjnaz/rclonesync-V2&#34;&gt;rclonesync-V2&lt;/a&gt; project,
including documentation and test mechanisms,
with &lt;a href=&#34;https://github.com/cjnaz&#34;&gt;@cjnaz&lt;/a&gt;&#39;s full support and encouragement.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;rclone bisync&lt;/code&gt; is similar in nature to a range of other projects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/bcpierce00/unison&#34;&gt;unison&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/syncthing/syncthing&#34;&gt;syncthing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/cjnaz/rclonesync-V2&#34;&gt;cjnaz/rclonesync&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/ConorWilliams/rsinc&#34;&gt;ConorWilliams/rsinc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/Jwink3101/syncrclone&#34;&gt;jwink3101/syncrclone&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/DavideRossi/upback&#34;&gt;DavideRossi/upback&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Bisync adopts the differential synchronization technique, which is
based on keeping a history of changes performed by both synchronizing sides.
See the &lt;em&gt;Dual Shadow Method&lt;/em&gt; section in
&lt;a href=&#34;https://neil.fraser.name/writing/sync/&#34;&gt;Neil Fraser&#39;s article&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Also note a number of academic publications by
&lt;a href=&#34;http://www.cis.upenn.edu/%7Ebcpierce/papers/index.shtml#File%20Synchronization&#34;&gt;Benjamin Pierce&lt;/a&gt;
about &lt;em&gt;Unison&lt;/em&gt; and synchronization in general.&lt;/p&gt;
&lt;h2 id=&#34;changelog&#34;&gt;Changelog&lt;/h2&gt;
&lt;h3 id=&#34;v1-68&#34;&gt;&lt;code&gt;v1.68&lt;/code&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Fixed an issue affecting backends that round modtimes to a lower precision.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;v1-67&#34;&gt;&lt;code&gt;v1.67&lt;/code&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Added integration tests against all backends.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;v1-66&#34;&gt;&lt;code&gt;v1.66&lt;/code&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Copies and deletes are now handled in one operation instead of two&lt;/li&gt;
&lt;li&gt;&lt;code&gt;--track-renames&lt;/code&gt; and &lt;code&gt;--backup-dir&lt;/code&gt; are now supported&lt;/li&gt;
&lt;li&gt;Partial uploads known issue on &lt;code&gt;local&lt;/code&gt;/&lt;code&gt;ftp&lt;/code&gt;/&lt;code&gt;sftp&lt;/code&gt; has been resolved (unless using &lt;code&gt;--inplace&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Final listings are now generated from sync results, to avoid needing to re-list&lt;/li&gt;
&lt;li&gt;Bisync is now much more resilient to changes that happen during a bisync run, and far less prone to critical errors / undetected changes&lt;/li&gt;
&lt;li&gt;Bisync is now capable of rolling a file listing back in cases of uncertainty, essentially marking the file as needing to be rechecked next time.&lt;/li&gt;
&lt;li&gt;A few basic terminal colors are now supported, controllable with &lt;a href=&#34;https://rclone.org/docs/#color-when&#34;&gt;&lt;code&gt;--color&lt;/code&gt;&lt;/a&gt; (&lt;code&gt;AUTO&lt;/code&gt;|&lt;code&gt;NEVER&lt;/code&gt;|&lt;code&gt;ALWAYS&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Initial listing snapshots of Path1 and Path2 are now generated concurrently, using the same &amp;quot;march&amp;quot; infrastructure as &lt;code&gt;check&lt;/code&gt; and &lt;code&gt;sync&lt;/code&gt;,
for performance improvements and less &lt;a href=&#34;https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=4.%20Listings%20should%20alternate%20between%20paths%20to%20minimize%20errors&#34;&gt;risk of error&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Fixed handling of unicode normalization and case insensitivity, support for &lt;a href=&#34;https://rclone.org/docs/#fix-case&#34;&gt;&lt;code&gt;--fix-case&lt;/code&gt;&lt;/a&gt;, &lt;a href=&#34;https://rclone.org/docs/#ignore-case-sync&#34;&gt;&lt;code&gt;--ignore-case-sync&lt;/code&gt;&lt;/a&gt;, &lt;a href=&#34;https://rclone.org/docs/#no-unicode-normalization&#34;&gt;&lt;code&gt;--no-unicode-normalization&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;--resync&lt;/code&gt; is now much more efficient (especially for users of &lt;code&gt;--create-empty-src-dirs&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Google Docs (and other files of unknown size) are now supported (with the same options as in &lt;code&gt;sync&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Equality checks before a sync conflict rename now fall back to &lt;code&gt;cryptcheck&lt;/code&gt; (when possible) or &lt;code&gt;--download&lt;/code&gt;,
instead of &lt;code&gt;--size-only&lt;/code&gt;, when &lt;code&gt;check&lt;/code&gt; is not available.&lt;/li&gt;
&lt;li&gt;Bisync no longer fails to find the correct listing file when configs are overridden with backend-specific flags.&lt;/li&gt;
&lt;li&gt;Bisync now fully supports comparing based on any combination of size, modtime, and checksum, lifting the prior restriction on backends without modtime support.&lt;/li&gt;
&lt;li&gt;Bisync now supports a &amp;quot;Graceful Shutdown&amp;quot; mode to cleanly cancel a run early without requiring &lt;code&gt;--resync&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;New &lt;code&gt;--recover&lt;/code&gt; flag allows robust recovery in the event of interruptions, without requiring &lt;code&gt;--resync&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;A new &lt;code&gt;--max-lock&lt;/code&gt; setting allows lock files to automatically renew and expire, for better automatic recovery when a run is interrupted.&lt;/li&gt;
&lt;li&gt;Bisync now supports auto-resolving sync conflicts and customizing rename behavior with new &lt;a href=&#34;#conflict-resolve&#34;&gt;&lt;code&gt;--conflict-resolve&lt;/code&gt;&lt;/a&gt;, &lt;a href=&#34;#conflict-loser&#34;&gt;&lt;code&gt;--conflict-loser&lt;/code&gt;&lt;/a&gt;, and &lt;a href=&#34;#conflict-suffix&#34;&gt;&lt;code&gt;--conflict-suffix&lt;/code&gt;&lt;/a&gt; flags.&lt;/li&gt;
&lt;li&gt;A new &lt;a href=&#34;#resync-mode&#34;&gt;&lt;code&gt;--resync-mode&lt;/code&gt;&lt;/a&gt; flag allows more control over which version of a file gets kept during a &lt;code&gt;--resync&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Bisync now supports &lt;a href=&#34;https://rclone.org/docs/#retries-int&#34;&gt;&lt;code&gt;--retries&lt;/code&gt;&lt;/a&gt; and &lt;a href=&#34;https://rclone.org/docs/#retries-sleep-time&#34;&gt;&lt;code&gt;--retries-sleep&lt;/code&gt;&lt;/a&gt; (when &lt;a href=&#34;#resilient&#34;&gt;&lt;code&gt;--resilient&lt;/code&gt;&lt;/a&gt; is set.)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;v1-64&#34;&gt;&lt;code&gt;v1.64&lt;/code&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Fixed an &lt;a href=&#34;https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=1.%20Dry%20runs%20are%20not%20completely%20dry&#34;&gt;issue&lt;/a&gt;
causing dry runs to inadvertently commit filter changes&lt;/li&gt;
&lt;li&gt;Fixed an &lt;a href=&#34;https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=2.%20%2D%2Dresync%20deletes%20data%2C%20contrary%20to%20docs&#34;&gt;issue&lt;/a&gt;
causing &lt;code&gt;--resync&lt;/code&gt; to erroneously delete empty folders and duplicate files unique to Path2&lt;/li&gt;
&lt;li&gt;&lt;code&gt;--check-access&lt;/code&gt; is now enforced during &lt;code&gt;--resync&lt;/code&gt;, preventing data loss in &lt;a href=&#34;https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=%2D%2Dcheck%2Daccess%20doesn%27t%20always%20fail%20when%20it%20should&#34;&gt;certain user error scenarios&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Fixed an &lt;a href=&#34;https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=5.%20Bisync%20reads%20files%20in%20excluded%20directories%20during%20delete%20operations&#34;&gt;issue&lt;/a&gt;
causing bisync to consider more files than necessary due to overbroad filters during delete operations&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=1.%20Identical%20files%20should%20be%20left%20alone%2C%20even%20if%20new/newer/changed%20on%20both%20sides&#34;&gt;Improved detection of false positive change conflicts&lt;/a&gt;
(identical files are now left alone instead of renamed)&lt;/li&gt;
&lt;li&gt;Added &lt;a href=&#34;https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=3.%20Bisync%20should%20create/delete%20empty%20directories%20as%20sync%20does%2C%20when%20%2D%2Dcreate%2Dempty%2Dsrc%2Ddirs%20is%20passed&#34;&gt;support for &lt;code&gt;--create-empty-src-dirs&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Added experimental &lt;code&gt;--resilient&lt;/code&gt; mode to allow &lt;a href=&#34;https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=2.%20Bisync%20should%20be%20more%20resilient%20to%20self%2Dcorrectable%20errors&#34;&gt;recovery from self-correctable errors&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Added &lt;a href=&#34;https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=6.%20%2D%2Dignore%2Dchecksum%20should%20be%20split%20into%20two%20flags%20for%20separate%20purposes&#34;&gt;new &lt;code&gt;--ignore-listing-checksum&lt;/code&gt; flag&lt;/a&gt;
to distinguish from &lt;code&gt;--ignore-checksum&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=6.%20Deletes%20take%20several%20times%20longer%20than%20copies&#34;&gt;Performance improvements&lt;/a&gt; for large remotes&lt;/li&gt;
&lt;li&gt;Documentation and testing improvements&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>Box</title>
      <link>https://rclone.org/box/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/box/</guid>
      <description>&lt;h1 id=&#34;hahahugoshortcode3s0hbhb-box&#34;&gt;&lt;i class=&#34;fa fa-archive&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt;
 Box&lt;/h1&gt;
&lt;p&gt;Paths are specified as &lt;code&gt;remote:path&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Paths may be as deep as required, e.g. &lt;code&gt;remote:directory/subdirectory&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The initial setup for Box involves getting a token from Box which you
can do either in your browser, or with a config.json downloaded from Box
to use JWT authentication.  &lt;code&gt;rclone config&lt;/code&gt; walks you through it.&lt;/p&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;Here is an example of how to make a remote called &lt;code&gt;remote&lt;/code&gt;.  First run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Box
   \ &amp;#34;box&amp;#34;
[snip]
Storage&amp;gt; box
Box App Client Id - leave blank normally.
client_id&amp;gt; 
Box App Client Secret - leave blank normally.
client_secret&amp;gt;
Box App config.json location
Leave blank normally.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
box_config_file&amp;gt;
Box App Primary Access Token
Leave blank normally.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
access_token&amp;gt;

Enter a string value. Press Enter for the default (&amp;#34;user&amp;#34;).
Choose a number from below, or type in your own value
 1 / Rclone should act on behalf of a user
   \ &amp;#34;user&amp;#34;
 2 / Rclone should act on behalf of a service account
   \ &amp;#34;enterprise&amp;#34;
box_sub_type&amp;gt;
Remote config
Use web browser to automatically authenticate rclone with remote?
 * Say Y if the machine running rclone has a web browser you can use
 * Say N if running rclone on a (remote) machine without web browser access
If not sure try Y. If Y failed, try N.
y) Yes
n) No
y/n&amp;gt; y
If your browser doesn&amp;#39;t open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
Configuration complete.
Options:
- type: box
- client_id:
- client_secret:
- token: {&amp;#34;access_token&amp;#34;:&amp;#34;XXX&amp;#34;,&amp;#34;token_type&amp;#34;:&amp;#34;bearer&amp;#34;,&amp;#34;refresh_token&amp;#34;:&amp;#34;XXX&amp;#34;,&amp;#34;expiry&amp;#34;:&amp;#34;XXX&amp;#34;}
Keep this &amp;#34;remote&amp;#34; remote?
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/remote_setup/&#34;&gt;remote setup docs&lt;/a&gt; for how to set it up on a
machine with no Internet browser available.&lt;/p&gt;
&lt;p&gt;Note that rclone runs a webserver on your local machine to collect the
token as returned from Box. This only runs from the moment it opens
your browser to the moment you get back the verification code.  This
is on &lt;code&gt;http://127.0.0.1:53682/&lt;/code&gt; and may require you to unblock
it temporarily if you are running a host firewall.&lt;/p&gt;
&lt;p&gt;Once configured you can then use &lt;code&gt;rclone&lt;/code&gt; like this,&lt;/p&gt;
&lt;p&gt;List directories in top level of your Box&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone lsd remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;List all the files in your Box&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone ls remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To copy a local directory to a Box directory called backup&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone copy /home/source remote:backup
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;using-rclone-with-an-enterprise-account-with-sso&#34;&gt;Using rclone with an Enterprise account with SSO&lt;/h3&gt;
&lt;p&gt;If you have an &amp;quot;Enterprise&amp;quot; account type with Box with single sign on
(SSO), you need to create a password to use Box with rclone. This can
be done at your Enterprise Box account by going to Settings, &amp;quot;Account&amp;quot;
Tab, and then set the password in the &amp;quot;Authentication&amp;quot; field.&lt;/p&gt;
&lt;p&gt;Once you have done this, you can set up your Enterprise Box account
using the same procedure detailed above, entering the password you
have just set.&lt;/p&gt;
&lt;h3 id=&#34;invalid-refresh-token&#34;&gt;Invalid refresh token&lt;/h3&gt;
&lt;p&gt;According to the &lt;a href=&#34;https://developer.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-tokens&#34;&gt;box docs&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Each refresh_token is valid for one use in 60 days.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This means that if you&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Don&#39;t use the box remote for 60 days&lt;/li&gt;
&lt;li&gt;Copy the config file with a box refresh token in and use it in two places&lt;/li&gt;
&lt;li&gt;Get an error on a token refresh&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;then rclone will return an error which includes the text &lt;code&gt;Invalid refresh token&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;To fix this you will need to use oauth2 again to update the refresh
token.  You can use the methods in &lt;a href=&#34;https://rclone.org/remote_setup/&#34;&gt;the remote setup
docs&lt;/a&gt;, bearing in mind that if you use the copy-the-config-file
method, you should not use that remote on the computer you
did the authentication on.&lt;/p&gt;
&lt;p&gt;Here is how to do it.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone config
Current remotes:

Name                 Type
====                 ====
remote               box

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q&amp;gt; e
Choose a number from below, or type in an existing value
 1 &amp;gt; remote
remote&amp;gt; remote
Configuration complete.
Options:
- type: box
- token: {&amp;#34;access_token&amp;#34;:&amp;#34;XXX&amp;#34;,&amp;#34;token_type&amp;#34;:&amp;#34;bearer&amp;#34;,&amp;#34;refresh_token&amp;#34;:&amp;#34;XXX&amp;#34;,&amp;#34;expiry&amp;#34;:&amp;#34;2017-07-08T23:40:08.059167677+01:00&amp;#34;}
Keep this &amp;#34;remote&amp;#34; remote?
Edit remote
Value &amp;#34;client_id&amp;#34; = &amp;#34;&amp;#34;
Edit? (y/n)&amp;gt;
y) Yes
n) No
y/n&amp;gt; n
Value &amp;#34;client_secret&amp;#34; = &amp;#34;&amp;#34;
Edit? (y/n)&amp;gt;
y) Yes
n) No
y/n&amp;gt; n
Remote config
Already have a token - refresh?
y) Yes
n) No
y/n&amp;gt; y
Use web browser to automatically authenticate rclone with remote?
 * Say Y if the machine running rclone has a web browser you can use
 * Say N if running rclone on a (remote) machine without web browser access
If not sure try Y. If Y failed, try N.
y) Yes
n) No
y/n&amp;gt; y
If your browser doesn&amp;#39;t open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
Configuration complete.
Options:
- type: box
- token: {&amp;#34;access_token&amp;#34;:&amp;#34;YYY&amp;#34;,&amp;#34;token_type&amp;#34;:&amp;#34;bearer&amp;#34;,&amp;#34;refresh_token&amp;#34;:&amp;#34;YYY&amp;#34;,&amp;#34;expiry&amp;#34;:&amp;#34;2017-07-23T12:22:29.259137901+01:00&amp;#34;}
Keep this &amp;#34;remote&amp;#34; remote?
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;modification-times-and-hashes&#34;&gt;Modification times and hashes&lt;/h3&gt;
&lt;p&gt;Box allows modification times to be set on objects accurate to 1
second.  These will be used to detect whether objects need syncing or
not.&lt;/p&gt;
&lt;p&gt;Box supports SHA1 type hashes, so you can use the &lt;code&gt;--checksum&lt;/code&gt;
flag.&lt;/p&gt;
&lt;h3 id=&#34;restricted-filename-characters&#34;&gt;Restricted filename characters&lt;/h3&gt;
&lt;p&gt;In addition to the &lt;a href=&#34;https://rclone.org/overview/#restricted-characters&#34;&gt;default restricted characters set&lt;/a&gt;
the following characters are also replaced:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Character&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Value&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Replacement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;\&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x5C&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;＼&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;File names also cannot end with the following characters.
These only get replaced if they are the last character in the name:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Character&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Value&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Replacement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SP&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x20&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;␠&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Invalid UTF-8 bytes will also be &lt;a href=&#34;https://rclone.org/overview/#invalid-utf8&#34;&gt;replaced&lt;/a&gt;,
as they can&#39;t be used in JSON strings.&lt;/p&gt;
&lt;h3 id=&#34;transfers&#34;&gt;Transfers&lt;/h3&gt;
&lt;p&gt;For files above 50 MiB rclone will use a chunked transfer.  Rclone will
upload up to &lt;code&gt;--transfers&lt;/code&gt; chunks at the same time (shared among all
the multipart uploads).  Chunks are buffered in memory and are
normally 8 MiB so increasing &lt;code&gt;--transfers&lt;/code&gt; will increase memory use.&lt;/p&gt;
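&lt;p&gt;As an illustration (the flag value and paths below are examples, not
recommendations), memory use scales roughly with the number of chunks
in flight:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 4 simultaneous 8 MiB chunks use roughly 32 MiB of chunk buffers
rclone copy /local/path remote:backup --transfers 4
&lt;/code&gt;&lt;/pre&gt;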
&lt;h3 id=&#34;deleting-files&#34;&gt;Deleting files&lt;/h3&gt;
&lt;p&gt;Depending on the enterprise settings for your user, the item will
either be actually deleted from Box or moved to the trash.&lt;/p&gt;
&lt;p&gt;Emptying the trash is supported via the rclone cleanup command,
however this deletes every trashed file and folder individually, so it
may take a very long time.
Emptying the trash via the WebUI does not have this limitation,
so it is advised to empty the trash via the WebUI.&lt;/p&gt;
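&lt;p&gt;If you do want to empty the trash from rclone anyway (assuming a remote
named &lt;code&gt;remote:&lt;/code&gt;), it is done with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone cleanup remote:
&lt;/code&gt;&lt;/pre&gt;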
&lt;h3 id=&#34;root-folder-id&#34;&gt;Root folder ID&lt;/h3&gt;
&lt;p&gt;You can set the &lt;code&gt;root_folder_id&lt;/code&gt; for rclone.  This is the directory
(identified by its &lt;code&gt;Folder ID&lt;/code&gt;) that rclone considers to be the root
of your Box drive.&lt;/p&gt;
&lt;p&gt;Normally you will leave this blank and rclone will determine the
correct root to use itself.&lt;/p&gt;
&lt;p&gt;However you can set this to restrict rclone to a specific folder
hierarchy.&lt;/p&gt;
&lt;p&gt;In order to do this you will have to find the &lt;code&gt;Folder ID&lt;/code&gt; of the
directory you wish rclone to display.  This will be the last segment
of the URL when you open the relevant folder in the Box web
interface.&lt;/p&gt;
&lt;p&gt;So if the folder you want rclone to use has a URL which looks like
&lt;code&gt;https://app.box.com/folder/11xxxxxxxxx8&lt;/code&gt;
in the browser, then you use &lt;code&gt;11xxxxxxxxx8&lt;/code&gt; as
the &lt;code&gt;root_folder_id&lt;/code&gt; in the config.&lt;/p&gt;
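&lt;p&gt;For example, using the folder ID above with a remote named &lt;code&gt;remote&lt;/code&gt;
(both are placeholders), the config file entry might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[remote]
type = box
root_folder_id = 11xxxxxxxxx8
&lt;/code&gt;&lt;/pre&gt;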

&lt;h3 id=&#34;standard-options&#34;&gt;Standard options&lt;/h3&gt;
&lt;p&gt;Here are the Standard options specific to box (Box).&lt;/p&gt;
&lt;h4 id=&#34;box-client-id&#34;&gt;--box-client-id&lt;/h4&gt;
&lt;p&gt;OAuth Client Id.&lt;/p&gt;
&lt;p&gt;Leave blank normally.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      client_id&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_CLIENT_ID&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;box-client-secret&#34;&gt;--box-client-secret&lt;/h4&gt;
&lt;p&gt;OAuth Client Secret.&lt;/p&gt;
&lt;p&gt;Leave blank normally.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      client_secret&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_CLIENT_SECRET&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;box-box-config-file&#34;&gt;--box-box-config-file&lt;/h4&gt;
&lt;p&gt;Box App config.json location.&lt;/p&gt;
&lt;p&gt;Leave blank normally.&lt;/p&gt;
&lt;p&gt;Leading &lt;code&gt;~&lt;/code&gt; will be expanded in the file name as will environment variables such as &lt;code&gt;${RCLONE_CONFIG_DIR}&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      box_config_file&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_BOX_CONFIG_FILE&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;box-access-token&#34;&gt;--box-access-token&lt;/h4&gt;
&lt;p&gt;Box App Primary Access Token.&lt;/p&gt;
&lt;p&gt;Leave blank normally.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      access_token&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_ACCESS_TOKEN&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;box-box-sub-type&#34;&gt;--box-box-sub-type&lt;/h4&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      box_sub_type&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_BOX_SUB_TYPE&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Default:     &amp;quot;user&amp;quot;&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;user&amp;quot;
&lt;ul&gt;
&lt;li&gt;Rclone should act on behalf of a user.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;enterprise&amp;quot;
&lt;ul&gt;
&lt;li&gt;Rclone should act on behalf of a service account.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;advanced-options&#34;&gt;Advanced options&lt;/h3&gt;
&lt;p&gt;Here are the Advanced options specific to box (Box).&lt;/p&gt;
&lt;h4 id=&#34;box-token&#34;&gt;--box-token&lt;/h4&gt;
&lt;p&gt;OAuth Access Token as a JSON blob.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      token&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_TOKEN&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;box-auth-url&#34;&gt;--box-auth-url&lt;/h4&gt;
&lt;p&gt;Auth server URL.&lt;/p&gt;
&lt;p&gt;Leave blank to use the provider defaults.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      auth_url&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_AUTH_URL&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;box-token-url&#34;&gt;--box-token-url&lt;/h4&gt;
&lt;p&gt;Token server URL.&lt;/p&gt;
&lt;p&gt;Leave blank to use the provider defaults.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      token_url&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_TOKEN_URL&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;box-root-folder-id&#34;&gt;--box-root-folder-id&lt;/h4&gt;
&lt;p&gt;Fill in for rclone to use a non root folder as its starting point.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      root_folder_id&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_ROOT_FOLDER_ID&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Default:     &amp;quot;0&amp;quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;box-upload-cutoff&#34;&gt;--box-upload-cutoff&lt;/h4&gt;
&lt;p&gt;Cutoff for switching to multipart upload (&amp;gt;= 50 MiB).&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      upload_cutoff&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_UPLOAD_CUTOFF&lt;/li&gt;
&lt;li&gt;Type:        SizeSuffix&lt;/li&gt;
&lt;li&gt;Default:     50Mi&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;box-commit-retries&#34;&gt;--box-commit-retries&lt;/h4&gt;
&lt;p&gt;Max number of times to try committing a multipart file.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      commit_retries&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_COMMIT_RETRIES&lt;/li&gt;
&lt;li&gt;Type:        int&lt;/li&gt;
&lt;li&gt;Default:     100&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;box-list-chunk&#34;&gt;--box-list-chunk&lt;/h4&gt;
&lt;p&gt;Size of listing chunk 1-1000.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      list_chunk&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_LIST_CHUNK&lt;/li&gt;
&lt;li&gt;Type:        int&lt;/li&gt;
&lt;li&gt;Default:     1000&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;box-owned-by&#34;&gt;--box-owned-by&lt;/h4&gt;
&lt;p&gt;Only show items owned by the login (email address) passed in.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      owned_by&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_OWNED_BY&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;box-impersonate&#34;&gt;--box-impersonate&lt;/h4&gt;
&lt;p&gt;Impersonate this user ID when using a service account.&lt;/p&gt;
&lt;p&gt;Setting this flag allows rclone, when using a JWT service account, to
act on behalf of another user by setting the as-user header.&lt;/p&gt;
&lt;p&gt;The user ID is the Box identifier for a user. User IDs can be found for
any user via the GET /users endpoint, which is only available to
admins, or by calling the GET /users/me endpoint with an authenticated
user session.&lt;/p&gt;
&lt;p&gt;See: &lt;a href=&#34;https://developer.box.com/guides/authentication/jwt/as-user/&#34;&gt;https://developer.box.com/guides/authentication/jwt/as-user/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      impersonate&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_IMPERSONATE&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
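&lt;p&gt;As a sketch (the remote name and user ID are placeholders), impersonation
can be enabled for a single command like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone lsd remote: --box-impersonate 123456
&lt;/code&gt;&lt;/pre&gt;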
&lt;h4 id=&#34;box-encoding&#34;&gt;--box-encoding&lt;/h4&gt;
&lt;p&gt;The encoding for the backend.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/overview/#encoding&#34;&gt;encoding section in the overview&lt;/a&gt; for more info.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      encoding&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_ENCODING&lt;/li&gt;
&lt;li&gt;Type:        Encoding&lt;/li&gt;
&lt;li&gt;Default:     Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;box-description&#34;&gt;--box-description&lt;/h4&gt;
&lt;p&gt;Description of the remote.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      description&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_BOX_DESCRIPTION&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;limitations&#34;&gt;Limitations&lt;/h2&gt;
&lt;p&gt;Note that Box is case insensitive so you can&#39;t have a file called
&amp;quot;Hello.doc&amp;quot; and one called &amp;quot;hello.doc&amp;quot;.&lt;/p&gt;
&lt;p&gt;Box file names can&#39;t contain the &lt;code&gt;\&lt;/code&gt; character.  rclone maps this to
and from an identical looking unicode equivalent &lt;code&gt;＼&lt;/code&gt; (U+FF3C Fullwidth
Reverse Solidus).&lt;/p&gt;
&lt;p&gt;Box only supports filenames up to 255 characters in length.&lt;/p&gt;
&lt;p&gt;Box has &lt;a href=&#34;https://developer.box.com/guides/api-calls/permissions-and-errors/rate-limits/&#34;&gt;API rate limits&lt;/a&gt; that sometimes reduce the speed of rclone.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;rclone about&lt;/code&gt; is not supported by the Box backend. Backends without
this capability cannot determine free space for an rclone mount or
use policy &lt;code&gt;mfs&lt;/code&gt; (most free space) as a member of an rclone union
remote.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;https://rclone.org/overview/#optional-features&#34;&gt;List of backends that do not support rclone about&lt;/a&gt; and &lt;a href=&#34;https://rclone.org/commands/rclone_about/&#34;&gt;rclone about&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;get-your-own-box-app-id&#34;&gt;Get your own Box App ID&lt;/h2&gt;
&lt;p&gt;Here is how to create your own Box App ID for rclone:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Go to the &lt;a href=&#34;https://app.box.com/developers/console&#34;&gt;Box Developer Console&lt;/a&gt;
and login, then click &lt;code&gt;My Apps&lt;/code&gt; on the sidebar. Click &lt;code&gt;Create New App&lt;/code&gt;
and select &lt;code&gt;Custom App&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the first screen of the box that pops up, you can enter more or less
whatever you want. The &lt;code&gt;App Name&lt;/code&gt; can be anything. For &lt;code&gt;Purpose&lt;/code&gt; choose
automation to avoid having to fill out anything else. Click &lt;code&gt;Next&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the second screen, select
&lt;code&gt;User Authentication (OAuth 2.0)&lt;/code&gt;. Then click &lt;code&gt;Create App&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You should now be on the &lt;code&gt;Configuration&lt;/code&gt; tab of your new app. If not,
click on it at the top of the webpage. Copy down &lt;code&gt;Client ID&lt;/code&gt;
and &lt;code&gt;Client Secret&lt;/code&gt;, you&#39;ll need those for rclone.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Under &amp;quot;OAuth 2.0 Redirect URI&amp;quot;, add &lt;code&gt;http://127.0.0.1:53682/&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For &lt;code&gt;Application Scopes&lt;/code&gt;, select &lt;code&gt;Read all files and folders stored in Box&lt;/code&gt;
and &lt;code&gt;Write all files and folders stored in box&lt;/code&gt; (assuming you want to do both).
Leave others unchecked. Click &lt;code&gt;Save Changes&lt;/code&gt; at the top right.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
</description>
    </item>
    
    <item>
      <title>Bugs</title>
      <link>https://rclone.org/bugs/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/bugs/</guid>
      <description>&lt;h1 id=&#34;bugs-and-limitations&#34;&gt;Bugs and Limitations&lt;/h1&gt;
&lt;h2 id=&#34;limitations&#34;&gt;Limitations&lt;/h2&gt;
&lt;h3 id=&#34;directory-timestamps-aren-t-preserved-on-some-backends&#34;&gt;Directory timestamps aren&#39;t preserved on some backends&lt;/h3&gt;
&lt;p&gt;As of &lt;code&gt;v1.66&lt;/code&gt;, rclone supports syncing directory modtimes, if the backend
supports it. Some backends do not support it -- see
&lt;a href=&#34;https://rclone.org/overview/&#34;&gt;overview&lt;/a&gt; for a complete list. Additionally, note
that empty directories are not synced by default (this can be enabled with
&lt;code&gt;--create-empty-src-dirs&lt;/code&gt;.)&lt;/p&gt;
&lt;h3 id=&#34;rclone-struggles-with-millions-of-files-in-a-directory-bucket&#34;&gt;Rclone struggles with millions of files in a directory/bucket&lt;/h3&gt;
&lt;p&gt;Currently rclone loads each directory/bucket entirely into memory before
using it.  Since each rclone object takes 0.5k-1k of memory this can take
a very long time and use a large amount of memory.&lt;/p&gt;
&lt;p&gt;Millions of files in a directory tends to occur on bucket-based remotes
(e.g. S3 buckets) since those remotes do not segregate subdirectories within
the bucket.&lt;/p&gt;
&lt;h3 id=&#34;bucket-based-remotes-and-folders&#34;&gt;Bucket-based remotes and folders&lt;/h3&gt;
&lt;p&gt;Bucket-based remotes (e.g. S3/GCS/Swift/B2) do not have a concept of
directories.  Rclone therefore cannot create directories in them which
means that empty directories on a bucket-based remote will tend to
disappear.&lt;/p&gt;
&lt;p&gt;Some software creates empty keys ending in &lt;code&gt;/&lt;/code&gt; as directory markers.
Rclone doesn&#39;t do this as it potentially creates more objects and
costs more.  This ability may be added in the future (probably via a
flag/option).&lt;/p&gt;
&lt;h2 id=&#34;bugs&#34;&gt;Bugs&lt;/h2&gt;
&lt;p&gt;Bugs are stored in rclone&#39;s GitHub project:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug&#34;&gt;Reported bugs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+milestone%3A%22Known+Problem%22&#34;&gt;Known issues&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>Cache</title>
      <link>https://rclone.org/cache/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/cache/</guid>
      <description>&lt;h1 id=&#34;hahahugoshortcode14s0hbhb-cache&#34;&gt;&lt;i class=&#34;fa fa-archive&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt;
 Cache&lt;/h1&gt;
&lt;p&gt;The &lt;code&gt;cache&lt;/code&gt; remote wraps another existing remote and stores file structure
and its data for long running tasks like &lt;code&gt;rclone mount&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&#34;status&#34;&gt;Status&lt;/h2&gt;
&lt;p&gt;The cache backend code is working but it currently doesn&#39;t
have a maintainer so there are &lt;a href=&#34;https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug+label%3A%22Remote%3A+Cache%22&#34;&gt;outstanding bugs&lt;/a&gt; which aren&#39;t getting fixed.&lt;/p&gt;
&lt;p&gt;The cache backend is due to be phased out in favour of the VFS caching
layer eventually which is more tightly integrated into rclone.&lt;/p&gt;
&lt;p&gt;Until this happens we recommend only using the cache backend if you
find you can&#39;t work without it. There are many docs online describing
the use of the cache backend to minimize API hits and by-and-large
these are out of date and the cache backend isn&#39;t needed in those
scenarios any more.&lt;/p&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;To get started you just need to have an existing remote which can be configured
with &lt;code&gt;cache&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Here is an example of how to make a remote called &lt;code&gt;test-cache&lt;/code&gt;.  First run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q&amp;gt; n
name&amp;gt; test-cache
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Cache a remote
   \ &amp;#34;cache&amp;#34;
[snip]
Storage&amp;gt; cache
Remote to cache.
Normally should contain a &amp;#39;:&amp;#39; and a path, e.g. &amp;#34;myremote:path/to/dir&amp;#34;,
&amp;#34;myremote:bucket&amp;#34; or maybe &amp;#34;myremote:&amp;#34; (not recommended).
remote&amp;gt; local:/test
Optional: The URL of the Plex server
plex_url&amp;gt; http://127.0.0.1:32400
Optional: The username of the Plex user
plex_username&amp;gt; dummyusername
Optional: The password of the Plex user
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n&amp;gt; y
Enter the password:
password:
Confirm the password:
password:
The size of a chunk. Lower value good for slow connections but can affect seamless reading.
Default: 5M
Choose a number from below, or type in your own value
 1 / 1 MiB
   \ &amp;#34;1M&amp;#34;
 2 / 5 MiB
   \ &amp;#34;5M&amp;#34;
 3 / 10 MiB
   \ &amp;#34;10M&amp;#34;
chunk_size&amp;gt; 2
How much time should object info (file size, file hashes, etc.) be stored in cache. Use a very high value if you don&amp;#39;t plan on changing the source FS from outside the cache.
Accepted units are: &amp;#34;s&amp;#34;, &amp;#34;m&amp;#34;, &amp;#34;h&amp;#34;.
Default: 5m
Choose a number from below, or type in your own value
 1 / 1 hour
   \ &amp;#34;1h&amp;#34;
 2 / 24 hours
   \ &amp;#34;24h&amp;#34;
 3 / 48 hours
   \ &amp;#34;48h&amp;#34;
info_age&amp;gt; 2
The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
Default: 10G
Choose a number from below, or type in your own value
 1 / 500 MiB
   \ &amp;#34;500M&amp;#34;
 2 / 1 GiB
   \ &amp;#34;1G&amp;#34;
 3 / 10 GiB
   \ &amp;#34;10G&amp;#34;
chunk_total_size&amp;gt; 3
Remote config
--------------------
[test-cache]
remote = local:/test
plex_url = http://127.0.0.1:32400
plex_username = dummyusername
plex_password = *** ENCRYPTED ***
chunk_size = 5M
info_age = 48h
chunk_total_size = 10G
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can then use it like this,&lt;/p&gt;
&lt;p&gt;List directories in top level of your drive&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone lsd test-cache:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;List all the files in your drive&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone ls test-cache:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To start a cached mount&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone mount --allow-other test-cache: /var/tmp/test-cache
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;write-features&#34;&gt;Write Features&lt;/h3&gt;
&lt;h3 id=&#34;offline-uploading&#34;&gt;Offline uploading&lt;/h3&gt;
&lt;p&gt;In an effort to make writing through cache more reliable, the backend
now supports this feature which can be activated by specifying a
&lt;code&gt;cache-tmp-upload-path&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;A file goes through these states when using this feature:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;An upload is started (usually by copying a file on the cache remote)&lt;/li&gt;
&lt;li&gt;When the copy to the temporary location is complete the file is part
of the cached remote and looks and behaves like any other file (reading included)&lt;/li&gt;
&lt;li&gt;After &lt;code&gt;cache-tmp-wait-time&lt;/code&gt; passes and the file is next in line, &lt;code&gt;rclone move&lt;/code&gt;
is used to move the file to the cloud provider&lt;/li&gt;
&lt;li&gt;Reading the file still works during the upload but most modifications on it will be prohibited&lt;/li&gt;
&lt;li&gt;Once the move is complete the file is unlocked for modifications as it
becomes as any other regular file&lt;/li&gt;
&lt;li&gt;If the file is being read through &lt;code&gt;cache&lt;/code&gt; when it&#39;s actually
deleted from the temporary path then &lt;code&gt;cache&lt;/code&gt; will simply swap the source
to the cloud provider without interrupting the reading (small blip can happen though)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Files are uploaded in sequence and only one file is uploaded at a time.
Uploads will be stored in a queue and be processed based on the order they were added.
The queue and the temporary storage is persistent across restarts but
can be cleared on startup with the &lt;code&gt;--cache-db-purge&lt;/code&gt; flag.&lt;/p&gt;
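&lt;p&gt;For example (the paths and wait time are placeholders), a mount using
offline uploading could be started like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone mount --allow-other --cache-tmp-upload-path /var/tmp/rclone-upload \
    --cache-tmp-wait-time 15m test-cache: /var/tmp/test-cache
&lt;/code&gt;&lt;/pre&gt;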
&lt;h3 id=&#34;write-support&#34;&gt;Write Support&lt;/h3&gt;
&lt;p&gt;Writes are supported through &lt;code&gt;cache&lt;/code&gt;.
One caveat is that a mounted cache remote does not add any retry or fallback
mechanism to the upload operation. This will depend on the implementation
of the wrapped remote. Consider using &lt;code&gt;Offline uploading&lt;/code&gt; for reliable writes.&lt;/p&gt;
&lt;p&gt;One special case is covered by &lt;code&gt;cache-writes&lt;/code&gt;: when enabled, it caches the
file data at the same time as the upload, making the file available from the
cache store immediately once the upload is finished.&lt;/p&gt;
&lt;h3 id=&#34;read-features&#34;&gt;Read Features&lt;/h3&gt;
&lt;h4 id=&#34;multiple-connections&#34;&gt;Multiple connections&lt;/h4&gt;
&lt;p&gt;To counter the high latency between a local PC running rclone and the
cloud provider, the cache remote can split requests into smaller file
chunks, fetch several of them from the cloud provider in parallel and
combine them locally, so the data is usually available before the
reader needs it.&lt;/p&gt;
&lt;p&gt;This is similar to buffering when media files are played online. Rclone
will stay around the current read position but always try its best to stay
ahead and prepare the data in advance.&lt;/p&gt;
&lt;h4 id=&#34;plex-integration&#34;&gt;Plex Integration&lt;/h4&gt;
&lt;p&gt;There is a direct integration with Plex which allows cache to detect during reading
if the file is in playback or not. This helps cache to adapt how it queries
the cloud provider depending on what the data is needed for.&lt;/p&gt;
&lt;p&gt;Scans will have a minimum amount of workers (1) while in a confirmed playback cache
will deploy the configured number of workers.&lt;/p&gt;
&lt;p&gt;This integration opens the doorway to additional performance improvements
which will be explored in the near future.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If Plex options are not configured, &lt;code&gt;cache&lt;/code&gt; will function with its
configured options without adapting any of its settings.&lt;/p&gt;
&lt;p&gt;How to enable? Run &lt;code&gt;rclone config&lt;/code&gt; and add all the Plex options (endpoint, username
and password) in your remote and it will be automatically enabled.&lt;/p&gt;
&lt;p&gt;Affected settings:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;cache-workers&lt;/code&gt;: &lt;em&gt;Configured value&lt;/em&gt; during confirmed playback or &lt;em&gt;1&lt;/em&gt; all the other times&lt;/li&gt;
&lt;/ul&gt;
&lt;h5 id=&#34;certificate-validation&#34;&gt;Certificate Validation&lt;/h5&gt;
&lt;p&gt;When the Plex server is configured to only accept secure connections, it is
possible to use &lt;code&gt;.plex.direct&lt;/code&gt; URLs to ensure certificate validation succeeds.
These URLs are used by Plex internally to connect to the Plex server securely.&lt;/p&gt;
&lt;p&gt;The format for these URLs is the following:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;https://ip-with-dots-replaced.server-hash.plex.direct:32400/&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;ip-with-dots-replaced&lt;/code&gt; part can be any IPv4 address, where the dots
have been replaced with dashes, e.g. &lt;code&gt;127.0.0.1&lt;/code&gt; becomes &lt;code&gt;127-0-0-1&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;To get the &lt;code&gt;server-hash&lt;/code&gt; part, the easiest way is to visit&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://plex.tv/api/resources?includeHttps=1&amp;amp;X-Plex-Token=your-plex-token&#34;&gt;https://plex.tv/api/resources?includeHttps=1&amp;amp;X-Plex-Token=your-plex-token&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This page will list all the available Plex servers for your account
with at least one &lt;code&gt;.plex.direct&lt;/code&gt; link for each. Copy one URL and replace
the IP address with the desired address. This can be used as the
&lt;code&gt;plex_url&lt;/code&gt; value.&lt;/p&gt;
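&lt;p&gt;For example (the server hash below is a placeholder), a Plex server at
&lt;code&gt;192.168.1.42&lt;/code&gt; would be configured as:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;plex_url = https://192-168-1-42.abcdef0123456789abcdef0123456789.plex.direct:32400/
&lt;/code&gt;&lt;/pre&gt;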
&lt;h3 id=&#34;known-issues&#34;&gt;Known issues&lt;/h3&gt;
&lt;h4 id=&#34;mount-and-dir-cache-time&#34;&gt;Mount and --dir-cache-time&lt;/h4&gt;
&lt;p&gt;--dir-cache-time controls the first layer of directory caching which works at the mount layer.
Being an independent caching mechanism from the &lt;code&gt;cache&lt;/code&gt; backend, it will manage its own entries
based on the configured time.&lt;/p&gt;
&lt;p&gt;To avoid a scenario where the dir cache holds obsolete data while the cache
backend has the correct data, set &lt;code&gt;--dir-cache-time&lt;/code&gt; to a lower value than
&lt;code&gt;--cache-info-age&lt;/code&gt;. The default values are already configured this way.&lt;/p&gt;
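&lt;p&gt;As a sketch (the values are illustrative, not recommendations), keep the
mount layer cache shorter-lived than the backend one:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone mount --dir-cache-time 1h --cache-info-age 24h test-cache: /var/tmp/test-cache
&lt;/code&gt;&lt;/pre&gt;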
&lt;h4 id=&#34;windows-support-experimental&#34;&gt;Windows support - Experimental&lt;/h4&gt;
&lt;p&gt;There are a couple of issues with the Windows &lt;code&gt;mount&lt;/code&gt; functionality that still require investigation.
It should be considered experimental for now, while fixes for this OS come in.&lt;/p&gt;
&lt;p&gt;Most of the issues seem to be related to the difference between filesystems
on Linux flavors and Windows as cache is heavily dependent on them.&lt;/p&gt;
&lt;p&gt;Any reports or feedback on how cache behaves on this OS is greatly appreciated.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/rclone/rclone/issues/1935&#34;&gt;https://github.com/rclone/rclone/issues/1935&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/rclone/rclone/issues/1907&#34;&gt;https://github.com/rclone/rclone/issues/1907&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/rclone/rclone/issues/1834&#34;&gt;https://github.com/rclone/rclone/issues/1834&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;risk-of-throttling&#34;&gt;Risk of throttling&lt;/h4&gt;
&lt;p&gt;Future iterations of the cache backend will make use of the pooling functionality
of the cloud provider to synchronize and at the same time make writing through it
more tolerant to failures.&lt;/p&gt;
&lt;p&gt;There are a couple of enhancements being tracked to add these, but in the
meantime there is a valid concern that expiring cache listings can lead to
cloud provider throttling or bans due to repeated queries for very large mounts.&lt;/p&gt;
&lt;p&gt;Some recommendations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;don&#39;t use a very small interval for entry information (&lt;code&gt;--cache-info-age&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;while writes aren&#39;t yet optimised, you can still write through &lt;code&gt;cache&lt;/code&gt; which gives you the advantage
of adding the file in the cache at the same time if configured to do so.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Future enhancements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/rclone/rclone/issues/1937&#34;&gt;https://github.com/rclone/rclone/issues/1937&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/rclone/rclone/issues/1936&#34;&gt;https://github.com/rclone/rclone/issues/1936&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-and-crypt&#34;&gt;cache and crypt&lt;/h4&gt;
&lt;p&gt;One common scenario is to keep your data encrypted in the cloud provider
using the &lt;code&gt;crypt&lt;/code&gt; remote. &lt;code&gt;crypt&lt;/code&gt; uses a similar technique to wrap around
an existing remote and handles this translation in a seamless way.&lt;/p&gt;
&lt;p&gt;There is an issue with wrapping the remotes in this order:
&lt;span style=&#34;color: red&#34;&gt;**cloud remote** -&gt; **crypt** -&gt; **cache**&lt;/span&gt;
&lt;/p&gt;
&lt;p&gt;During testing, I experienced a lot of bans with the remotes in this order.
I suspect it might be related to how crypt opens files on the cloud provider
which makes it think we&#39;re downloading the full file instead of small chunks.
Organizing the remotes in this order yields better results:
&lt;span style=&#34;color: green&#34;&gt;**cloud remote** -&gt; **cache** -&gt; **crypt**&lt;/span&gt;
&lt;/p&gt;
&lt;h4 id=&#34;absolute-remote-paths&#34;&gt;absolute remote paths&lt;/h4&gt;
&lt;p&gt;&lt;code&gt;cache&lt;/code&gt; can not differentiate between relative and absolute paths for the wrapped remote.
Any path given in the &lt;code&gt;remote&lt;/code&gt; config setting and on the command line will be passed to
the wrapped remote as is, but for storing the chunks on disk the path will be made
relative by removing any leading &lt;code&gt;/&lt;/code&gt; character.&lt;/p&gt;
&lt;p&gt;This behavior is irrelevant for most backend types, but there are backends where a leading &lt;code&gt;/&lt;/code&gt;
changes the effective directory, e.g. in the &lt;code&gt;sftp&lt;/code&gt; backend paths starting with a &lt;code&gt;/&lt;/code&gt; are
relative to the root of the SSH server and paths without are relative to the user home directory.
As a result &lt;code&gt;sftp:bin&lt;/code&gt; and &lt;code&gt;sftp:/bin&lt;/code&gt; will share the same cache folder, even if they represent
a different directory on the SSH server.&lt;/p&gt;
&lt;h3 id=&#34;cache-and-remote-control-rc&#34;&gt;Cache and Remote Control (--rc)&lt;/h3&gt;
&lt;p&gt;Cache supports the &lt;code&gt;--rc&lt;/code&gt; mode in rclone and can be remote controlled through the following endpoints.
By default, the listener is disabled if you do not add the flag.&lt;/p&gt;
&lt;h3 id=&#34;rc-cache-expire&#34;&gt;rc cache/expire&lt;/h3&gt;
&lt;p&gt;Purge a remote from the cache backend. Supports either a directory or a file.
It supports both encrypted and unencrypted file names if cache is wrapped by crypt.&lt;/p&gt;
&lt;p&gt;Params:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;remote&lt;/strong&gt; = path to remote &lt;strong&gt;(required)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;withData&lt;/strong&gt; = true/false to delete cached data (chunks) as well &lt;em&gt;(optional, false by default)&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
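&lt;p&gt;For example, assuming rclone was started with &lt;code&gt;--rc&lt;/code&gt;, a directory and
its cached chunks can be expired like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone rc cache/expire remote=path/to/dir/ withData=true
&lt;/code&gt;&lt;/pre&gt;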

&lt;h3 id=&#34;standard-options&#34;&gt;Standard options&lt;/h3&gt;
&lt;p&gt;Here are the Standard options specific to cache (Cache a remote).&lt;/p&gt;
&lt;h4 id=&#34;cache-remote&#34;&gt;--cache-remote&lt;/h4&gt;
&lt;p&gt;Remote to cache.&lt;/p&gt;
&lt;p&gt;Normally should contain a &#39;:&#39; and a path, e.g. &amp;quot;myremote:path/to/dir&amp;quot;,
&amp;quot;myremote:bucket&amp;quot; or maybe &amp;quot;myremote:&amp;quot; (not recommended).&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      remote&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_REMOTE&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    true&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-plex-url&#34;&gt;--cache-plex-url&lt;/h4&gt;
&lt;p&gt;The URL of the Plex server.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      plex_url&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_PLEX_URL&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-plex-username&#34;&gt;--cache-plex-username&lt;/h4&gt;
&lt;p&gt;The username of the Plex user.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      plex_username&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_PLEX_USERNAME&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-plex-password&#34;&gt;--cache-plex-password&lt;/h4&gt;
&lt;p&gt;The password of the Plex user.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NB&lt;/strong&gt; Input to this must be obscured - see &lt;a href=&#34;https://rclone.org/commands/rclone_obscure/&#34;&gt;rclone obscure&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      plex_password&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_PLEX_PASSWORD&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
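&lt;p&gt;For example, you can generate the obscured value on the command line and paste it into your config (the plain-text password here is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone obscure &amp;#39;MyPlexPassword&amp;#39;
# paste the printed value into plex_password in rclone.conf
&lt;/code&gt;&lt;/pre&gt;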
&lt;h4 id=&#34;cache-chunk-size&#34;&gt;--cache-chunk-size&lt;/h4&gt;
&lt;p&gt;The size of a chunk (partial file data).&lt;/p&gt;
&lt;p&gt;Use lower numbers for slower connections. If the chunk size is
changed, any downloaded chunks will be invalid and cache-chunk-path
will need to be cleared or unexpected EOF errors will occur.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      chunk_size&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_CHUNK_SIZE&lt;/li&gt;
&lt;li&gt;Type:        SizeSuffix&lt;/li&gt;
&lt;li&gt;Default:     5Mi&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;1M&amp;quot;
&lt;ul&gt;
&lt;li&gt;1 MiB&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;5M&amp;quot;
&lt;ul&gt;
&lt;li&gt;5 MiB&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;10M&amp;quot;
&lt;ul&gt;
&lt;li&gt;10 MiB&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-info-age&#34;&gt;--cache-info-age&lt;/h4&gt;
&lt;p&gt;How long to cache file structure information (directory listings, file size, times, etc.).
If all write operations are done through the cache then you can safely make
this value very large as the cache store will also be updated in real time.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      info_age&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_INFO_AGE&lt;/li&gt;
&lt;li&gt;Type:        Duration&lt;/li&gt;
&lt;li&gt;Default:     6h0m0s&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;1h&amp;quot;
&lt;ul&gt;
&lt;li&gt;1 hour&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;24h&amp;quot;
&lt;ul&gt;
&lt;li&gt;24 hours&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;48h&amp;quot;
&lt;ul&gt;
&lt;li&gt;48 hours&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-chunk-total-size&#34;&gt;--cache-chunk-total-size&lt;/h4&gt;
&lt;p&gt;The total size that the chunks can take up on the local disk.&lt;/p&gt;
&lt;p&gt;If the cache exceeds this value then it will start to delete the
oldest chunks until it goes under this value.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      chunk_total_size&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_CHUNK_TOTAL_SIZE&lt;/li&gt;
&lt;li&gt;Type:        SizeSuffix&lt;/li&gt;
&lt;li&gt;Default:     10Gi&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;500M&amp;quot;
&lt;ul&gt;
&lt;li&gt;500 MiB&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;1G&amp;quot;
&lt;ul&gt;
&lt;li&gt;1 GiB&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;10G&amp;quot;
&lt;ul&gt;
&lt;li&gt;10 GiB&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
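&lt;p&gt;Putting the standard options together, a cache remote section in &lt;code&gt;rclone.conf&lt;/code&gt; might look like this sketch (the section name and wrapped remote are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[mycache]
type = cache
remote = mys3:mybucket
chunk_size = 10M
info_age = 48h
chunk_total_size = 10G
&lt;/code&gt;&lt;/pre&gt;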
&lt;h3 id=&#34;advanced-options&#34;&gt;Advanced options&lt;/h3&gt;
&lt;p&gt;Here are the Advanced options specific to cache (Cache a remote).&lt;/p&gt;
&lt;h4 id=&#34;cache-plex-token&#34;&gt;--cache-plex-token&lt;/h4&gt;
&lt;p&gt;The plex token for authentication - auto set normally.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      plex_token&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_PLEX_TOKEN&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-plex-insecure&#34;&gt;--cache-plex-insecure&lt;/h4&gt;
&lt;p&gt;Skip all certificate verification when connecting to the Plex server.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      plex_insecure&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_PLEX_INSECURE&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-db-path&#34;&gt;--cache-db-path&lt;/h4&gt;
&lt;p&gt;Directory to store file structure metadata DB.&lt;/p&gt;
&lt;p&gt;The remote name is used as the DB file name.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      db_path&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_DB_PATH&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Default:     &amp;quot;$HOME/.cache/rclone/cache-backend&amp;quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-chunk-path&#34;&gt;--cache-chunk-path&lt;/h4&gt;
&lt;p&gt;Directory to cache chunk files.&lt;/p&gt;
&lt;p&gt;Path to where partial file data (chunks) are stored locally. The remote
name is appended to the final path.&lt;/p&gt;
&lt;p&gt;This option follows &amp;quot;--cache-db-path&amp;quot;: if you specify a custom
location for &amp;quot;--cache-db-path&amp;quot; and don&#39;t specify one for &amp;quot;--cache-chunk-path&amp;quot;,
then &amp;quot;--cache-chunk-path&amp;quot; will use the same path as &amp;quot;--cache-db-path&amp;quot;.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      chunk_path&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_CHUNK_PATH&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Default:     &amp;quot;$HOME/.cache/rclone/cache-backend&amp;quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-db-purge&#34;&gt;--cache-db-purge&lt;/h4&gt;
&lt;p&gt;Clear all the cached data for this remote on start.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      db_purge&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_DB_PURGE&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-chunk-clean-interval&#34;&gt;--cache-chunk-clean-interval&lt;/h4&gt;
&lt;p&gt;How often should the cache perform cleanups of the chunk storage.&lt;/p&gt;
&lt;p&gt;The default value should be ok for most people. If you find that the
cache goes over &amp;quot;cache-chunk-total-size&amp;quot; too often then try to lower
this value to force it to perform cleanups more often.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      chunk_clean_interval&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_CHUNK_CLEAN_INTERVAL&lt;/li&gt;
&lt;li&gt;Type:        Duration&lt;/li&gt;
&lt;li&gt;Default:     1m0s&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-read-retries&#34;&gt;--cache-read-retries&lt;/h4&gt;
&lt;p&gt;How many times to retry a read from a cache storage.&lt;/p&gt;
&lt;p&gt;Since reading from a cache stream is independent from downloading file
data, readers can get to a point where there&#39;s no more data in the
cache. Most of the time this indicates a connectivity issue, meaning the
cache is no longer able to provide file data.&lt;/p&gt;
&lt;p&gt;For really slow connections you can increase this until the stream is
able to provide data, but playback will stutter noticeably.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      read_retries&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_READ_RETRIES&lt;/li&gt;
&lt;li&gt;Type:        int&lt;/li&gt;
&lt;li&gt;Default:     10&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-workers&#34;&gt;--cache-workers&lt;/h4&gt;
&lt;p&gt;How many workers should run in parallel to download chunks.&lt;/p&gt;
&lt;p&gt;Higher values mean more parallel processing (more CPU needed) and more
concurrent requests to the cloud provider. This puts more pressure on the
provider&#39;s API limits and on the hardware rclone runs on, but it also
makes streams more fluid and data available to readers much faster.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If the optional Plex integration is enabled then this
setting will adapt to the type of reading performed and the value
specified here will be used as a maximum number of workers to use.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      workers&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_WORKERS&lt;/li&gt;
&lt;li&gt;Type:        int&lt;/li&gt;
&lt;li&gt;Default:     4&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-chunk-no-memory&#34;&gt;--cache-chunk-no-memory&lt;/h4&gt;
&lt;p&gt;Disable the in-memory cache for storing chunks during streaming.&lt;/p&gt;
&lt;p&gt;By default, cache will keep file data during streaming in RAM as well
to provide it to readers as fast as possible.&lt;/p&gt;
&lt;p&gt;This transient data is evicted as soon as it is read and the number of
chunks stored doesn&#39;t exceed the number of workers. However, depending
on other settings like &amp;quot;cache-chunk-size&amp;quot; and &amp;quot;cache-workers&amp;quot; this footprint
can increase if there are parallel streams too (multiple files being read
at the same time).&lt;/p&gt;
&lt;p&gt;If the hardware permits it, leave the in-memory cache enabled for
better overall streaming performance; set this flag to disable it when
RAM on the local machine is scarce.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      chunk_no_memory&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_CHUNK_NO_MEMORY&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-rps&#34;&gt;--cache-rps&lt;/h4&gt;
&lt;p&gt;Limits the number of requests per second to the source FS (-1 to disable).&lt;/p&gt;
&lt;p&gt;This setting places a hard limit on the number of requests per second
that cache will make to the cloud provider remote, inserting waits
between reads to respect that value.&lt;/p&gt;
&lt;p&gt;If you find that you&#39;re getting banned or rate limited by the cloud
provider through cache, and you know that a smaller number of requests
per second will avoid this, you can use this setting for that.&lt;/p&gt;
&lt;p&gt;With a good balance of the other settings this limit should rarely be
needed, but it is available for special cases.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: This will limit the number of requests during streams but
other API calls to the cloud provider like directory listings will
still pass.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      rps&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_RPS&lt;/li&gt;
&lt;li&gt;Type:        int&lt;/li&gt;
&lt;li&gt;Default:     -1&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-writes&#34;&gt;--cache-writes&lt;/h4&gt;
&lt;p&gt;Cache file data on writes through the FS.&lt;/p&gt;
&lt;p&gt;If you need to read files immediately after you upload them through
cache you can enable this flag to have their data stored in the
cache store at the same time during upload.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      writes&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_WRITES&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-tmp-upload-path&#34;&gt;--cache-tmp-upload-path&lt;/h4&gt;
&lt;p&gt;Directory to keep temporary files until they are uploaded.&lt;/p&gt;
&lt;p&gt;This is the path that cache will use as temporary storage for new
files that need to be uploaded to the cloud provider.&lt;/p&gt;
&lt;p&gt;Specifying a value enables this feature. Without it, the feature is
completely disabled and files are uploaded directly to the cloud
provider.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      tmp_upload_path&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_TMP_UPLOAD_PATH&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
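&lt;p&gt;For example, a mount using the temporary upload feature together with a longer wait time might look like this (the remote name and paths are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone mount mycache: /mnt/media \
    --cache-tmp-upload-path /tmp/rclone/upload \
    --cache-tmp-wait-time 1m
&lt;/code&gt;&lt;/pre&gt;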
&lt;h4 id=&#34;cache-tmp-wait-time&#34;&gt;--cache-tmp-wait-time&lt;/h4&gt;
&lt;p&gt;How long should files be stored in local cache before being uploaded.&lt;/p&gt;
&lt;p&gt;This is the duration that a file must wait in the temporary location
&lt;em&gt;cache-tmp-upload-path&lt;/em&gt; before it is selected for upload.&lt;/p&gt;
&lt;p&gt;Note that only one file is uploaded at a time, and it can take longer
for the upload to start if a queue has formed.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      tmp_wait_time&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_TMP_WAIT_TIME&lt;/li&gt;
&lt;li&gt;Type:        Duration&lt;/li&gt;
&lt;li&gt;Default:     15s&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-db-wait-time&#34;&gt;--cache-db-wait-time&lt;/h4&gt;
&lt;p&gt;How long to wait for the DB to be available - 0 is unlimited.&lt;/p&gt;
&lt;p&gt;Only one process can have the DB open at any one time, so rclone waits
for this duration for the DB to become available before it gives an
error.&lt;/p&gt;
&lt;p&gt;If you set it to 0 then it will wait forever.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      db_wait_time&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_DB_WAIT_TIME&lt;/li&gt;
&lt;li&gt;Type:        Duration&lt;/li&gt;
&lt;li&gt;Default:     1s&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;cache-description&#34;&gt;--cache-description&lt;/h4&gt;
&lt;p&gt;Description of the remote.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      description&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CACHE_DESCRIPTION&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;backend-commands&#34;&gt;Backend commands&lt;/h2&gt;
&lt;p&gt;Here are the commands specific to the cache backend.&lt;/p&gt;
&lt;p&gt;Run them with&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend COMMAND remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The help below will explain what arguments each command takes.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/commands/rclone_backend/&#34;&gt;backend&lt;/a&gt; command for more
info on how to pass options and arguments.&lt;/p&gt;
&lt;p&gt;These can be run on a running backend using the rc command
&lt;a href=&#34;https://rclone.org/rc/#backend-command&#34;&gt;backend/command&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;stats&#34;&gt;stats&lt;/h3&gt;
&lt;p&gt;Print stats on the cache backend in JSON format.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend stats remote: [options] [&amp;lt;arguments&amp;gt;+]
&lt;/code&gt;&lt;/pre&gt;
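&lt;p&gt;For example, against a configured cache remote called &lt;code&gt;mycache:&lt;/code&gt; (the name is illustrative) you could run the command directly, or via the rc API on a running instance:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend stats mycache:
rclone rc backend/command command=stats fs=mycache:
&lt;/code&gt;&lt;/pre&gt;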

</description>
    </item>
    
    <item>
      <title>Chunker</title>
      <link>https://rclone.org/chunker/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/chunker/</guid>
      <description>&lt;h1 id=&#34;hahahugoshortcode16s0hbhb-chunker&#34;&gt;&lt;i class=&#34;fa fa-cut&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt;
 Chunker&lt;/h1&gt;
&lt;p&gt;The &lt;code&gt;chunker&lt;/code&gt; overlay transparently splits large files into smaller chunks
during upload to the wrapped remote and transparently assembles them back
when the file is downloaded. This makes it possible to overcome size limits
imposed by storage providers.&lt;/p&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;To use it, first set up the underlying remote following the configuration
instructions for that remote. You can also use a local pathname instead of
a remote.&lt;/p&gt;
&lt;p&gt;First check your chosen remote is working - we&#39;ll call it &lt;code&gt;remote:path&lt;/code&gt; here.
Note that anything inside &lt;code&gt;remote:path&lt;/code&gt; will be chunked and anything outside
won&#39;t. This means that if you are using a bucket-based remote (e.g. S3, B2, swift)
then you should probably put the bucket in the remote &lt;code&gt;s3:bucket&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Now configure &lt;code&gt;chunker&lt;/code&gt; using &lt;code&gt;rclone config&lt;/code&gt;. We will call this one &lt;code&gt;overlay&lt;/code&gt;
to separate it from the &lt;code&gt;remote&lt;/code&gt; itself.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; overlay
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Transparently chunk/split large files
   \ &amp;#34;chunker&amp;#34;
[snip]
Storage&amp;gt; chunker
Remote to chunk/unchunk.
Normally should contain a &amp;#39;:&amp;#39; and a path, e.g. &amp;#34;myremote:path/to/dir&amp;#34;,
&amp;#34;myremote:bucket&amp;#34; or maybe &amp;#34;myremote:&amp;#34; (not recommended).
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
remote&amp;gt; remote:path
Files larger than chunk size will be split in chunks.
Enter a size with suffix K,M,G,T. Press Enter for the default (&amp;#34;2G&amp;#34;).
chunk_size&amp;gt; 100M
Choose how chunker handles hash sums. All modes but &amp;#34;none&amp;#34; require metadata.
Enter a string value. Press Enter for the default (&amp;#34;md5&amp;#34;).
Choose a number from below, or type in your own value
 1 / Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise
   \ &amp;#34;none&amp;#34;
 2 / MD5 for composite files
   \ &amp;#34;md5&amp;#34;
 3 / SHA1 for composite files
   \ &amp;#34;sha1&amp;#34;
 4 / MD5 for all files
   \ &amp;#34;md5all&amp;#34;
 5 / SHA1 for all files
   \ &amp;#34;sha1all&amp;#34;
 6 / Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported
   \ &amp;#34;md5quick&amp;#34;
 7 / Similar to &amp;#34;md5quick&amp;#34; but prefers SHA1 over MD5
   \ &amp;#34;sha1quick&amp;#34;
hash_type&amp;gt; md5
Edit advanced config? (y/n)
y) Yes
n) No
y/n&amp;gt; n
Remote config
--------------------
[overlay]
type = chunker
remote = remote:path
chunk_size = 100M
hash_type = md5
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;specifying-the-remote&#34;&gt;Specifying the remote&lt;/h3&gt;
&lt;p&gt;In normal use, make sure the remote has a &lt;code&gt;:&lt;/code&gt; in. If you specify the remote
without a &lt;code&gt;:&lt;/code&gt; then rclone will use a local directory of that name.
So if you use a remote of &lt;code&gt;/path/to/secret/files&lt;/code&gt; then rclone will
chunk stuff in that directory. If you use a remote of &lt;code&gt;name&lt;/code&gt; then rclone
will put files in a directory called &lt;code&gt;name&lt;/code&gt; in the current directory.&lt;/p&gt;
&lt;h3 id=&#34;chunking&#34;&gt;Chunking&lt;/h3&gt;
&lt;p&gt;When rclone starts a file upload, chunker checks the file size. If it
doesn&#39;t exceed the configured chunk size, chunker will just pass the file
to the wrapped remote (however, see caveat below). If a file is large, chunker will transparently cut
data in pieces with temporary names and stream them one by one, on the fly.
Each data chunk will contain the specified number of bytes, except for the
last one which may have less data. If file size is unknown in advance
(this is called a streaming upload), chunker will internally create
a temporary copy, record its size and repeat the above process.&lt;/p&gt;
&lt;p&gt;When upload completes, temporary chunk files are finally renamed.
This scheme guarantees that operations can be run in parallel and look
from outside as atomic.
A similar method with hidden temporary chunks is used for other operations
(copy/move/rename, etc.). If an operation fails, hidden chunks are normally
destroyed, and the target composite file stays intact.&lt;/p&gt;
&lt;p&gt;When a composite file download is requested, chunker transparently
assembles it by concatenating data chunks in order. As the split is trivial
one could even manually concatenate data chunks together to obtain the
original content.&lt;/p&gt;
&lt;p&gt;When the &lt;code&gt;list&lt;/code&gt; rclone command scans a directory on wrapped remote,
the potential chunk files are accounted for, grouped and assembled into
composite directory entries. Any temporary chunks are hidden.&lt;/p&gt;
&lt;p&gt;List and other commands can sometimes come across composite files with
missing or invalid chunks, e.g. shadowed by a like-named directory or
another file. This usually means that the wrapped file system has been
directly tampered with or damaged. If chunker detects a missing chunk it
will by default print a warning, skip the whole incomplete group of
chunks, but proceed with the current command.
You can set the &lt;code&gt;--chunker-fail-hard&lt;/code&gt; flag to have commands abort with
an error message in such cases.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Caveat&lt;/strong&gt;: As it is now, chunker will always create a temporary file in the
backend and then rename it, even if the file is below the chunk threshold.
This will result in unnecessary API calls and can severely restrict throughput
when handling transfers primarily composed of small files on some backends (e.g. Box).
A workaround to this issue is to use chunker only for files above the chunk threshold
via &lt;code&gt;--min-size&lt;/code&gt; and then perform a separate call without chunker on the remaining
files.&lt;/p&gt;
&lt;h4 id=&#34;chunk-names&#34;&gt;Chunk names&lt;/h4&gt;
&lt;p&gt;The default chunk name format is &lt;code&gt;*.rclone_chunk.###&lt;/code&gt;, hence by default
chunk names are &lt;code&gt;BIG_FILE_NAME.rclone_chunk.001&lt;/code&gt;,
&lt;code&gt;BIG_FILE_NAME.rclone_chunk.002&lt;/code&gt; etc. You can configure another name format
using the &lt;code&gt;name_format&lt;/code&gt; configuration file option. The format uses asterisk
&lt;code&gt;*&lt;/code&gt; as a placeholder for the base file name and one or more consecutive
hash characters &lt;code&gt;#&lt;/code&gt; as a placeholder for sequential chunk number.
There must be one and only one asterisk. The number of consecutive hash
characters defines the minimum length of a string representing a chunk number.
If the decimal chunk number has fewer digits than the number of hashes, it is
left-padded with zeros. If the decimal string is longer, it is left intact.
By default numbering starts from 1, but there is another option that allows
the user to start from 0, e.g. for compatibility with legacy software.&lt;/p&gt;
&lt;p&gt;For example, if name format is &lt;code&gt;big_*-##.part&lt;/code&gt; and original file name is
&lt;code&gt;data.txt&lt;/code&gt; and numbering starts from 0, then the first chunk will be named
&lt;code&gt;big_data.txt-00.part&lt;/code&gt;, the 99th chunk will be &lt;code&gt;big_data.txt-98.part&lt;/code&gt;
and the 302nd chunk will become &lt;code&gt;big_data.txt-301.part&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Note that &lt;code&gt;list&lt;/code&gt; assembles composite directory entries only when chunk names
match the configured format and treats non-conforming file names as normal
non-chunked files.&lt;/p&gt;
&lt;p&gt;When using &lt;code&gt;norename&lt;/code&gt; transactions, chunk names will additionally have a unique
file version suffix. For example, &lt;code&gt;BIG_FILE_NAME.rclone_chunk.001_bp562k&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id=&#34;metadata&#34;&gt;Metadata&lt;/h3&gt;
&lt;p&gt;Besides data chunks chunker will by default create a metadata object for
a composite file. The object is named after the original file.
Chunker allows the user to disable metadata completely (the &lt;code&gt;none&lt;/code&gt; format).
Note that metadata is normally not created for files smaller than the
configured chunk size. This may change in future rclone releases.&lt;/p&gt;
&lt;h4 id=&#34;simple-json-metadata-format&#34;&gt;Simple JSON metadata format&lt;/h4&gt;
&lt;p&gt;This is the default format. It supports hash sums and chunk validation
for composite files. Meta objects carry the following fields:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;ver&lt;/code&gt;     - version of format, currently &lt;code&gt;1&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;size&lt;/code&gt;    - total size of composite file&lt;/li&gt;
&lt;li&gt;&lt;code&gt;nchunks&lt;/code&gt; - number of data chunks in file&lt;/li&gt;
&lt;li&gt;&lt;code&gt;md5&lt;/code&gt;     - MD5 hashsum of composite file (if present)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sha1&lt;/code&gt;    - SHA1 hashsum (if present)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;txn&lt;/code&gt;     - identifies current version of the file&lt;/li&gt;
&lt;/ul&gt;
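&lt;p&gt;An illustrative meta object (all field values are made up for this sketch) could look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &amp;#34;ver&amp;#34;: 1,
  &amp;#34;size&amp;#34;: 104857600,
  &amp;#34;nchunks&amp;#34;: 10,
  &amp;#34;md5&amp;#34;: &amp;#34;9e107d9d372bb6826bd81d3542a419d6&amp;#34;,
  &amp;#34;txn&amp;#34;: &amp;#34;bp562k&amp;#34;
}
&lt;/code&gt;&lt;/pre&gt;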
&lt;p&gt;There is no field for composite file name as it&#39;s simply equal to the name
of meta object on the wrapped remote. Please refer to respective sections
for details on hashsums and modified time handling.&lt;/p&gt;
&lt;h4 id=&#34;no-metadata&#34;&gt;No metadata&lt;/h4&gt;
&lt;p&gt;You can disable meta objects by setting the meta format option to &lt;code&gt;none&lt;/code&gt;.
In this mode chunker will scan the directory for all files that follow the
configured chunk name format, group them by detecting chunks with the same
base name, and show the group names as virtual composite files.
This method is more prone to missing chunk errors (especially a missing
last chunk) than the format with metadata enabled.&lt;/p&gt;
&lt;h3 id=&#34;hashsums&#34;&gt;Hashsums&lt;/h3&gt;
&lt;p&gt;Chunker supports hashsums only when compatible metadata is present.
Hence, if you choose the metadata format &lt;code&gt;none&lt;/code&gt;, chunker will report the hashsum
as &lt;code&gt;UNSUPPORTED&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Please note that by default metadata is stored only for composite files.
If a file is smaller than configured chunk size, chunker will transparently
redirect hash requests to wrapped remote, so support depends on that.
You will see the empty string as a hashsum of requested type for small
files if the wrapped remote doesn&#39;t support it.&lt;/p&gt;
&lt;p&gt;Many storage backends support MD5 and SHA1 hash types, and so does chunker.
With chunker you can choose one or the other, but not both.
MD5 is set by default as the most supported type.
Since chunker keeps hashes for composite files and falls back to the
wrapped remote hash for non-chunked ones, we advise you to choose the same
hash type as supported by wrapped remote so that your file listings
look coherent.&lt;/p&gt;
&lt;p&gt;If your storage backend does not support MD5 or SHA1 but you need consistent
file hashing, configure chunker with &lt;code&gt;md5all&lt;/code&gt; or &lt;code&gt;sha1all&lt;/code&gt;. These two modes
guarantee the given hash type for all files. If the wrapped remote doesn&#39;t support it,
chunker will add metadata to all files, even small ones. However, this can
double the number of small files in storage and incur additional service charges.
You can even use chunker to force md5/sha1 support in any other remote
at the expense of sidecar meta objects by setting e.g. &lt;code&gt;hash_type=sha1all&lt;/code&gt;
to force hashsums and &lt;code&gt;chunk_size=1P&lt;/code&gt; to effectively disable chunking.&lt;/p&gt;
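&lt;p&gt;As a configuration sketch, such a hashsum-only overlay might look like this in &lt;code&gt;rclone.conf&lt;/code&gt; (the section and wrapped remote names are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[forcehash]
type = chunker
remote = myswift:container
hash_type = sha1all
chunk_size = 1P
&lt;/code&gt;&lt;/pre&gt;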
&lt;p&gt;Normally, when a file is copied to chunker controlled remote, chunker
will ask the file source for compatible file hash and revert to on-the-fly
calculation if none is found. This involves some CPU overhead but provides
a guarantee that the given hashsum is available. Chunker will also reject
a server-side copy or move operation if the source and destination hashsum
types differ, at the cost of extra network bandwidth.
In some rare cases this may be undesired, so chunker provides two optional
choices: &lt;code&gt;sha1quick&lt;/code&gt; and &lt;code&gt;md5quick&lt;/code&gt;. If the source does not support primary
hash type and the quick mode is enabled, chunker will try to fall back to
the secondary type. This will save CPU and bandwidth but can result in empty
hashsums at destination. Beware of consequences: the &lt;code&gt;sync&lt;/code&gt; command will
revert (sometimes silently) to time/size comparison if compatible hashsums
between source and target are not found.&lt;/p&gt;
&lt;h3 id=&#34;modification-times&#34;&gt;Modification times&lt;/h3&gt;
&lt;p&gt;Chunker stores modification times using the wrapped remote so support
depends on that. For a small non-chunked file the chunker overlay simply
manipulates modification time of the wrapped remote file.
For a composite file with metadata chunker will get and set
modification time of the metadata object on the wrapped remote.
If a file is chunked but the metadata format is &lt;code&gt;none&lt;/code&gt; then chunker will
use the modification time of the first data chunk.&lt;/p&gt;
&lt;h3 id=&#34;migrations&#34;&gt;Migrations&lt;/h3&gt;
&lt;p&gt;The idiomatic way to migrate to a different chunk size, hash type, transaction
style or chunk naming scheme is to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Collect all your chunked files under a directory and have your
chunker remote point to it.&lt;/li&gt;
&lt;li&gt;Create another directory (most probably on the same cloud storage)
and configure a new remote with desired metadata format,
hash type, chunk naming etc.&lt;/li&gt;
&lt;li&gt;Now run &lt;code&gt;rclone sync --interactive oldchunks: newchunks:&lt;/code&gt; and all your data
will be transparently converted in transfer.
This may take some time, yet chunker will try server-side
copy if possible.&lt;/li&gt;
&lt;li&gt;After checking data integrity you may remove configuration section
of the old remote.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If rclone gets killed during a long operation on a big composite file,
hidden temporary chunks may stay in the directory. They will not be
shown by the &lt;code&gt;list&lt;/code&gt; command but will eat up your account quota.
Please note that the &lt;code&gt;deletefile&lt;/code&gt; command deletes only the active
chunks of a file. As a workaround, you can use a remote pointed at the
wrapped file system to see them.
An easy way to get rid of hidden garbage is to copy the littered directory
somewhere using the chunker remote and then purge the original directory.
The &lt;code&gt;copy&lt;/code&gt; command will copy only active chunks while &lt;code&gt;purge&lt;/code&gt; will
remove everything including the garbage.&lt;/p&gt;
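&lt;p&gt;A sketch of that cleanup, assuming a chunker remote called &lt;code&gt;overlay:&lt;/code&gt; with a littered directory &lt;code&gt;media&lt;/code&gt; (both names are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# copy only the active chunks to a fresh directory
rclone copy overlay:media overlay:media-clean
# remove the old directory including hidden garbage chunks
rclone purge overlay:media
# optionally move the clean copy back into place
rclone move overlay:media-clean overlay:media
&lt;/code&gt;&lt;/pre&gt;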
&lt;h3 id=&#34;caveats-and-limitations&#34;&gt;Caveats and Limitations&lt;/h3&gt;
&lt;p&gt;Chunker requires wrapped remote to support server-side &lt;code&gt;move&lt;/code&gt; (or &lt;code&gt;copy&lt;/code&gt; +
&lt;code&gt;delete&lt;/code&gt;) operations, otherwise it will explicitly refuse to start.
This is because it internally renames temporary chunk files to their final
names when an operation completes successfully.&lt;/p&gt;
&lt;p&gt;Chunker encodes the chunk number in the file name, so with the default &lt;code&gt;name_format&lt;/code&gt;
setting it adds 17 characters. Chunker also adds a 7 character temporary
suffix during operations. Many file systems limit the base file name (without path)
to 255 characters. Using rclone&#39;s crypt remote as a base file system limits
file names to 143 characters. Thus, the maximum name length is 231 for most files
and 119 for chunker-over-crypt. A user in need can change the name format to
e.g. &lt;code&gt;*.rcc##&lt;/code&gt; and save 10 characters (provided at most 99 chunks per file).&lt;/p&gt;
&lt;p&gt;Note that a move implemented using the copy-and-delete method may incur
double charging with some cloud storage providers.&lt;/p&gt;
&lt;p&gt;Chunker will not automatically rename existing chunks when you run
&lt;code&gt;rclone config&lt;/code&gt; on a live remote and change the chunk name format.
Beware that as a result some files which were treated as chunks
before the change can pop up in directory listings as normal files,
and vice versa. The same warning holds for the chunk size.
If you desperately need to change critical chunking settings, you should
run a data migration as described above.&lt;/p&gt;
&lt;p&gt;If the wrapped remote is case-insensitive, the chunker overlay will inherit
that property (so you can&#39;t have files called &amp;quot;Hello.doc&amp;quot; and &amp;quot;hello.doc&amp;quot;
in the same directory).&lt;/p&gt;
&lt;p&gt;Chunker included in rclone releases up to &lt;code&gt;v1.54&lt;/code&gt; can sometimes fail to
detect metadata produced by recent versions of rclone. We recommend keeping
rclone up to date to avoid data corruption.&lt;/p&gt;
&lt;p&gt;Changing &lt;code&gt;transactions&lt;/code&gt; is dangerous and requires explicit migration.&lt;/p&gt;

&lt;h3 id=&#34;standard-options&#34;&gt;Standard options&lt;/h3&gt;
&lt;p&gt;Here are the Standard options specific to chunker (Transparently chunk/split large files).&lt;/p&gt;
&lt;h4 id=&#34;chunker-remote&#34;&gt;--chunker-remote&lt;/h4&gt;
&lt;p&gt;Remote to chunk/unchunk.&lt;/p&gt;
&lt;p&gt;Normally should contain a &#39;:&#39; and a path, e.g. &amp;quot;myremote:path/to/dir&amp;quot;,
&amp;quot;myremote:bucket&amp;quot; or maybe &amp;quot;myremote:&amp;quot; (not recommended).&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      remote&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CHUNKER_REMOTE&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    true&lt;/li&gt;
&lt;/ul&gt;
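&lt;p&gt;As a sketch, a chunker remote can also be created non-interactively with
&lt;code&gt;rclone config create&lt;/code&gt;; here &lt;code&gt;overlay&lt;/code&gt; and
&lt;code&gt;mydrive:chunked&lt;/code&gt; are hypothetical names:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone config create overlay chunker remote mydrive:chunked
&lt;/code&gt;&lt;/pre&gt;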
&lt;h4 id=&#34;chunker-chunk-size&#34;&gt;--chunker-chunk-size&lt;/h4&gt;
&lt;p&gt;Files larger than chunk size will be split in chunks.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      chunk_size&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CHUNKER_CHUNK_SIZE&lt;/li&gt;
&lt;li&gt;Type:        SizeSuffix&lt;/li&gt;
&lt;li&gt;Default:     2Gi&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;chunker-hash-type&#34;&gt;--chunker-hash-type&lt;/h4&gt;
&lt;p&gt;Choose how chunker handles hash sums.&lt;/p&gt;
&lt;p&gt;All modes but &amp;quot;none&amp;quot; require metadata.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      hash_type&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CHUNKER_HASH_TYPE&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Default:     &amp;quot;md5&amp;quot;&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;none&amp;quot;
&lt;ul&gt;
&lt;li&gt;Pass any hash supported by wrapped remote for non-chunked files.&lt;/li&gt;
&lt;li&gt;Return nothing otherwise.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;md5&amp;quot;
&lt;ul&gt;
&lt;li&gt;MD5 for composite files.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;sha1&amp;quot;
&lt;ul&gt;
&lt;li&gt;SHA1 for composite files.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;md5all&amp;quot;
&lt;ul&gt;
&lt;li&gt;MD5 for all files.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;sha1all&amp;quot;
&lt;ul&gt;
&lt;li&gt;SHA1 for all files.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;md5quick&amp;quot;
&lt;ul&gt;
&lt;li&gt;Copying a file to chunker will request MD5 from the source.&lt;/li&gt;
&lt;li&gt;Falling back to SHA1 if unsupported.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;sha1quick&amp;quot;
&lt;ul&gt;
&lt;li&gt;Similar to &amp;quot;md5quick&amp;quot; but prefers SHA1 over MD5.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;advanced-options&#34;&gt;Advanced options&lt;/h3&gt;
&lt;p&gt;Here are the Advanced options specific to chunker (Transparently chunk/split large files).&lt;/p&gt;
&lt;h4 id=&#34;chunker-name-format&#34;&gt;--chunker-name-format&lt;/h4&gt;
&lt;p&gt;String format of chunk file names.&lt;/p&gt;
&lt;p&gt;The two placeholders are: base file name (*) and chunk number (#...).
There must be exactly one asterisk and one or more consecutive hash characters.
If the chunk number has fewer digits than the number of hashes, it is left-padded with zeros.
If it has more digits, they are left as is.
Possible chunk files are ignored if their names do not match the given format.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      name_format&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CHUNKER_NAME_FORMAT&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Default:     &amp;quot;*.rclone_chunk.###&amp;quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;chunker-start-from&#34;&gt;--chunker-start-from&lt;/h4&gt;
&lt;p&gt;Minimum valid chunk number. Usually 0 or 1.&lt;/p&gt;
&lt;p&gt;By default chunk numbers start from 1.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      start_from&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CHUNKER_START_FROM&lt;/li&gt;
&lt;li&gt;Type:        int&lt;/li&gt;
&lt;li&gt;Default:     1&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;chunker-meta-format&#34;&gt;--chunker-meta-format&lt;/h4&gt;
&lt;p&gt;Format of the metadata object or &amp;quot;none&amp;quot;.&lt;/p&gt;
&lt;p&gt;By default &amp;quot;simplejson&amp;quot;.
Metadata is a small JSON file named after the composite file.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      meta_format&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CHUNKER_META_FORMAT&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Default:     &amp;quot;simplejson&amp;quot;&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;none&amp;quot;
&lt;ul&gt;
&lt;li&gt;Do not use metadata files at all.&lt;/li&gt;
&lt;li&gt;Requires hash type &amp;quot;none&amp;quot;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;simplejson&amp;quot;
&lt;ul&gt;
&lt;li&gt;Simple JSON supports hash sums and chunk validation.&lt;/li&gt;
&lt;li&gt;It has the following fields: ver, size, nchunks, md5, sha1.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;chunker-fail-hard&#34;&gt;--chunker-fail-hard&lt;/h4&gt;
&lt;p&gt;Choose how chunker should handle files with missing or invalid chunks.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      fail_hard&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CHUNKER_FAIL_HARD&lt;/li&gt;
&lt;li&gt;Type:        bool&lt;/li&gt;
&lt;li&gt;Default:     false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;true&amp;quot;
&lt;ul&gt;
&lt;li&gt;Report errors and abort current command.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;false&amp;quot;
&lt;ul&gt;
&lt;li&gt;Warn user, skip incomplete file and proceed.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;chunker-transactions&#34;&gt;--chunker-transactions&lt;/h4&gt;
&lt;p&gt;Choose how chunker should handle temporary files during transactions.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      transactions&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CHUNKER_TRANSACTIONS&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Default:     &amp;quot;rename&amp;quot;&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;rename&amp;quot;
&lt;ul&gt;
&lt;li&gt;Rename temporary files after a successful transaction.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;norename&amp;quot;
&lt;ul&gt;
&lt;li&gt;Leave temporary file names and write transaction ID to metadata file.&lt;/li&gt;
&lt;li&gt;Metadata is required for no rename transactions (meta format cannot be &amp;quot;none&amp;quot;).&lt;/li&gt;
&lt;li&gt;If you are using norename transactions you should be careful not to downgrade rclone,
as older versions of rclone don&#39;t support this transaction style and will misinterpret
files manipulated by norename transactions.&lt;/li&gt;
&lt;li&gt;This method is EXPERIMENTAL, don&#39;t use on production systems.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;auto&amp;quot;
&lt;ul&gt;
&lt;li&gt;Rename or norename will be used depending on capabilities of the backend.&lt;/li&gt;
&lt;li&gt;If meta format is set to &amp;quot;none&amp;quot;, rename transactions will always be used.&lt;/li&gt;
&lt;li&gt;This method is EXPERIMENTAL, don&#39;t use on production systems.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;chunker-description&#34;&gt;--chunker-description&lt;/h4&gt;
&lt;p&gt;Description of the remote.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      description&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_CHUNKER_DESCRIPTION&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    
    <item>
      <title>Citrix ShareFile</title>
      <link>https://rclone.org/sharefile/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/sharefile/</guid>
      <description>&lt;h1 id=&#34;hahahugoshortcode70s0hbhb-citrix-sharefile&#34;&gt;&lt;i class=&#34;fas fa-share-square&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt;
 Citrix ShareFile&lt;/h1&gt;
&lt;p&gt;&lt;a href=&#34;https://sharefile.com&#34;&gt;Citrix ShareFile&lt;/a&gt; is a secure file sharing and transfer service aimed at businesses.&lt;/p&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;The initial setup for Citrix ShareFile involves getting a token from
Citrix ShareFile which you can do in your browser.  &lt;code&gt;rclone config&lt;/code&gt; walks you
through it.&lt;/p&gt;
&lt;p&gt;Here is an example of how to make a remote called &lt;code&gt;remote&lt;/code&gt;.  First run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; remote
Type of storage to configure.
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
XX / Citrix Sharefile
   \ &amp;#34;sharefile&amp;#34;
Storage&amp;gt; sharefile
** See help for sharefile backend at: https://rclone.org/sharefile/ **

ID of the root folder

Leave blank to access &amp;#34;Personal Folders&amp;#34;.  You can use one of the
standard values here or any folder ID (long hex number ID).
Enter a string value. Press Enter for the default (&amp;#34;&amp;#34;).
Choose a number from below, or type in your own value
 1 / Access the Personal Folders. (Default)
   \ &amp;#34;&amp;#34;
 2 / Access the Favorites folder.
   \ &amp;#34;favorites&amp;#34;
 3 / Access all the shared folders.
   \ &amp;#34;allshared&amp;#34;
 4 / Access all the individual connectors.
   \ &amp;#34;connectors&amp;#34;
 5 / Access the home, favorites, and shared folders as well as the connectors.
   \ &amp;#34;top&amp;#34;
root_folder_id&amp;gt; 
Edit advanced config? (y/n)
y) Yes
n) No
y/n&amp;gt; n
Remote config
Use web browser to automatically authenticate rclone with remote?
 * Say Y if the machine running rclone has a web browser you can use
 * Say N if running rclone on a (remote) machine without web browser access
If not sure try Y. If Y failed, try N.
y) Yes
n) No
y/n&amp;gt; y
If your browser doesn&amp;#39;t open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXX
Log in and authorize rclone for access
Waiting for code...
Got code
Configuration complete.
Options:
- type: sharefile
- endpoint: https://XXX.sharefile.com
- token: {&amp;#34;access_token&amp;#34;:&amp;#34;XXX&amp;#34;,&amp;#34;token_type&amp;#34;:&amp;#34;bearer&amp;#34;,&amp;#34;refresh_token&amp;#34;:&amp;#34;XXX&amp;#34;,&amp;#34;expiry&amp;#34;:&amp;#34;2019-09-30T19:41:45.878561877+01:00&amp;#34;}
Keep this &amp;#34;remote&amp;#34; remote?
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/remote_setup/&#34;&gt;remote setup docs&lt;/a&gt; for how to set it up on a
machine with no Internet browser available.&lt;/p&gt;
&lt;p&gt;Note that rclone runs a webserver on your local machine to collect the
token as returned from Citrix ShareFile. This only runs from the moment it opens
your browser to the moment you get back the verification code.  This
is on &lt;code&gt;http://127.0.0.1:53682/&lt;/code&gt; and it may require you to unblock
it temporarily if you are running a host firewall.&lt;/p&gt;
&lt;p&gt;Once configured you can then use &lt;code&gt;rclone&lt;/code&gt; like this,&lt;/p&gt;
&lt;p&gt;List directories in top level of your ShareFile&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone lsd remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;List all the files in your ShareFile&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone ls remote:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To copy a local directory to a ShareFile directory called backup&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone copy /home/source remote:backup
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Paths may be as deep as required, e.g. &lt;code&gt;remote:directory/subdirectory&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id=&#34;modification-times-and-hashes&#34;&gt;Modification times and hashes&lt;/h3&gt;
&lt;p&gt;ShareFile allows modification times to be set on objects accurate to 1
second.  These will be used to detect whether objects need syncing or
not.&lt;/p&gt;
&lt;p&gt;ShareFile supports MD5 type hashes, so you can use the &lt;code&gt;--checksum&lt;/code&gt;
flag.&lt;/p&gt;
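&lt;p&gt;For example, to verify a previous copy using MD5 checksums instead of
modification times and sizes:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone check --checksum /home/source remote:backup
&lt;/code&gt;&lt;/pre&gt;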
&lt;h3 id=&#34;transfers&#34;&gt;Transfers&lt;/h3&gt;
&lt;p&gt;For files above 128 MiB rclone will use a chunked transfer.  Rclone will
upload up to &lt;code&gt;--transfers&lt;/code&gt; chunks at the same time (shared among all
the multipart uploads).  Chunks are buffered in memory and are
normally 64 MiB so increasing &lt;code&gt;--transfers&lt;/code&gt; will increase memory use.&lt;/p&gt;
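&lt;p&gt;As a rough sketch (actual memory use also includes other buffers), the
chunk-buffer memory is about &lt;code&gt;--transfers&lt;/code&gt; times the chunk size:&lt;/p&gt;

```shell
transfers=4     # rclone default for --transfers
chunk_mib=64    # default ShareFile upload chunk size in MiB
echo "$((transfers * chunk_mib)) MiB"   # prints "256 MiB" of chunk buffers
```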
&lt;h3 id=&#34;restricted-filename-characters&#34;&gt;Restricted filename characters&lt;/h3&gt;
&lt;p&gt;In addition to the &lt;a href=&#34;https://rclone.org/overview/#restricted-characters&#34;&gt;default restricted characters set&lt;/a&gt;
the following characters are also replaced:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Character&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Value&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Replacement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;\&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x5C&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;＼&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;*&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x2A&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;＊&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;lt;&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x3C&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;＜&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;gt;&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x3E&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;＞&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;?&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x3F&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;？&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;:&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x3A&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;：&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;|&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x7C&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;｜&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;quot;&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x22&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;＂&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;File names also cannot start or end with the following characters.
These only get replaced if they are the first or last character in the
name:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Character&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Value&lt;/th&gt;
&lt;th style=&#34;text-align:center&#34;&gt;Replacement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SP&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x20&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;␠&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;.&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;0x2E&lt;/td&gt;
&lt;td style=&#34;text-align:center&#34;&gt;．&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Invalid UTF-8 bytes will also be &lt;a href=&#34;https://rclone.org/overview/#invalid-utf8&#34;&gt;replaced&lt;/a&gt;,
as they can&#39;t be used in JSON strings.&lt;/p&gt;

&lt;h3 id=&#34;standard-options&#34;&gt;Standard options&lt;/h3&gt;
&lt;p&gt;Here are the Standard options specific to sharefile (Citrix Sharefile).&lt;/p&gt;
&lt;h4 id=&#34;sharefile-client-id&#34;&gt;--sharefile-client-id&lt;/h4&gt;
&lt;p&gt;OAuth Client Id.&lt;/p&gt;
&lt;p&gt;Leave blank normally.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      client_id&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SHAREFILE_CLIENT_ID&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;sharefile-client-secret&#34;&gt;--sharefile-client-secret&lt;/h4&gt;
&lt;p&gt;OAuth Client Secret.&lt;/p&gt;
&lt;p&gt;Leave blank normally.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      client_secret&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SHAREFILE_CLIENT_SECRET&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;sharefile-root-folder-id&#34;&gt;--sharefile-root-folder-id&lt;/h4&gt;
&lt;p&gt;ID of the root folder.&lt;/p&gt;
&lt;p&gt;Leave blank to access &amp;quot;Personal Folders&amp;quot;.  You can use one of the
standard values here or any folder ID (long hex number ID).&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      root_folder_id&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SHAREFILE_ROOT_FOLDER_ID&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;li&gt;Examples:
&lt;ul&gt;
&lt;li&gt;&amp;quot;&amp;quot;
&lt;ul&gt;
&lt;li&gt;Access the Personal Folders (default).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;favorites&amp;quot;
&lt;ul&gt;
&lt;li&gt;Access the Favorites folder.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;allshared&amp;quot;
&lt;ul&gt;
&lt;li&gt;Access all the shared folders.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;connectors&amp;quot;
&lt;ul&gt;
&lt;li&gt;Access all the individual connectors.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;quot;top&amp;quot;
&lt;ul&gt;
&lt;li&gt;Access the home, favorites, and shared folders as well as the connectors.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;advanced-options&#34;&gt;Advanced options&lt;/h3&gt;
&lt;p&gt;Here are the Advanced options specific to sharefile (Citrix Sharefile).&lt;/p&gt;
&lt;h4 id=&#34;sharefile-token&#34;&gt;--sharefile-token&lt;/h4&gt;
&lt;p&gt;OAuth Access Token as a JSON blob.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      token&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SHAREFILE_TOKEN&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;sharefile-auth-url&#34;&gt;--sharefile-auth-url&lt;/h4&gt;
&lt;p&gt;Auth server URL.&lt;/p&gt;
&lt;p&gt;Leave blank to use the provider defaults.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      auth_url&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SHAREFILE_AUTH_URL&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;sharefile-token-url&#34;&gt;--sharefile-token-url&lt;/h4&gt;
&lt;p&gt;Token server URL.&lt;/p&gt;
&lt;p&gt;Leave blank to use the provider defaults.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      token_url&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SHAREFILE_TOKEN_URL&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;sharefile-upload-cutoff&#34;&gt;--sharefile-upload-cutoff&lt;/h4&gt;
&lt;p&gt;Cutoff for switching to multipart upload.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      upload_cutoff&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SHAREFILE_UPLOAD_CUTOFF&lt;/li&gt;
&lt;li&gt;Type:        SizeSuffix&lt;/li&gt;
&lt;li&gt;Default:     128Mi&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;sharefile-chunk-size&#34;&gt;--sharefile-chunk-size&lt;/h4&gt;
&lt;p&gt;Upload chunk size.&lt;/p&gt;
&lt;p&gt;Must be a power of 2 &amp;gt;= 256k.&lt;/p&gt;
&lt;p&gt;Making this larger will improve performance, but note that one chunk
is buffered in memory per transfer.&lt;/p&gt;
&lt;p&gt;Reducing this will reduce memory usage but decrease performance.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      chunk_size&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SHAREFILE_CHUNK_SIZE&lt;/li&gt;
&lt;li&gt;Type:        SizeSuffix&lt;/li&gt;
&lt;li&gt;Default:     64Mi&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;sharefile-endpoint&#34;&gt;--sharefile-endpoint&lt;/h4&gt;
&lt;p&gt;Endpoint for API calls.&lt;/p&gt;
&lt;p&gt;This is usually auto discovered as part of the oauth process, but can
be set manually to something like: &lt;a href=&#34;https://XXX.sharefile.com&#34;&gt;https://XXX.sharefile.com&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      endpoint&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SHAREFILE_ENDPOINT&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;sharefile-encoding&#34;&gt;--sharefile-encoding&lt;/h4&gt;
&lt;p&gt;The encoding for the backend.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/overview/#encoding&#34;&gt;encoding section in the overview&lt;/a&gt; for more info.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      encoding&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SHAREFILE_ENCODING&lt;/li&gt;
&lt;li&gt;Type:        Encoding&lt;/li&gt;
&lt;li&gt;Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;sharefile-description&#34;&gt;--sharefile-description&lt;/h4&gt;
&lt;p&gt;Description of the remote.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      description&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_SHAREFILE_DESCRIPTION&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;limitations&#34;&gt;Limitations&lt;/h2&gt;
&lt;p&gt;Note that ShareFile is case insensitive so you can&#39;t have a file called
&amp;quot;Hello.doc&amp;quot; and one called &amp;quot;hello.doc&amp;quot;.&lt;/p&gt;
&lt;p&gt;ShareFile only supports filenames up to 256 characters in length.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;rclone about&lt;/code&gt; is not supported by the Citrix ShareFile backend. Backends without
this capability cannot determine free space for an rclone mount or
use policy &lt;code&gt;mfs&lt;/code&gt; (most free space) as a member of an rclone union
remote.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;https://rclone.org/overview/#optional-features&#34;&gt;List of backends that do not support rclone about&lt;/a&gt; and &lt;a href=&#34;https://rclone.org/commands/rclone_about/&#34;&gt;rclone about&lt;/a&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Combine</title>
      <link>https://rclone.org/combine/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <author>Nick Craig-Wood</author>
      <guid>https://rclone.org/combine/</guid>
      <description>&lt;h1 id=&#34;hahahugoshortcode10s0hbhb-combine&#34;&gt;&lt;i class=&#34;fa fa-folder-plus&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt;
 Combine&lt;/h1&gt;
&lt;p&gt;The &lt;code&gt;combine&lt;/code&gt; backend joins remotes together into a single directory
tree.&lt;/p&gt;
&lt;p&gt;For example you might have a remote for images on one provider:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone tree s3:imagesbucket
/
├── image1.jpg
└── image2.jpg
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And a remote for files on another:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone tree drive:important/files
/
├── file1.txt
└── file2.txt
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The &lt;code&gt;combine&lt;/code&gt; backend can join these together into a synthetic
directory structure like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ rclone tree combined:
/
├── files
│   ├── file1.txt
│   └── file2.txt
└── images
    ├── image1.jpg
    └── image2.jpg
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You&#39;d do this by specifying an &lt;code&gt;upstreams&lt;/code&gt; parameter in the config
like this&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;upstreams = images=s3:imagesbucket files=drive:important/files
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;During the initial setup with &lt;code&gt;rclone config&lt;/code&gt; you will specify the
upstream remotes as a space-separated list. The upstream remotes can
be either local paths or other remotes.&lt;/p&gt;
&lt;h2 id=&#34;configuration&#34;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;Here is an example of how to make a combine called &lt;code&gt;remote&lt;/code&gt; for the
example above. First run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; rclone config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will guide you through an interactive setup process:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; remote
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
...
XX / Combine several remotes into one
   \ (combine)
...
Storage&amp;gt; combine
Option upstreams.
Upstreams for combining
These should be in the form
    dir=remote:path dir2=remote2:path
Where before the = is specified the root directory and after is the remote to
put there.
Embedded spaces can be added using quotes
    &amp;#34;dir=remote:path with space&amp;#34; &amp;#34;dir2=remote2:path with space&amp;#34;
Enter a fs.SpaceSepList value.
upstreams&amp;gt; images=s3:imagesbucket files=drive:important/files
Configuration complete.
Options:
- type: combine
- upstreams: images=s3:imagesbucket files=drive:important/files
Keep this &amp;#34;remote&amp;#34; remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;configuring-for-google-drive-shared-drives&#34;&gt;Configuring for Google Drive Shared Drives&lt;/h3&gt;
&lt;p&gt;Rclone has a convenience feature for making a combine backend for all
the shared drives you have access to.&lt;/p&gt;
&lt;p&gt;Assuming your main (non shared drive) Google drive remote is called
&lt;code&gt;drive:&lt;/code&gt; you would run&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone backend -o config drives drive:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This would produce something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[My Drive]
type = alias
remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:

[Test Drive]
type = alias
remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:

[AllDrives]
type = combine
upstreams = &amp;quot;My Drive=My Drive:&amp;quot; &amp;quot;Test Drive=Test Drive:&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you then add that config to your config file (find it with &lt;code&gt;rclone config file&lt;/code&gt;) then you can access all the shared drives in one place
with the &lt;code&gt;AllDrives:&lt;/code&gt; remote.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;https://rclone.org/drive/#drives&#34;&gt;the Google Drive docs&lt;/a&gt; for full info.&lt;/p&gt;

&lt;h3 id=&#34;standard-options&#34;&gt;Standard options&lt;/h3&gt;
&lt;p&gt;Here are the Standard options specific to combine (Combine several remotes into one).&lt;/p&gt;
&lt;h4 id=&#34;combine-upstreams&#34;&gt;--combine-upstreams&lt;/h4&gt;
&lt;p&gt;Upstreams for combining&lt;/p&gt;
&lt;p&gt;These should be in the form&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;dir=remote:path dir2=remote2:path
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The part before the = is the root directory name and the part after is the remote to
put there.&lt;/p&gt;
&lt;p&gt;Embedded spaces can be added using quotes&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;quot;dir=remote:path with space&amp;quot; &amp;quot;dir2=remote2:path with space&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      upstreams&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_COMBINE_UPSTREAMS&lt;/li&gt;
&lt;li&gt;Type:        SpaceSepList&lt;/li&gt;
&lt;li&gt;Default:&lt;/li&gt;
&lt;/ul&gt;
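&lt;p&gt;As a sketch, the remote from the configuration walkthrough above can also
be created in one command (remote names as in that example):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rclone config create remote combine upstreams &amp;quot;images=s3:imagesbucket files=drive:important/files&amp;quot;
&lt;/code&gt;&lt;/pre&gt;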
&lt;h3 id=&#34;advanced-options&#34;&gt;Advanced options&lt;/h3&gt;
&lt;p&gt;Here are the Advanced options specific to combine (Combine several remotes into one).&lt;/p&gt;
&lt;h4 id=&#34;combine-description&#34;&gt;--combine-description&lt;/h4&gt;
&lt;p&gt;Description of the remote.&lt;/p&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Config:      description&lt;/li&gt;
&lt;li&gt;Env Var:     RCLONE_COMBINE_DESCRIPTION&lt;/li&gt;
&lt;li&gt;Type:        string&lt;/li&gt;
&lt;li&gt;Required:    false&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;metadata&#34;&gt;Metadata&lt;/h3&gt;
&lt;p&gt;Any metadata supported by the underlying remote is read and written.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&#34;https://rclone.org/docs/#metadata&#34;&gt;metadata&lt;/a&gt; docs for more info.&lt;/p&gt;

</description>
    </item>
    
  </channel>
</rss>
