Support "chunked" files stored in SeaweedFS
SeaweedFS reads the entire file into memory when serving it in order to be fast and achieve high concurrency, so it isn't very efficient at serving large files. The SeaweedFS wiki recommends splitting large files into smaller chunks and reading the chunks one by one.
`weed` has a CLI subcommand that does this: `weed upload -maxMB=64 file`. It reads up to 64MB of the file at a time, writes each piece as a new file in SeaweedFS, and repeats until the entire file is stored in parts. It then creates one more file in SeaweedFS, a chunk manifest, containing the original file's metadata and the file IDs of all of the chunks.
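For illustration, here is a minimal Go sketch of how a client could reassemble such a chunked file: fetch the manifest, then stream the chunks in order. It assumes the JSON manifest layout shown on the wiki (a `chunks` array of `{fid, size}` entries); the volume server address and file IDs are placeholders, and a real client would look up volume locations through the master's `/dir/lookup` endpoint rather than hardcoding a server.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

// ChunkManifest mirrors the JSON manifest layout described on the
// SeaweedFS wiki (assumed here; verify against your SeaweedFS version).
type ChunkManifest struct {
	Name   string `json:"name"`
	Mime   string `json:"mime"`
	Size   int64  `json:"size"`
	Chunks []struct {
		Fid  string `json:"fid"`
		Size int64  `json:"size"`
	} `json:"chunks"`
}

// fetch GETs a URL and returns the response body for streaming.
func fetch(url string) (io.ReadCloser, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode != http.StatusOK {
		resp.Body.Close()
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return resp.Body, nil
}

func main() {
	// Placeholder addresses and IDs for illustration only.
	const volumeServer = "http://localhost:8080"
	manifestFid := "3,01637037d6" // hypothetical manifest file ID

	// 1. Fetch and decode the chunk manifest.
	body, err := fetch(volumeServer + "/" + manifestFid)
	if err != nil {
		panic(err)
	}
	var m ChunkManifest
	if err := json.NewDecoder(body).Decode(&m); err != nil {
		panic(err)
	}
	body.Close()

	// 2. Stream each chunk, in order, to reassemble the original file.
	out, err := os.Create(m.Name)
	if err != nil {
		panic(err)
	}
	defer out.Close()
	for _, c := range m.Chunks {
		chunk, err := fetch(volumeServer + "/" + c.Fid)
		if err != nil {
			panic(err)
		}
		if _, err := io.Copy(out, chunk); err != nil {
			panic(err)
		}
		chunk.Close()
	}
}
```

Streaming chunk by chunk like this keeps memory use bounded by the chunk size rather than the full file size, which is the point of the wiki's recommendation.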
More information about SeaweedFS's implementation can be found on the SeaweedFS wiki.