Support mounting compressed archives in the VFS (Virtual File System): save up to 95% disk space and hours of loading time (accumulated per year)


I’d suggest FS2020 support mounting popular compressed archive formats
directly in the VFS (Virtual File System). At a minimum, the following formats
should be supported:

  • .7z because it is very efficient with FS2020 add-on textures (see below).
  • .zip because it is readily available in Win10.

Here is what your community folder would look like: instead of this…

FS2020\Community\gaist-msfs-V2\... (thousands of folders and files)

you’d have only this…

FS2020\Community\gaist-msfs-V2.7z (a single compressed archive file)

FS2020 uses a VFS to manage different mount points that files can be loaded
from. This is essential for supporting the Community folder and for replacing
files with newer ones. However, many add-ons comprise hundreds if not thousands
of files, and this causes a number of problems, among them hard-drive space
usage and FS2020 loading times:

Managing add-ons requires moving thousands of files

This can be partly worked around using symbolic links, but creating them by
hand is inconvenient and error-prone, and it still puts the burden on the user
to store and manually archive add-ons which are no longer used, and to restore
and manually unpack archived add-ons whenever they want them back.

Loading add-ons means indexing thousands of files

Loading add-ons requires not only indexing the subfolders and their files but
also checking every single file size on the drive and comparing it with the
layout.json content. This causes lots of IOCTLs, which add up to the total
load time, and the impact is very noticeable with no more than 20 “standard”
add-ons on an SSD, let alone on an HDD.
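To illustrate the kind of per-file checking described above, here is a minimal Python sketch (check_layout is a hypothetical helper name; FS2020’s actual validation logic may differ):

```python
import json
import os

def check_layout(package_root):
    """Compare each file's on-disk size with its layout.json entry.

    One stat call per entry: with thousands of files per add-on, these
    small metadata requests are exactly the per-file disk traffic that
    adds up at load time.
    """
    with open(os.path.join(package_root, "layout.json")) as f:
        layout = json.load(f)
    mismatches = []
    for entry in layout["content"]:
        full_path = os.path.join(package_root, entry["path"])
        if not os.path.isfile(full_path) or os.path.getsize(full_path) != entry["size"]:
            mismatches.append(entry["path"])
    return mismatches
```

Each entry costs at least one filesystem metadata lookup, which is why the cost scales with the number of files rather than their total size.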


Mounting compressed archives directly in the VFS would solve a number of
problems, both at the user level and at the game level:

  • FS2020 add-on file indexing would be faster because these archive formats already index all the files and their sizes in their headers.
  • Add-on management would be easier because you’d move a single compressed archive file around instead of a folder containing thousands of subfolders and files.
  • User drive space in the Community folder would be considerably reduced (see the savings example below).
  • In addition, you could archive your add-ons and restore them without having to decompress them first.
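As an illustration of the first point, here is a Python sketch (index_archive is a hypothetical helper) that builds a complete file index from a .zip’s central directory with a single archive open, instead of thousands of per-file metadata calls:

```python
import zipfile

def index_archive(archive_path):
    """Build a {virtual_path: uncompressed_size} index from a .zip archive.

    The central directory at the end of the archive already lists every
    file name and its uncompressed size, so no per-file disk access is
    needed: one directory read replaces thousands of stat calls.
    """
    with zipfile.ZipFile(archive_path) as zf:
        return {info.filename: info.file_size
                for info in zf.infolist()
                if not info.is_dir()}
```

The same applies to .7z archives, whose header likewise stores the full file table up front.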


[Global AI Ship Traffic MSFS V2](ship-traffic-msfs-v1) is a “big” add-on with 11,802 files and 1,763 folders:

| | Total Size (bytes) | Compression ratio (%) |
|---|---|---|
| Community folder | 6,259,674,687 | |
| .zip archive (std settings) | 927,505,718 | 14.7 % |
| .7z archive (std settings) | 418,079,126 | 6.7 % |


Modernizing layout.json with wildcards

The add-on format requires a layout.json file which contains the list of all
the files the add-on uses. It is meant to tell FS2020 which of the files found
in the folder and its subfolders are to be loaded, and therefore which files
are to be ignored. However, most of the time nearly all the files in a folder
must be loaded, and sometimes none of the files in a folder should be (docs,
PDFs). I’d suggest supporting wildcards, so that you could replace this…

  "content": [
      {
          "path": "ContentInfo/global-ai-ship-traffic-v2/Anzac Destroyer MSFS.JPG",
          "size": 248068,
          "date": 132752292850201898
      },
      {
          "path": "ContentInfo/global-ai-ship-traffic-v2/Arctic Princess LNG MSFS tanker.JPG",
          "size": 249930,
          "date": 132734818938055004
      },
      {
          "path": "ContentInfo/global-ai-ship-traffic-v2/Blue Marlin Heavyload Ship MSFS.JPG",
          "size": 393298,
          "date": 132747380300622415
      },
      {
          "path": "ContentInfo/global-ai-ship-traffic-v2/Blue Star Ferry Naxos MSFS.JPG",
          "size": 338063,
          "date": 132747391718720969
      },
      …

with this instead…

  "content": [
      { "path": "ContentInfo/global-ai-ship-traffic-v2/*.*" },

and exclude subfolders like this…

  "content": [
      { "path-exclude": "SimObjects/Boats/AI_Alice_Austen/extra_info/*.*" },
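A sketch of how such include/exclude rules could be expanded against a package’s file list (assuming fnmatch-style matching; resolve_content and the rule names are hypothetical, mirroring the proposal above):

```python
import fnmatch

def resolve_content(all_files, rules):
    """Expand wildcard "path" / "path-exclude" rules against a file list.

    `all_files` would come from scanning the package folder (or reading
    an archive's directory); `rules` mirrors the proposed "content"
    entries. Note: fnmatch's `*` also crosses `/` separators, so a
    pattern like "folder/*.*" matches files in subfolders too.
    """
    included = set()
    for rule in rules:
        if "path" in rule:
            included.update(f for f in all_files
                            if fnmatch.fnmatch(f, rule["path"]))
    for rule in rules:
        if "path-exclude" in rule:
            included -= {f for f in included
                         if fnmatch.fnmatch(f, rule["path-exclude"])}
    return sorted(included)
```

Exclusions are applied after inclusions, so a broad `"path": "*.*"` plus a few targeted `"path-exclude"` rules covers the common case of “load everything except the docs”.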

Most add-ons aren’t replacing files, they are just adding theirs, and there is
only one instance of each file to be mounted in the VFS. The size and date
should only be necessary when explicitly overriding another existing file
(using date). Otherwise, size, which is meant to validate the add-on content,
adds an unnecessary burden not only at authoring time (regardless of whether
archives are mounted), but also at loading time if archive support is added,
because the file size is already included in the archive directory and file
table headers. With wildcards, you’d only keep the more verbose form using
size and date for the cases where add-on vendors are specifically overriding a
file with the same virtual path. This is the only case where this information
is necessary in order to decide which one is the newest.
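The override triage described above could look like this (a sketch only; build_vfs_table is a hypothetical name, not FS2020’s actual logic):

```python
def build_vfs_table(packages):
    """Merge several packages' "content" lists into one virtual-path table.

    When two packages mount the same virtual path, the entry with the
    newer "date" (the FILETIME-style value used in layout.json) wins --
    the only case where the date field is strictly needed.
    """
    table = {}
    for pkg_name, entries in packages:
        for entry in entries:
            current = table.get(entry["path"])
            if current is None or entry["date"] > current["date"]:
                table[entry["path"]] = {**entry, "package": pkg_name}
    return table
```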

Both good ideas, but I think you should split them into 2 separate posts, for
ease of tracking.

Glad to see that you brought this idea over. Hopefully it gets some more
direct attention this way.

You can find the separate appendix section in a new topic here:

Hi there, We thought about supporting such a feature quite early in the MSFS
development process but finally decided not to do it. Although we may change
our mind in the future there are no plans to do so for now. Just a few
comments on what’s been written:

  • " Loading add-ons requires not only indexing the subfolders and their files but also checking every single file size on the drive and compare them with the layout.json content. This is causing lots of IOCTL which are adding up to the total load time " - this is incorrect. We do not check each file size upon mounting packages - it would rather defeat the purpose of the JSON file!
  • We actually use the size information to avoid costly disk accesses when each file size is required internally - not a burden at all! Since the proposed filter idea discards the size information completely, this is not something we would consider.
  • Distributing archives would mean that package updates cannot be done through patches anymore (unless we want an update to decompress / patch / recompress).
  • Although it is true that using archives would (significantly) reduce the hard drive footprint, it is unclear to me if “loading compressed data + decompress” would really be faster than “loading uncompressed data”. There may be some overhead in terms of memory requirements too.

I guess we will offer improvements to packages management in the future but I
am not convinced it will be provided through direct support of archives. Best
regards, Eric / Asobo

@EPelissier Thank you for the clarifications and your time, Eric. You’re
raising interesting points which tell me I might not have been clear enough
about some of these. Please let me explain in more detail:

  • I can read the console logs reporting when the layout.json size and the actual file size differ. This tells me the file size must actually be checked at some point in time, but I understand from your remarks that it is not checked at the time the game loads the .json file, only when the add-on is effectively loaded. Is this correct?
  • Do I also understand correctly that, in turn, the layout.json file is actually serving the same purpose as the .zip directory section in my suggestion?
  • Whether supporting and loading from an archive gives a speed advantage or not, like anything else in computers, it is a balance where you trade size for speed. The suggestion is not about debating whether it is faster (which it is from my tests implementing this with X-Plane 11, especially when dealing with lots of small files); it is about giving users a choice to either use flat files and folders, which might eventually be faster, or use compressed folders* even if this loads slower in practice. From my experience, users know what they are doing and whether they prefer incurring a load cost or a storage cost when it is clearly explained to them. Those who don’t know or can’t know won’t be using the system anyhow, and therefore this won’t change anything for them. Those who know will take advantage of it, where “advantage” could mean taking less disk space and managing add-ons more easily, for example.
  • The filter idea probably makes more sense when used in the context of an archive, because in this case the information set forth in the .json would already be found in the archive directory header (file path, name, size, flags, attributes, date). Having said this, the filter idea I’m exposing here is also about having a practical inclusion/exclusion system, given that nearly 99% of add-ons are only adding files, not replacing them, that these files are all in the add-on’s subfolders, and that some of these files (documentation, PDFs, etc.) usually have no reason to be referenced in the .json in my opinion. If the purpose of setting the size in the .json file is to preemptively reduce disk access cost when loading the files, then if I understand correctly this only matters when the actual files are loaded. In this case, if wildcards add delays or access costs, the same reasoning as above still applies: the informed user knows it is easier to reference files with a wildcard, but that it can cause additional lookup and loading time.
  • It seemed to me the archives you’re distributing are already the equivalent of .zip files in disguise, with an XML companion telling what files to add, what files to delete, and what files to patch, and this is not what this suggestion is about. The idea above is solely for users managing their own add-ons (i.e. the add-ons you download from an add-on site) and/or for vendors publishing their add-ons outside the Microsoft Marketplace. Therefore I do not see this suggestion as something to apply widely to all the FS2020-distributed packages. This idea is only about adding a VFS mounting option supporting archives (.zip/.7z), and in practice it will most likely only be used by users putting files in the Community folder themselves.

*In the form of a .zip/.7z for convenience both on the user side and on the
implementation side, because these archive formats are very well supported
API-wise and have readily available user-facing tooling.
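To make the suggestion concrete, here is a minimal Python sketch of a VFS layer that resolves virtual paths from either loose folders or mounted .zip archives, with loose files taking precedence (VfsMount is a hypothetical class, purely illustrative; it is not FS2020’s architecture):

```python
import os
import zipfile

class VfsMount:
    """Serve virtual paths either from loose folders or from mounted
    .zip archives, transparently to the caller."""

    def __init__(self):
        self._folders = []   # loose-folder mount points
        self._archives = []  # open ZipFile handles

    def mount_folder(self, root):
        self._folders.append(root)

    def mount_archive(self, archive_path):
        self._archives.append(zipfile.ZipFile(archive_path))

    def read(self, virtual_path):
        # Loose files win, so a user can still override archived content
        # by dropping a plain file in the Community folder.
        for root in self._folders:
            candidate = os.path.join(root, virtual_path)
            if os.path.isfile(candidate):
                with open(candidate, "rb") as f:
                    return f.read()
        for zf in self._archives:
            try:
                return zf.read(virtual_path)
            except KeyError:
                continue
        raise FileNotFoundError(virtual_path)
```

From the caller’s point of view nothing changes: the same virtual path works whether the add-on ships as a folder or as an archive, which is the whole point of doing this at the VFS level.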