DevMode insists on showing floating-point numbers with a fairly ridiculous number of significant digits, and on writing numbers to the .cfg files the same way. Surely two or perhaps three significant digits would be enough in most cases. It's just a waste of precious screen space in the UI, and it makes the information harder to read when perusing the cfg files of bundled or add-on aircraft to learn from them.
Having numbers like 13.958436 or 41.799999 in an engines.cfg, for instance, is just silly. They could well be rounded to two or three significant digits. Much easier to read. I doubt there would be any noticeable difference in the aircraft behaviour if these numbers were 14 and 42 instead.
And while we're at it, the cfg file writer could add some whitespace, like after the commas in table lines and around = signs, as in the example below.
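For illustration, with a made-up table parameter (the name is hypothetical, not from any real cfg):

    before:  some_table_entry=1.218000,0.000000,0.500000
    after:   some_table_entry = 1.218, 0.0, 0.5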
Remember, cfg files are basically like source code. Source code is read much more often than it is written. Readability and cosmetics are important. That is my humble opinion, coming from a long software engineering background.
Mathematics is not always beautiful to look at and read. To get maximum precision, the numbers sometimes have to be ugly. The differences between significant figures may be tiny, but they can change the overall accuracy. That is why the world doesn’t operate on two significant figures.
How do you then explain that the same extremely precise numbers show up copied all over the place, in extremely dissimilar aircraft? Do you really claim that the developer of an aircraft has determined that, even though for instance the dimensions, engine model, and power or thrust of their aircraft are completely different from some other aircraft's, they still need to use exactly the same numbers in engine-related parameters?
That’s a good question @tml! I can’t speak for developers’ motivations.
Solving complex mathematical equations requires both mathematical knowledge and persistence. Maybe many of these developers settle for gobbledygook in certain significant figures because it is easier. Does it affect the overall accuracy? It might. That is up to the developers to decide.
The editor should not destroy precision on any values read from the cfgs.
Deciding how many digits each parameter should have may have unintended effects as not everything is used in the originally imagined fashion, and sometimes we need to do “interesting” things to achieve certain effects.
I don’t see the extra precision really being a problem for any developer.
The editor should not destroy precision on any values read from the cfgs.
OK, I can agree with that, kinda. But if Asobo agreed with my point, would you see it as a problem if, in the future, the bundled aircraft and the sample ones had cfg files with less (fake) precision in their numbers?
Let’s take a concrete example. The B787-10 flight_model.cfg has:
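(The exact lines are not reproduced here. As an illustrative reconstruction, assuming the usual station_load format of weight in lb followed by the longitudinal, lateral and vertical positions in ft and a display name, and with the index, weight and name as placeholders, the entry would look something like this:)

    station_load.4 = 3000, 40.641644, 0.000000, 5.000003, "Business Class"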
Look at 40.641644 and 5.000003. I am dead sure that nobody has measured the centre of gravity of the business-class passengers to a precision of a millionth of a foot. And does the exact zero for the lateral position have the same precision? After all, 0 = 0.000000.
Surely the silly number 5.000003 is the result of a very rough estimate of 5, with no decimals, having been entered initially and then written out to the file with unnecessary fake precision. I am not sure where the 40.641644 comes from, but it surely can't be what was originally written into the file or typed into DevMode.
Deciding how many digits each parameter should have may have unintended effects
As if there weren’t lots of other things with random unintended consequences all the time in large and complicated software like this…
I don’t see the extra precision really being a problem for any developer
I am a developer. The fake “precision” makes it harder for me to read .cfg files. That annoys me. Thus your generalisation is proven invalid.
That’s exactly what it is: “fake precision”, and totally unnecessary.
As already pointed out, it makes no detectable difference to any sim calculation; it only serves to make the files more difficult for humans to read.
I really cannot understand why this is being debated here in an SDK forum, or why these ridiculously over-precise values are being defended as required precision.
What is more interesting to me is how many of these “fake precision” values get copied (blindly) from plane to plane and from developer to developer.
I get the feeling that few people have an idea how floating-point numbers in computers work. I certainly don’t claim to be an expert myself, but I know enough to know that there is much I don’t understand about it.
Now, that doesn’t explain how 5 has turned into 5.000003. 5 can be exactly represented in single precision floating point, and naturally then also in double precision.
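To see both halves of that in action, here is a minimal Python sketch (the struct round trip simply forces a value into binary32, i.e. a single-precision float, and back):

    import struct

    def as_float32(x):
        # pack into a 32-bit float and unpack again, rounding to binary32
        return struct.unpack('f', struct.pack('f', x))[0]

    print('%.6f' % as_float32(41.8))  # 41.799999 -- the engines.cfg number above
    print('%.6f' % as_float32(5.0))   # 5.000000  -- 5 survives exactly

So the 41.799999 earlier in the thread is exactly what you get when someone types 41.8, it gets stored as a single-precision float, and it is then written back out with six decimals.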
I suspect that MSFS 2020 uses SI (“metric”) units internally (many of the DevMode debug windows also display values in SI units, so clearly the developers “like” them, which is great), even though all input and output is in Imperial units, so conversion back and forth might also introduce numeric arbitrariness. But merely converting 5 ft to metres and back does not explain the 5.000003 either, as the sketch below suggests. Still, the likelihood that such round-trip conversion is going on is worth keeping in mind.
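For what it's worth, here is that round trip in single precision, reusing the as_float32 helper from the sketch above (0.3048 is the exact ft-to-m factor by definition): even with every intermediate result squeezed into binary32, the result still prints as 5.000000 at six decimals.

    FT_TO_M = as_float32(0.3048)         # ft-to-m factor, rounded to binary32
    metres = as_float32(5.0 * FT_TO_M)   # 5 ft to metres, in binary32
    feet = as_float32(metres / FT_TO_M)  # and back to feet again
    print('%.6f' % feet)                 # 5.000000 -- still no 5.000003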
Maybe there are some additional older layers of conversion involved, too? Maybe FS9 used furlongs for length and fortnights for time? (That was a joke.) (The use of slugs and degrees Rankine in the cfg files is sadly not a joke, though.) Maybe numbers pass through some 16-bit floating point representation in the deepest legacy layers, thanks to history where even single precision floating point math was considered “too slow”? We might never know.
I had always “assumed” that devs added these extra digits to try to see if anyone else was copying their “valuable” parameter values!
That, or they were generated programmatically, and nobody ever went back to read them or considered it worthwhile to make them shorter and thus easier for humans to read.
There are many examples in the .cfg files where the data COULD be formatted/spaced to make it far more readable,
and doing so would maybe decrease processing performance by 0.0000004528765434% for the 0.00000026538764 seconds that it runs, once, on load.
As you noted, computer math in general is not exact, and many “fixes” are needed to be able to do “good” math in computers. It’s actually quite frustrating how easy it is to get a wrong answer.
This comment here from the links you provided explains exactly where the numbers you don’t like are coming from…
IEEE 754 is great for expressing mostly-accurate quantities, not so great when you want decimal precision. The following examples will all be done in Python, which uses IEEE 754 for its floating-point operations.
(I'm using Python 3.6.3 / IPython 6.2.1 if you're interested in reproducing my setup, but I think these examples should be the same in all versions of Python >= 3.0.)
In [1]: 0.1 + 0.2
Out[1]: 0.30000000000000004
There's actually a whole website dedicated to showcasing this "bug" in different languages: http://0.30000000000000004.com/
You could always write a little script to clean up the numbers into the form you like to read them? Something like the sketch below, perhaps.
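In that spirit, a minimal sketch of such a script (my own illustration, untested against real-world cfg quirks; it blindly rewrites every decimal literal, so keep it away from lines where exact values, GUIDs or version numbers matter):

    import re

    DECIMAL = re.compile(r'-?\d+\.\d+')  # decimal literals only

    def round_sig(line, digits=3):
        # Round every decimal number on the line to `digits` significant
        # digits. The second %g formatting drops trailing zeros and turns
        # results like 3e+03 back into 3000.
        def repl(match):
            rounded = float('%.*g' % (digits, float(match.group())))
            return '%g' % rounded
        return DECIMAL.sub(repl, line)

    print(round_sig('station_load.4 = 3000.000000, 40.641644, 0.000000, 5.000003'))
    # station_load.4 = 3000, 40.6, 0, 5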
If the Aircraft Editor displayed numbers with sane precision, I could understand it being seen as irrelevant how they are stored in the .cfg file. But it, too, shows way too many pseudo-significant digits.