No - such inaccuracies arise as an inherent consequence of the limited precision of the double-precision floating-point format, whose significand can represent decimal values to around 15-17 significant figures of precision.
Consider how some numbers cannot be perfectly represented in base 10 (decimal) - e.g. multiples of 1/3 would require an infinite number of decimal places to be represented exactly, yet these same numbers can be represented exactly in other bases - e.g. 1/3 in base 3 (ternary) is exactly 0.1. The same applies to binary, in which other rational numbers (such as 0.8) cannot be represented exactly, but must instead be approximated to the precision afforded by the amount of memory allocated to the format used to store the value - in this case, the 64 bits of the double-precision floating-point format.
As such, rounding will inevitably occur at the limit of precision following arithmetic operations; this introduces minuscule differences which can result in a calculated value not being exactly equal to its literal counterpart. You can observe these differences directly by using the rtos function to display all available decimal places:
_$ (setq x 0.8)
0.8
_$ (rtos x 2 16)
"0.8000000000000000"
_$ (* x x)
0.64
_$ (rtos (* x x) 2 16)
"0.6400000000000002"
_$ (= 0.64 (* x x))
nil
_$ (equal 0.64 (* x x) 1e-8)
T
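This behaviour is not specific to AutoLISP - any language using IEEE 754 double-precision floats will exhibit it. As a rough illustration (a Python sketch, chosen here because Python's decimal module can print the exact value stored for a double, much like rtos above):

```python
from decimal import Decimal

# The literal 0.8 is silently rounded to the nearest representable
# double when stored; Decimal reveals its exact decimal expansion.
print(Decimal(0.8))
# 0.8000000000000000444089209850062616169452667236328125

# Squaring compounds the representation error, so the computed product
# is not the same double as the literal 0.64:
print(0.8 * 0.8 == 0.64)              # False

# Hence the standard remedy: compare within a tolerance, as the
# AutoLISP equal function does with its fuzz argument.
print(abs(0.8 * 0.8 - 0.64) < 1e-8)   # True
```

The final comparison mirrors (equal 0.64 (* x x) 1e-8) in the transcript above.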