Generally, IMC approaches perform computation with relatively low numerical precision. Hence, IMC does not aim to replace digital floating-point arithmetic units; instead, it targets applications, such as deep neural network inference, that are resilient to low precision. The limitations arising from device variability and other non-ideal device characteristics must also be addressed. To this end, the concept of mixed-precision in-memory computing was proposed in [legallo2018], and its application to deep neural network training was presented in [nandakumar2020] and [Eleftheriou2019].
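To make the mixed-precision idea concrete, the sketch below illustrates one common formulation: an outer loop computes exact residuals in high-precision digital arithmetic, while an inner correction step uses a simulated low-precision in-memory matrix-vector multiply whose device variability is modeled as additive noise. This is an illustrative assumption-laden sketch, not the implementation from the cited works; the function names, noise model, and parameters are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def imc_matvec(A, x, noise=0.02):
    # Hypothetical model of a low-precision analog in-memory
    # matrix-vector product: additive Gaussian noise stands in
    # for device variability and other non-idealities.
    y = A @ x
    return y + noise * np.abs(y).max() * rng.standard_normal(y.shape)

def mixed_precision_solve(A, b, iters=30):
    # Mixed-precision iterative refinement: the residual r is
    # computed in full digital precision, while the correction z
    # is obtained with the noisy in-memory unit; x is accumulated
    # in high precision, so device noise does not accumulate.
    n = len(b)
    x = np.zeros(n)
    D_inv = 1.0 / np.diag(A)            # cheap digital (Jacobi) preconditioner
    for _ in range(iters):
        r = b - A @ x                   # high-precision residual (digital)
        z = D_inv * r                   # initial correction estimate
        z = z + D_inv * (r - imc_matvec(A, z))  # noisy inner refinement (IMC)
        x = x + z                       # high-precision accumulation (digital)
    return x

# Diagonally dominant test system, so the inner iteration converges.
n = 20
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

x = mixed_precision_solve(A, b)
residual = np.linalg.norm(A @ x - b)
```

Because each exact outer residual shrinks the error while the noise of the inner step scales down with it, the solution reaches high accuracy even though the heavy matrix-vector work runs at low precision.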